# Large Scale Training with VISSL Training (mixed precision, LARC, ZeRO etc)
In this tutorial, we show the configuration settings that users can set for training large models.
# Using LARC
LARC (Large Batch Training of Convolutional Networks) is a technique proposed by **Yang You, Igor Gitman, Boris Ginsburg** in https://arxiv.org/abs/1708.03888 for improving the convergence of large batch size trainings.
LARC uses the ratio between the gradient and parameter magnitudes to calculate an adaptive local learning rate for each individual parameter.
See the [LARC paper](https://arxiv.org/abs/1708.03888) for the learning rate calculation. In practice, LARC modifies the gradients of parameters as a proxy
for modifying the learning rate of the parameters.
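As a rough illustration (this is a sketch of the rule from the paper, not Apex's implementation, and it ignores weight decay), the adaptive local learning rate computed per parameter can be written as:
```python
import torch

def larc_local_lr(param, grad, trust_coefficient=0.001, eps=1e-8):
    # local LR is proportional to ||w|| / ||grad(w)||, scaled by the trust coefficient
    # (weight decay is omitted in this simplified sketch)
    param_norm = torch.norm(param)
    grad_norm = torch.norm(grad)
    if param_norm > 0 and grad_norm > 0:
        return trust_coefficient * param_norm / (grad_norm + eps)
    return 1.0
```
In scaling mode (`clip: False`), this ratio is folded into the parameter's gradient, which is how modifying the gradients acts as a proxy for a per-parameter learning rate.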
## How to enable LARC
VISSL supports the LARC implementation from [NVIDIA's Apex LARC](https://github.com/NVIDIA/apex/blob/master/apex/parallel/LARC.py). To use LARC, users need to set the config option
`OPTIMIZER.use_larc=True`. VISSL exposes LARC parameters that users can tune. The full list of LARC parameters exposed by VISSL:
```yaml
OPTIMIZER:
  name: "sgd"
  use_larc: False  # supported for SGD only for now
  larc_config:
    clip: False
    eps: 1e-08
    trust_coefficient: 0.001
```
**NOTE:** LARC is currently supported for SGD optimizer only in VISSL.
# Using Apex
In order to use Apex, VISSL provides `anaconda` and `pip` packages of Apex (compiled with optimized C++ extensions/CUDA kernels). The Apex
packages are provided for all combinations of `CUDA (9.2, 10.0, 10.1, 10.2, 11.0), PyTorch >= 1.4 and Python >= 3.6 and <= 3.9`.
Follow VISSL's instructions to [install apex in pip](https://github.com/facebookresearch/vissl/blob/master/INSTALL.md#step-2-install-pytorch-opencv-and-apex-pip) and instructions to [install apex in conda](https://github.com/facebookresearch/vissl/blob/master/INSTALL.md#step-3-install-apex-conda).
# Using Mixed Precision
Many self-supervised approaches leverage mixed precision training by default for better training speed and a reduced model memory requirement.
For this, we use [NVIDIA Apex Library with AMP](https://nvidia.github.io/apex/amp.html#o1-mixed-precision-recommended-for-typical-use).
Users can tune the AMP level to the levels supported by NVIDIA. See [this for details on Apex amp levels](https://nvidia.github.io/apex/amp.html#opt-levels).
To use mixed precision training, one needs to set the following parameters in the configuration file:
```yaml
MODEL:
  AMP_PARAMS:
    USE_AMP: True
    # Use O1 as it is more robust and stable than O3. If you want to use O3, we recommend
    # the following setting:
    # {"opt_level": "O3", "keep_batchnorm_fp32": True, "master_weights": True, "loss_scale": "dynamic"}
    AMP_ARGS: {"opt_level": "O1"}
```
# Using ZeRO
**ZeRO: Memory Optimizations Toward Training Trillion Parameter Models** is a technique developed by **Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He** in [this paper](https://arxiv.org/abs/1910.02054).
When training models with billions of parameters, GPU memory becomes a bottleneck. ZeRO can offer 4x to 8x reductions in memory, thus allowing larger models to fit in memory.
## How does ZeRO work?
Memory requirement of a model can be broken down roughly into:
1. activations memory
2. model parameters
3. parameters momentum buffers (optimizer state)
4. parameters gradients
ZeRO *shards* the optimizer state and the parameter gradients onto different devices and reduces the memory needed per device.
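For intuition, the same sharding idea can be sketched outside VISSL with the FairScale `OSS` optimizer wrapper (the library VISSL builds on, see below); this is an illustrative snippet, not VISSL code:
```python
import os
import torch
import torch.distributed as dist
from fairscale.optim.oss import OSS

# single-process group just so this standalone sketch can run
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(512, 512)
# Each rank keeps only its shard of the optimizer state (e.g. momentum buffers)
# instead of a full replica, which is where the memory saving comes from.
optimizer = OSS(model.parameters(), optim=torch.optim.SGD, lr=0.1, momentum=0.9)
```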
## How to use ZeRO in VISSL?
VISSL uses the [FAIRScale](https://github.com/facebookresearch/fairscale) library, which implements ZeRO in PyTorch.
Using ZeRO in VISSL involves no code changes and can simply be done by setting some configuration options in the yaml files.
In order to use ZeRO, users need to set `OPTIMIZER.name=zero` and nest the desired optimizer (for example SGD) settings in `OPTIMIZER.base_optimizer`.
An example for using ZeRO with LARC and SGD optimization:
```yaml
OPTIMIZER:
  name: zero
  base_optimizer:
    name: sgd
    use_larc: False
    larc_config:
      clip: False
      trust_coefficient: 0.001
      eps: 0.00000001
    weight_decay: 0.000001
    momentum: 0.9
    nesterov: False
```
**NOTE**: ZeRO works seamlessly with LARC and mixed precision training. Using ZeRO with activation checkpointing is not yet enabled, primarily due to the manual gradient reduction needed for activation checkpointing.
# Using Stateful Data Sampler
## Issue with PyTorch DataSampler for large data training
PyTorch's [torch.utils.data.distributed.DistributedSampler](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/distributed.py#L12) is the default sampler used for many trainings. However, this sampler becomes limiting for large batch size trainings for 2 reasons:
- Using the PyTorch `DataSampler`, each trainer shuffles the full dataset (assuming shuffling is used) and then gets a view of this shuffled data. If the dataset is large (100 million, 1 billion images or more), generating a very large permutation
on each trainer can lead to high CPU memory consumption per machine. Hence, it becomes difficult to use the PyTorch default `DataSampler` when users want to train on large data for several epochs (for example: 10 epochs of 100M images).
- When using the PyTorch `DataSampler` and the training is resumed, the sampler will serve the full dataset. However, for large data trainings (like 1 billion images or more), one mostly trains for 1 epoch only.
In such cases, when the training resumes from the middle of the epoch, the sampler will serve the full 1 billion images, which is not what we want.
To solve both of the above issues, VISSL provides a custom sampler `StatefulDistributedSampler` which inherits from the PyTorch `DistributedSampler` and fixes the above issues in the following manner (a simplified sketch follows this list):
- The sampler creates the view of the data per trainer and then shuffles only the data that the trainer is supposed to view. This keeps the CPU memory requirement per machine bounded.
- The sampler adds a member `start_iter` which tracks the iteration number of the given epoch the model is at. When the training is resumed, `start_iter` will be properly set to the last iteration number and the sampler will serve only the remainder of the data.
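The snippet below is a simplified, hypothetical sketch of these two ideas (it is not VISSL's actual implementation; names such as `SimpleStatefulSampler` and `set_start_iter` are illustrative only):
```python
import torch
from torch.utils.data.distributed import DistributedSampler

class SimpleStatefulSampler(DistributedSampler):
    """Sketch: resume mid-epoch by skipping samples this rank already consumed."""
    def __init__(self, dataset, batch_size, **kwargs):
        # kwargs can carry num_replicas / rank, e.g. num_replicas=1, rank=0
        super().__init__(dataset, **kwargs)
        self.batch_size = batch_size
        self.start_iter = 0

    def set_start_iter(self, start_iter):
        # set from the checkpoint when training resumes
        self.start_iter = start_iter

    def __iter__(self):
        # DistributedSampler already yields only this rank's (shuffled) shard
        indices = list(super().__iter__())
        skip = self.start_iter * self.batch_size
        return iter(indices[skip:])
```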
## How to use VISSL custom DataSampler
Using the VISSL-provided custom sampler `StatefulDistributedSampler` is extremely easy and involves simply setting the correct configuration options as below:
```yaml
DATA:
  TRAIN:
    USE_STATEFUL_DISTRIBUTED_SAMPLER: True
  TEST:
    USE_STATEFUL_DISTRIBUTED_SAMPLER: True
```
**NOTE**: Users can use `StatefulDistributedSampler` for only the training dataset and the PyTorch default `DataSampler` for other splits if desired, i.e. it is not mandatory to use the same sampler type for all data splits.
# Activation Checkpointing
Activation checkpointing is a very powerful technique to reduce the memory requirement of a model. This is especially useful when training very large models with billions of parameters.
## How does it work?
Activation checkpointing trades compute for memory. It discards intermediate activations during the forward pass, and recomputes them during the backward pass. In
our experiments, using activation checkpointing, we observe negligible compute overhead in memory-bound settings while getting big memory savings.
In summary, this technique offers 2 benefits:
- saves gpu memory that can be used to fit large models
- allows increasing training batch size for a given model
We recommend that users read the documentation available [here](https://pytorch.org/docs/stable/checkpoint.html) for further details on activation checkpointing.
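For reference, a generic PyTorch sketch of the idea (not VISSL's internals) using `torch.utils.checkpoint.checkpoint_sequential`:
```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# a toy model of 8 blocks
model = nn.Sequential(*[nn.Sequential(nn.Linear(1024, 1024), nn.ReLU()) for _ in range(8)])
x = torch.randn(32, 1024, requires_grad=True)

# split into 4 segments: only segment-boundary activations are kept in memory,
# the rest are recomputed during the backward pass
out = checkpoint_sequential(model, 4, x)
out.sum().backward()
```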
## How to use activation checkpointing in VISSL?
VISSL integrates activation checkpointing implementation directly from PyTorch available [here](https://pytorch.org/docs/stable/checkpoint.html).
Using activation checkpointing in VISSL is extremely easy and doable with simple settings in the configuration file. The settings required are as below:
```yaml
MODEL:
  ACTIVATION_CHECKPOINTING:
    # whether to use activation checkpointing or not
    USE_ACTIVATION_CHECKPOINTING: True
    # how many times the model should be checkpointed. Users should tune this parameter
    # to find the number that offers the best memory saving and compute tradeoff.
    NUM_ACTIVATION_CHECKPOINTING_SPLITS: 8
DISTRIBUTED:
  # if True, does the gradient reduction in DDP manually. This is useful during
  # activation checkpointing and sometimes saves memory from the PyTorch gradient
  # buckets.
  MANUAL_GRADIENT_REDUCTION: True
```
# Generator functions
As a general rule, when we want to create a list of some kind, what we do is create the empty list and then, with a loop, go through several elements and append them to the list if they meet a condition:
```
[numero for numero in [0,1,2,3,4,5,6,7,8,9,10] if numero % 2 == 0 ]
```
We also saw how it was possible to use the **range()** function to generate the list dynamically in memory, that is, we didn't have to write it out in the code itself; instead it was produced on the fly:
```
[numero for numero in range(0,11) if numero % 2 == 0 ]
```
The truth is that **range()** is a kind of generator function. As a general rule, functions return a value with **return**, but the peculiarity of generators is that they keep *yielding* values on the fly, at execution time.
The generator function **range(0,11)** starts by yielding **0**; then the for loop is processed, checking whether it is even and adding it to the list; in the next iteration **1** is yielded and the for loop checks whether it is even; in the next one **2** is yielded, and so on.
This way we use the minimum amount of memory, and we can generate lists of millions of elements without needing to store them beforehand.
Let's see how to create a generator function for even numbers:
```
def pares(n):
for numero in range(n+1):
if numero % 2 == 0:
yield numero
pares(10)
```
As we can see, instead of using **return**, the generator function uses **yield**, which means to hand over a value. Given a number, it looks for all the even numbers from 0 up to that number with the help of a range(n+1).
However, notice that when we print the result, what we get back is a generator object.
In the same way that we traverse a **range()**, we can use a for loop to go through all the elements that the generator returns:
```
for numero in pares(10):
print(numero)
```
Using a list comprehension we can also create a list on the fly:
```
[numero for numero in pares(10)]
```
However, the great potential of generators is not simply creating lists; in fact, as we have already seen, the result itself is not a list but an iterable sequence with a lot of unique features.
## Iterators
So generator functions return an object that supports an iteration protocol. What does that let us do? Control the generation process, of course. Keep in mind that each time the generator function yields an element, it is suspended and control is handed back, until it is asked to generate the next value.
So let's look at our even-numbers example from another perspective, as if it were a manual iterator; that way you will see exactly what I mean:
```
pares = pares(3)
```
Good, now we have an iterator with all the even numbers between 0 and 3. Let's get the first even number:
```
next(pares)
```
As we can see, the built-in **next()** function lets us access the next element of the sequence. But not only that; if we run it again...
```
next(pares)
```
Now it returns the second one! Doesn't this remind you of the file pointer? When we read a line, the pointer moved on to the next one, and so on. It's the same thing here.
And what would happen if we tried to access the next element, knowing that between 0 and 3 we only have the even numbers 0 and 2?
```
next(pares)
```
We get an error, because the sequence has run out. So take note and catch the exception if you are going to use generators without knowing exactly how many elements they will return.
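For example, a small self-contained sketch of catching that exception (the variable names here are just illustrative):
```
def pares(n):
    for numero in range(n + 1):
        if numero % 2 == 0:
            yield numero

generador = pares(3)
try:
    while True:
        print(next(generador))   # prints 0, then 2
except StopIteration:
    print("No more values: the generator is exhausted")
```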
So the question that remains is: can we only iterate over sequences generated on the fly? Let's try it with a list:
```
lista = [1,2,3,4,5]
next(lista)
```
Maybe with a string?
```
cadena = "Hola"
next(cadena)
```
Well, no, we can't iterate over just any collection as if it were a sequence. However, there is a very interesting function that lets us convert strings and some collections into iterators: the **iter()** function:
```
lista = [1,2,3,4,5]
lista_iterable = iter(lista)
print( next(lista_iterable) )
print( next(lista_iterable) )
print( next(lista_iterable) )
print( next(lista_iterable) )
print( next(lista_iterable) )
cadena = "Hola"
cadena_iterable = iter(cadena)
print( next(cadena_iterable) )
print( next(cadena_iterable) )
print( next(cadena_iterable) )
print( next(cadena_iterable) )
```
Very good, now we know what generator functions are, how to use them, and also how to convert some objects into iterators. I suggest you try more collections on your own to see if you find any others that can be iterated.
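For instance, a couple more collections you might try (a small illustrative addition to the suggestion above):
```
# iter() also works on tuples, sets and dictionaries
tupla = (10, 20, 30)
print(next(iter(tupla)))        # 10

diccionario = {'a': 1, 'b': 2}
print(next(iter(diccionario)))  # 'a' (iterating a dict yields its keys)
```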
<img src="images/QISKit-c copy.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="250 px" align="left">
# Hadamard Action: Approach 2
## Jupyter Notebook 2/3 for the Teach Me QISKIT Tutorial Competition
- Connor Fieweger
<img src="images/hadamard_action.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="750 px" align="left">
Another approach to showing equivalence of the presented circuit diagrams is to represent the operators on the qubits as matrices and the qubit states as column vectors. The output is found by applying the matrix that represents the action of the circuit onto the initial state column vector to find the final state column vector. Since the numpy library already enables making linear algebra computations such as these, we'll use that to employ classical programming in order to understand this quantum circuit.
```
import numpy as np
```
## Circuit i)
For i), the initial state of the input is represented by the tensor product of the two input qubits in the initial register. This is given by:
$$\Psi = \psi_1 \otimes \psi_2$$
Where each $\psi$ can be either 0 or 1
This results in the following input states for $\Psi$: |00>, |01>, |10>, or |11>. Represented by column vectors, these are:
$$\text{|00>} = \left(\begin{array}{c}
1 \\
0 \\
0 \\
0
\end{array}\right)$$
$$\text{|01>} = \left(\begin{array}{c}
0 \\
1 \\
0 \\
0
\end{array}\right)$$
$$\text{|10>} = \left(\begin{array}{c}
0 \\
0 \\
1 \\
0
\end{array}\right)$$
$$\text{|11>} = \left(\begin{array}{c}
0 \\
0 \\
0 \\
1
\end{array}\right)$$
```
# These column vectors can be stored in numpy arrays so that we can operate
# on them with the circuit diagram's corresponding matrix (which is to be evaluated)
# as follows:
zero_zero = np.array([[1],[0],[0],[0]])
zero_one = np.array([[0],[1],[0],[0]])
one_zero = np.array([[0],[0],[1],[0]])
one_one = np.array([[0],[0],[0],[1]])
Psi = {'zero_zero': zero_zero, 'zero_one': zero_one, 'one_zero': one_zero, 'one_one': one_one}
# ^We can conveniently store all possible input states in a dictionary and then print to check the representations:
for key, val in Psi.items():
print(key, ':', '\n', val)
```
The action of the circuit gates on this state is simply the CNOT operator. For a control qubit on line 1 and a subject qubit on line 2, CNOT is given by the 4x4 matrix (as discussed in the appendix notebook):
$$CNOT_1 = \left[\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{array}\right]$$
This matrix is the operator that describes the effect of the circuit on the initial state. By taking $CNOT_1$|$\Psi$> = |$\Psi$'>, then, the final state for i) can be found.
```
# storing CNOT as a numpy array:
CNOT_1 = np.matrix([[1, 0, 0, 0],[0, 1, 0, 0],[0, 0, 0, 1],[0, 0, 1, 0]])
print(CNOT_1)
print('FINAL STATE OF i):')
#Apply CNOT to each possible state for |Psi> to find --> |Psi'>
for key, val in Psi.items():
print(key, 'becomes..\n', CNOT_1*val)
```
As one can see, the first two states (00, 01) stayed the same, while the second two (10, 11) flipped to (11, 10). This is readily understood when considering the defining logic of the CNOT gate - the subject qubit on line 2 is flipped if the control qubit on line 1 is in the state |1> (this is also referred to as the control qubit being 'on'). In summary, then, the action of i) is given by: [|00>,|01>,|10>,|11>] --> [|00>,|01>,|11>,|10>].
## Circuit ii)
For ii), a similar examination of the input states and the result when the circuit operation matrix is applied to these states can be done. The input states are the same as those in i), so we can just use the variable 'Psi' that we stored earlier. In order to find the matrix representation of the circuit, a little more depth in considering the matrix that represents the gates is required as follows:
First, consider the parallel application of the Hadamard gate 'H' to each wire. In order to represent this as an operator on the two-qubit-tensor-space state ('$\Psi$'), one needs to take the tensor product of the single-qubit-space's ('$\psi$') Hadamard with itself: $H \otimes H = H^{\otimes 2}$
As discussed in the appendix notebook, this is given by:
$$\text{H}^{\otimes 2} = \frac{1}{2}\left[\begin{array}{cccc}
1 & 1 & 1 & 1 \\
1 & -1 & 1 & -1 \\
1 & 1 & -1 & -1 \\
1 & -1 & -1 & 1
\end{array}\right]$$
This is then the first matrix to consider when finding the matrix that represents the action of circuit ii).
```
# storing this in a numpy array:
H_2 = .5*np.matrix([[1, 1, 1, 1],[1, -1, 1, -1],[1, 1, -1, -1],[1, -1, -1, 1]])
print('H_2:')
print(H_2)
```
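As an extra cross-check (not part of the original notebook), the same matrix can be built from the single-qubit Hadamard with `np.kron`, which computes the tensor (Kronecker) product:
```
# Build H⊗H from the 2x2 Hadamard via the Kronecker product and compare with H_2
H = (1/np.sqrt(2)) * np.matrix([[1, 1], [1, -1]])
print(np.allclose(H_2, np.kron(H, H)))  # True
```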
The next operation on the qubits is a CNOT controlled by line 2. This is given by the 4x4 matrix (also in the appendix notebook):
$$CNOT_2 = \left[\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0
\end{array}\right]$$
This is then the second matrix to consider in finding the matrix that represents the action of circuit ii).
```
# storing this in a numpy array:
CNOT_2 = np.matrix([[1, 0, 0, 0],[0, 0, 0, 1],[0, 0, 1, 0],[0, 1, 0, 0]])
```
Finally, the set of parallel hadamard matrices as given by $H^{\otimes 2}$ is again applied to the two-qubit-space. With this, all matrices that contribute to the circuit's action have been found. Applying each operator to the state as one reads the circuit diagram from left to right, one finds: $(H^{\otimes 2})(CNOT_2)(H^{\otimes 2})\Psi$ = $\Psi$ '. The $(H^{\otimes 2})(CNOT_2)(H^{\otimes 2})$ term can be evaluated through matrix multiplication to a single 4x4 matrix that represents the entire circuit as an operator, let's call it 'A'.
```
A = H_2*CNOT_2*H_2
print(A)
```
This representation should look familiar, no?
```
print(CNOT_1)
```
Just to double-check:
```
for key, val in Psi.items():
print(key, 'becomes...\n', A*val)
```
The actions of i) and ii) are evidently the same, then. $\square$
# Introduction to Python
An introduction to Python for middle and high school students using Python 3 syntax.

## Getting started
We're assuming that you already have Python 3.6 or higher installed. If not, go to Python.org to download the latest for your operating system. Verify that you have the correct version by opening a terminal or command prompt and running
```
$ python --version
Python 3.6.0
```
# Your First Program: Hello, World!
Open the Integrated Development and Learning Environment (IDLE) and write the famous Hello World program. When IDLE opens, you'll be in an interactive shell.
```
print('Hello, World!')
```
Choose *File > New Window*. An empty window will appear with *Untitled* in the menu bar. Enter the following code into the new editor window. Choose *File > Save* and save it as `hello.py`, which is known as a python `module`. Choose *Run > Run Module* to run the file.
## Calculating with Python
Mathematical operators:
* Addition: +
* Subtraction: -
* Multiplication: *
Try these
* `3 * 4`
```
3*4
```
Division:
* Floating point `/`
* Integer `//`
Try these:
* `5/4`
* `1/0`
* `3//4`
* `5//4`
```
3//4
# Exponents
2**3
# Modulus
5%4
```
### Type function
There's lots more available via the standard library and third-party packages. To see the type of a result, use the *type* function. For example, `type(3//4)` returns `int`.
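A few quick examples of `type` in the interactive shell (illustrative only):
```
type(3 // 4)   # <class 'int'>
type(3 / 4)    # <class 'float'>
type('hello')  # <class 'str'>
```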
## Order of Operations
Python reads left to right. Higher precedence operators are applied before lower precedence operators. Operators below are listed lowest precedence at the top.
| Operator | Description |
|----------------------------------------------|-----------------------------------------------------------|
| or | Boolean OR |
| and | Boolean AND |
| not | Boolean NOT |
| in, not in, is, is not, `<`, `<=`, `>`, `>=`, `!=`, `==` | Comparison, including membership tests and identity tests |
| `+`, `-` | Addition and Subtraction |
| `*`, `/`, `//`, `%` | Multiplication, division, integer division, remainder |
| `**` | Exponentiation |
Calculate the result of `5 + 1 * 4`.
We can override the precedence using parentheses, which are evaluated from the innermost out.
Calculate the result of `(5+1) * 4`.
> Remember that multiplication and division always go before
addition and subtraction, unless parentheses are used to control
the order of operations.
```
(2 + 2) ** 3
```
## Variables
Variables are like labels so that we can refer to things by a recognizable name.
```
fred = 10 + 5
type(fred)
fred = 10 / 5
type(fred)
fred * 55 + fred
joe = fred * 55
joe
joe
fred
joe = fred
fred = joe
```
### Valid variable names
Variables begin with a letter, followed by any combination of letters, numbers and underscores
* `jim`
* `other_jim`
* `other_jim99`
### Invalid variable names: don't meet requirements
* `symbol$notallowed`
* `5startswithnumber`
### Invalid variable names: reserved words
| Reserved words | | | | |
|----------------|----------|--------|----------|-------|
| None | continue | for | lambda | try |
| True | def | from | nonlocal | while |
| and | del | global | not | with |
| as | elif | if | or | yield |
| break | except | in | raise | |
### Referring to a previous result
You can use the `_` variable to refer to the result of a previous calculation when working in the shell.
```
ends_with_9 = 9
a = 6
b = 4
my_var = 7
num_apples * 65  # NameError: 'num_apples' was never defined
doesntexist      # NameError: 'doesntexist' is not defined
```
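For example, in the interactive shell (an illustrative session, not part of the exercises above):
```
>>> 5 + 4
9
>>> _ * 2
18
```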
## User Input
We can get keyboard input using the `input` function
```
name = input("What's your name? ")
print("Hi ", name)
```
## Strings
* Strings are immutable objects in python, meaning they can't be modified once created, but they can be used to create new strings.
* Strings should be surrounded with a single quote `'` or double quote `"`. The general rule is to use the single quote unless you plan to use something called *interpolation*
### Formatting
Strings support templating and formatting.
```
id("bar")
fred = "bar"
id(fred)
"this string is %s" % ('formatted')
"this string is also {message}. The message='{message}' can be used more than once".format(message='formatted')
# Called string concatenation
"this string is " + 'concatenated'
```
## Conditionals
`if (condition):`
`elif (condition):`
`else:` (optional, takes no condition)
```
aa = False
if aa:
    print('aa is true')
else:
    print('aa is not true')
aa = 'wasdf'
if aa == 'aa':
    print('first condition')
elif aa == 'bb':
    print('second condition')
else:
    print('default condition')
```
## Data Structures
* Lists `[]`
Lists are orderings of things where each thing corresponds to an index starting at 0.
Example: `[1, 2, 3]` where 1 is at index 0, 2 is at index 1 and 3 is at index 2.
* Tuples `()`
Tuples are like lists, only you can't change them once they are created (they are immutable).
* Dictionaries `{}`
Dictionaries store key-value pairs. A short example of each structure follows.
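Here is a small illustrative snippet (not part of the original notes) showing each structure in action:
```
# Lists: ordered and mutable
fruits = ['apple', 'banana', 'cherry']
print(fruits[0])          # 'apple'
fruits[1] = 'blueberry'   # allowed

# Tuples: ordered but immutable
point = (3, 4)
print(point[1])           # 4
# point[0] = 5            # would raise TypeError

# Dictionaries: key-value pairs
ages = {'alice': 30, 'bob': 25}
print(ages['alice'])      # 30
```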
## Comprehension
Lists can be constructed using comprehension logic
```
[(a, a*2) for a in range(10)]
```
We can use conditionals as well
```
[(a, a*2) for a in range(10) if a < 8]
```
Additional topics
* python modules and packages
* functions
* classes and methods
* generators
* pip and the python packaging landscape
* virtualenvs
Good morning! You have completed the math trail on car plate numbers in a somewhat (semi-)automated way.
Can you actually solve the same tasks with code? Read on and you will be amazed how empowering programming can be to help make mathematics learning more efficient and productive! :)
# Task
Given the incomplete car plate number `SLA9??2H`
Find the missing ?? numbers.
A valid Singapore car plate number typically starts with 3 letters, followed by 4 digits, and ends with a 'check' letter.
For example, for the valid car plate number 'SDG6136T',
- The first letter is 'S' for Singapore.
- The next two letters and the digits are used to compute the check letter, using the following steps:
- Ignoring the first letter 'S', the letters are converted to their positions in the alphabet. For example, 'D' is 4, 'G' is 7 and 'M' is 13.
- The converted letters and the digits form a sequence of 6 numbers. For example, 'DG6136' will give (4, 7, 6, 1, 3, 6).
- The sequence of 6 numbers is multiplied term by term by the sequence of 6 weights (9, 4, 5, 4, 3, 2) respectively, summed up and then divided by 19 to obtain the remainder.
- For example, '476136' will give 4x9 + 7x4 + 6x5 + 1x4 + 3x3 + 6x2 = 119, and this leaves a remainder of 5 after dividing by 19.
- The 'check' letter is obtained by referring to the following table. Thus the check letter corresponding to remainder 5 is T.
| Remainder | 'check' letter | Remainder | 'check' letter | Remainder | 'check' letter |
|---|---|---|---|---|---|
| 0 | A | 7 | R | 13 | H |
| 1 | Z | 8 | P | 14 | G |
| 2 | Y | 9 | M | 15 | E |
| 3 | X | 10 | L | 16 | D |
| 4 | U | 11 | K | 17 | C |
| 5 | T | 12 | J | 18 | B |
| 6 | S | | | | |
Reference: https://sgwiki.com/wiki/Vehicle_Checksum_Formula
Pseudocode
```
FOR i = 0 to 99
    Car_Plate = 'SLA9' + str(i).zfill(2) + '2H'
    IF Check_Letter(Car_Plate) is True
        print(Car_Plate) on screen
    ENDIF
NEXT
```
```
# we need to store the mapping from A to 1, B to 2, etc.
# for the letters part of the car plate number
# a dictionary is good for this purpose
letter_map = {}
for i in range(26): # 26 letters in the alphabet
    char = chr(ord('A') + i)
    letter_map[char] = i + 1
#print(letter_map) # this will output {'A':1, 'B':2, 'C':3, ..., 'Z':26}

# we also need to store the mapping from remainders to the check letter
# and we can also use a dictionary! :)
check_map = {0:'A', 1:'Z', 2:'Y', 3:'X', 4:'U', 5:'T', 6:'S', 7:'R', 8:'P', \
             9:'M', 10:'L', 11:'K', 12:'J', 13:'H', 14:'G', 15:'E', 16:'D', \
             17:'C', 18:'B'}

# we define a reusable Boolean function to generate the check letter and
# check if it matches the last letter of the car plate number
def check_letter(car_plate):
    weights = [9, 4, 5, 4, 3, 2]
    total = 0
    for i in range(len(car_plate)-1):
        if i < 2: # letters
            num = letter_map[car_plate[i]]
        else: # digits
            num = int(car_plate[i])
        total += num * weights[i]
    remainder = total % 19
    return check_map[remainder] == car_plate[-1]

#main
car_plate = 'DG6136T' # you can use this to verify the given example
if check_letter(car_plate):
    print('S' + car_plate, car_plate[3:5])
print()

for i in range(100): # this loop repeats 100 times for you! :)
    car_plate = 'LA9' + str(i).zfill(2) + '2H' # 'LA9002H', 'LA9012H', ...
    if check_letter(car_plate):
        print('S' + car_plate, car_plate[3:5])

#main
for i in range(100):
    car_plate = 'LA' + str(i).zfill(2) + '68Y'
    if check_letter(car_plate):
        print('S' + car_plate, car_plate[2:4])

'0'.zfill(2)
```
# Challenge
- How many car_plate numbers start with SMV and end with D?
```
#main
count = 0
for i in range(10000):
    car_plate = 'MV' + str(i).zfill(4) + 'D'
    if check_letter(car_plate):
        count += 1
print(count)

#main
wanted = []
for i in range(10000):
    car_plate = 'MV' + str(i).zfill(4) + 'D'
    if check_letter(car_plate):
        print('S' + car_plate, end=' ')
        wanted.append('S' + car_plate)
print(len(wanted))
```
# More challenges!
Suggest one or more variations of problems you can solve with car plate numbers using the power of Python programming. Some ideas include:
* Check if a given car plate number is valid
* Which valid car plate numbers have a special property (e.g. prime number, contains at least two '8' digits, does not contain the lucky number 13, etc.)
* Whether there are the same number of available car plate numbers in each series (e.g. SMV and SMW)
* (your idea here)
Submit a pull request with your ideas and/or code to contribute to learning Mathematics using programming to benefit the world! :)
# This is really more than car plate numbers!
You have just learned an application of mathematics called modulus arithmetic in generating check letters/digits. Do you know that actually the following are also applications of modulus arithmetic?
* Singapore NRIC numbers (http://www.ngiam.net/NRIC/NRIC_numbers.ppt)
* international ISBNs (https://en.wikipedia.org/wiki/International_Standard_Book_Number)
* credit card numbers (https://en.wikipedia.org/wiki/Luhn_algorithm)
* universal product codes (https://en.wikipedia.org/wiki/Universal_Product_Code)
Can you research other applications of modulus arithmetic? Better still, contribute by submitting Python code to unleash the power of automation!
You can submit a pull request by doing one of the following:
- suggesting a new application for modulus arithmetic
- creating a new .py file
- uploading an existing .py file
We look forward to your pull requests! :)
# Using a new function to evaluate or evaluating a new acquisition function
In this notebook we describe how to integrate a new fitness function to the testing framework as well as how to integrate a new acquisition function.
```
import numpy as np
import matplotlib.pyplot as plt
# add the egreedy module to the path (one directory up from this)
import sys, os
sys.path.append(os.path.realpath(os.path.pardir))
```
## New fitness function
The `perform_experiment` function in the `optimizer` class, used to carry out the optimisation runs (see its docstring and `run_all_experiments.py` for usage examples), imports a fitness **class**. This class, when instantiated, is also callable. The class is imported from the `test_problems` module. Therefore, the easiest way to incorporate your own fitness function is to add it to the `test_problems` module by creating a python file in the `egreedy/test_problems/` directory and adding a line importing it into the namespace (see `egreedy/test_problems/__init__.py` for examples) so that it can be directly imported from `test_problems`.
If, for example, your fitness class is called `xSquared` and is located in the file `xs.py`, you would place the python file in the directory `egreedy/test_problems` and add the line:
```
from .xs import xSquared
```
to the `egreedy/test_problems/__init__.py` file.
We will now detail how to structure your fitness class and show the required class methods by creating a new fitness class for the function
\begin{equation}
f( \mathbf{x} ) = \sum_{i=1}^2 x_i^2,
\end{equation}
where $\mathbf{x} \in [-5, 5]^2.$
```
class xSquared:
    """Example fitness class.
    .. math::
        f(x) = \sum_{i=1}^2 x_i^2
    This demonstration class shows all the required attributes and
    functionality of the fitness function class.
    """
    def __init__(self):
        """Initialisation function.
        This is called when the class is instantiated and sets up its
        attributes as well as any other internal variables that may
        be needed.
        """
        # problem dimensionality
        self.dim = 2
        # lower and upper bounds for each dimension (must be numpy.ndarray)
        self.lb = np.array([-5., -5.])
        self.ub = np.array([5., 5.])
        # location(s) of the optima (optional - not always known)
        self.xopt = np.array([0., 0.])
        # its/their fitness value(s)
        self.yopt = np.array([0.])
        # callable constraint function for the problem - should return
        # True if the argument value is **valid** - if no constraint function
        # is required then this can take the value of None
        self.cf = None

    def __call__(self, x):
        """Main callable function.
        This is called after the class is instantiated, e.g.
        >>> f = xSquared()
        >>> f(np.array([2., 2.]))
        array([8.])
        Note that it is useful to have a function that is able to deal with
        multiple inputs, which should be a numpy.ndarray of shape (N, dim)
        """
        # ensure the input is at least 2d, this will cause one-dimensional
        # vectors to be reshaped to shape (1, dim)
        x = np.atleast_2d(x)
        # evaluate the function
        val = np.sum(np.square(x), axis=1)
        # return the evaluations
        return val
```
This class can then either be placed in the directories discussed above and used for evaluating multiple techniques on it or used for testing purposes.
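As a quick sanity check (an illustrative usage example, not part of the original tutorial), the class can be instantiated and called directly:
```
f = xSquared()
print(f.dim, f.lb, f.ub)                    # 2 [-5. -5.] [5. 5.]
print(f(np.array([2., 2.])))                # [8.]
print(f(np.array([[1., 1.], [3., -4.]])))   # [ 2. 25.]
```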
### Optimising the new test function with an acquisition function
The following code outlines how to optimise your newly created test function with the $\epsilon$-greedy with Pareto front selection ($\epsilon$-PF) algorithm.
```
from pyDOE2 import lhs
from egreedy.optimizer import perform_BO_iteration

# ---- instantiate the test problem
f = xSquared()

# ---- Generate training data by Latin hypercube sampling across the domain
n_training = 2 * f.dim

# LHS sample in [0, 1]^2 and rescale to problem domain
Xtr = lhs(f.dim, n_training, criterion='maximin')
Xtr = (f.ub - f.lb) * Xtr + f.lb

# expensively evaluate and ensure shape is (n_training, 1)
Ytr = np.reshape(f(Xtr), (n_training, 1))

# ---- Select an acquisition function with optimiser.
# In this case we select e-greedy with Pareto front selection (e-PF)
# known as eFront.
#
# All the acquisition functions have the same parameters:
# lb : lower-bound constraints (numpy.ndarray)
# ub : upper-bound constraints (numpy.ndarray)
# acq_budget : max number of calls to the GP model
# cf : callable constraint function that returns True if
#      the argument vector is VALID. Optional, has a value of None
#      if not used
# acquisition_args : optional dictionary containing key:value pairs
#                    of arguments to a specific acquisition function,
#                    e.g. for an e-greedy method the dict
#                    {'epsilon': 0.1} would dictate the epsilon value.

# e-greedy with Pareto front selection (e-PF), known as eFront
from egreedy.acquisition_functions import eFront

# instantiate the optimiser with a budget of 5000d and epsilon = 0.1
acq_budget = 5000 * f.dim
acquisition_args = {'epsilon': 0.1}
acq_func = eFront(lb=f.lb, ub=f.ub, cf=None, acq_budget=acq_budget,
                  acquisition_args=acquisition_args)

# ---- Perform the Bayesian optimisation loop for a total budget of 20
# function evaluations (including those used for LHS sampling)
total_budget = 20

while Xtr.shape[0] < total_budget:
    # perform one iteration of BO:
    # this returns the new location and function value (Xnew, Ynew) as well
    # as the trained model used to select the new location
    Xnew, Ynew, model = perform_BO_iteration(Xtr, Ytr, f, acq_func, verbose=True)

    # augment the training data and repeat
    Xtr = np.concatenate((Xtr, np.atleast_2d(Xnew)))
    Ytr = np.concatenate((Ytr, np.atleast_2d(Ynew)))

    print('Best function value so far: {:g}'.format(np.min(Ytr)))
    print()
```
The plot below shows the difference between the best seen function value and the true minimum, i.e. $|f^\star - f_{min}|$, over each iteration.
```
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.semilogy(np.minimum.accumulate(np.abs(Ytr - f.yopt)))
ax.set_xlabel('Iteration', fontsize=15)
ax.set_ylabel('$|f^\star - f_{min}|$', fontsize=15)
plt.show()
```
## New acquisition function
We now detail how to create your own acquisition function class and integrate it into the testing suite.
In a similar manner to the fitness function classes, the acquisition function classes are imported from the `egreedy.acquisition_functions` module, with the specific classes available determined by the `__init__.py` file in the same module.
If, for example, your acquisition function class is called `greed` and is located in the file `gr.py`, you would place the python file in the directory `egreedy/acquisition_functions` and add the line:
```
from .gr import greed
```
to the `egreedy/acquisition_functions/__init__.py` file.
The python file `egreedy/acquisition_functions/acq_func_optimisers.py` contains base classes for the acquisition function classes. We will now demonstrate how to implement two simple acquisition functions and then show how to optimise one of the test functions included in the suite.
The `BaseOptimiser` class is the base acquisition function class that implements the standard interface for acquisition function optimizers. It only contains an initialisation function with several arguments:
- lb: lower-bound constraint
- ub: upper-bound constraint
- acq_budget : maximum number of calls to the Gaussian Process
- cf : callable constraint function that returns True if the argument decision vector is VALID (optional, default value: None)
- acquisition_args : Optional dictionary containing additional arguments that are unpacked into key=value arguments for an internal acquisition function; e.g. {'epsilon':0.1}.
The `ParetoFrontOptimiser` class implements the base class as well as an additional function named `get_front(model)` that takes in a GPRegression model from GPy and approximates its Pareto front of model prediction and uncertainty. It returns the decision vectors belonging to the members of the front, an array containing their corresponding predicted values, and an array containing the prediction uncertainties.
We first create a simple acquisition function, extending the base class, that generates uniform samples in space and uses the Gaussian Process's mean prediction to select the best (lowest value) predicted location.
```
from egreedy.acquisition_functions.acq_func_optimisers import BaseOptimiser

class greedy_sample(BaseOptimiser):
    """Greedy function that uniformly samples the GP posterior
    and returns the location with the best (lowest) mean predicted value.
    """
    # note we do not need to implement an __init__ method because the
    # base class already does this. Here we will include a commented
    # version for clarity.
    # def __init__(self, lb, ub, acq_budget, cf=None, acquisition_args={}):
    #     self.lb = lb
    #     self.ub = ub
    #     self.cf = cf
    #     self.acquisition_args = acquisition_args
    #     self.acq_budget = acq_budget

    def __call__(self, model):
        """Returns the location with the best (lowest) predicted value
        after uniformly sampling decision space.
        """
        # generate samples
        X = np.random.uniform(self.lb, self.ub,
                              size=(self.acq_budget, self.lb.size))
        # evaluate them with the gp
        mu, sigmasqr = model.predict(X, full_cov=False)
        # find the index of the best value
        argmin = np.argmin(mu.flat)
        # return the best found value
        return X[argmin, :]


from egreedy.acquisition_functions.acq_func_optimisers import ParetoFrontOptimiser

class greedy_pfront(ParetoFrontOptimiser):
    """Exploitative method that calculates the approximate Pareto front
    of a GP model and returns the Pareto set member that has the best
    (lowest) predicted value.
    """
    # again here we do not need to implement an __init__ method.

    def __call__(self, model):
        """Returns the location with the best (lowest) predicted value
        from the approximate Pareto set of the GP's predicted value and
        its corresponding uncertainty.
        """
        # approximate the pareto set; here X are the locations of the
        # members of the set and mu and sigma are their predicted values
        # and uncertainty
        X, mu, sigma = self.get_front(model)
        # find the index of the best value
        argmin = np.argmin(mu.flat)
        # return the best found value
        return X[argmin, :]
```
We now create a script similar to the one used above in the fitness function example. This time we will optimise the `push4` function included in the test suite and load the training data associated with the first optimisation run carried out by all the techniques evaluated in the paper.
Note that in this case the training data contains arguments to be passed into the function during instantiation. This is because the `push4` runs are evaluated on a *problem instance* basis.
```
from egreedy.optimizer import perform_BO_iteration
from egreedy import test_problems

# ---- optimisation run details
problem_name = 'push4'
run_no = 1
acq_budget = 5000 * 4  # because the problem dimensionality is 4
total_budget = 25

# ---- load the training data
data_file = f'../training_data/{problem_name:}_{run_no:}.npz'
with np.load(data_file, allow_pickle=True) as data:
    Xtr = data['arr_0']
    Ytr = data['arr_1']
    if 'arr_2' in data:
        f_optional_arguments = data['arr_2'].item()
    else:
        f_optional_arguments = {}

# ---- instantiate the test problem
f_class = getattr(test_problems, problem_name)
f = f_class(**f_optional_arguments)

# ---- instantiate the acquisition function we created earlier
acq_func = greedy_sample(lb=f.lb, ub=f.ub, cf=None, acq_budget=acq_budget,
                         acquisition_args={})

while Xtr.shape[0] < total_budget:
    # perform one iteration of BO:
    # this returns the new location and function value (Xnew, Ynew) as well
    # as the trained model used to select the new location
    Xnew, Ynew, model = perform_BO_iteration(Xtr, Ytr, f, acq_func, verbose=True)

    # augment the training data and repeat
    Xtr = np.concatenate((Xtr, np.atleast_2d(Xnew)))
    Ytr = np.concatenate((Ytr, np.atleast_2d(Ynew)))

    print('Best function value so far: {:g}'.format(np.min(Ytr)))
    print()

fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.plot(np.minimum.accumulate(np.abs(Ytr - f.yopt)))
ax.set_xlabel('Iteration', fontsize=15)
ax.set_ylabel('$|f^\star - f_{min}|$', fontsize=15)
plt.show()
```
```
from dgpsi import dgp, kernel, combine, lgp, path, emulator, Poisson, Hetero, NegBin
import numpy as np
import matplotlib.pyplot as plt
```
# Example 1 on heteroskedastic Gaussian likelihood
```
n=12
X=np.linspace(0,1,n)[:,None]
#Create some replications of the input positions so that each input position will have six different outputs. Note that SI has linear complexity with the number of replications.
for i in range(5):
    X=np.concatenate((X,np.linspace(0,1,n)[:,None]),axis=0)
f1= lambda x: -1. if x<0.5 else 1. #True mean function, which is a step function
f2= lambda x: np.exp(1.5*np.sin((x-0.3)*7.)-6.5) #True variance function, which has higher values around 0.5 but low values around boundaries
Y=np.array([np.random.normal(f1(x),np.sqrt(f2(x))) for x in X]) #Generate stochastic outputs according to f1 and f2
z=np.linspace(0,1.,200)[:,None]
Yz=np.array([f1(x) for x in z]).flatten()
plt.plot(z,Yz) #Plot true mean function
plt.scatter(X,Y,color='r')
#Create a 2-layered DGP + Hetero model
layer1=[kernel(length=np.array([0.5]),name='matern2.5')]
layer2=[kernel(length=np.array([0.2]),name='matern2.5',scale_est=1,connect=np.arange(1)),
kernel(length=np.array([0.2]),name='matern2.5',scale_est=1,connect=np.arange(1))]
layer3=[Hetero()]
#Construct the DGP + Hetero model
all_layer=combine(layer1,layer2,layer3)
m=dgp(X,[Y],all_layer)
#Train the model
m.train(N=500)
#Construct the emulator
final_layer_obj=m.estimate()
emu=emulator(final_layer_obj)
#Make predictions across all layers so we can extract predictions for the mean and variance functions
mu,var=emu.predict(z, method='mean_var',full_layer=True)
#Visualize the overall model prediction
s=np.sqrt(var[-1])
u=mu[-1]+2*s
l=mu[-1]-2*s
p=plt.plot(z,mu[-1],color='r',alpha=1,lw=1)
p1=plt.plot(z,u,'--',color='g',lw=1)
p1=plt.plot(z,l,'--',color='g',lw=1)
plt.scatter(X,Y,color='black')
plt.plot(z,Yz)
#Visualize the prediction for the mean function
mu_mean=mu[-2][:,0]
var_mean=var[-2][:,0]
s=np.sqrt(var_mean)
u=mu_mean+2*s
l=mu_mean-2*s
p=plt.plot(z,mu_mean,color='r',alpha=1,lw=1)
p1=plt.plot(z,u,'--',color='g',lw=1)
p1=plt.plot(z,l,'--',color='g',lw=1)
plt.plot(z,Yz,color='black',lw=1)
#Visualize the prediction for the log(variance) function
mu_var=mu[-2][:,1]
var_var=var[-2][:,1]
s=np.sqrt(var_var)
u=mu_var+2*s
l=mu_var-2*s
p=plt.plot(z,mu_var,color='r',alpha=1,lw=1)
p1=plt.plot(z,u,'--',color='g',lw=1)
p1=plt.plot(z,l,'--',color='g',lw=1)
plt.plot(z,np.array([np.log(f2(x)) for x in z]).reshape(-1,1),color='black',lw=1)
```
# Example 2 on heteroskedastic Gaussian likelihood
```
#Load and visualize the motorcycle dataset
X=np.loadtxt('./mc_input.txt').reshape(-1,1)
Y=np.loadtxt('./mc_output.txt').reshape(-1,1)
X=(X-np.min(X))/(np.max(X)-np.min(X))
Y=(Y-Y.mean())/Y.std()
plt.scatter(X,Y)
#Construct a 2-layered DGP + Hetero model
layer1=[kernel(length=np.array([0.5]),name='sexp')]
layer2=[]
for _ in range(2):
    k=kernel(length=np.array([0.2]),name='sexp',scale_est=1,connect=np.arange(1))
    layer2.append(k)
layer3=[Hetero()]
all_layer=combine(layer1,layer2,layer3)
m=dgp(X,[Y],all_layer)
#Train the model
m.train(N=500)
#Construct the emulator
final_layer_obj=m.estimate()
emu=emulator(final_layer_obj)
#Make predictions over [0,1]
z=np.linspace(0,1,100)[:,None]
mu,var=emu.predict(z, method='mean_var')
#Visualize the predictions
s=np.sqrt(var)
u=mu+2*s
l=mu-2*s
p=plt.plot(z,mu,color='r',alpha=1,lw=1)
p1=plt.plot(z,u,'--',color='g',lw=1)
p1=plt.plot(z,l,'--',color='g',lw=1)
plt.scatter(X,Y,color='black')
```
# Example 3 on Poisson likelihood
```
#Generate some data with replications
n=10
X=np.linspace(0,.3,n)[:,None]
for _ in range(4):
    X=np.concatenate((X,np.linspace(0,.3,n)[:,None]),axis=0)
X=np.concatenate((X,np.linspace(0.35,1,n)[:,None]),axis=0)
f= lambda x: np.exp(np.exp(-1.5*np.sin(1/((0.7*0.8*(1.5*x+0.1)+0.3)**2))))
Y=np.array([np.random.poisson(f(x)) for x in X]).reshape(-1,1)
z=np.linspace(0,1.,200)[:,None]
Yz=np.array([f(x) for x in z]).flatten()
test_Yz=np.array([np.random.poisson(f(x)) for x in z]).reshape(-1,1) #generate some testing output data
plt.plot(z,Yz)
plt.scatter(X,Y,color='r')
#Train a GP + Poisson model
layer1=[kernel(length=np.array([0.5]),name='matern2.5',scale_est=1)]
layer2=[Poisson()]
all_layer=combine(layer1,layer2)
m=dgp(X,[Y],all_layer)
m.train(N=500)
#Visualize the results
final_layer_obj=m.estimate()
emu=emulator(final_layer_obj)
mu,var=emu.predict(z, method='mean_var',full_layer=True) #Make mean-variance prediction
samp=emu.predict(z, method='sampling') #Draw some samples to obtain the quantiles of the overall model
quant=np.quantile(np.squeeze(samp), [0.05,0.5,0.95],axis=1) #Compute sample-based quantiles
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(15,4))
ax1.set_title('Predicted and True Poisson Mean')
ax1.plot(z,Yz,color='black')
ax1.plot(z,mu[-1],'--',color='red',alpha=0.8,lw=3)
ax1.plot(z,quant[0,:],'--',color='b',lw=1)
ax1.plot(z,quant[1,:],'--',color='b',lw=1)
ax1.plot(z,quant[2,:],'--',color='b',lw=1)
mu_gp, var_gp=mu[-2], var[-2]
s=np.sqrt(var_gp)
u,l =mu_gp+2*s, mu_gp-2*s
ax2.set_title('Predicted and True logged Poisson Mean')
ax2.plot(z,mu_gp,color='r',alpha=1,lw=1)
ax2.plot(z,u,'--',color='g',lw=1)
ax2.plot(z,l,'--',color='g',lw=1)
ax2.plot(z,np.log(Yz),color='black',lw=1)
print('The negative log-likelihood of predictions is', emu.nllik(z,test_Yz)[0])
#Train a 2-layered DGP + Poisson model
layer1=[kernel(length=np.array([0.5]),name='matern2.5')]
layer2=[kernel(length=np.array([0.1]),name='matern2.5',scale_est=1,connect=np.arange(1))]
layer3=[Poisson()]
all_layer=combine(layer1,layer2,layer3)
m=dgp(X,[Y],all_layer)
m.train(N=500)
#Visualize the results
final_layer_obj=m.estimate()
emu=emulator(final_layer_obj)
mu,var=emu.predict(z, method='mean_var',full_layer=True) #Make mean-variance prediction
samp=emu.predict(z, method='sampling') #Draw some samples to obtain the quantiles of the overall model
quant=np.quantile(np.squeeze(samp), [0.05,0.5,0.95],axis=1) #Compute sample-based quantiles
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(15,4))
ax1.set_title('Predicted and True Poisson Mean')
ax1.plot(z,Yz,color='black')
ax1.plot(z,mu[-1],'--',color='red',alpha=0.8,lw=3)
ax1.plot(z,quant[0,:],'--',color='b',lw=1)
ax1.plot(z,quant[1,:],'--',color='b',lw=1)
ax1.plot(z,quant[2,:],'--',color='b',lw=1)
mu_gp, var_gp=mu[-2], var[-2]
s=np.sqrt(var_gp)
u,l =mu_gp+2*s, mu_gp-2*s
ax2.set_title('Predicted and True logged Poisson Mean')
ax2.plot(z,mu_gp,color='r',alpha=1,lw=1)
ax2.plot(z,u,'--',color='g',lw=1)
ax2.plot(z,l,'--',color='g',lw=1)
ax2.plot(z,np.log(Yz),color='black',lw=1)
print('The negative log-likelihood of predictions is', emu.nllik(z,test_Yz)[0])
```
# Example 4 on Negative Binomial likelihood
The Negative Binomial pmf in dgpsi is defined by
$$p_Y(y;\mu,\sigma)=\frac{\Gamma(y+\frac{1}{\sigma})}{\Gamma(1/{\sigma})\Gamma(y+1)}\left(\frac{\sigma\mu}{1+\sigma\mu}\right)^y\left(\frac{1}{1+\sigma\mu}\right)^{1/{\sigma}}$$
with mean $0<\mu<\infty$ and dispersion $0<\sigma<\infty$, which correspond to numpy's negative binomial parameters $n$ and $p$ via $n=1/\sigma$ and $p=1/(1+\mu\sigma)$.
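A quick numerical check of this mapping (an illustrative addition, not part of the original example): with $n=1/\sigma$ and $p=1/(1+\mu\sigma)$, samples from numpy's negative binomial should have mean $\mu$ and variance $\mu+\sigma\mu^2$.
```
# verify the (n, p) mapping numerically
mu, sigma = 3.0, 0.5
samples = np.random.negative_binomial(1/sigma, 1/(1 + mu*sigma), size=100000)
print(samples.mean())   # close to mu = 3.0
print(samples.var())    # close to mu + sigma*mu**2 = 7.5
```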
```
#Generate some data from the Negative Binomial distribution.
n=30
X=np.linspace(0,1,n)[:,None]
for _ in range(5):
    X=np.concatenate((X,np.linspace(0,1,n)[:,None]),axis=0)
f1= lambda x: 1/np.exp(2) if x<0.5 else np.exp(2) #True mean function
f2= lambda x: np.exp(6*x**2-3) #True dispersion function
Y=np.array([np.random.negative_binomial(1/f2(x),1/(1+f1(x)*f2(x))) for x in X]).reshape(-1,1)
Xt=np.linspace(0,1.,200)[:,None]
Yt=np.array([f1(x) for x in Xt]).flatten()
plt.plot(Xt,Yt)
plt.scatter(X,Y,color='r')
#Train a 2-layered DGP (one GP in the first layer and two in the second corresponding to the mean and dispersion parameters) + NegBin model
layer1=[kernel(length=np.array([0.5]),name='matern2.5')]
layer2=[kernel(length=np.array([0.02]),name='matern2.5',scale_est=1,connect=np.arange(1)),
kernel(length=np.array([0.02]),name='matern2.5',scale_est=1,connect=np.arange(1))]
layer3=[NegBin()]
all_layer=combine(layer1,layer2,layer3)
m=dgp(X,[Y],all_layer)
m.train(N=500)
#Visualize the results
final_layer_obj=m.estimate()
emu=emulator(final_layer_obj)
mu,var=emu.predict(Xt, method='mean_var',full_layer=True) #Make mean-variance prediction
samp=emu.predict(Xt, method='sampling') #Draw some samples to obtain the quantiles of the overall model
quant=np.quantile(np.squeeze(samp), [0.05,0.5,0.95],axis=1) #Compute sample-based quantiles
fig, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(15,4))
ax1.set_title('Predicted and True NegBin Mean')
ax1.plot(Xt,Yt,color='black')
ax1.plot(Xt,mu[-1],'--',color='red',alpha=0.8,lw=3)
ax1.plot(Xt,quant[0,:],'--',color='b',lw=1)
ax1.plot(Xt,quant[1,:],'--',color='b',lw=1)
ax1.plot(Xt,quant[2,:],'--',color='b',lw=1)
mu_gp, var_gp=mu[-2][:,0], var[-2][:,0]
s=np.sqrt(var_gp)
u,l =mu_gp+2*s, mu_gp-2*s
ax2.set_title('Predicted and True logged NegBin Mean')
ax2.plot(Xt,mu_gp,color='r',alpha=1,lw=1)
ax2.plot(Xt,u,'--',color='g',lw=1)
ax2.plot(Xt,l,'--',color='g',lw=1)
ax2.plot(Xt,np.log(Yt),color='black',lw=1)
mu_gp, var_gp=mu[-2][:,1], var[-2][:,1]
s=np.sqrt(var_gp)
u,l =mu_gp+2*s, mu_gp-2*s
ax3.set_title('Predicted and True logged NegBin Dispersion')
ax3.plot(Xt,mu_gp,color='r',alpha=1,lw=1)
ax3.plot(Xt,u,'--',color='g',lw=1)
ax3.plot(Xt,l,'--',color='g',lw=1)
ax3.plot(Xt,np.array([np.log(f2(x)) for x in Xt]).reshape(-1,1),color='black',lw=1)
```
|
github_jupyter
|
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from *Introduction to TensorFlow* to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "`All modules imported`".
```
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
```
The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
```
def download(url, file):
    """
    Download file from <url>
    :param url: URL to file
    :param file: Local file path
    """
    if not os.path.isfile(file):
        print('Downloading ' + file + '...')
        urlretrieve(url, file)
        print('Download Finished')

# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')

# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
    'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
    'notMNIST_test.zip file is corrupted. Remove the file and try again.'

# Wait until you see that all files have been downloaded.
print('All files downloaded.')

def uncompress_features_labels(file):
    """
    Uncompress features and labels from a zip file
    :param file: The zip file to extract the data from
    """
    features = []
    labels = []
    with ZipFile(file) as zipf:
        # Progress Bar
        filenames_pbar = tqdm(zipf.namelist(), unit='files')
        # Get features and labels from all files
        for filename in filenames_pbar:
            # Check if the file is a directory
            if not filename.endswith('/'):
                with zipf.open(filename) as image_file:
                    image = Image.open(image_file)
                    image.load()
                    # Load image data as 1 dimensional array
                    # We're using float32 to save on memory space
                    feature = np.array(image, dtype=np.float32).flatten()
                # Get the letter from the filename. This is the letter of the image.
                label = os.path.split(filename)[1][0]
                features.append(feature)
                labels.append(label)
    return np.array(features), np.array(labels)

# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')

# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)

# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False

# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
```
<img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
## Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the `normalize_grayscale()` function to a range of `a=0.1` and `b=0.9`. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in [grayscale](https://en.wikipedia.org/wiki/Grayscale), the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
*If you're having trouble solving problem 1, you can view the solution [here](https://github.com/udacity/deep-learning/blob/master/intro-to-tensorflow/intro_to_tensorflow_solution.ipynb).*
```
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
    min_val = 0.1
    max_val = 0.9
    # normalize to 0..1
    x0 = (image_data - np.min(image_data)) / (np.max(image_data) - np.min(image_data))
    return min_val + x0 * (max_val - min_val)

### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
    normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
    [0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
     0.125098039216, 0.128235294118, 0.13137254902, 0.9],
    decimal=3)
np.testing.assert_array_almost_equal(
    normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254, 255])),
    [0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
     0.896862745098, 0.9])

if not is_features_normal:
    train_features = normalize_grayscale(train_features)
    test_features = normalize_grayscale(test_features)
    is_features_normal = True

print('Tests Passed!')

if not is_labels_encod:
    # Turn labels into numbers and apply One-Hot Encoding
    encoder = LabelBinarizer()
    encoder.fit(train_labels)
    train_labels = encoder.transform(train_labels)
    test_labels = encoder.transform(test_labels)
    # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
    train_labels = train_labels.astype(np.float32)
    test_labels = test_labels.astype(np.float32)
    is_labels_encod = True

print('Labels One-Hot Encoded')

assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'

# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
    train_features,
    train_labels,
    test_size=0.05,
    random_state=832289)
print('Training features and labels randomized and split.')

# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
    print('Saving data to pickle file...')
    try:
        with open('notMNIST.pickle', 'wb') as pfile:
            pickle.dump(
                {
                    'train_dataset': train_features,
                    'train_labels': train_labels,
                    'valid_dataset': valid_features,
                    'valid_labels': valid_labels,
                    'test_dataset': test_features,
                    'test_labels': test_labels,
                },
                pfile, pickle.HIGHEST_PROTOCOL)
    except Exception as e:
        print('Unable to save data to', pickle_file, ':', e)
        raise

print('Data cached in pickle file.')
```
# Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
```
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
```
## Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image's letter, so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- `features`
- Placeholder tensor for feature data (`train_features`/`valid_features`/`test_features`)
- `labels`
- Placeholder tensor for label data (`train_labels`/`valid_labels`/`test_labels`)
- `weights`
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">`tf.truncated_normal()` documentation</a> for help.
- `biases`
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> `tf.zeros()` documentation</a> for help.
*If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available [here](intro_to_tensorflow_solution.ipynb).*
```
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal([features_count, labels_count], seed=23))
biases = tf.Variable(tf.zeros([10]))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
```
<img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
## Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* **Epochs:** 1
* **Learning Rate:**
* 0.8 (0.09)
* 0.5 (0.78)
* 0.1 (0.74)
* 0.05 (0.72)
* 0.01 (0.58)
Configuration 2
* **Epochs:**
* 1 (0.76)
* 2 (0.77)
* 3 (0.78)
* 4 (0.78)
* 5 (0.79)
* **Learning Rate:** 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
*If you're having trouble solving problem 3, you can view the solution [here](intro_to_tensorflow_solution.ipynb).*
```
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = .2
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
```
## Test
You're going to test your model against your hold-out dataset (the testing data). This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
```
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
```
# Multiple layers
Good job! You built a one-layer TensorFlow network! However, you might want to build more than one layer. This is deep learning after all! In the next section, you will start to satisfy your need for more layers.
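As a quick preview of what that looks like, here is a minimal sketch of a single hidden ReLU layer stacked on the `features` placeholder defined above; the `hidden_units` value is an arbitrary illustrative choice and not part of this lab.
```
# Sketch only (not part of the lab): one hidden ReLU layer between the input and output.
# `hidden_units` is an arbitrary illustrative choice.
hidden_units = 256

hidden_weights = tf.Variable(tf.truncated_normal([features_count, hidden_units], seed=23))
hidden_biases = tf.Variable(tf.zeros([hidden_units]))
hidden_layer = tf.nn.relu(tf.matmul(features, hidden_weights) + hidden_biases)

output_weights = tf.Variable(tf.truncated_normal([hidden_units, labels_count], seed=23))
output_biases = tf.Variable(tf.zeros([labels_count]))
deep_logits = tf.matmul(hidden_layer, output_weights) + output_biases
```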
|
github_jupyter
|
# Properties of ELGs in DR7 Imaging
The purpose of this notebook is to quantify the observed properties (particularly size and ellipticity) of ELGs using DR7 catalogs of the COSMOS region. We use the HST/ACS imaging of objects in this region as "truth."
J. Moustakas
2018 Aug 15
```
import os, warnings, pdb
import numpy as np
import fitsio
from astropy.table import Table
import matplotlib.pyplot as plt
import seaborn as sns
rc = {'font.family': 'serif'}#, 'text.usetex': True}
sns.set(style='ticks', font_scale=1.5, palette='Set2', rc=rc)
%matplotlib inline
```
#### Read the HST/ACS parent (truth) catalog.
```
acsfile = os.path.join(os.getenv('DESI_ROOT'), 'target', 'analysis', 'truth', 'parent', 'cosmos-acs.fits.gz')
allacs = Table(fitsio.read(acsfile, ext=1, upper=True))
print('Read {} objects from {}'.format(len(allacs), acsfile))
```
#### Assemble all the functions we'll need.
```
def read_tractor(subset='0'):
"""Read the Tractor catalogs for a given cosmos subsest and cross-match
with the ACS catalog.
"""
from glob import glob
from astropy.table import vstack
from astrometry.libkd.spherematch import match_radec
tractordir = '/global/cscratch1/sd/dstn/cosmos-dr7-7{}/tractor'.format(subset)
tractorfiles = glob('{}/???/tractor-*.fits'.format(tractordir))
alldr7 = []
for ii, tractorfile in enumerate(tractorfiles):
#if (ii % 10) == 0:
# print('Read {:02d} / {:02d} Tractor catalogs from subset {}.'.format(ii, len(tractorfiles), subset))
alldr7.append(Table(fitsio.read(tractorfile, ext=1, upper=True)))
alldr7 = vstack(alldr7)
alldr7 = alldr7[alldr7['BRICK_PRIMARY']]
# Cross-match
m1, m2, d12 = match_radec(allacs['RA'], allacs['DEC'], alldr7['RA'],
alldr7['DEC'], 1./3600.0, nearest=True)
print('Read {} objects with HST/ACS and DR7 photometry'.format(len(m1)))
return allacs[m1], alldr7[m2]
def select_ELGs(acs, dr7):
from desitarget.cuts import isELG_south
def unextinct_fluxes(cat):
"""We need to unextinct the fluxes ourselves rather than using desitarget.cuts.unextinct_fluxes
because the Tractor catalogs don't have forced WISE photometry.
"""
res = np.zeros(len(cat), dtype=[('GFLUX', 'f4'), ('RFLUX', 'f4'), ('ZFLUX', 'f4')])
for band in ('G', 'R', 'Z'):
res['{}FLUX'.format(band)] = ( cat['FLUX_{}'.format(band)] /
cat['MW_TRANSMISSION_{}'.format(band)] )
return Table(res)
fluxes = unextinct_fluxes(dr7)
gflux, rflux, zflux = fluxes['GFLUX'], fluxes['RFLUX'], fluxes['ZFLUX']
ielg = isELG_south(gflux=fluxes['GFLUX'], rflux=fluxes['RFLUX'],
zflux=fluxes['ZFLUX'])#, gallmask=alltarg['ALLMASK_G'],
#rallmask=alltarg['ALLMASK_R'], zallmask=alltarg['ALLMASK_Z'])
print('Selected {} / {} ELGs'.format(np.sum(ielg), len(acs)))
return acs[ielg], dr7[ielg]
def get_mag(cat, band='R'):
return 22.5 - 2.5 * np.log10(cat['FLUX_{}'.format(band)])
def get_reff_acs(cat):
"""Convert SExtractor's flux_radius to half-light radius
using the relation (derived from simulations) in Sec 4.2
of Griffith et al. 2012.
"""
with warnings.catch_warnings():
warnings.simplefilter('ignore')
reff = np.log10(0.03 * 0.162 * cat['FLUX_RADIUS']**1.87)
return reff
def get_reff_tractor(cat):
fracdev = cat['FRACDEV']
reff = np.log10(fracdev * cat['SHAPEDEV_R'] + (1 - fracdev) * cat['SHAPEEXP_R'])
return reff
def get_ell_acs(cat):
ell = 1 - cat['B_IMAGE'] / cat['A_IMAGE']
return ell
def get_ell_tractor(cat):
fracdev = cat['FRACDEV']
ell_exp = np.hypot(cat['SHAPEEXP_E1'], cat['SHAPEEXP_E2'])
ell_dev = np.hypot(cat['SHAPEDEV_E1'], cat['SHAPEDEV_E2'])
ell = fracdev * ell_dev + (1 - fracdev) * ell_exp
return ell
def qa_true_properties(acs, dr7, subsetlabel='0', noplots=False,
pngsize=None, pngellipticity=None):
"""Use HST to characterize the *true* ELG size and ellipticity
distributions.
"""
istar = acs['CLASS_STAR'] > 0.9
igal = ~istar
nstar, ngal, nobj = np.sum(istar), np.sum(igal), len(acs)
print('True galaxies, N={} ({:.2f}%):'.format(ngal, 100*ngal/nobj))
for tt in ('PSF ', 'REX ', 'EXP ', 'DEV ', 'COMP'):
nn = np.sum(dr7['TYPE'][igal] == tt)
frac = 100 * nn / ngal
print(' {}: {} ({:.2f}%)'.format(tt, nn, frac))
print('True stars, N={} ({:.2f}%):'.format(nstar, 100*nstar/nobj))
for tt in ('PSF ', 'REX ', 'EXP ', 'DEV ', 'COMP'):
nn = np.sum(dr7['TYPE'][istar] == tt)
frac = 100 * nn / nstar
print(' {}: {} ({:.2f}%)'.format(tt, nn, frac))
if noplots:
return
rmag = get_mag(dr7)
reff = get_reff_acs(acs)
ell = get_ell_acs(acs)
# Size
j = sns.jointplot(rmag[igal], reff[igal], kind='hex', space=0, alpha=0.7,
stat_func=None, cmap='viridis', mincnt=3)
j.set_axis_labels('DECaLS $r$ (AB mag)', r'$\log_{10}$ (HST/ACS Half-light radius) (arcsec)')
j.fig.set_figwidth(10)
j.fig.set_figheight(7)
j.ax_joint.axhline(y=np.log10(0.45), color='k', ls='--')
j.ax_joint.scatter(rmag[istar], reff[istar], marker='s', color='orange', s=10)
j.ax_joint.text(20.8, np.log10(0.45)+0.1, r'$r_{eff}=0.45$ arcsec', ha='left', va='center',
fontsize=14)
j.ax_joint.text(0.15, 0.2, 'HST Stars', ha='left', va='center',
fontsize=14, transform=j.ax_joint.transAxes)
j.ax_joint.text(0.05, 0.9, '{}'.format(subsetlabel), ha='left', va='center',
fontsize=16, transform=j.ax_joint.transAxes)
if pngsize:
plt.savefig(pngsize)
# Ellipticity
j = sns.jointplot(rmag[igal], ell[igal], kind='hex', space=0, alpha=0.7,
stat_func=None, cmap='viridis', mincnt=3)
j.set_axis_labels('DECaLS $r$ (AB mag)', 'HST/ACS Ellipticity')
j.fig.set_figwidth(10)
j.fig.set_figheight(7)
j.ax_joint.scatter(rmag[istar], ell[istar], marker='s', color='orange', s=10)
j.ax_joint.text(0.15, 0.2, 'HST Stars', ha='left', va='center',
fontsize=14, transform=j.ax_joint.transAxes)
j.ax_joint.text(0.05, 0.9, '{}'.format(subsetlabel), ha='left', va='center',
fontsize=16, transform=j.ax_joint.transAxes)
if pngellipticity:
plt.savefig(pngellipticity)
def qa_compare_radii(acs, dr7, subsetlabel='0', seeing=None, png=None):
"""Compare the HST and Tractor sizes."""
igal = dr7['TYPE'] != 'PSF '
reff_acs = get_reff_acs(acs[igal])
reff_tractor = get_reff_tractor(dr7[igal])
sizelim = (-1.5, 1)
j = sns.jointplot(reff_acs, reff_tractor, kind='hex', space=0, alpha=0.7,
stat_func=None, cmap='viridis', mincnt=3,
xlim=sizelim, ylim=sizelim)
j.set_axis_labels(r'$\log_{10}$ (HST/ACS Half-light radius) (arcsec)',
r'$\log_{10}$ (Tractor/DR7 Half-light radius) (arcsec)')
j.fig.set_figwidth(10)
j.fig.set_figheight(7)
j.ax_joint.plot([-2, 2], [-2, 2], color='k')
if seeing:
j.ax_joint.axhline(y=np.log10(seeing), ls='--', color='k')
j.ax_joint.text(0.05, 0.9, '{}'.format(subsetlabel), ha='left', va='center',
fontsize=16, transform=j.ax_joint.transAxes)
if png:
plt.savefig(png)
```
### Use subset 0 to characterize the "true" ELG properties.
```
subset = '0'
allacs, alldr7 = read_tractor(subset=subset)
acs, dr7 = select_ELGs(allacs, alldr7)
subsetlabel = 'Subset {}\n{:.3f}" seeing'.format(subset, np.median(alldr7['PSFSIZE_R']))
qa_true_properties(acs, dr7, subsetlabel=subsetlabel, pngsize='truesize.png', pngellipticity='trueell.png')
```
### Compare radii measured in three subsets of increasingly poor seeing (but same nominal depth).
```
for subset in ('0', '4', '9'):
allacs, alldr7 = read_tractor(subset=subset)
acs, dr7 = select_ELGs(allacs, alldr7)
medseeing = np.median(alldr7['PSFSIZE_R'])
subsetlabel = 'Subset {}\n{:.3f}" seeing'.format(subset, medseeing)
qa_compare_radii(acs, dr7, subsetlabel=subsetlabel, png='size_compare_subset{}.png'.format(subset))
subset = '9'
allacs, alldr7 = read_tractor(subset=subset)
acs, dr7 = select_ELGs(allacs, alldr7)
subsetlabel = 'Subset {}\n{:.3f}" seeing'.format(subset, np.median(alldr7['PSFSIZE_R']))
qa_true_properties(acs, dr7, subsetlabel=subsetlabel, noplots=True)
```
|
github_jupyter
|
```
# default_exp core
```
# hmd_newspaper_dl
> Download Heritage made Digital Newspaper from the BL repository
The aim of this code is to make it easier to download all of the [Heritage Made Digital Newspapers](https://bl.iro.bl.uk/collections/353c908d-b495-4413-b047-87236d2573e3?locale=en) from the British Library's [Research Repository](https://bl.iro.bl.uk/).
```
# export
import concurrent
import itertools
import json
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
import random
import sys
import time
from collections import namedtuple
from functools import lru_cache
from operator import itemgetter
# from os import umask
import os
from pathlib import Path
from typing import List, Optional, Union
import requests
from bs4 import BeautifulSoup
from fastcore.script import *
from fastcore.test import *
from fastcore.net import urlvalid
from loguru import logger
from nbdev.showdoc import *
from tqdm import tqdm
```
## Getting newspaper links
The Newspapers are currently organised by newspaper title under a collection:

Under each title you can download a zip file representing a year for that particular newspaper title.

If we only want a subset of years or titles we could download these manually, but if we're interested in using computational methods that's a bit slow. What we need to do is grab all of the URLs for each title so we can bulk download them.
```
# export
def _get_link(x: str):
end = x.split("/")[-1]
return "https://bl.iro.bl.uk/concern/datasets/" + end
```
This is a small helper function that will generate the correct URL once we have the ID for a title.
```
# export
@lru_cache(256)
def get_newspaper_links():
"""Returns titles from the Newspaper Collection"""
urls = [
f"https://bl.iro.bl.uk/collections/9a6a4cdd-2bfe-47bb-8c14-c0a5d100501f?locale=en&page={page}"
for page in range(1, 3)
]
link_tuples = []
for url in urls:
r = requests.get(url)
r.raise_for_status()
soup = BeautifulSoup(r.text, "lxml")
links = soup.select(".hyc-container > .hyc-bl-results a[id*=src_copy_link]")
for link in links:
url = link["href"]
if url:
t = (link.text, _get_link(url))
link_tuples.append(t)
return link_tuples
```
This function starts from the Newspaper collection and then uses BeautifulSoup to scrape all of the URLs which link to a newspaper title. We have a hard-coded URL here, which isn't very good practice, but since we're writing this code for a fairly narrow purpose we won't worry about that here.
If we call this function we get a bunch of links back.
```
links = get_newspaper_links()
links
len(links)
```
Although this code has a fairly narrow scope, we might still want some tests to check we're not completely off. `nbdev` makes this super easy. Here we check that we get back what we expect in terms of tuple length and that our URLs look like valid URLs.
```
assert len(links[0]) == 2 # test tuple len
assert (
next(iter(set(map(urlvalid, map(itemgetter(1), links))))) == True
) # check second item valid url
assert len(links) == 10
assert type(links[0]) == tuple
assert (list(map(itemgetter(1), links))[-1]).startswith("https://")
# export
@lru_cache(256)
def get_download_urls(url: str) -> list:
    """Given a dataset page on the IRO repo return all download links for that page"""
    urls = []
    try:
        r = requests.get(url, timeout=30)
    except requests.exceptions.MissingSchema as E:
        print(E)
        return urls
    soup = BeautifulSoup(r.text, "lxml")
    link_ends = soup.find_all("a", id="file_download")
    urls = ["https://bl.iro.bl.uk" + link["href"] for link in link_ends]
    return list(set(urls))
```
`get_download_urls` takes a 'title' URL and then grabs all of the URLs for the zip files related to that title.
```
test_link = links[0][1]
test_link
get_download_urls(test_link)
# export
def create_session() -> requests.sessions.Session:
"""returns a requests session"""
retry_strategy = Retry(total=60)
adapter = HTTPAdapter(max_retries=retry_strategy)
session = requests.Session()
session.mount("https://", adapter)
session.mount("http://", adapter)
return session
```
`create_session` just adds some extra settings to our `requests` session to try and make it a little more robust. This is probably not necessary here, but it can be useful to bump up the number of retries.
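For example, here is a minimal sketch of a more aggressive retry policy, reusing the imports from the export cell above; the specific numbers and status codes are illustrative assumptions, not what this package actually uses.
```
# Sketch only: retry transient server errors with exponential backoff.
# The numbers and status codes below are illustrative, not this package's defaults.
retry_strategy = Retry(
    total=10,
    backoff_factor=1,  # sleep roughly 1s, 2s, 4s, ... between attempts
    status_forcelist=[429, 500, 502, 503, 504],
)
adapter = HTTPAdapter(max_retries=retry_strategy)
session = requests.Session()
session.mount("https://", adapter)
session.mount("http://", adapter)
```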
```
# export
def _download(url: str, dir: Union[str, Path]):
time.sleep(10)
fname = None
s = create_session()
try:
r = s.get(url, stream=True, timeout=(30))
r.raise_for_status()
# fname = r.headers["Content-Disposition"].split('_')[1]
fname = "_".join(r.headers["Content-Disposition"].split('"')[1].split("_")[0:5])
if fname:
with open(f"{dir}/{fname}", "wb") as f:
for chunk in r.iter_content(chunk_size=8192):
f.write(chunk)
except KeyError:
pass
except requests.exceptions.RequestException as request_exception:
logger.error(request_exception)
return fname
# for url in get_download_urls("https://bl.iro.bl.uk/concern/datasets/93ec8ab4-3348-409c-bf6d-a9537156f654"):
# s = create_session()
# r = s.get(url, stream=True, timeout=(30))
# print("_".join(r.headers["Content-Disposition"].split('"')[1].split("_")[0:5]))
# s = create_session()
# r = s.get(test_url, stream=True, timeout=(30))
# "_".join(r.headers["Content-Disposition"].split('"')[1].split("_")[0:5])
```
This downloads a file and logs an exception if something goes wrong. Again we do a little test.
```
# slow
test_url = (
"https://bl.iro.bl.uk/downloads/0ea7aa1f-3b4f-4972-bc12-b7559769471f?locale=en"
)
Path("test_dir").mkdir()
test_dir = Path("test_dir")
_download(test_url, test_dir)
# slow
assert list(test_dir.iterdir())[0].suffix == ".zip"
assert len(list(test_dir.iterdir())) == 1
# tidy up
[f.unlink() for f in test_dir.iterdir()]
test_dir.rmdir()
# basic test to check bad urls won't raise unhandled exceptions
bad_link = "https://bl.oar.bl.uk/fail_uploads/download_file?fileset_id=0ea7aa1-3b4f-4972-bc12-b75597694f"
_download(bad_link, "test_dir")
# export
def download_from_urls(urls: List[str], save_dir: Union[str, Path], n_threads: int = 4):
"""Downloads from an input lists of `urls` and saves to `save_dir`, option to set `n_threads` default = 8"""
download_count = 0
tic = time.perf_counter()
Path(save_dir).mkdir(exist_ok=True)
logger.remove()
logger.add(lambda msg: tqdm.write(msg, end=""))
with tqdm(total=len(urls)) as progress:
with concurrent.futures.ThreadPoolExecutor(max_workers=n_threads) as executor:
future_to_url = {
executor.submit(_download, url, save_dir): url for url in urls
}
for future in future_to_url:
future.add_done_callback(lambda p: progress.update(1))
for future in concurrent.futures.as_completed(future_to_url):
url = future_to_url[future]
try:
data = future.result()
except Exception as e:
logger.error("%r generated an exception: %s" % (url, e))
else:
if data:
logger.info(f"{url} downloaded to {data}")
download_count += 1
toc = time.perf_counter()
logger.remove()
logger.info(f"Downloads completed in {toc - tic:0.4f} seconds")
return download_count
```
`download_from_urls` takes a list of URLs and downloads them to a specified directory.
```
test_links = [
"https://bl.iro.bl.uk/downloads/0ea7aa1f-3b4f-4972-bc12-b7559769471f?locale=en",
"https://bl.iro.bl.uk/downloads/80708825-d96a-4301-9496-9598932520f4?locale=en",
]
download_from_urls(test_links, "test_dir")
# slow
assert len(test_links) == len(os.listdir("test_dir"))
test_dir = Path("test_dir")
[f.unlink() for f in test_dir.iterdir()]
test_dir.rmdir()
# slow
test_some_bad_links = [
"https://bl.oar.bl.uk/fail_uploads/download_file?fileset_id=0ea7aa1f-3b4f-4972-bc12-b7559769471f",
"https://bl.oar.bl.uk/fail_uploads/download_file?fileset_id=7ac7a0cb-29a2-4172-8b79-4952e2c9b",
]
download_from_urls(test_some_bad_links, "test_dir")
# slow
test_dir = Path("test_dir")
[f.unlink() for f in test_dir.iterdir()]
test_dir.rmdir()
# export
@call_parse
def cli(
save_dir: Param("Output Directory", str),
n_threads: Param("Number threads to use") = 8,
subset: Param("Download subset of HMD", int, opt=True) = None,
url: Param("Download from a specific URL", str, opt=True) = None,
):
"Download HMD newspaper from iro to `save_dir` using `n_threads`"
if url is not None:
logger.info(f"Getting zip download file urls for {url}")
try:
zip_urls = get_download_urls(url)
print(zip_urls)
except Exception as e:
logger.error(e)
download_count = download_from_urls(zip_urls, save_dir, n_threads=n_threads)
else:
logger.info("Getting title urls")
title_urls = get_newspaper_links()
logger.info(f"Found {len(title_urls)} title urls")
all_urls = []
print(title_urls)
for url in title_urls:
logger.info(f"Getting zip download file urls for {url}")
try:
zip_urls = get_download_urls(url[1])
all_urls.append(zip_urls)
except Exception as e:
logger.error(e)
all_urls = list(itertools.chain(*all_urls))
if subset:
if len(all_urls) < subset:
raise ValueError(
f"Size of requested sample {subset} is larger than total number of urls:{all_urls}"
)
all_urls = random.sample(all_urls, subset)
print(all_urls)
download_count = download_from_urls(all_urls, save_dir, n_threads=n_threads)
request_url_count = len(all_urls)
if request_url_count == download_count:
logger.info(
f"\U0001F600 Requested count of urls: {request_url_count} matches number downloaded: {download_count}"
)
if request_url_count > download_count:
logger.warning(
f"\U0001F622 Requested count of urls: {request_url_count} higher than number downloaded: {download_count}"
)
if request_url_count < download_count:
logger.warning(
f"\U0001F937 Requested count of urls: {request_url_count} lower than number downloaded: {download_count}"
)
```
We finally use `fastcore` to make a little CLI that we can use to download all of our files. We even get a little help flag for free 😀. We can either call this as a Python function, or, when we install the Python package, it gets registered under `console_scripts` and can be used like other command line tools.
```
# cli("test_dir", subset=2)
# assert all([f.suffix == '.zip' for f in Path("test_dir").iterdir()])
# assert len(list(Path("test_dir").iterdir())) == 2
from nbdev.export import notebook2script
notebook2script()
# test_dir = Path("test_dir")
# [f.unlink() for f in test_dir.iterdir()]
# test_dir.rmdir()
```
|
github_jupyter
|
<img src="../figures/HeaDS_logo_large_withTitle.png" width="300">
<img src="../figures/tsunami_logo.PNG" width="600">
[](https://colab.research.google.com/github/Center-for-Health-Data-Science/PythonTsunami/blob/fall2021/Conditionals/Conditions.ipynb)
# Boolean and Conditional logic
*prepared by [Katarina Nastou](https://www.cpr.ku.dk/staff/?pure=en/persons/672471) and [Rita Colaço](https://www.cpr.ku.dk/staff/?id=621366&vis=medarbejder)*
## Objectives
- Understand boolean operators and how variables can relate
- Learn about "Truthiness"
- Learn how to write conditional statements and use proper indentation
- Learn how to use comparison operators to make a basic programs
## User Input
There is a built-in function in Python called "input" that will prompt the user and store the result to a variable.
```
name = input("Enter your name here: ")
print(name)
```
## Booleans
```
x = True
print(x)
print(type(x))
```
## Comparison Operators
Comparison operators can tell how two Python values relate, resulting in a boolean. They answer yes/no questions.
In the example `a = 2` and `b = 2`, i.e. we are comparing integers (`int`)
operator | Description | Result | Example (`a, b = 2, 2`)
--- | --- |--- | ---
`==` | **a** equal to **b** | True if **a** has the same value as **b** | `a == b # True`
`!=` | **a** not equal to **b** | True if **a** does NOT have the same value as **b** | `a != b # False`
`>` | **a** greater than **b** | True if **a** is greater than **b** | `a > b # False`
`<` | **a** less than **b** | True if **a** is less than **b** | `a < b # False`
`>=` | **a** greater than or equal to **b** | True if **a** is greater than or equal to **b** | `a >= b # True`
`<=` | **a** less than or equal to **b** | True if **a** is less than or equal to **b** | `a <= b # True`
> Hint: The result of a comparison is defined by the type of **a** and **b**, and the **operator** used
### Numeric comparisons
```
a, b = 2, 2
a >= b
```
### String comparisons
```
"carl" < "chris"
```
### Quiz
**Question 1**: What will be the result of this comparison?
```python
x = 2
y = "Anthony"
x < y
```
1. True
2. False
3. Error
**Question 2**: What about this comparison?
```python
x = 12.99
y = 12
x >= y
```
1. True
2. False
3. Error
**Question 3**: And this comparison?
```python
x = 5
y = "Hanna"
x == y
```
1. True
2. False
3. Error
## Truthiness
In Python, all conditional checks resolve to `True` or `False`.
```python
x = 1
x == 1 # True
x == 0 # False
```
Besides false conditional checks, other things that are naturally "falsy" include: empty lists/tuples/arrays, empty strings, None, and zero (and non-empty things are normally `True`).
> "Although Python has a bool type, it accepts any object in a boolean context, such as the
> expression controlling an **if** or **while** statement, or as operands to **and**, **or**, and **not**.
> To determine whether a value **x** is _truthy_ or _falsy_, Python applies `bool(x)`, which always returns True or False.
>
> (...) Basically, `bool(x)` calls `x.__bool__()` and uses the result.
> If `__bool__` is not implemented, Python tries to invoke `x.__len__()`, and if that returns zero, bool returns `False`.
> Otherwise bool returns `True`." (Ramalho 2016: Fluent Python, p. 12)
```
a = []
bool(a)
a = ''
bool(a)
a = None
bool(a)
a = 0
b = 1
print(bool(a))
print(bool(b))
```
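Here is a minimal sketch of the `__len__` fallback described in the quote above; the `Basket` class is purely an illustration.
```
class Basket:
    """bool() falls back to __len__ here because __bool__ is not defined."""
    def __init__(self, items):
        self.items = items
    def __len__(self):
        return len(self.items)

print(bool(Basket([])))         # False, because the length is 0
print(bool(Basket(['apple'])))  # True, because the length is non-zero
```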
## Logical Operators or "How to combine boolean values"
In Python, the following operators can be used to make Boolean Logic comparisons. The three most common ones are `and`, `or` and `not`.
`and`, True if both **a** AND **b** are true (logical conjunction)
```python
cats_are_cute = True
dogs_are_cute = True
cats_are_cute and dogs_are_cute # True
```
> But `True and False`, `False and True` and `False and False` all evaluate to `False`.
```
x = 134
x > 49 and x < 155
```
`or`, True if either **a** OR **b** are true (logical disjunction)
```python
am_tired = True
is_bedtime = False
am_tired or is_bedtime # True
```
> `True or True`, `False or True` and `True or False` evaluate to `True`.
> Only `False or False` results in `False`.
```
x = 5
x < 7 or x > 11
```
`not`, True if the opposite of **a** is true (logical negation)
```python
is_weekend = True
not is_weekend # False
```
> `not True` -> False
> `not False` -> True
### Order of precedence
Can you guess the result of this expression?
```python
True or True and False
```
1. True
2. False
3. Error
```
# True or True and False
```
Instead of memorizing the [order of precedence](https://docs.python.org/3/reference/expressions.html#operator-precedence), we can use parentheses to define the order in which operations are performed.
- Helps prevent bugs
- Makes your intentions clearer to whomever reads your code
```
# (True or True) and False
```
## Special operators
### Identity operators
Operator | Description |Example (`a, b = 2, 3`)
--- | --- |---
`is` | True if the operands are identical (refer to the same object) | `a is 2 # True`
`is not` | True if the operands are not identical (do not refer to the same object) | `a is not b # False`
In python, `==` and `is` are very similar operators, however they are NOT the same.
`==` compares **equality**, while `is` compares by checking for the **identity**.
Example 1:
```
a = 1
print(a == 1)
print(a is 1)
```
Example 2:
```
a = [1, 2, 3]
b = [1, 2, 3]
print(a == b)
print(a is b)
```
**`is`** comparisons only return `True` if the variables reference the same item *in memory*. It is recommended to [test Singletons with `is`](https://www.python.org/dev/peps/pep-0008/#programming-recommendations) and not `==`, e.g. `None`, `True`, `False`.
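A tiny illustration of why this matters for `None`:
```
value = None
print(value is None)   # True - the recommended way to test for None
print(value == None)   # also True here, but == can be overridden by a class,
                       # which is why `is` is preferred for singletons
```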
### Membership operators
Operator | Description |Example (`a = [1, 2, 3]`)
--- | --- |---
`in` | True if value/variable is found in the sequence | `2 in a # True`
`not in` | True if value/variable is not found in the sequence | `5 not in a # False`
```
aa = ['alanine', 'glycine', 'tyrosine']
'alanine' in aa
'gly' in aa[0]
```
## Quiz
**Question 1**: What is truthiness?
1. Statements or facts that seem "kind of true" even if they aren't true necessarily
2. Statements or expressions that result to a True value
3. Code that never lies
4. Computers have the tendency to believe things are True until proven False
**Question 2**: Is the following expression True or False?
```python
x = 15
y = 0
bool(x or y) # this expression
```
**Question 3**: Is the following expression True or False?
```python
x = 0
y = None
bool(x or y) # this expression
```
**Question 4**: (Hard) What is the result of the following expression?
```python
x = 233
y = 0
z = None
x or y or z # this expression
```
**Question 5**: Hardest question! Add parentheses to the expression so that it shows the order of precedence explicitly.
```python
x = 0
y = -1
x or y and x - 1 == y and y + 1 == x
```
> Tip: check the [order of precedence](https://docs.python.org/3/reference/expressions.html#operator-precedence).
## Conditional Statements
[Conditional statements](https://docs.python.org/3/tutorial/controlflow.html#if-statements), use the keywords `if`, `elif` and `else`, and they let you control what pieces of code are run based on the value of some Boolean condition.
```python
if some condition is True:
do something
elif some other condition is True:
do something
else:
do something
```
> Recipe: if condition, execute expression
> The `if` condition always finishes with a `:` (colon)
> The expression to be executed if the condition succeeds always needs to be indented (spaces or a tab, depending on the editor you are using)
```
cats_are_cute = True
dogs_are_cute = True
if cats_are_cute and dogs_are_cute:
print("Pets are cute!")
```
> Here the `if` statement automatically calls `bool` on the expression, e.g. `bool(cats_are_cute and dogs_are_cute)`.
Adding the `else` statement:
```
is_weekend = True
if not is_weekend:
print("It's Monday.")
print("Go to work.")
else:
print("Sleep in and enjoy the beach.")
```
For more customized behavior, use `elif`:
```
am_tired = True
is_bedtime = True
if not am_tired:
print("One more episode.")
elif am_tired and is_bedtime:
print("Go to sleep.")
else:
print("Go to sleep anyways.")
```
### Quiz:
**Question 1**: If you set the name variable to "Gandalf" and run the script below, what will be the output?
```
name = input("Enter your name here: ")
if name == "Gandalf":
print("Run, you fools!")
elif name == "Aragorn":
print("There is always hope.")
else:
print("Move on then!")
```
**Question 2**: Why do we use `==` and not `is` in the code above?
## Group exercises
### Exercise 1
At the next code block there is some code that randomly picks a number from 1 to 10.
Write a conditional statement to check if `choice` is 5 and print `"Five it is!"` and in any other case print `"Well that's not a 5!"`.
```
from random import randint
choice = randint(1,10)
# YOUR CODE GOES HERE vvvvvv
```
### Exercise 2
At the next code block there is some code that randomly picks a number from 1 to 1000. Use a conditional statement to check if the number is odd and print `"odd"`, otherwise print `"even"`.
> *Hint*: Remember the numerical operators we saw before in [Numbers_and_operators.ipynb](https://colab.research.google.com/github/Center-for-Health-Data-Science/PythonTsunami/blob/fall2021/Numbers_and_operators/Numbers_and_operators.ipynb) and think of which one can help you find an odd number.
```
from random import randint
num = randint(1, 1000) #picks random number from 1-1000
# YOUR CODE GOES HERE vvvvvvv
```
### Exercise 3
Create a variable and assign an integer as value, then build a conditional to test it:
- If the value is below 0, print "The value is negative"
- If the value is between 0 and 20 (including 0 and 20), print the value
- Otherwise, print "Out of scope"
Test it by changing the value of the variable
### Exercise 4
Read the file 'data/samples.txt' following the notebook [Importing data](https://colab.research.google.com/github/Center-for-Health-Data-Science/PythonTsunami/blob/fall2021/Importing_data/Importing_data.ipynb) and check if Denmark is among the countries in this file.
## Recap
- Conditional logic can control the flow of a program
- We can use comparison and logical operators to make conditional if statements
- In general, always make sure you make comparisons between objects of the same type (integers and floats are the exceptions)
- Conditional logic evaluates whether statements are true or false
*Note: This notebook's content structure has been adapted from Colt Steele's slides used in [Modern Python 3 Bootcamp Course](https://www.udemy.com/course/the-modern-python3-bootcamp/) on Udemy*
|
github_jupyter
|
# Getting Started with gensim
This section introduces the basic concepts and terms needed to understand and use `gensim` and provides a simple usage example.
## Core Concepts and Simple Example
At a very high level, `gensim` is a tool for discovering the semantic structure of documents by examining the patterns of words (or higher-level structures such as entire sentences or documents). `gensim` accomplishes this by taking a *corpus*, a collection of text documents, and producing a *vector* representation of the text in the corpus. The vector representation can then be used to train a *model*, which is an algorithm for creating different, usually more semantic, representations of the data. These three concepts are key to understanding how `gensim` works, so let's take a moment to explain what each of them means. At the same time, we'll work through a simple example that illustrates each of them.
### Corpus
A *corpus* is a collection of digital documents. This collection is the input to `gensim` from which it will infer the structure of the documents, their topics, etc. The latent structure inferred from the corpus can later be used to assign topics to new documents which were not present in the training corpus. For this reason, we also refer to this collection as the *training corpus*. No human intervention (such as tagging the documents by hand) is required - the topic classification is [unsupervised](https://en.wikipedia.org/wiki/Unsupervised_learning).
For our corpus, we'll use a list of 9 strings, each consisting of only a single sentence.
```
raw_corpus = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
```
This is a particularly small example of a corpus for illustration purposes. Another example could be a list of all the plays written by Shakespeare, list of all wikipedia articles, or all tweets by a particular person of interest.
After collecting our corpus, there are typically a number of preprocessing steps we want to undertake. We'll keep it simple and just remove some commonly used English words (such as 'the') and words that occur only once in the corpus. In the process of doing so, we'll [tokenise][1] our data. Tokenization breaks up the documents into words (in this case using space as a delimiter).
[1]: https://en.wikipedia.org/wiki/Tokenization_(lexical_analysis)
```
# Create a set of frequent words
stoplist = set('for a of the and to in'.split(' '))
# Lowercase each document, split it by white space and filter out stopwords
texts = [[word for word in document.lower().split() if word not in stoplist]
for document in raw_corpus]
# Count word frequencies
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
# Only keep words that appear more than once
processed_corpus = [[token for token in text if frequency[token] > 1] for text in texts]
processed_corpus
```
Before proceeding, we want to associate each word in the corpus with a unique integer ID. We can do this using the `gensim.corpora.Dictionary` class. This dictionary defines the vocabulary of all words that our processing knows about.
```
from gensim import corpora
dictionary = corpora.Dictionary(processed_corpus)
print(dictionary)
```
Because our corpus is small, there are only 12 different tokens in this `Dictionary`. For larger corpora, dictionaries that contain hundreds of thousands of tokens are quite common.
### Vector
To infer the latent structure in our corpus we need a way to represent documents that we can manipulate mathematically. One approach is to represent each document as a vector. There are various approaches for creating a vector representation of a document but a simple example is the *bag-of-words model*. Under the bag-of-words model each document is represented by a vector containing the frequency counts of each word in the dictionary. For example, given a dictionary containing the words `['coffee', 'milk', 'sugar', 'spoon']` a document consisting of the string `"coffee milk coffee"` could be represented by the vector `[2, 1, 0, 0]` where the entries of the vector are (in order) the occurrences of "coffee", "milk", "sugar" and "spoon" in the document. The length of the vector is the number of entries in the dictionary. One of the main properties of the bag-of-words model is that it completely ignores the order of the tokens in the document that is encoded, which is where the name bag-of-words comes from.
Our processed corpus has 12 unique words in it, which means that each document will be represented by a 12-dimensional vector under the bag-of-words model. We can use the dictionary to turn tokenized documents into these 12-dimensional vectors. We can see what these IDs correspond to:
```
print(dictionary.token2id)
```
For example, suppose we wanted to vectorize the phrase "Human computer interaction" (note that this phrase was not in our original corpus). We can create the bag-of-word representation for a document using the `doc2bow` method of the dictionary, which returns a sparse representation of the word counts:
```
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
new_vec
```
The first entry in each tuple corresponds to the ID of the token in the dictionary, the second corresponds to the count of this token.
Note that "interaction" did not occur in the original corpus and so it was not included in the vectorization. Also note that this vector only contains entries for words that actually appeared in the document. Because any given document will only contain a few words out of the many words in the dictionary, words that do not appear in the vectorization are represented as implicitly zero as a space saving measure.
We can convert our entire original corpus to a list of vectors:
```
bow_corpus = [dictionary.doc2bow(text) for text in processed_corpus]
bow_corpus
```
Note that while this list lives entirely in memory, in most applications you will want a more scalable solution. Luckily, `gensim` allows you to use any iterator that returns a single document vector at a time. See the documentation for more details.
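As a minimal sketch of such a streaming corpus (the file name `mycorpus.txt` is hypothetical, with one document per line):
```
class MyStreamingCorpus:
    """Yields one bag-of-words vector at a time instead of keeping the whole list in memory."""
    def __iter__(self):
        for line in open('mycorpus.txt'):  # hypothetical file, one document per line
            tokens = [word for word in line.lower().split() if word not in stoplist]
            yield dictionary.doc2bow(tokens)

# Any iterable like this can be passed wherever bow_corpus is used below.
```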
### Model
Now that we have vectorized our corpus we can begin to transform it using *models*. We use model as an abstract term referring to a transformation from one document representation to another. In `gensim` documents are represented as vectors so a model can be thought of as a transformation between two vector spaces. The details of this transformation are learned from the training corpus.
One simple example of a model is [tf-idf](https://en.wikipedia.org/wiki/Tf%E2%80%93idf). The tf-idf model transforms vectors from the bag-of-words representation to a vector space where the frequency counts are weighted according to the relative rarity of each word in the corpus.
Here's a simple example. Let's initialize the tf-idf model, training it on our corpus and transforming the string "system minors":
```
from gensim import models
# train the model
tfidf = models.TfidfModel(bow_corpus)
# transform the "system minors" string
tfidf[dictionary.doc2bow("system minors".lower().split())]
```
The `tfidf` model again returns a list of tuples, where the first entry is the token ID and the second entry is the tf-idf weighting. Note that the ID corresponding to "system" (which occurred 4 times in the original corpus) has been weighted lower than the ID corresponding to "minors" (which only occurred twice).
`gensim` offers a number of different models/transformations. See [Transformations and Topics](Topics_and_Transformations.ipynb) for details.
|
github_jupyter
|
# Simple Test between NumPy and Numba
$$
\Gamma = \sqrt{\frac{\eta_H}{\eta_V} \kappa^2 + \eta_H \zeta_H}
$$
```
import numba
import cython
import numexpr
import numpy as np
%load_ext cython
# Used cores by numba can be shown with (by default all cores are used):
#print(numba.config.NUMBA_DEFAULT_NUM_THREADS)
# This can be changed with the following line
#numba.config.NUMBA_NUM_THREADS = 4
from empymod import filters
from scipy.constants import mu_0 # Magn. permeability of free space [H/m]
from scipy.constants import epsilon_0 # Elec. permittivity of free space [F/m]
res = np.array([2e14, 0.3, 1, 50, 1]) # nlay
freq = np.arange(1, 201)/20. # nfre
off = np.arange(1, 101)*1000 # noff
lambd = filters.key_201_2009().base/off[:, None] # nwav
aniso = np.array([1, 1, 1.5, 2, 1])
epermH = np.array([1, 80, 9, 20, 1])
epermV = np.array([1, 40, 9, 10, 1])
mpermH = np.array([1, 1, 3, 5, 1])
etaH = 1/res + np.outer(2j*np.pi*freq, epermH*epsilon_0)
etaV = 1/(res*aniso*aniso) + np.outer(2j*np.pi*freq, epermV*epsilon_0)
zetaH = np.outer(2j*np.pi*freq, mpermH*mu_0)
```
## NumPy
NumPy version to check the result and compare times
```
def test_numpy(eH, eV, zH, l):
return np.sqrt((eH/eV) * (l*l) + (zH*eH))
```
## Numba @vectorize
This is exactly the same function as with NumPy, just with the `@vectorize` decorator added.
```
@numba.vectorize('c16(c16, c16, c16, f8)')
def test_numba_vnp(eH, eV, zH, l):
return np.sqrt((eH/eV) * (l*l) + (zH*eH))
@numba.vectorize('c16(c16, c16, c16, f8)', target='parallel')
def test_numba_v(eH, eV, zH, l):
return np.sqrt((eH/eV) * (l*l) + (zH*eH))
```
## Numba @njit
```
@numba.njit
def test_numba_nnp(eH, eV, zH, l):
o1, o3 = eH.shape
o2, o4 = l.shape
out = np.empty((o1, o2, o3, o4), dtype=numba.complex128)
for nf in numba.prange(o1):
for nl in numba.prange(o3):
ieH = eH[nf, nl]
ieV = eV[nf, nl]
izH = zH[nf, nl]
for no in numba.prange(o2):
for ni in numba.prange(o4):
il = l[no, ni]
out[nf, no, nl, ni] = np.sqrt(ieH/ieV * il*il + izH*ieH)
return out
@numba.njit(nogil=True, parallel=True)
def test_numba_n(eH, eV, zH, l):
o1, o3 = eH.shape
o2, o4 = l.shape
out = np.empty((o1, o2, o3, o4), dtype=numba.complex128)
for nf in numba.prange(o1):
for nl in numba.prange(o3):
ieH = eH[nf, nl]
ieV = eV[nf, nl]
izH = zH[nf, nl]
for no in numba.prange(o2):
for ni in numba.prange(o4):
il = l[no, ni]
out[nf, no, nl, ni] = np.sqrt(ieH/ieV * il*il + izH*ieH)
return out
```
## Run comparison for a small and a big matrix
```
eH = etaH[:, None, :, None]
eV = etaV[:, None, :, None]
zH = zetaH[:, None, :, None]
l = lambd[None, :, None, :]
# Output shape
out_shape = (freq.size, off.size, res.size, filters.key_201_2009().base.size)
print(' Shape Test Matrix ::', out_shape, '; total # elements:: '+str(freq.size*off.size*res.size*filters.key_201_2009().base.size))
print('------------------------------------------------------------------------------------------')
print(' NumPy :: ', end='')
# Get NumPy result for comparison
numpy_result = test_numpy(eH, eV, zH, l)
# Get runtime
%timeit test_numpy(eH, eV, zH, l)
print(' Numba @vectorize :: ', end='')
# Ensure it agrees with NumPy
numba_vnp_result = test_numba_vnp(eH, eV, zH, l)
if not np.allclose(numpy_result, numba_vnp_result, atol=0, rtol=1e-10):
print(' * FAIL, DOES NOT AGREE WITH NumPy RESULT!')
# Get runtime
%timeit test_numba_vnp(eH, eV, zH, l)
print(' Numba @vectorize par :: ', end='')
# Ensure it agrees with NumPy
numba_v_result = test_numba_v(eH, eV, zH, l)
if not np.allclose(numpy_result, numba_v_result, atol=0, rtol=1e-10):
print(' * FAIL, DOES NOT AGREE WITH NumPy RESULT!')
# Get runtime
%timeit test_numba_v(eH, eV, zH, l)
print(' Numba @njit :: ', end='')
# Ensure it agrees with NumPy
numba_nnp_result = test_numba_nnp(etaH, etaV, zetaH, lambd)
if not np.allclose(numpy_result, numba_nnp_result, atol=0, rtol=1e-10):
print(' * FAIL, DOES NOT AGREE WITH NumPy RESULT!')
# Get runtime
%timeit test_numba_nnp(etaH, etaV, zetaH, lambd)
print(' Numba @njit par :: ', end='')
# Ensure it agrees with NumPy
numba_n_result = test_numba_n(etaH, etaV, zetaH, lambd)
if not np.allclose(numpy_result, numba_n_result, atol=0, rtol=1e-10):
print(' * FAIL, DOES NOT AGREE WITH NumPy RESULT!')
# Get runtime
%timeit test_numba_n(etaH, etaV, zetaH, lambd)
from empymod import versions
versions('HTML', add_pckg=[cython, numba], ncol=5)
```
|
github_jupyter
|
```
import os
import pandas as pd
import numpy as np
import json
import pickle
from collections import defaultdict
from pathlib import Path
from statistics import mean, stdev
from sklearn.metrics import ndcg_score, dcg_score
import matplotlib.pyplot as plt
import seaborn as sns
import torch
import os, sys
parentPath = os.path.abspath("..")
if parentPath not in sys.path:
sys.path.insert(0, parentPath)
from src.data import load_source
from src.config import Config, get_option_fallback
from src.path import get_best_model_paths, get_exp_paths, get_report_path, load_json, load_rep_cfg, get_exp_names
from src.trainer import Trainer
# projectdir = Path('/code')
projectdir = Path('..')
assert projectdir.exists()
```
# Common Functions
```
def summarize_test_res(rep, folds=5):
print(rep['config']['exp_name'], end=':\t')
s = pd.Series([rep['best']['auc_epoch'][str(i)] for i in range(folds)])
print(f'Best epoch at {s.mean():>6.1f}±{s.std():<5.1f}', end='\t')
s = pd.Series([rep['best']['auc'][str(i)] for i in range(folds)])
print(f'Valid AUC: {s.mean()*100:.4f}±{s.std()*100:.4f}', end='\t')
s = pd.Series([rep['indicator']['test_auc'][str(i)][0] for i in range(folds)])
print(f'Test AUC: {s.mean()*100:.4f}±{s.std()*100:.4f}', end='\t')
s = rep['indicator']['RPsoft']['all']
print(f'Good:Bad = {s["good"]}:{s["bad"]}', end='\t')
s = rep['indicator']['test_auc']['all'][0]
print(f'All Test AUC: {s*100:.4f}')
def show_valid_lc(name, idclist_dic, idc='eval_auc'):
min_len = min([len(_x) for _x in idclist_dic['epoch'].values()])
x = idclist_dic['epoch']['0'][:min_len] * (len(idclist_dic['epoch']) -1) # exclude 'all'
y = []
for _y in idclist_dic[idc].values():
y += _y[:min_len]
sns.lineplot(x=x, y=y, label=name)
plt.title(idc)
def summarize_results(config_name, folds=5):
report_paths = [get_report_path(projectdir, config_name, e) for e in get_exp_names(projectdir, config_name)]
reports = [load_json(r) for r in report_paths]
df = pd.DataFrame(columns=['dataset', 'model', 'auc', 'auc_std', 'r1_good', 'r1_goodbad', 'r2', 'r2_std'])
for r in reports:
row = {
'dataset': r['config']['config_name'],
'model': r['config']['exp_name'],
'auc': mean([r['indicator']['test_auc'][str(i)][0] for i in range(folds)]),
'auc_std': stdev([r['indicator']['test_auc'][str(i)][0] for i in range(folds)]) if folds > 1 else np.nan,
'r1_good': r['indicator']['RPsoft']['all']['good'],
'r1_goodbad': r['indicator']['RPsoft']['all']['good'] + r['indicator']['RPsoft']['all']['bad'],
'r2': mean(r['indicator']['RPhard']['all']),
'r2_std': stdev(r['indicator']['RPhard']['all'])
}
df = df.append(row, ignore_index=True)
return df
```
# Summary
## AUC table
```
summarize_results('20_0310_edm2020_assist09')
summarize_results('20_0310_edm2020_assist15')
summarize_results('20_0310_edm2020_synthetic', folds=1)
summarize_results('20_0310_edm2020_statics')
print(summarize_results('20_0310_edm2020_assist09').to_latex())
print(summarize_results('20_0310_edm2020_assist15').to_latex())
print(summarize_results('20_0310_edm2020_synthetic', folds=1).to_latex())
print(summarize_results('20_0310_edm2020_statics').to_latex())
```
## NDCG distplot
```
def ndcg_distplot(config_name, ax, idx, label_names, bins=20):
report_paths = [get_report_path(projectdir, config_name, e) for e in get_exp_names(projectdir, config_name)]
reports = [load_json(r) for r in report_paths]
for rep in reports:
if rep['config']['pre_dummy_epoch_size'] not in {0, 10}:
continue
r = rep['indicator']['RPhard']['all']
name = rep['config']['exp_name']
sns.distplot(r, ax=ax,bins=bins, label=label_names[name], kde_kws={'clip': (0.0, 1.0)})
ax.set_xlabel('NDCG score')
if idx == 0:
ax.set_ylabel('frequency')
if idx == 3:
ax.legend()
ax.set_title(label_names[config_name])
ax.set_xlim([0.59, 1.01])
ax.title.set_fontsize(18)
ax.xaxis.label.set_fontsize(14)
ax.yaxis.label.set_fontsize(14)
fig, axs = plt.subplots(1, 4, sharey=True, figsize=(4*4,3))
# plt.subplots_adjust(hspace=0.3)
fig.subplots_adjust(hspace=.1, wspace=.16)
label_names = {
'20_0310_edm2020_assist09' : 'ASSISTment 2009',
'20_0310_edm2020_assist15' : 'ASSISTment 2015',
'20_0310_edm2020_synthetic': 'Simulated-5',
'20_0310_edm2020_statics' : 'Statics 2011',
'pre_dummy_epoch_size10.auto': 'pre-train 10 epochs',
'pre_dummy_epoch_size0.auto': 'pre-train 0 epoch',
}
ndcg_distplot('20_0310_edm2020_assist09' , ax=axs[0], idx=0, label_names=label_names)
ndcg_distplot('20_0310_edm2020_assist15' , ax=axs[1], idx=1, label_names=label_names)
ndcg_distplot('20_0310_edm2020_synthetic', ax=axs[2], idx=2, label_names=label_names)
ndcg_distplot('20_0310_edm2020_statics' , ax=axs[3], idx=3, label_names=label_names)
```
## Learning curve
```
def lc_plot(config_name, ax, idx, label_names):
report_paths = [get_report_path(projectdir, config_name, e) for e in get_exp_names(projectdir, config_name)]
reports = [load_json(r) for r in report_paths]
for r in reports:
if r['config']['pre_dummy_epoch_size'] not in {0, 10}:
continue
idclist_dic = r['indicator']
idc = 'eval_auc'
min_len = min([len(_x) for _x in idclist_dic['epoch'].values()])
x = idclist_dic['epoch']['0'][:min_len] * (len(idclist_dic['epoch']) -1) # exclude 'all'
y = []
for _y in idclist_dic[idc].values():
y += _y[:min_len]
sns.lineplot(x=x, y=y, ax=ax, label=label_names[r['config']['exp_name']], ci='sd')
ax.set_xlabel('epoch')
if idx == 0:
ax.set_ylabel('AUC')
if idx == 3:
ax.legend()
else:
ax.get_legend().remove()
ax.set_title(label_names[config_name])
ax.title.set_fontsize(18)
ax.xaxis.label.set_fontsize(14)
ax.yaxis.label.set_fontsize(14)
fig, axs = plt.subplots(1, 4, sharey=False, figsize=(4*4,3))
# plt.subplots_adjust(hspace=0.3)
fig.subplots_adjust(hspace=.1, wspace=.16)
label_names = {
'20_0310_edm2020_assist09' : 'ASSISTment 2009',
'20_0310_edm2020_assist15' : 'ASSISTment 2015',
'20_0310_edm2020_synthetic': 'Simulated-5',
'20_0310_edm2020_statics' : 'Statics 2011',
'pre_dummy_epoch_size10.auto': 'pre-train 10 epochs',
'pre_dummy_epoch_size0.auto': 'pre-train 0 epoch',
}
lc_plot('20_0310_edm2020_assist09' , ax=axs[0], idx=0, label_names=label_names)
lc_plot('20_0310_edm2020_assist15' , ax=axs[1], idx=1, label_names=label_names)
lc_plot('20_0310_edm2020_synthetic', ax=axs[2], idx=2, label_names=label_names)
lc_plot('20_0310_edm2020_statics' , ax=axs[3], idx=3, label_names=label_names)
plt.show()
```
# `20_0310_edm2020_assist09`
## Simulated curve
```
config_name = '20_0310_edm2020_assist09'
report_list = []
for r in sorted([load_json(get_report_path(projectdir, e)) for e in get_exp_paths(projectdir, config_name)], key=lambda x: x['config']['pre_dummy_epoch_size']):
if r['config']['pre_dummy_epoch_size'] not in {0, 10}:
continue
r['config']['exp_name'] = f"DKT pre {r['config']['pre_dummy_epoch_size']}"
report_list.append(r)
[r['config']['exp_name'] for r in report_list]
def get_simu_res(report_dic):
return report_dic['indicator']['simu_pred']['all']
simures_list = []
for r in report_list:
simu_res = get_simu_res(r)
simures_list.append(simu_res)
base_idx = 0
base_res = {k:v for k, v in sorted(simures_list[base_idx].items(), key=lambda it: it[1][1][-1] - it[1][1][0])}
descres_list = []
for i, simu_res in enumerate(simures_list):
if i == base_idx:
continue
desc_res = {k:simu_res[k] for k in base_res.keys()}
descres_list.append(desc_res)
n_skills = report_list[base_idx]['config']['n_skills']
h, w = (n_skills+7)//8, 8
figscale = 2.5
hspace = 0.35
fig, axs = plt.subplots(h, w, figsize=(w*figscale, h*figscale))
plt.subplots_adjust(hspace=hspace)
for i, (v, (xidx, sanity)) in enumerate(list(base_res.items())[:h*w]):
ax = axs[i//(w), i%(w)]
ax.set_ylim([0, 1])
ax.set_title('KC{}'.format(v))
sns.lineplot(xidx, sanity, ax=ax, label='base', palette="ch:2.5,.25")
for i, desc_res in enumerate(descres_list):
sns.lineplot(xidx, desc_res[v][1], ax=ax, label=str(i+1), palette="ch:2.5,.25")
ax.get_legend().remove()
handles, labels = ax.get_legend_handles_labels()
fig.legend(handles, labels, loc='upper center')
plt.show()
```
## Single ones
```
def plot_single(kc):
x, y = base_res[str(kc)]
sns.lineplot(x=x, y=y)
plot_single(78)
f, axs = plt.subplots(1, 3, sharey=True, figsize=(12,3))
f.tight_layout()
for i, (kc, ax) in enumerate(zip([30, 83, 98], axs)):
ax.set_ylim([0, 1])
x, y = base_res[str(kc)]
sns.lineplot(x=x, y=y, ax=ax)
ax.set_title(f'KC{kc}')
ax.set_ylabel('predicted accuracy')
ax.set_xlabel('$k$\n({})'.format(['a','b','c'][i]))
plt.show()
```
## NDCG
```
for rep in report_list:
r = rep['indicator']['RPhard']['all']
name = rep['config']['exp_name']
sns.distplot(r, bins=10, label=name, kde_kws={'clip': (0.0, 1.0)})
print(f'{name:<20s}\t{mean(r):.4f}±{stdev(r):.4f}')
plt.legend()
plt.show()
for rep in report_list:
r = rep['indicator']['RPsoft']['all']
name = rep['config']['exp_name']
print(f'{name:<20s}\tGood:Bad = {r["good"]}:{r["bad"]}')
```
## Learning curve
```
for r in report_list:
show_valid_lc(r['config']['exp_name'], r['indicator'])
plt.show()
for r in report_list:
show_valid_lc(r['config']['exp_name'], r['indicator'], idc='eval_loss')
plt.show()
```
## Test AUC
```
for r in report_list:
summarize_test_res(r)
```
# `Debug`
## Simulated curve
```
def get_simu_res(report_dic):
return report_dic['indicator']['simu_pred']['all']
simures_list = []
for r in report_list:
simu_res = get_simu_res(r)
simures_list.append(simu_res)
base_idx = 0
base_res = {k:v for k, v in sorted(simures_list[base_idx].items(), key=lambda it: it[1][1][-1] - it[1][1][0])}
descres_list = []
for i, simu_res in enumerate(simures_list):
if i == base_idx:
continue
desc_res = {k:simu_res[k] for k in base_res.keys()}
descres_list.append(desc_res)
```
## NDCG
```
for rep in report_list:
r = rep['indicator']['RPsoft']['all']
name = rep['config']['exp_name']
print(f'{name:<20s}\tGood:Bad = {r["good"]}:{r["bad"]}')
for r in report_list:
show_valid_lc(r['config']['exp_name'], r['indicator'])
plt.show()
```
## Test AUC
## Simulated curve
```
def get_simu_res(report_dic):
return report_dic['indicator']['simu_pred']['all']
simures_list = []
for r in report_list:
simu_res = get_simu_res(r)
simures_list.append(simu_res)
base_idx = 1
base_res = {k:v for k, v in sorted(simures_list[base_idx].items(), key=lambda it: it[1][1][-1] - it[1][1][0])}
descres_list = []
for i, simu_res in enumerate(simures_list):
if i == base_idx:
continue
desc_res = {k:simu_res[k] for k in base_res.keys()}
descres_list.append(desc_res)
```
## NDCG
```
for rep in report_list:
r = rep['indicator']['RPsoft']['all']
name = rep['config']['exp_name']
print(f'{name:<20s}\tGood:Bad = {r["good"]}:{r["bad"]}')
```
## Learning curve
```
for r in report_list:
show_valid_lc(r['config']['exp_name'], r['indicator'])
plt.show()
```
## Test AUC
```
def summarize_test_res(rep, folds=5):
    # folds defaults to 5; pass folds=1 for the single-fold synthetic dataset
    print(rep['config']['exp_name'], end=':\t')
    s = pd.Series([rep['best']['auc_epoch'][str(i)] for i in range(folds)])
    print(f'Best epoch at {s.mean():>6.1f}±{s.std():<5.1f}', end='\t')
    s = pd.Series([rep['best']['auc'][str(i)] for i in range(folds)])
    print(f'Valid AUC: {s.mean()*100:.4f}±{s.std()*100:.4f}', end='\t')
    s = rep['indicator']['test_auc']['all'][0]
    print(f'Test AUC: {s*100:.4f}')
for r in report_list:
summarize_test_res(r)
```
# `20_0310_edm2020_synthetic`
## Simulated curve
```
config_name = '20_0310_edm2020_synthetic'
report_list = []
for r in sorted([load_json(get_report_path(projectdir, e)) for e in get_exp_paths(projectdir, config_name)], key=lambda x: x['config']['pre_dummy_epoch_size']):
report_list.append(r)
[r['config']['exp_name'] for r in report_list]
def get_simu_res(report_dic):
return report_dic['indicator']['simu_pred']['all']
simures_list = []
for r in report_list:
simu_res = get_simu_res(r)
simures_list.append(simu_res)
base_idx = 0
base_res = {k:v for k, v in sorted(simures_list[base_idx].items(), key=lambda it: it[1][1][-1] - it[1][1][0])}
descres_list = []
for i, simu_res in enumerate(simures_list):
if i == base_idx:
continue
desc_res = {k:simu_res[k] for k in base_res.keys()}
descres_list.append(desc_res)
n_skills = report_list[base_idx]['config']['n_skills']
h, w = (n_skills+7)//8, 8
figscale = 2.5
hspace = 0.35
fig, axs = plt.subplots(h, w, figsize=(w*figscale, h*figscale))
plt.subplots_adjust(hspace=hspace)
for i, (v, (xidx, sanity)) in enumerate(list(base_res.items())[:h*w]):
ax = axs[i//(w), i%(w)]
ax.set_ylim([0, 1])
ax.set_title('KC{} s{}0'.format(v, '>' if sanity[-1]>sanity[0] else '<'))
sns.lineplot(xidx, sanity, ax=ax, label='base', palette="ch:2.5,.25")
for i, desc_res in enumerate(descres_list):
sns.lineplot(xidx, desc_res[v][1], ax=ax, label=str(i+1), palette="ch:2.5,.25")
ax.get_legend().remove()
handles, labels = ax.get_legend_handles_labels()
fig.legend(handles, labels, loc='upper center')
plt.show()
```
## NDCG
```
for rep in report_list:
r = rep['indicator']['RPhard']['all']
name = rep['config']['exp_name']
sns.distplot(r, bins=10, label=name, kde_kws={'clip': (0.0, 1.0)})
print(f'{name:<20s}\t{mean(r):.4f}±{stdev(r):.4f}')
plt.legend()
plt.show()
for rep in report_list:
r = rep['indicator']['RPsoft']['all']
name = rep['config']['exp_name']
print(f'{name:<20s}\tGood:Bad = {r["good"]}:{r["bad"]}')
```
## Learning curve
```
for r in report_list:
show_valid_lc(r['config']['exp_name'], r['indicator'])
plt.show()
```
## Test AUC
```
for r in report_list:
show_valid_lc(r['config']['exp_name'], r['indicator'], idc='eval_loss')
summarize_test_res(r, folds=1)
plt.show()
```
## Test AUC
```
for r in report_list:
show_valid_lc(r['config']['exp_name'], r['indicator'])
plt.show()
```
## Learning curve
```
for rep in report_list:
r = rep['indicator']['RPsoft']['all']
name = rep['config']['exp_name']
print(f'{name:<20s}\tGood:Bad = {r["good"]}:{r["bad"]}')
```
# `20_0310_edm2020_assist15`
## Simulated curve
```
# config_name = '20_0310_edm2020_assist15'
config_name = '20_0310_edm2020_assist09'
report_paths = [get_report_path(projectdir, config_name, e) for e in get_exp_names(projectdir, config_name)]
reports = [load_json(r) for r in report_paths]
# print([r['config']['exp_name'] for r in reports])
# =>['pre_dummy_epoch_size150.auto', 'pre_dummy_epoch_size10.auto', 'pre_dummy_epoch_size0.auto']
def get_simu_res(report_dic):
return report_dic['indicator']['simu_pred']['all']
simures_list = []
for r in reports:
if r['config']['pre_dummy_epoch_size'] not in {0, 10}:
continue
simu_res = get_simu_res(r)
simures_list.append(simu_res)
base_idx = 1
base_res = {k:v for k, v in sorted(simures_list[base_idx].items(), key=lambda it: it[1][1][-1] - it[1][1][0])}
descres_list = []
for i, simu_res in enumerate(simures_list):
if i == base_idx:
continue
desc_res = {k:simu_res[k] for k in base_res.keys()}
descres_list.append(desc_res)
n_skills = reports[base_idx]['config']['n_skills']
# h, w = (n_skills+7)//8, 8
h, w = 4, 8
figscale = 2.5
hspace = 0.35
fig, axs = plt.subplots(h, w, sharex=True, sharey=True, figsize=(w*figscale, h*figscale))
plt.subplots_adjust(hspace=0.20, wspace=0.05)
for i, (v, (xidx, sanity)) in enumerate(list(base_res.items())[:h*w]):
ax = axs[i//(w), i%(w)]
ax.set_ylim([0, 1])
ax.set_title(f'KC{v}')
sns.lineplot(xidx, sanity, ax=ax, label='pre-train 0', palette="ch:2.5,.25")
for j, desc_res in enumerate(descres_list):
sns.lineplot(xidx, desc_res[v][1], ax=ax, label=f'pre-train {[10,150][j]}', palette="ch:2.5,.25")
if i < 31:
ax.get_legend().remove()
else:
ax.legend()
break
# handles, labels = ax.get_legend_handles_labels()
# fig.legend(handles, labels, loc='upper center')
plt.show()
```
## NDCG
```
for rep in report_list:
r = rep['indicator']['RPhard']['all']
name = rep['config']['exp_name']
sns.distplot(r, bins=20, label=name, kde_kws={'clip': (0.0, 1.0)})
print(f'{name:<20s}\t{mean(r):.4f}±{stdev(r):.4f}')
plt.legend()
plt.show()
```
## Learning curve AUC
```
for r in report_list:
show_valid_lc(r['config']['exp_name'], r['indicator'])
summarize_test_res(r)
plt.show()
for r in report_list:
show_valid_lc(r['config']['exp_name'], r['indicator'], idc='eval_loss')
plt.show()
```
```
import numpy as np
import torch
from torch import nn, optim
import matplotlib.pyplot as plt
from neurodiffeq import diff
from neurodiffeq.ode import IVP, solve_system, Monitor, ExampleGenerator, Solution, _trial_solution
from neurodiffeq.networks import FCNN, SinActv
from scipy.special import roots_legendre
torch.set_default_tensor_type('torch.DoubleTensor')
FROM, TO = 0., 5.
N_NODE = 8
QUADRATURE_DEGREE = 32
TRAIN_SIZE = 10 # the training set is not actually used
VALID_SIZE = 10
MAX_EPOCHS = 10000
q_points, q_weights = roots_legendre(QUADRATURE_DEGREE)
# roots_legendre returns nodes and weights on [-1, 1]; rescale them to [FROM, TO]
# (affine change of interval; the weights pick up the Jacobian (TO - FROM) / 2)
global_q_points = 0.5 * (TO - FROM) * torch.tensor(q_points).reshape(-1, 1) + 0.5 * (TO + FROM)
global_q_points.requires_grad = True
global_q_weights = 0.5 * (TO - FROM) * torch.tensor(q_weights).reshape(-1, 1)
def solve_system_quadrature(
ode_system, conditions, t_min, t_max,
single_net=None, nets=None, train_generator=None, shuffle=True, valid_generator=None,
optimizer=None, criterion=None, additional_loss_term=None, metrics=None, batch_size=16,
max_epochs=1000,
monitor=None, return_internal=False,
return_best=False,
):
########################################### subroutines ###########################################
def train(train_generator, net, nets, ode_system, conditions, criterion, additional_loss_term, shuffle, optimizer):
train_examples_t = train_generator.get_examples()
train_examples_t = train_examples_t.reshape((-1, 1))
n_examples_train = train_generator.size
idx = np.random.permutation(n_examples_train) if shuffle else np.arange(n_examples_train)
batch_start, batch_end = 0, batch_size
while batch_start < n_examples_train:
if batch_end > n_examples_train:
batch_end = n_examples_train
batch_idx = idx[batch_start:batch_end]
ts = train_examples_t[batch_idx]
train_loss_batch = calculate_loss(ts, net, nets, ode_system, conditions, criterion, additional_loss_term)
optimizer.zero_grad()
train_loss_batch.backward()
optimizer.step()
batch_start += batch_size
batch_end += batch_size
train_loss_epoch = calculate_loss(train_examples_t, net, nets, ode_system, conditions, criterion, additional_loss_term)
train_metrics_epoch = calculate_metrics(train_examples_t, net, nets, conditions, metrics)
return train_loss_epoch, train_metrics_epoch
def valid(valid_generator, net, nets, ode_system, conditions, criterion, additional_loss_term):
valid_examples_t = valid_generator.get_examples()
valid_examples_t = valid_examples_t.reshape((-1, 1))
valid_loss_epoch = calculate_loss(valid_examples_t, net, nets, ode_system, conditions, criterion, additional_loss_term)
valid_loss_epoch = valid_loss_epoch.item()
valid_metrics_epoch = calculate_metrics(valid_examples_t, net, nets, conditions, metrics)
return valid_loss_epoch, valid_metrics_epoch
# calculate the loss with Gaussian quadrature
# uses global variables, just for convenience
def calculate_loss(ts, net, nets, ode_system, conditions, criterion, additional_loss_term):
ts = global_q_points
ws = global_q_weights
us = _trial_solution(net, nets, ts, conditions)
Futs = ode_system(*us, ts)
loss = sum(
torch.sum(ws * Fut**2) for Fut in Futs
)
return loss
def calculate_metrics(ts, net, nets, conditions, metrics):
us = _trial_solution(net, nets, ts, conditions)
metrics_ = {
metric_name: metric_function(*us, ts).item()
for metric_name, metric_function in metrics.items()
}
return metrics_
###################################################################################################
if single_net and nets:
        raise RuntimeError('Only one of single_net and nets should be specified')
# defaults to use a single neural network
if (not single_net) and (not nets):
single_net = FCNN(n_input_units=1, n_output_units=len(conditions), n_hidden_units=32, n_hidden_layers=1,
actv=nn.Tanh)
if single_net:
# mark the Conditions so that we know which condition correspond to which output unit
for ith, con in enumerate(conditions):
con.set_impose_on(ith)
if not train_generator:
if (t_min is None) or (t_max is None):
raise RuntimeError('Please specify t_min and t_max when train_generator is not specified')
train_generator = ExampleGenerator(32, t_min, t_max, method='equally-spaced-noisy')
if not valid_generator:
if (t_min is None) or (t_max is None):
            raise RuntimeError('Please specify t_min and t_max when valid_generator is not specified')
valid_generator = ExampleGenerator(32, t_min, t_max, method='equally-spaced')
if (not optimizer) and single_net: # using a single net
optimizer = optim.Adam(single_net.parameters(), lr=0.001)
if (not optimizer) and nets: # using multiple nets
all_parameters = []
for net in nets:
all_parameters += list(net.parameters())
optimizer = optim.Adam(all_parameters, lr=0.001)
if not criterion:
criterion = nn.MSELoss()
if metrics is None:
metrics = {}
history = {}
history['train_loss'] = []
history['valid_loss'] = []
for metric_name, _ in metrics.items():
history['train__' + metric_name] = []
history['valid__' + metric_name] = []
if return_best:
valid_loss_epoch_min = np.inf
solution_min = None
for epoch in range(max_epochs):
train_loss_epoch, train_metrics_epoch = train(train_generator, single_net, nets, ode_system, conditions, criterion, additional_loss_term, shuffle,
optimizer)
history['train_loss'].append(train_loss_epoch)
for metric_name, metric_value in train_metrics_epoch.items():
history['train__'+metric_name].append(metric_value)
valid_loss_epoch, valid_metrics_epoch = valid(valid_generator, single_net, nets, ode_system, conditions, criterion, additional_loss_term,)
history['valid_loss'].append(valid_loss_epoch)
for metric_name, metric_value in valid_metrics_epoch.items():
history['valid__'+metric_name].append(metric_value)
if monitor and epoch % monitor.check_every == 0:
monitor.check(single_net, nets, conditions, history)
if return_best and valid_loss_epoch < valid_loss_epoch_min:
valid_loss_epoch_min = valid_loss_epoch
solution_min = Solution(single_net, nets, conditions)
if return_best:
solution = solution_min
else:
solution = Solution(single_net, nets, conditions)
if return_internal:
internal = {
'single_net': single_net,
'nets': nets,
'conditions': conditions,
'train_generator': train_generator,
'valid_generator': valid_generator,
'optimizer': optimizer,
'criterion': criterion
}
return solution, history, internal
else:
return solution, history
%matplotlib notebook
odes = lambda x, y, t : [diff(x, t) + t*y,
diff(y, t) - t*x]
ivps = [
IVP(t_0=0., x_0=1.),
IVP(t_0=0., x_0=0.)
]
nets = [
FCNN(n_hidden_units=N_NODE, n_hidden_layers=1, actv=SinActv),
FCNN(n_hidden_units=N_NODE, n_hidden_layers=1, actv=SinActv)
]
train_gen = ExampleGenerator(TRAIN_SIZE, t_min=FROM, t_max=TO, method='equally-spaced')
valid_gen = ExampleGenerator(VALID_SIZE, t_min=FROM, t_max=TO, method='equally-spaced')
def rmse(x, y, t):
true_x = torch.cos(t**2/2)
true_y = torch.sin(t**2/2)
x_sse = torch.sum((x - true_x) ** 2)
y_sse = torch.sum((y - true_y) ** 2)
return torch.sqrt( (x_sse+y_sse)/(len(x)+len(y)) )
solution, _ = solve_system_quadrature(
ode_system=odes,
conditions=ivps,
t_min=FROM, t_max=TO,
nets=nets,
train_generator=train_gen,
valid_generator=valid_gen,
batch_size=TRAIN_SIZE,
max_epochs=MAX_EPOCHS,
monitor=Monitor(t_min=FROM, t_max=TO, check_every=100),
metrics={'rmse': rmse}
)
```
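As a quick sanity check on the node/weight rescaling used above (a minimal sketch using only NumPy/SciPy; the `a, b` endpoints mirror `FROM, TO`, but nothing here touches the notebook's solver), Gauss-Legendre nodes returned on `[-1, 1]` can be mapped to `[a, b]` and used to integrate a known function:
```
import numpy as np
from scipy.special import roots_legendre

a, b = 0.0, 5.0
t, w = roots_legendre(32)                # nodes and weights on [-1, 1]
x = 0.5 * (b - a) * t + 0.5 * (a + b)    # affine map of the nodes to [a, b]
w = 0.5 * (b - a) * w                    # weights pick up the Jacobian (b - a) / 2

# integral of x^2 over [0, 5] is 125/3 ~ 41.6667
print(np.sum(w * x**2))
```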
# imports
```
import sys; sys.path.append(_dh[0].split("knowknow")[0])
from knowknow import *
```
# User settings
```
database_name = "sociology-wos"
pubyears = None
if 'wos' in database_name:
pubyears = load_variable("%s.pubyears" % database_name)
print("Pubyears loaded for %s entries" % len(pubyears.keys()))
RELIABLE_DATA_ENDS_HERE = 2019
if 'jstor' in database_name:
RELIABLE_DATA_ENDS_HERE = 2010
import re
def create_cysum(cits, typ):
meta_counters = defaultdict(int)
cy = defaultdict(lambda:defaultdict(int))
for (c,y),count in cits['c.fy'].items():
cy[c][y] = count
if 'fy' in cits:
fyc = cits['fy']
else:
fyc = cits['y']
cysum = {}
for ci,c in enumerate(cy):
meta_counters['at least one citation'] += 1
count = cy[c]
prop = {
y: county / fyc[y]
for y,county in count.items()
}
res = {
'first': min(count),
'last': max(count),
'maxcounty': max(count, key=lambda y:(count[y],y)),
'maxpropy': max(count, key=lambda y:(prop[y],y))
}
res['maxprop'] = prop[ res['maxpropy'] ]
res['maxcount'] = count[ res['maxcounty'] ]
res['total'] = sum(count.values())
res['totalprop'] = sum(prop.values())
res['name'] = c
# gotta do something here...
res['type'] = 'article'
if typ == 'wos':
sp = c.split("|")
if len(sp) < 2:
continue
try:
res['pub'] = int(sp[1])
res['type'] = 'article'
except ValueError:
res['type'] = 'book'
res['pub'] = pubyears[c]
elif typ == 'jstor':
inparens = re.findall(r'\(([^)]+)\)', c)[0]
res['pub'] = int(inparens)
# DEFINING DEATH1
# death1 is max, as long as it's before RELIABLE_DATA_ENDS_HERE
res['death1'] = None
if res['maxpropy'] <= RELIABLE_DATA_ENDS_HERE:
res['death1'] = res['maxcounty']
# DEFINING DEATH2
        # this list has an entry for each year from (and including) the last year of peak citations;
        # for each such year, it sums the citations received over the following ten years
next_year_sums = [
(ycheck, sum( c for y,c in count.items() if ycheck + 10 >= y > ycheck ))
for ycheck in range(res['maxcounty'], RELIABLE_DATA_ENDS_HERE - 10)
]
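        # Toy illustration (hypothetical numbers): if the last peak-citation year is 1990 and
        # RELIABLE_DATA_ENDS_HERE is 2019, next_year_sums has one entry per ycheck in 1990..2008,
        # each holding the total citations received in the window (ycheck, ycheck+10].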
# need to make sure ALL subsequent decade intervals are also less...
my_death_year = None
l = len(next_year_sums)
for i in range(l):
not_this_one = False
for j in range(i,l):
if next_year_sums[j][1] >= res['maxcount']:
not_this_one = True
if not_this_one:
break
if not_this_one:
continue
my_death_year = next_year_sums[i][0]
break
if not len(next_year_sums):
res['death2'] = None
else:
res['death2'] = my_death_year
# DEATH3 is last, as long as it's before RELIABLE_DATA_ENDS_HERE
res['death3'] = None
if res['last'] <= RELIABLE_DATA_ENDS_HERE:
res['death3'] = res['last']
        # DEFINING DEATH5
        # death5 is the year by which at least 90% of the citations have been received,
        # provided at least 30% of the observed lifespan still remains after that year
        myspan = np.array( [cits['c.fy'][(c,ycheck)] for ycheck in range(1900, 2020)] )
        res['death5'] = None
        Ea = np.sum(myspan)    # running residual: citations not yet passed in the scan below
        csum = np.sum(myspan)  # total citations for this work
        nonzeroyears = np.where(myspan > 0)[0]  # indices of years with at least one citation
        if not len(nonzeroyears):
            continue
        firsti = np.min(nonzeroyears)
        first_year = firsti + 1900
for cci, cc in enumerate(myspan[firsti:]):
this_year = first_year+cci
# running residual...
Ea -= cc
# don't let them die too soon
if cc == 0:
continue
if Ea/csum < 0.1 and (RELIABLE_DATA_ENDS_HERE - this_year)/(RELIABLE_DATA_ENDS_HERE - first_year) > 0.3:
res['death5'] = this_year
break
if res['death2'] is not None and res['death2'] < res['pub']:
meta_counters['death2 < pub!? dropped.'] += 1
# small error catch
continue
#small error catch
if res['maxpropy'] < res['pub']:
meta_counters['maxpropy < pub!? dropped.'] += 1
continue
# don't care about those with only a single citation
if res['total'] <= 1:
meta_counters['literally 1 citation. dropped.'] += 1
continue
# we really don't care about those that never rise in use
#if res['first'] == res['maxpropy']:
# continue
meta_counters['passed tests pre-blacklist'] += 1
cysum[c] = res
blacklist = []
for b in blacklist:
if b in cysum:
del cysum[b]
todelete = []
for c in todelete:
if c in cysum:
meta_counters['passed all other tests but was blacklisted'] += 1
del cysum[c]
print(dict(meta_counters))
return cysum
OVERWRITE_EXISTING = True
print("Processing database '%s'"%database_name)
varname = "%s.cysum"%database_name
run = True # run
if not OVERWRITE_EXISTING:
try:
load_variable(varname)
run = False
except FileNotFoundError:
pass
if run:
cits = get_cnt("%s.doc"%database_name, ['c.fy','fy'])
if 'wos' in database_name and 'jstor' in database_name:
raise Exception("Please put 'wos' or 'jstor' but not both in any database_name.")
elif 'wos' in database_name:
cysum = create_cysum(cits, 'wos')
elif 'jstor' in database_name:
cysum = create_cysum(cits, 'jstor')
else:
raise Exception("Please include either 'wos' or 'jstor' in the name of the variable. This keys which data processing algorithm you used.")
save_variable(varname, cysum)
print("%s cysum entries for database '%s'" % (len(cysum), database_name))
```
# only necessary if you plan on filtering based on this set
```
save_variable("%s.included_citations"%database_name, set(cysum.keys()))
```
<center><em>Copyright by Pierian Data Inc.</em></center>
<center><em>For more information, visit us at <a href='http://www.pieriandata.com'>www.pieriandata.com</a></em></center>
# KNN Project Exercise
Due to the simplicity of KNN for Classification, let's focus on using a PipeLine and a GridSearchCV tool, since these skills can be generalized for any model.
## The Sonar Data
### Detecting a Rock or a Mine
Sonar (sound navigation ranging) is a technique that uses sound propagation (usually underwater, as in submarine navigation) to navigate, communicate with or detect objects on or under the surface of the water, such as other vessels.
<img src="sonar.jpg" style="max-height: 500px; max-width: 500px;">
The data set contains the response metrics for 60 separate sonar frequencies sent out against a known mine field (and known rocks). These frequencies are then labeled with the known object they were beaming the sound at (either a rock or a mine).
<img src="mine.jpg" style="max-height: 500px; max-width: 500px;">
Our main goal is to create a machine learning model capable of detecting the difference between a rock and a mine based on the response of the 60 separate sonar frequencies.
Data Source: https://archive.ics.uci.edu/ml/datasets/Connectionist+Bench+(Sonar,+Mines+vs.+Rocks)
### Complete the Tasks in bold
**TASK: Run the cells below to load the data.**
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('../Data/sonar.all-data.csv')
df.head()
```
## Data Exploration
```
df.info()
df.describe()
```
**TASK: Create a heatmap of the correlation between the different frequency responses.**
```
plt.figure(figsize=(8,6))
sns.heatmap(df.corr(), cmap='coolwarm');
```
-----
**TASK: What are the top 5 correlated frequencies with the target label?**
*Note: You may need to map the label to 0s and 1s.*
*Additional Note: We're looking for **absolute** correlation values.*
```
df['Label'].value_counts()
# As we can't find the correlation between numbers and label string, we need to map the label (Rock / Mine) to 0s and 1s
df['Target'] = df['Label'].map({'M': 1, 'R': 0})
df.head(1)
df.corr()['Target']
# get the 5 frequencies with the highest absolute correlation (the top entry is Target itself)
df.corr()['Target'].abs().sort_values(ascending=False)[:6]
# option 2
df.corr()['Target'].abs().sort_values().tail(6)
```
-------
## Train | Test Split
Our approach here will be one of using Cross Validation on 90% of the dataset, and then judging our results on a final test set of 10% to evaluate our model.
**TASK: Split the data into features and labels, and then split into a training set and test set, with 90% for Cross-Validation training, and 10% for a final test set.**
*Note: The solution uses a random_state=42*
```
from sklearn.model_selection import train_test_split
X = df.drop(['Label', 'Target'], axis=1)
y = df['Label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.1, random_state=42)
```
----
**TASK: Create a PipeLine that contains both a StandardScaler and a KNN model**
```
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
scaler = StandardScaler()
knn = KNeighborsClassifier()
operations = [('scaler', scaler), ('knn', knn)]
from sklearn.pipeline import Pipeline
pipe = Pipeline(operations)
```
-----
**TASK: Perform a grid-search with the pipeline to test various values of k and report back the best performing parameters.**
```
from sklearn.model_selection import GridSearchCV
k_values = list(range(1, 30))
parameters = {'knn__n_neighbors': k_values}
full_cv_classifier = GridSearchCV(pipe, parameters, cv=5, scoring='accuracy')
full_cv_classifier.fit(X_train, y_train)
# check best estimator
full_cv_classifier.best_estimator_.get_params()
```
----
**(HARD) TASK: Using the .cv_results_ dictionary, see if you can create a plot of the mean test scores per K value.**
```
pd.DataFrame(full_cv_classifier.cv_results_).head()
mean_test_scores = full_cv_classifier.cv_results_['mean_test_score']
mean_test_scores
# plt.plot(k_values, mean_test_scores, marker='.', markersize=10)
plt.plot(k_values, mean_test_scores, 'o-')
plt.xlabel('K')
plt.ylabel('Mean Test Score / Accuracy');
```
----
### Final Model Evaluation
**TASK: Using the grid classifier object from the previous step, get a final performance classification report and confusion matrix.**
```
full_pred = full_cv_classifier.predict(X_test)
from sklearn.metrics import confusion_matrix, plot_confusion_matrix, classification_report
confusion_matrix(y_test, full_pred)
plot_confusion_matrix(full_cv_classifier, X_test, y_test);
```
**IMPORTANT:**
- As we can see from the confusion matrix, there is 1 False Positive and 1 False Negative.
- While a False Positive (classifying a Rock as a Mine) may not be dangerous, a False Negative (classifying a Mine as a Rock) is extremely dangerous.
- So we may need to revisit the modelling to make sure there are no False Negatives (see the sketch after the classification report below).
```
print(classification_report(y_test, full_pred))
```
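One way to act on that note (a minimal sketch, not part of the original exercise): re-run the same grid search but score on recall for the mine class, so the chosen k is the one that misses the fewest mines. `pipe`, `parameters`, and the train/test splits are the objects defined earlier; `pos_label='M'` assumes mines are labeled `M` as in this dataset.
```
from sklearn.metrics import recall_score, make_scorer

# Score each candidate k by recall on the 'M' (mine) class instead of overall accuracy
mine_recall = make_scorer(recall_score, pos_label='M')
recall_cv = GridSearchCV(pipe, parameters, cv=5, scoring=mine_recall)
recall_cv.fit(X_train, y_train)

print(recall_cv.best_params_)
print(classification_report(y_test, recall_cv.predict(X_test)))
```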
### Great Job!
```
!python --version
# In case issues with installation of tensortrade, Install the version below using that way
# https://github.com/tensortrade-org/tensortrade/issues/229#issuecomment-633164703
# version: https://github.com/tensortrade-org/tensortrade/releases/tag/v1.0.3
!pip install -U tensortrade==1.0.3 ta matplotlib tensorboardX scikit-learn
from tensortrade.data.cdd import CryptoDataDownload
import pandas as pd
import tensortrade.version
print(tensortrade.__version__)
import random
import ta
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
import tensortrade.env.default as default
from tensortrade.feed.core import Stream, DataFeed, NameSpace
from tensortrade.oms.exchanges import Exchange
from tensortrade.oms.services.execution.simulated import execute_order
from tensortrade.oms.instruments import USD, BTC, ETH
from tensortrade.oms.wallets import Wallet, Portfolio
from tensortrade.agents import A2CAgent
import tensortrade.stochastic as sp
from tensortrade.oms.instruments import Instrument
from tensortrade.env.default.actions import SimpleOrders, BSH, ManagedRiskOrders
from collections import OrderedDict
from tensortrade.oms.orders.criteria import Stop, StopDirection
from tensortrade.env.default.actions import ManagedRiskOrders
from tensortrade.env.default.rewards import RiskAdjustedReturns
from scipy.signal import savgol_filter
def fetchTaFeatures(data):
data = ta.add_all_ta_features(data, 'open', 'high', 'low', 'close', 'volume', fillna=True)
data.columns = [name.lower() for name in data.columns]
return data
def createEnv(config):
coins = ["coin{}".format(x) for x in range(5)]
bitfinex_streams = []
with NameSpace("bitfinex"):
for coin in coins:
coinColumns = filter(lambda name: name.startswith(coin), config["data"].columns)
bitfinex_streams += [
Stream.source(list(config["data"][c]), dtype="float").rename(c) for c in coinColumns
]
feed = DataFeed(bitfinex_streams)
streams = []
for coin in coins:
streams.append(Stream.source(list(data[coin+":"+"close"]), dtype="float").rename("USD-"+coin))
streams = tuple(streams)
bitstamp = Exchange("bitfinex", service=execute_order)(
Stream.source(list(data["coin0:close"]), dtype="float").rename("USD-BTC"),
Stream.source(list(data["coin1:close"]), dtype="float").rename("USD-ETH"),
Stream.source(list(data["coin1:close"]), dtype="float").rename("USD-TTC1"),
Stream.source(list(data["coin3:close"]), dtype="float").rename("USD-TTC2"),
Stream.source(list(data["coin4:close"]), dtype="float").rename("USD-TTC3"),
Stream.source(list(data["coin5:close"]), dtype="float").rename("USD-TTC4"),
Stream.source(list(data["coin6:close"]), dtype="float").rename("USD-TTC5"),
Stream.source(list(data["coin7:close"]), dtype="float").rename("USD-TTC6"),
Stream.source(list(data["coin8:close"]), dtype="float").rename("USD-TTC7"),
Stream.source(list(data["coin9:close"]), dtype="float").rename("USD-TTC8"),
)
TTC1 = Instrument("TTC1", 8, "TensorTrade Coin1")
TTC2 = Instrument("TTC2", 8, "TensorTrade Coin2")
TTC3 = Instrument("TTC3", 8, "TensorTrade Coin3")
TTC4 = Instrument("TTC4", 8, "TensorTrade Coin4")
TTC5 = Instrument("TTC5", 8, "TensorTrade Coin5")
TTC6 = Instrument("TTC6", 8, "TensorTrade Coin6")
TTC7 = Instrument("TTC7", 8, "TensorTrade Coin7")
TTC8 = Instrument("TTC8", 8, "TensorTrade Coin8")
cash = Wallet(bitstamp, 10000 * USD)
asset = Wallet(bitstamp, 0 * BTC)
asset1 = Wallet(bitstamp, 0 * ETH)
asset2 = Wallet(bitstamp, 0 * TTC1)
asset3 = Wallet(bitstamp, 0 * TTC2)
asset4 = Wallet(bitstamp, 0 * TTC3)
asset5 = Wallet(bitstamp, 0 * TTC4)
asset6 = Wallet(bitstamp, 0 * TTC5)
asset7 = Wallet(bitstamp, 0 * TTC6)
asset8 = Wallet(bitstamp, 0 * TTC7)
asset9 = Wallet(bitstamp, 0 * TTC8)
portfolio = Portfolio(USD, [cash, asset, asset1, asset2, asset3, asset4, asset5, asset6, asset7, asset8, asset9
])
portfolio = Portfolio(USD, [cash, asset, asset1
])
reward = RiskAdjustedReturns(return_algorithm = "sortino", window_size=300)
action_scheme = ManagedRiskOrders(stop=[0.1], take=[0.05, 0.1, 0.04], trade_sizes=[5])
env = default.create(
feed=feed,
portfolio=portfolio,
action_scheme=action_scheme,
reward_scheme=reward,
window_size=config["window_size"]
)
return env
coins = ["coin{}".format(x) for x in range(10)]
dfs = []
funcs = [sp.gbm, sp.heston]
for coin in coins:
df = funcs[random.randint(0, 1)](
base_price=random.randint(1, 2000),
base_volume=random.randint(10, 5000),
start_date="2010-01-01",
times_to_generate=5000,
time_frame='1H').add_prefix(coin+":")
for column in ["close", "open", "high", "low"]:
df[coin+f":diff_{column}"] = df[coin+f":{column}"].apply(np.log).diff().dropna()
df[coin+f":soft_{column}"] = savgol_filter(df[coin+":"+column], 35, 2)
ta.add_all_ta_features(
df,
colprefix=coin+":",
**{k: coin+":" + k for k in ['open', 'high', 'low', 'close', 'volume']})
dfs.append(df)
data = pd.concat(dfs, axis=1)
scaler = MinMaxScaler()
norm_data = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
norm_data.to_csv("fake_data_1h_norm.csv", index=False)
config = {
"window_size": 10,
"data": norm_data
}
env = createEnv(config)
!mkdir -p agents/
agent = A2CAgent(env)
reward = agent.train(n_steps=5000, save_path="agents/", n_episodes = 10)
env = createEnv({
"window_size": 10,
"data": norm_data
})
episode_reward = 0
done = False
obs = env.reset()
while not done:
action = agent.get_action(obs)
obs, reward, done, info = env.step(action)
episode_reward += reward
fig, axs = plt.subplots(1, 2, figsize=(15, 10))
fig.suptitle("Performance")
axs[0].plot(np.arange(len(data["coin0:close"])), data["coin0:close"], label="price")
axs[0].set_title("Trading Chart")
performance_df = pd.DataFrame().from_dict(env.action_scheme.portfolio.performance, orient='index')
performance_df.plot(ax=axs[1])
axs[1].set_xlim(0, 5000)
axs[1].set_title("Net Worth")
plt.show()
orDict = OrderedDict()
for k in env.action_scheme.portfolio.performance.keys():
orDict[k] = env.action_scheme.portfolio.performance[k]["net_worth"]
pd.DataFrame().from_dict(orDict, orient='index').plot()
```
Copyright 2019 Google LLC.
SPDX-License-Identifier: Apache-2.0
**Notebook Version** - 1.0.0
```
# Install datacommons
!pip install --upgrade --quiet git+https://github.com/datacommonsorg/[email protected]
```
# Analyzing Income Distribution
The American Community Survey (published by the US Census) annually reports the number of individuals in a given income bracket at the State level. We can use this information, stored in Data Commons, to visualize disparity in income for each State in the US. Our goal for this tutorial will be to generate a plot that visualizes the total number of individuals across a given set of income brackets for a given state.
Before we begin, we'll setup our notebook
```
# Import the Data Commons library
import datacommons as dc
# Import other libraries
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import json
from google.colab import drive
```
We will also need to provide the API with an API key. See the [Analyzing Statistics in Data Commons Using the Python Client API](https://colab.research.google.com/drive/1ZNXTHu3J0W3vo9Mg3kNUpk0hnD6Ce1u6#scrollTo=ijxoBhFHjo3Z) to see how to set this up for a Colab Notebook.
```
# Mount the Drive
drive.mount('/content/drive', force_remount=True)
# REPLACE THIS with the path to your key.
key_path = '/content/drive/My Drive/DataCommons/secret.json'
# Read the key in and provide it to the Data Commons API
with open(key_path, 'r') as f:
secrets = json.load(f)
dc.set_api_key(secrets['dc_api_key'])
```
## Preparing the Data
We'll begin by creating a dataframe with states and their total population. We can use **`get_places_in`** to get all States within the United States. We can then call **`get_populations`** and **`get_observations`** to get the population of all persons in each State.
```
# Initialize a DataFrame holding the USA.
data = pd.DataFrame({'country': ['country/USA']})
# Add a column for states and get their names
data['state'] = dc.get_places_in(data['country'], 'State')
data = dc.flatten_frame(data)
# Get all state names and store it in a column "name"
data['name'] = dc.get_property_values(data['state'], 'name')
data = dc.flatten_frame(data)
# Get StatisticalPopulations representing all persons in each state.
data['all_pop'] = dc.get_populations(data['state'], 'Person')
# Get the total count of all persons in each population
data['all'] = dc.get_observations(data['all_pop'],
'count',
'measuredValue',
'2017',
measurement_method='CenusACS5yrSurvey')
# Display the first five rows of the table.
data.head(5)
```
### Querying for Income Brackets
Next, let's get the population count for each income bracket. The Data Commons graph identifies 16 different income brackets, and for each bracket and state we can get the household count. Remember that we first get the StatisticalPopulation, and then a corresponding Observation. We'll filter to observations published in 2017 by the American Community Survey.
```
# A list of income brackets
income_brackets = [
"USDollarUpto10000",
"USDollar10000To14999",
"USDollar15000To19999",
"USDollar20000To24999",
"USDollar25000To29999",
"USDollar30000To34999",
"USDollar35000To39999",
"USDollar40000To44999",
"USDollar45000To49999",
"USDollar50000To59999",
"USDollar60000To74999",
"USDollar75000To99999",
"USDollar100000To124999",
"USDollar125000To149999",
"USDollar150000To199999",
"USDollar200000Onwards",
]
# Add a column containing the population count for each income bracket
for bracket in income_brackets:
# Get the new column names
pop_col = '{}_pop'.format(bracket)
obs_col = bracket
# Create the constraining properties map
pvs = {'income': bracket}
# Get the StatisticalPopulation and Observation
data[pop_col] = dc.get_populations(data['state'], 'Household',
constraining_properties=pvs)
data[obs_col] = dc.get_observations(data[pop_col],
'count',
'measuredValue',
'2017',
measurement_method='CenusACS5yrSurvey')
# Display the table
data.head(5)
```
Let's limit the size of this DataFrame by selecting columns with only the State name and Observations.
```
# Select columns that will be used for plotting
data = data[['name', 'all'] + income_brackets]
# Display the table
data.head(5)
```
## Analyzing the Data
Let's plot our data as a histogram. Notice that the income ranges as tabulated by the US Census are not equal: at the low end a bracket spans 0-9,999, whereas towards the top the bracket 150,000-199,999 is five times as broad! We will make the width of each column correspond to its range, which gives us an idea of the total earnings, not just the number of people in each group.
First we provide code for generating the plot.
```
# Histogram bins
label_to_range = {
"USDollarUpto10000": [0, 9999],
"USDollar10000To14999": [10000, 14999],
"USDollar15000To19999": [15000, 19999],
"USDollar20000To24999": [20000, 24999],
"USDollar25000To29999": [25000, 29999],
"USDollar30000To34999": [30000, 34999],
"USDollar35000To39999": [35000, 39999],
"USDollar40000To44999": [40000, 44999],
"USDollar45000To49999": [45000, 49999],
"USDollar50000To59999": [50000, 59999],
"USDollar60000To74999": [60000, 74999],
"USDollar75000To99999": [75000, 99999],
"USDollar100000To124999": [100000, 124999],
"USDollar125000To149999": [125000, 149999],
"USDollar150000To199999": [150000, 199999],
"USDollar200000Onwards": [250000, 300000],
}
bins = [
0, 10000, 15000, 20000, 25000, 30000, 35000, 40000, 45000, 50000, 60000,
75000, 100000, 125000, 150000, 250000
]
def plot_income(data, state_name):
# Assert that "state_name" is a valid state name
frame_search = data.loc[data['name'] == state_name].squeeze()
if frame_search.shape[0] == 0:
print('{} does not have sufficient income data to generate the plot!'.format(state_name))
return
    # Keep only the income bracket counts (drop the 'name' and 'all' columns)
    data = frame_search[2:]
# Calculate the bar lengths
lengths = []
for bracket in income_brackets:
r = label_to_range[bracket]
lengths.append(int((r[1] - r[0]) / 18))
# Calculate the x-axis positions
pos, total = [], 0
for l in lengths:
pos.append(total + (l // 2))
total += l
# Plot the histogram
plt.figure(figsize=(12, 10))
plt.xticks(pos, income_brackets, rotation=90)
plt.grid(True)
plt.bar(pos, data.values, lengths, color='b', alpha=0.3)
# Return the resulting frame.
return frame_search
```
We can then call this code with a state to plot the income bracket sizes.
```
#@title Enter State to plot { run: "auto" }
state_name = "Tennessee" #@param ["Missouri", "Arkansas", "Arizona", "Ohio", "Connecticut", "Vermont", "Illinois", "South Dakota", "Iowa", "Oklahoma", "Kansas", "Washington", "Oregon", "Hawaii", "Minnesota", "Idaho", "Alaska", "Colorado", "Delaware", "Alabama", "North Dakota", "Michigan", "California", "Indiana", "Kentucky", "Nebraska", "Louisiana", "New Jersey", "Rhode Island", "Utah", "Nevada", "South Carolina", "Wisconsin", "New York", "North Carolina", "New Hampshire", "Georgia", "Pennsylvania", "West Virginia", "Maine", "Mississippi", "Montana", "Tennessee", "New Mexico", "Massachusetts", "Wyoming", "Maryland", "Florida", "Texas", "Virginia"]
result = plot_income(data, state_name)
# Show the plot
plt.show()
```
and we can display the raw table of values.
```
# Additionally print the table of income bracket sizes
result
```
This is only the beginning! What else can you analyze? For example, you could try computing a measure of income disparity in each state (see [Gini Coefficient](https://en.wikipedia.org/wiki/Gini_coefficient)).
You could then expand the dataframe to include more information and analyze how attributes like education level, crime, or even weather affect income disparity.
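As a rough starting point (a sketch, not part of the original tutorial), we can approximate each household's income by the midpoint of its bracket and compute a Gini coefficient from the bracket counts already in `data`. The bracket midpoints, the 250,000-300,000 stand-in for the open-ended top bracket (from `label_to_range` above), and the trapezoidal Lorenz-curve approximation are all illustrative assumptions:
```
def approx_gini(row):
    # Midpoint income of each bracket, in the same (ascending) order as income_brackets
    midpoints = np.array([sum(label_to_range[b]) / 2 for b in income_brackets])
    counts = row[income_brackets].values.astype(float)
    # Lorenz curve: cumulative share of households vs. cumulative share of (approximate) income
    cum_people = np.cumsum(counts) / counts.sum()
    cum_income = np.cumsum(counts * midpoints) / (counts * midpoints).sum()
    lorenz_x = np.concatenate([[0.0], cum_people])
    lorenz_y = np.concatenate([[0.0], cum_income])
    # Gini = 1 - 2 * area under the Lorenz curve (trapezoidal approximation)
    return 1.0 - 2.0 * np.trapz(lorenz_y, lorenz_x)

print(approx_gini(data.loc[data['name'] == state_name].squeeze()))
```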
## Importing dependencies and loading the data
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_boston
dataset=load_boston()
dataset
```
### The given data contains a set of features and the target house prices in Boston, so let's first transform it into a DataFrame
```
df=pd.DataFrame(dataset.data,columns=dataset.feature_names)
df.head()
```
Let's add the target variable to the dataframe
```
df['Target']=dataset.target
df.head()
```
## Now that the data is in a DataFrame, let's do some Exploratory Data Analysis (EDA)
```
df.describe()
```
Let's see if there is any missing data or not
```
df.isnull().sum()
```
### Since there is no missing data, let's look at the correlation between the features
```
df.corr()
```
### Let's visualize the data on the heatmap
```
plt.figure(figsize=(10,10)) #this increase the dimension of the figure
sns.heatmap(df.corr(),annot=True) #this plots the data into the heatmap
```
### let's see the distribution of the data since all the features are continuous
```
cont=[feature for feature in df.columns]
cont
for feature in cont:
sns.distplot(df[feature])
plt.show()
```
#### Let's draw the regplot between the features and target
```
for feature in cont:
if feature!='Target':
sns.regplot(x=feature,y='Target',data=df)
plt.show()
cont
plt.figure(figsize=(10,10)) #this increase the dimension of the figure
sns.heatmap(df.corr(),annot=True)
```
### Let's do some feature engineering and drop some features which have low correlation with the target
```
'''Now let's take some of the features and test a model and after
seeing the result we can again take some more features to see if the model is working fine or not.'''
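# (Illustrative alternative, not from the original notebook; the 0.3 cutoff is an arbitrary
# assumption.) Instead of hand-picking columns, we could keep every feature whose absolute
# correlation with the target exceeds a threshold:
corr_with_target = df.corr()['Target'].abs().drop('Target')
high_corr_features = corr_with_target[corr_with_target > 0.3].index.tolist()
high_corr_features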
x=df.loc[:,[
'ZN',
'INDUS',
'NOX',
'RM',
'AGE',
'DIS',
'TAX',
'PTRATIO',
'B',
'LSTAT']]
y=df.Target
x.head()
# Now let's split the data into train and test data using train_test_split
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.2,random_state=5)
# fitting the model
from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(x_train,y_train)
# Predicting the values
y_pre=model.predict(x_test)
y_pre
# Let's see how our model is performing
from sklearn.metrics import r2_score
score=r2_score(y_test,y_pre)
score
sns.scatterplot(y_test,y_pre)
sns.distplot(y_test-y_pre)
```
# Move Files
```
import numpy as np
import pandas as pd
import os
from datetime import datetime
import shutil
import random
pd.set_option('max_colwidth', -1)
```
# Create list of current files
```
SAGEMAKER_REPO_PATH = r'/home/ec2-user/SageMaker/classify-streetview'
ORIGINAL_IMAGE_PATH = os.path.join(SAGEMAKER_REPO_PATH, 'images')
ORIGINAL_TRAIN_PATH = os.path.join(ORIGINAL_IMAGE_PATH, 'train')
os.listdir(ORIGINAL_TRAIN_PATH)
subset_list = ['train', 'valid', 'test']
# Include obstacle and surface prob in case we need to move those images
class_list = ['3_present', '0_missing', '1_null', '2_obstacle', '4_surface_prob']
original_df_list = []
# Get all existing jpgs with their detailed info
for split in subset_list:
for class_name in class_list:
full_folder_path = os.path.join(ORIGINAL_IMAGE_PATH, split, class_name)
jpg_names = os.listdir(full_folder_path)
df_part = pd.DataFrame({'jpg_name' : jpg_names, 'original_folder_path' : full_folder_path, 'original_group' : split, 'original_label' : class_name})
original_df_list.append(df_part)
# Create a full list all files
df_original = pd.concat(original_df_list)
print(df_original.shape)
df_original.head()
df_original.to_csv('March-SmartCrop-ImageList.csv', index = False)
df_original['original_label'].value_counts()
```
## Get the New ROI Image Details
```
df_train = pd.read_csv('train_labels.csv')
df_train['new_group'] = 'train'
df_val = pd.read_csv('validation_labels.csv')
df_val['new_group'] = 'valid'
df_test = pd.read_csv('test_labels.csv')
df_test['new_group'] = 'test'
df_new_roi = pd.concat([df_train, df_val, df_test])
print(df_new_roi.shape)
df_new_roi.head()
df_new_roi['jpg_name'] = df_new_roi['img_id'].astype(str) + '_' + df_new_roi['heading'].astype(str) + '_' +df_new_roi['crop_number'].astype(str) + '.jpg'
df_new_roi.head()
```
# Combine ROI Images with Original Image details
```
df_combine = df_new_roi.merge(df_original, how = 'outer', left_on = 'jpg_name', right_on = 'jpg_name')
print(df_combine.shape)
df_combine.head()
df_combine['crop_number'].value_counts(dropna = False)
df_combine.loc[df_combine['crop_number'].isna()]
df_combine['original_folder_path'].value_counts(dropna = False).head()
df_group_label = df_combine.groupby(['ground_truth', 'original_label'])['jpg_name'].count()
df_group_label
df_combine['jpg_name'].value_counts().describe()
```
# Observations
* There's exactly 1 row per jpg_name
* There's a row with ipynb_checkpoints, which is fine
* There are some lost images (mainly null)
* The grouping by label showing how images move around into the new "ground_truth"
# Create the list of files before and after locations
```
df_move = df_combine.dropna().copy()
df_move.shape
df_move['ground_truth'].value_counts()
df_move['new_group'].value_counts()
df_move.head()
df_move['new_folder_path'] = SAGEMAKER_REPO_PATH + '/roi-images/' + df_move['new_group'] + '/' + df_move['ground_truth']
df_move.head()
df_move.to_csv('roi-images-sagemaker-paths.csv', index = False)
```
# Actually Copy the Images
```
# Make sure all new destination folders exist
unique_new_folders = list(df_move['new_folder_path'].unique())
print(len(unique_new_folders))
for new_folder in unique_new_folders:
if not os.path.exists(new_folder):
os.makedirs(new_folder)
print(new_folder)
for index, row in df_move.iterrows():
original = os.path.join(row['original_folder_path'], row['jpg_name'])
target = os.path.join(row['new_folder_path'], row['jpg_name'])
try:
shutil.copyfile(original, target)
except:
print(f"could not copy: {row['jpg_name']}")
```
# Make an alphabetical list of the test images
```
df_test = df_move.loc[df_move['new_group'] == 'test']
print(df_test.shape)
df_test.columns
keep_cols = ['img_id', 'heading', 'crop_number', '0_missing', '1_null', '2_present', 'count_all', 'ground_truth', 'jpg_name', 'new_folder_path']
df_test_keep = df_test[keep_cols].copy()
df_test_keep = df_test_keep.sort_values(['new_folder_path', 'jpg_name'])
df_test_keep.head()
df_test_keep.to_csv('test_roi_image_locations_sorted.csv', index = False)
```
## Main Driver Notebook for Training Graph NNs on TSP for Edge Classification
### MODELS
- GatedGCN
- GCN
- GAT
- GraphSage
- GIN
- MoNet
- MLP
### DATASET
- TSP
### TASK
- Edge Classification, i.e. Classifying each edge as belonging/not belonging to the optimal TSP solution set.
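For orientation (a toy sketch with made-up tensors, not code from this repo; the use of scikit-learn's `f1_score` here is just for illustration), edge classification means producing one score per edge of a graph and comparing it against a binary in-tour/not-in-tour label, which the training loop below summarizes with an F1 score.
```
import torch
from sklearn.metrics import f1_score

edge_logits = torch.randn(12)             # one raw score per edge (toy values)
edge_labels = torch.randint(0, 2, (12,))  # 1 = edge belongs to the optimal tour
edge_preds = (torch.sigmoid(edge_logits) > 0.5).long()
print(f1_score(edge_labels.numpy(), edge_preds.numpy()))
```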
```
"""
IMPORTING LIBS
"""
import dgl
import numpy as np
import os
import socket
import time
import random
import glob
import argparse, json
import pickle
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader
from tensorboardX import SummaryWriter
from tqdm import tqdm
class DotDict(dict):
def __init__(self, **kwds):
self.update(kwds)
self.__dict__ = self
# """
# AUTORELOAD IPYTHON EXTENSION FOR RELOADING IMPORTED MODULES
# """
def in_ipynb():
try:
cfg = get_ipython().config
return True
except NameError:
return False
notebook_mode = in_ipynb()
print(notebook_mode)
if notebook_mode == True:
%load_ext autoreload
%autoreload 2
"""
IMPORTING CUSTOM MODULES/METHODS
"""
from nets.TSP_edge_classification.load_net import gnn_model # import all GNNS
from data.data import LoadData # import dataset
from train.train_TSP_edge_classification import train_epoch, evaluate_network # import train functions
"""
GPU Setup
"""
def gpu_setup(use_gpu, gpu_id):
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
if torch.cuda.is_available() and use_gpu:
print('cuda available with GPU:',torch.cuda.get_device_name(0))
device = torch.device("cuda")
else:
print('cuda not available')
device = torch.device("cpu")
return device
use_gpu = True
gpu_id = -1
device = None
# """
# USER CONTROLS
# """
if notebook_mode == True:
MODEL_NAME = 'MLP'
# MODEL_NAME = 'GCN'
MODEL_NAME = 'GatedGCN'
# MODEL_NAME = 'GAT'
# MODEL_NAME = 'GraphSage'
# MODEL_NAME = 'DiffPool'
# MODEL_NAME = 'GIN'
DATASET_NAME = 'TSP'
out_dir = 'out/TSP_edge_classification/'
root_log_dir = out_dir + 'logs/' + MODEL_NAME + "_" + DATASET_NAME + "_" + time.strftime('%Hh%Mm%Ss_on_%b_%d_%Y')
root_ckpt_dir = out_dir + 'checkpoints/' + MODEL_NAME + "_" + DATASET_NAME + "_" + time.strftime('%Hh%Mm%Ss_on_%b_%d_%Y')
print("[I] Loading data (notebook) ...")
dataset = LoadData(DATASET_NAME)
trainset, valset, testset = dataset.train, dataset.val, dataset.test
print("[I] Finished loading.")
MODEL_NAME = 'GatedGCN'
MODEL_NAME = 'GCN'
MODEL_NAME = 'GAT'
#MODEL_NAME = 'GraphSage'
#MODEL_NAME = 'MLP'
#MODEL_NAME = 'GIN'
#MODEL_NAME = 'MoNet'
# """
# PARAMETERS
# """
if notebook_mode == True:
#MODEL_NAME = 'GCN'
n_heads = -1
edge_feat = False
pseudo_dim_MoNet = -1
kernel = -1
gnn_per_block = -1
embedding_dim = -1
pool_ratio = -1
n_mlp_GIN = -1
gated = False
self_loop = False
max_time = 48
if MODEL_NAME == 'MLP':
seed=41; epochs=500; batch_size=64; init_lr=0.001; lr_reduce_factor=0.5; lr_schedule_patience=10; min_lr = 1e-5; weight_decay=0
L=3; hidden_dim=144; out_dim=hidden_dim; dropout=0.0; readout='mean'; gated = False # Change gated = True for Gated MLP model
if MODEL_NAME == 'GCN':
seed=41; epochs=500; batch_size=64; init_lr=0.001; lr_reduce_factor=0.5; lr_schedule_patience=10; min_lr = 1e-5; weight_decay=0
L=4; hidden_dim=128; out_dim=hidden_dim; dropout=0.0; readout='mean';
if MODEL_NAME == 'GraphSage':
seed=41; epochs=500; batch_size=64; init_lr=0.001; lr_reduce_factor=0.5; lr_schedule_patience=10; min_lr = 1e-5; weight_decay=0
L=4; hidden_dim=96; out_dim=hidden_dim; dropout=0.0; readout='mean';
if MODEL_NAME == 'GAT':
seed=41; epochs=500; batch_size=64; init_lr=0.001; lr_reduce_factor=0.5; lr_schedule_patience=10; min_lr = 1e-5; weight_decay=0
L=4; n_heads=8; hidden_dim=16; out_dim=128; dropout=0.0; readout='mean';
if MODEL_NAME == 'GIN':
seed=41; epochs=500; batch_size=64; init_lr=0.001; lr_reduce_factor=0.5; lr_schedule_patience=10; min_lr = 1e-5; weight_decay=0
L=4; hidden_dim=112; out_dim=hidden_dim; dropout=0.0; readout='mean';
if MODEL_NAME == 'MoNet':
seed=41; epochs=500; batch_size=64; init_lr=0.001; lr_reduce_factor=0.5; lr_schedule_patience=10; min_lr = 1e-5; weight_decay=0
L=4; hidden_dim=80; out_dim=hidden_dim; dropout=0.0; readout='mean';
if MODEL_NAME == 'GatedGCN':
seed=41; epochs=500; batch_size=64; init_lr=0.001; lr_reduce_factor=0.5; lr_schedule_patience=10; min_lr = 1e-5; weight_decay=0
L=4; hidden_dim=64; out_dim=hidden_dim; dropout=0.0; readout='mean'; edge_feat = True
# generic new_params
net_params = {}
net_params['device'] = device
net_params['in_dim'] = trainset[0][0].ndata['feat'][0].size(0)
net_params['in_dim_edge'] = trainset[0][0].edata['feat'][0].size(0)
net_params['residual'] = True
net_params['hidden_dim'] = hidden_dim
net_params['out_dim'] = out_dim
num_classes = len(np.unique(np.concatenate(trainset[:][1])))
net_params['n_classes'] = num_classes
net_params['n_heads'] = n_heads
net_params['L'] = L # min L should be 2
net_params['readout'] = "mean"
net_params['graph_norm'] = True
net_params['batch_norm'] = True
net_params['in_feat_dropout'] = 0.0
net_params['dropout'] = 0.0
net_params['edge_feat'] = edge_feat
net_params['self_loop'] = self_loop
# for MLPNet
net_params['gated'] = gated
# specific for MoNet
net_params['pseudo_dim_MoNet'] = 2
net_params['kernel'] = 3
# specific for GIN
net_params['n_mlp_GIN'] = 2
net_params['learn_eps_GIN'] = True
net_params['neighbor_aggr_GIN'] = 'sum'
# specific for graphsage
net_params['sage_aggregator'] = 'meanpool'
# setting seeds
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if device == 'cuda':
torch.cuda.manual_seed(seed)
"""
VIEWING MODEL CONFIG AND PARAMS
"""
def view_model_param(MODEL_NAME, net_params):
model = gnn_model(MODEL_NAME, net_params)
total_param = 0
print("MODEL DETAILS:\n")
#print(model)
for param in model.parameters():
# print(param.data.size())
total_param += np.prod(list(param.data.size()))
print('MODEL/Total parameters:', MODEL_NAME, total_param)
return total_param
if notebook_mode == True:
view_model_param(MODEL_NAME, net_params)
"""
TRAINING CODE
"""
def train_val_pipeline(MODEL_NAME, dataset, params, net_params, dirs):
t0 = time.time()
per_epoch_time = []
DATASET_NAME = dataset.name
#assert net_params['self_loop'] == False, "No self-loop support for %s dataset" % DATASET_NAME
trainset, valset, testset = dataset.train, dataset.val, dataset.test
root_log_dir, root_ckpt_dir, write_file_name, write_config_file = dirs
device = net_params['device']
# Write the network and optimization hyper-parameters in folder config/
with open(write_config_file + '.txt', 'w') as f:
f.write("""Dataset: {},\nModel: {}\n\nparams={}\n\nnet_params={}\n\n\nTotal Parameters: {}\n\n"""\
.format(DATASET_NAME, MODEL_NAME, params, net_params, net_params['total_param']))
log_dir = os.path.join(root_log_dir, "RUN_" + str(0))
writer = SummaryWriter(log_dir=log_dir)
# setting seeds
random.seed(params['seed'])
np.random.seed(params['seed'])
torch.manual_seed(params['seed'])
if device == 'cuda':
torch.cuda.manual_seed(params['seed'])
print("Training Graphs: ", len(trainset))
print("Validation Graphs: ", len(valset))
print("Test Graphs: ", len(testset))
print("Number of Classes: ", net_params['n_classes'])
model = gnn_model(MODEL_NAME, net_params)
model = model.to(device)
optimizer = optim.Adam(model.parameters(), lr=params['init_lr'], weight_decay=params['weight_decay'])
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
factor=params['lr_reduce_factor'],
patience=params['lr_schedule_patience'],
verbose=True)
epoch_train_losses, epoch_val_losses = [], []
epoch_train_f1s, epoch_val_f1s = [], []
train_loader = DataLoader(trainset, batch_size=params['batch_size'], shuffle=True, collate_fn=dataset.collate)
val_loader = DataLoader(valset, batch_size=params['batch_size'], shuffle=False, collate_fn=dataset.collate)
test_loader = DataLoader(testset, batch_size=params['batch_size'], shuffle=False, collate_fn=dataset.collate)
# At any point you can hit Ctrl + C to break out of training early.
try:
with tqdm(range(params['epochs'])) as t:
for epoch in t:
t.set_description('Epoch %d' % epoch)
start = time.time()
epoch_train_loss, epoch_train_f1, optimizer = train_epoch(model, optimizer, device, train_loader, epoch)
epoch_val_loss, epoch_val_f1 = evaluate_network(model, device, val_loader, epoch)
epoch_train_losses.append(epoch_train_loss)
epoch_val_losses.append(epoch_val_loss)
epoch_train_f1s.append(epoch_train_f1)
epoch_val_f1s.append(epoch_val_f1)
writer.add_scalar('train/_loss', epoch_train_loss, epoch)
writer.add_scalar('val/_loss', epoch_val_loss, epoch)
writer.add_scalar('train/_f1', epoch_train_f1, epoch)
writer.add_scalar('val/_f1', epoch_val_f1, epoch)
writer.add_scalar('learning_rate', optimizer.param_groups[0]['lr'], epoch)
_, epoch_test_f1 = evaluate_network(model, device, test_loader, epoch)
t.set_postfix(time=time.time()-start, lr=optimizer.param_groups[0]['lr'],
train_loss=epoch_train_loss, val_loss=epoch_val_loss,
train_f1=epoch_train_f1, val_f1=epoch_val_f1,
test_f1=epoch_test_f1)
per_epoch_time.append(time.time()-start)
# Saving checkpoint
ckpt_dir = os.path.join(root_ckpt_dir, "RUN_")
if not os.path.exists(ckpt_dir):
os.makedirs(ckpt_dir)
torch.save(model.state_dict(), '{}.pkl'.format(ckpt_dir + "/epoch_" + str(epoch)))
files = glob.glob(ckpt_dir + '/*.pkl')
for file in files:
epoch_nb = file.split('_')[-1]
epoch_nb = int(epoch_nb.split('.')[0])
if epoch_nb < epoch-1:
os.remove(file)
scheduler.step(epoch_val_loss)
if optimizer.param_groups[0]['lr'] < params['min_lr']:
print("\n!! LR EQUAL TO MIN LR SET.")
break
# Stop training after params['max_time'] hours
if time.time()-t0 > params['max_time']*3600:
print('-' * 89)
print("Max_time for training elapsed {:.2f} hours, so stopping".format(params['max_time']))
break
except KeyboardInterrupt:
print('-' * 89)
print('Exiting from training early because of KeyboardInterrupt')
_, test_f1 = evaluate_network(model, device, test_loader, epoch)
_, train_f1 = evaluate_network(model, device, train_loader, epoch)
print("Test F1: {:.4f}".format(test_f1))
print("Train F1: {:.4f}".format(train_f1))
print("TOTAL TIME TAKEN: {:.4f}s".format(time.time()-t0))
print("AVG TIME PER EPOCH: {:.4f}s".format(np.mean(per_epoch_time)))
writer.close()
"""
Write the results in out_dir/results folder
"""
with open(write_file_name + '.txt', 'w') as f:
f.write("""Dataset: {},\nModel: {}\n\nparams={}\n\nnet_params={}\n\n{}\n\nTotal Parameters: {}\n\n
FINAL RESULTS\nTEST F1: {:.4f}\nTRAIN F1: {:.4f}\n\n
Total Time Taken: {:.4f}hrs\nAverage Time Per Epoch: {:.4f}s\n\n\n"""\
.format(DATASET_NAME, MODEL_NAME, params, net_params, model, net_params['total_param'],
np.mean(np.array(test_f1)), np.mean(np.array(train_f1)), (time.time()-t0)/3600, np.mean(per_epoch_time)))
# send results to gmail
try:
from gmail import send
subject = 'Result for Dataset: {}, Model: {}'.format(DATASET_NAME, MODEL_NAME)
body = """Dataset: {},\nModel: {}\n\nparams={}\n\nnet_params={}\n\n{}\n\nTotal Parameters: {}\n\n
FINAL RESULTS\nTEST F1: {:.4f}\nTRAIN F1: {:.4f}\n\n
Total Time Taken: {:.4f}hrs\nAverage Time Per Epoch: {:.4f}s\n\n\n"""\
.format(DATASET_NAME, MODEL_NAME, params, net_params, model, net_params['total_param'],
np.mean(np.array(test_f1)), np.mean(np.array(train_f1)), (time.time()-t0)/3600, np.mean(per_epoch_time))
send(subject, body)
except:
pass
def main(notebook_mode=False,config=None):
"""
USER CONTROLS
"""
# terminal mode
if notebook_mode==False:
parser = argparse.ArgumentParser()
parser.add_argument('--config', help="Please give a config.json file with training/model/data/param details")
parser.add_argument('--gpu_id', help="Please give a value for gpu id")
parser.add_argument('--model', help="Please give a value for model name")
parser.add_argument('--dataset', help="Please give a value for dataset name")
parser.add_argument('--out_dir', help="Please give a value for out_dir")
parser.add_argument('--seed', help="Please give a value for seed")
parser.add_argument('--epochs', help="Please give a value for epochs")
parser.add_argument('--batch_size', help="Please give a value for batch_size")
parser.add_argument('--init_lr', help="Please give a value for init_lr")
parser.add_argument('--lr_reduce_factor', help="Please give a value for lr_reduce_factor")
parser.add_argument('--lr_schedule_patience', help="Please give a value for lr_schedule_patience")
parser.add_argument('--min_lr', help="Please give a value for min_lr")
parser.add_argument('--weight_decay', help="Please give a value for weight_decay")
parser.add_argument('--print_epoch_interval', help="Please give a value for print_epoch_interval")
parser.add_argument('--L', help="Please give a value for L")
parser.add_argument('--hidden_dim', help="Please give a value for hidden_dim")
parser.add_argument('--out_dim', help="Please give a value for out_dim")
parser.add_argument('--residual', help="Please give a value for residual")
parser.add_argument('--edge_feat', help="Please give a value for edge_feat")
parser.add_argument('--readout', help="Please give a value for readout")
parser.add_argument('--kernel', help="Please give a value for kernel")
parser.add_argument('--n_heads', help="Please give a value for n_heads")
parser.add_argument('--gated', help="Please give a value for gated")
parser.add_argument('--in_feat_dropout', help="Please give a value for in_feat_dropout")
parser.add_argument('--dropout', help="Please give a value for dropout")
parser.add_argument('--graph_norm', help="Please give a value for graph_norm")
parser.add_argument('--batch_norm', help="Please give a value for batch_norm")
parser.add_argument('--sage_aggregator', help="Please give a value for sage_aggregator")
parser.add_argument('--data_mode', help="Please give a value for data_mode")
parser.add_argument('--num_pool', help="Please give a value for num_pool")
parser.add_argument('--gnn_per_block', help="Please give a value for gnn_per_block")
parser.add_argument('--embedding_dim', help="Please give a value for embedding_dim")
parser.add_argument('--pool_ratio', help="Please give a value for pool_ratio")
parser.add_argument('--linkpred', help="Please give a value for linkpred")
parser.add_argument('--cat', help="Please give a value for cat")
parser.add_argument('--self_loop', help="Please give a value for self_loop")
parser.add_argument('--max_time', help="Please give a value for max_time")
args = parser.parse_args()
with open(args.config) as f:
config = json.load(f)
# device
if args.gpu_id is not None:
config['gpu']['id'] = int(args.gpu_id)
config['gpu']['use'] = True
device = gpu_setup(config['gpu']['use'], config['gpu']['id'])
# model, dataset, out_dir
if args.model is not None:
MODEL_NAME = args.model
else:
MODEL_NAME = config['model']
if args.dataset is not None:
DATASET_NAME = args.dataset
else:
DATASET_NAME = config['dataset']
dataset = LoadData(DATASET_NAME)
if args.out_dir is not None:
out_dir = args.out_dir
else:
out_dir = config['out_dir']
# parameters
params = config['params']
if args.seed is not None:
params['seed'] = int(args.seed)
if args.epochs is not None:
params['epochs'] = int(args.epochs)
if args.batch_size is not None:
params['batch_size'] = int(args.batch_size)
if args.init_lr is not None:
params['init_lr'] = float(args.init_lr)
if args.lr_reduce_factor is not None:
params['lr_reduce_factor'] = float(args.lr_reduce_factor)
if args.lr_schedule_patience is not None:
params['lr_schedule_patience'] = int(args.lr_schedule_patience)
if args.min_lr is not None:
params['min_lr'] = float(args.min_lr)
if args.weight_decay is not None:
params['weight_decay'] = float(args.weight_decay)
if args.print_epoch_interval is not None:
params['print_epoch_interval'] = int(args.print_epoch_interval)
if args.max_time is not None:
params['max_time'] = float(args.max_time)
# network parameters
net_params = config['net_params']
net_params['device'] = device
net_params['gpu_id'] = config['gpu']['id']
net_params['batch_size'] = params['batch_size']
if args.L is not None:
net_params['L'] = int(args.L)
if args.hidden_dim is not None:
net_params['hidden_dim'] = int(args.hidden_dim)
if args.out_dim is not None:
net_params['out_dim'] = int(args.out_dim)
if args.residual is not None:
net_params['residual'] = True if args.residual=='True' else False
if args.edge_feat is not None:
net_params['edge_feat'] = True if args.edge_feat=='True' else False
if args.readout is not None:
net_params['readout'] = args.readout
if args.kernel is not None:
net_params['kernel'] = int(args.kernel)
if args.n_heads is not None:
net_params['n_heads'] = int(args.n_heads)
if args.gated is not None:
net_params['gated'] = True if args.gated=='True' else False
if args.in_feat_dropout is not None:
net_params['in_feat_dropout'] = float(args.in_feat_dropout)
if args.dropout is not None:
net_params['dropout'] = float(args.dropout)
if args.graph_norm is not None:
net_params['graph_norm'] = True if args.graph_norm=='True' else False
if args.batch_norm is not None:
net_params['batch_norm'] = True if args.batch_norm=='True' else False
if args.sage_aggregator is not None:
net_params['sage_aggregator'] = args.sage_aggregator
if args.data_mode is not None:
net_params['data_mode'] = args.data_mode
if args.num_pool is not None:
net_params['num_pool'] = int(args.num_pool)
if args.gnn_per_block is not None:
net_params['gnn_per_block'] = int(args.gnn_per_block)
if args.embedding_dim is not None:
net_params['embedding_dim'] = int(args.embedding_dim)
if args.pool_ratio is not None:
net_params['pool_ratio'] = float(args.pool_ratio)
if args.linkpred is not None:
net_params['linkpred'] = True if args.linkpred=='True' else False
if args.cat is not None:
net_params['cat'] = True if args.cat=='True' else False
if args.self_loop is not None:
net_params['self_loop'] = True if args.self_loop=='True' else False
# notebook mode
if notebook_mode:
# parameters
params = config['params']
# dataset
DATASET_NAME = config['dataset']
dataset = LoadData(DATASET_NAME)
# device
device = gpu_setup(config['gpu']['use'], config['gpu']['id'])
out_dir = config['out_dir']
# GNN model
MODEL_NAME = config['model']
# network parameters
net_params = config['net_params']
net_params['device'] = device
net_params['gpu_id'] = config['gpu']['id']
net_params['batch_size'] = params['batch_size']
# TSP
net_params['in_dim'] = dataset.train[0][0].ndata['feat'][0].shape[0]
net_params['in_dim_edge'] = dataset.train[0][0].edata['feat'][0].size(0)
num_classes = len(np.unique(np.concatenate(dataset.train[:][1])))
net_params['n_classes'] = num_classes
root_log_dir = out_dir + 'logs/' + MODEL_NAME + "_" + DATASET_NAME + "_GPU" + str(config['gpu']['id']) + "_" + time.strftime('%Hh%Mm%Ss_on_%b_%d_%Y')
root_ckpt_dir = out_dir + 'checkpoints/' + MODEL_NAME + "_" + DATASET_NAME + "_GPU" + str(config['gpu']['id']) + "_" + time.strftime('%Hh%Mm%Ss_on_%b_%d_%Y')
write_file_name = out_dir + 'results/result_' + MODEL_NAME + "_" + DATASET_NAME + "_GPU" + str(config['gpu']['id']) + "_" + time.strftime('%Hh%Mm%Ss_on_%b_%d_%Y')
write_config_file = out_dir + 'configs/config_' + MODEL_NAME + "_" + DATASET_NAME + "_GPU" + str(config['gpu']['id']) + "_" + time.strftime('%Hh%Mm%Ss_on_%b_%d_%Y')
dirs = root_log_dir, root_ckpt_dir, write_file_name, write_config_file
if not os.path.exists(out_dir + 'results'):
os.makedirs(out_dir + 'results')
if not os.path.exists(out_dir + 'configs'):
os.makedirs(out_dir + 'configs')
net_params['total_param'] = view_model_param(MODEL_NAME, net_params)
train_val_pipeline(MODEL_NAME, dataset, params, net_params, dirs)
if notebook_mode==True:
config = {}
# gpu config
gpu = {}
gpu['use'] = use_gpu
gpu['id'] = gpu_id
config['gpu'] = gpu
# GNN model, dataset, out_dir
config['model'] = MODEL_NAME
config['dataset'] = DATASET_NAME
config['out_dir'] = out_dir
# parameters
params = {}
params['seed'] = seed
params['epochs'] = epochs
params['batch_size'] = batch_size
params['init_lr'] = init_lr
params['lr_reduce_factor'] = lr_reduce_factor
params['lr_schedule_patience'] = lr_schedule_patience
params['min_lr'] = min_lr
params['weight_decay'] = weight_decay
params['print_epoch_interval'] = 5
params['max_time'] = max_time
config['params'] = params
# network parameters
config['net_params'] = net_params
# convert to .py format
from utils.cleaner_main import *
cleaner_main('main_TSP_edge_classification')
main(True,config)
else:
main()
```
# k-Nearest Neighbor (kNN) exercise
#### This assignment was adapted from Stanford's CS231n course: http://cs231n.stanford.edu/
The kNN classifier consists of two stages:
- During training, the classifier takes the training data and simply remembers it
- During testing, kNN classifies every test image by comparing it to all training images and transferring the labels of the k most similar training examples
- The value of k is cross-validated
In this exercise you will implement these steps, understand the basic Image Classification pipeline and cross-validation, and gain proficiency in writing efficient, vectorized code.
### YOUR NAME: YOUR_NAME
### List of collaborators (optional): N/A
```
# Run some setup code for this notebook.
import random
import numpy as np
from psyc272cava.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Information about the CIFAR-10 dataset: https://www.cs.toronto.edu/~kriz/cifar.html
# Load the raw CIFAR-10 data.
cifar10_dir = 'psyc272cava/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 10
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)
from psyc272cava.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
```
We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
1. First we must compute the distances between all test examples and all train examples.
2. Given these distances, for each test example we find the k nearest examples and have them vote for the label
Let's begin by computing the distance matrix between all training and test examples. For example, if there are **Ntr** training examples and **Nte** test examples, this stage should result in an **Nte x Ntr** matrix where each element (i,j) is the distance between the i-th test example and the j-th training example.
**Note: For the three distance computations that we require you to implement in this notebook, you may not use the np.linalg.norm() function that numpy provides.**
First, open `psyc272cava/classifiers/k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
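If the expected shape of this computation is unclear, here is a minimal, self-contained sketch of the double-loop idea on tiny random arrays. The array names are made up for illustration only; the graded version belongs in `k_nearest_neighbor.py` and operates on the CIFAR-10 arrays above.
```
import numpy as np

# Tiny stand-ins for the flattened image arrays used in this notebook.
toy_train = np.random.randn(6, 4)   # 6 "training images" with 4 features each
toy_test = np.random.randn(3, 4)    # 3 "test images"

# dists[i, j] = Euclidean distance between the i-th test and the j-th training example.
dists = np.zeros((toy_test.shape[0], toy_train.shape[0]))
for i in range(toy_test.shape[0]):
    for j in range(toy_train.shape[0]):
        diff = toy_test[i] - toy_train[j]
        dists[i, j] = np.sqrt(np.sum(diff ** 2))

print(dists.shape)  # (3, 6), i.e. num_test x num_train
```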
```
# Open psyc272cava/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```
You should expect to see approximately `27%` accuracy. Now let's try out a larger `k`, say `k = 5`:
```
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```
You should expect to see a slightly better performance than with `k = 1`.
```
# Now let's speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of the difference of two matrices
# is the square root of the sum of squared differences of all their elements; in
# other words, reshape the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('One loop difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('No loop difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Let's compare how fast the implementations are
def time_function(f, *args):
"""
Call a function f with args and return the time (in seconds) that it took to execute.
"""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)
# You should see significantly faster performance with the fully vectorized implementation!
# NOTE: depending on what machine you're using,
# you might not see a speedup when you go from two loops to one loop,
# and might even see a slow-down.
```
### Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
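Before filling in the TODO blocks in the next cell, it may help to see the overall shape of a k-fold loop. The following is only a sketch of one possible approach: it assumes the `KNearestNeighbor` interface used above (`train(X, y)` and `predict(X, k=...)`) and defines its own small `num_folds` and `k_choices` so that it stays self-contained; your graded answer still goes inside the TODO blocks below.
```
import numpy as np

num_folds = 5
k_choices = [1, 3, 5]

# np.array_split also handles training-set sizes that are not divisible by num_folds.
X_folds = np.array_split(X_train, num_folds)
y_folds = np.array_split(y_train, num_folds)

k_to_acc = {k: [] for k in k_choices}
for k in k_choices:
    for fold in range(num_folds):
        # All folds except `fold` form the training split; fold `fold` is the validation split.
        X_tr = np.concatenate([X_folds[i] for i in range(num_folds) if i != fold])
        y_tr = np.concatenate([y_folds[i] for i in range(num_folds) if i != fold])
        X_val, y_val = X_folds[fold], y_folds[fold]

        knn = KNearestNeighbor()
        knn.train(X_tr, y_tr)
        y_val_pred = knn.predict(X_val, k=k)
        k_to_acc[k].append(np.mean(y_val_pred == y_val))

for k in k_choices:
    print('k = %d, mean accuracy = %f' % (k, np.mean(k_to_acc[k])))
```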
```
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
num_split = X_train.shape[0] // num_folds
acc_k = np.zeros((len(k_choices), num_folds), dtype=float)  # np.float was removed in newer NumPy versions
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print('k = %d, accuracy = %f' % (k, accuracy))
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 9
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```
# Introduction to Qiskit
Welcome to the Quantum Challenge! Here you will be using Qiskit, the open source quantum software development kit developed by IBM Quantum and community members around the globe. The following exercises will familiarize you with the basic elements of Qiskit and quantum circuits.
To begin, let us define what a quantum circuit is:
> **"A quantum circuit is a computational routine consisting of coherent quantum operations on quantum data, such as qubits. It is an ordered sequence of quantum gates, measurements, and resets, which may be conditioned on real-time classical computation."** (https://qiskit.org/textbook/ch-algorithms/defining-quantum-circuits.html)
While this might be clear to a quantum physicist, don't worry if it is not self-explanatory to you. During this exercise you will learn what a qubit is, how to apply quantum gates to it, and how to measure its final state. You will then be able to create your own quantum circuits! By the end, you should be able to explain the fundamentals of quantum circuits to your colleagues.
Before starting with the exercises, please run cell *Cell 1* below by clicking on it and pressing 'shift' + 'enter'. This is the general way to execute a code cell in the Jupyter notebook environment that you are using now. While it is running, you will see `In [*]:` in the top left of that cell. Once it finishes running, you will see a number instead of the star, which indicates how many cells you've run. You can find more information about Jupyter notebooks here: https://qiskit.org/textbook/ch-prerequisites/python-and-jupyter-notebooks.html.
---
For useful tips to complete this exercise as well as pointers for communicating with other participants and asking questions, please take a look at the following [repository](https://github.com/qiskit-community/may4_challenge_exercises). You will also find a copy of these exercises, so feel free to edit and experiment with these notebooks.
---
```
# Cell 1
import numpy as np
from qiskit import Aer, QuantumCircuit, execute
from qiskit.visualization import plot_histogram
from IPython.display import display, Math, Latex
from may4_challenge import plot_state_qsphere
from may4_challenge.ex1 import minicomposer
from may4_challenge.ex1 import check1, check2, check3, check4, check5, check6, check7, check8
from may4_challenge.ex1 import return_state, vec_in_braket, statevec
```
## Exercise I: Basic Operations on Qubits and Measurements
### Writing down single-qubit states
Let us start by looking at a single qubit. The main difference from a classical bit, which can take only the values 0 and 1, is that a quantum bit, or **qubit**, can be in the states $\vert0\rangle$ and $\vert1\rangle$, as well as in a linear combination of these two states. This feature is known as superposition, and it allows us to write the most general state of a qubit as:
$$\vert\psi\rangle = \sqrt{1-p}\vert0\rangle + e^{i \phi} \sqrt{p} \vert1\rangle$$
If we were to measure the state of this qubit, we would find the result $1$ with probability $p$, and the result $0$ with probability $1-p$. As you can see, the total probability is $1$, meaning that we will indeed measure either $0$ or $1$, and no other outcome exists.
In addition to $p$, you might have noticed another parameter above. The variable $\phi$ indicates the relative quantum phase between the two states $\vert0\rangle$ and $\vert1\rangle$. As we will discover later, this relative phase is quite important. For now, it suffices to note that the quantum phase is what enables interference between quantum states, resulting in our ability to write quantum algorithms for solving specific tasks.
If you are interested in learning more, we refer you to [the section in the Qiskit textbook on representations of single-qubit states](https://qiskit.org/textbook/ch-states/representing-qubit-states.html).
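As a quick numerical sanity check of this parametrization, you can build the state as a plain NumPy vector and verify the measurement probabilities. This snippet is independent of Qiskit, and the values of `p` and `phi` are just examples:
```
import numpy as np

p, phi = 0.25, np.pi / 3   # example values

# |psi> = sqrt(1-p)|0> + exp(i*phi)*sqrt(p)|1>, written as a length-2 complex vector.
psi = np.array([np.sqrt(1 - p), np.exp(1j * phi) * np.sqrt(p)])

probs = np.abs(psi) ** 2
print(probs)         # [0.75  0.25]  ->  P(0) = 1-p, P(1) = p
print(probs.sum())   # ~1.0, so the state is properly normalized
```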
### Visualizing quantum states
We visualize quantum states throughout this exercise using what is known as a `qsphere`. Here is how the `qsphere` looks for the states $\vert0\rangle$ and $\vert1\rangle$, respectively. Note that the top-most part of the sphere represents the state $\vert0\rangle$, while the bottom represents $\vert1\rangle$.
<img src="qsphere01.png" alt="qsphere with states 0 and 1" style="width: 400px;"/>
It should be no surprise that the superposition state with quantum phase $\phi = 0$ and probability $p = 1/2$ (meaning an equal likelihood of measuring both 0 and 1) is shown on the `qsphere` with two points. However, note also that the size of the circles at the two points is smaller than when we had simply $\vert0\rangle$ and $\vert1\rangle$ above. This is because the size of the circles is proportional to the probability of measuring each one, which is now reduced by half.
<img src="qsphereplus.png" alt="qsphere with superposition 1" style="width: 200px;"/>
In the case of superposition states, where the quantum phase is non-zero, the qsphere allows us to visualize that phase by changing the color of the respective blob. For example, the state with $\phi = 90^\circ$ (degrees) and probability $p = 1/2$ is shown in the `qsphere` below.
<img src="qspherey.png" alt="qsphere with superposition 2" style="width: 200px;"/>
### Manipulating qubits
Qubits are manipulated by applying quantum gates. Let's go through an overview of the different gates that we will consider in the following exercises.
First, let's describe how we can change the value of $p$ for our general quantum state. To do this, we will use two gates:
1. **$X$-gate**: This gate flips between the two states $\vert0\rangle$ and $\vert1\rangle$. This operation is the same as the classical NOT gate. As a result, the $X$-gate is sometimes referred to as a bit flip or NOT gate. Mathematically, the $X$ gate changes $p$ to $1-p$, so in particular from 0 to 1, and vice versa.
2. **$H$-gate**: This gate allows us to go from the state $\vert0\rangle$ to the state $\frac{1}{\sqrt{2}}\left(\vert0\rangle + \vert1\rangle\right)$. This state is also known as the $\vert+\rangle$ state. Mathematically, this means going from $p=0, \phi=0$ to $p=1/2, \phi=0$. As the final state of the qubit is a superposition of $\vert0\rangle$ and $\vert1\rangle$, the Hadamard gate represents a true quantum operation.
Notice that both gates changed the value of $p$, but not $\phi$. Fortunately for us, it's quite easy to visualize the action of these gates by looking at the figure below.
<img src="quantumgates.png" alt="quantum gates" style="width: 400px;"/>
Once we have the state $\vert+\rangle$, we can then change the quantum phase by applying several other gates. For example, an $S$ gate adds a phase of $90$ degrees to $\phi$, while the $Z$ gate adds a phase of $180$ degrees to $\phi$. To subtract a phase of $90$ degrees, we can apply the $S^\dagger$ gate, which is read as S-dagger and commonly written as `sdg`. Finally, there is the $Y$ gate, which is equivalent (up to a global phase) to applying a $Z$ gate followed by an $X$ gate, i.e., it performs both a phase flip and a bit flip.
You can experiment with the gates $X$, $Y$, $Z$, $H$, $S$ and $S^\dagger$ to become accustomed to the different operations and how they affect the state of a qubit. To do so, you can run *Cell 2* which starts our circuit widget. After running the cell, choose a gate to apply to a qubit, and then choose the qubit (in the first examples, the only qubit to choose is qubit 0). Watch how the corresponding state changes with each gate, as well as the description of that state. It will also provide you with the code that creates the corresponding quantum circuit in Qiskit below the qsphere.
If you want to learn more about describing quantum states, Pauli operators, and other single-qubit gates, see chapter 1 of our textbook: https://qiskit.org/textbook/ch-states/introduction.html.
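If you prefer to experiment in code rather than with the interactive widget in *Cell 2* below, the same gates can be applied directly in Qiskit using the helpers imported in *Cell 1*. This is only a small sketch; the H-then-S sequence is an arbitrary example, and it produces the $\phi = 90$ degree state pictured above:
```
# Build a one-qubit circuit, apply H then S, and inspect the resulting state
# with the same helper functions used in the exercise cells below.
qc = QuantumCircuit(1)
qc.h(0)    # |0> -> |+>, i.e. p = 1/2, phi = 0
qc.s(0)    # adds 90 degrees to the quantum phase phi

state = statevec(qc)
display(Math(vec_in_braket(state.data)))
plot_state_qsphere(state.data, show_state_labels=True, show_state_angles=True)
```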
```
# Cell 2
# press shift + return to run this code cell
# then, click on the gate that you want to apply to your qubit
# next, you have to choose the qubit that you want to apply it to (choose '0' here)
# click on clear to restart
minicomposer(1, dirac=True, qsphere=True)
```
Here are four small exercises to attain different states on the qsphere. You can either solve them with the widget above and copy paste the code it provides into the respective cells to create the quantum circuits, or you can directly insert a combination of the following code lines into the program to apply the different gates:
qc.x(0) # bit flip
qc.y(0) # bit and phase flip
qc.z(0) # phase flip
qc.h(0) # superposition
qc.s(0) # quantum phase rotation by pi/2 (90 degrees)
qc.sdg(0) # quantum phase rotation by -pi/2 (-90 degrees)
The '(0)' indicates that we apply this gate to qubit 'q0', which is the first (and in this case only) qubit.
Try to attain the given state on the qsphere in each of the following exercises.
### I.i) Let us start by performing a bit flip. The goal is to reach the state $\vert1\rangle$ starting from state $\vert0\rangle$. <img src="state1.png" width="300">
If you have reached the desired state with the widget, copy and paste the code from *Cell 2* into *Cell 3* (where it says "FILL YOUR CODE IN HERE") and run it to check your solution.
```
# Cell 3
def create_circuit():
qc = QuantumCircuit(1)
#
#
qc.x(0)
#
#
return qc
# check solution
qc = create_circuit()
state = statevec(qc)
check1(state)
plot_state_qsphere(state.data, show_state_labels=True, show_state_angles=True)
```
### I.ii) Next, let's create a superposition. The goal is to reach the state $|+\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)$. <img src="stateplus.png" width="300">
Fill in the code in the lines indicated in *Cell 4*. If you prefer the widget, you can still copy the code that the widget gives in *Cell 2* and paste it into *Cell 4*.
```
# Cell 4
def create_circuit2():
qc = QuantumCircuit(1)
#
#
qc.h(0)
#
#
return qc
qc = create_circuit2()
state = statevec(qc)
check2(state)
plot_state_qsphere(state.data, show_state_labels=True, show_state_angles=True)
```
### I.iii) Let's combine those two. The goal is to reach the state $|-\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle - |1\rangle\right)$. <img src="stateminus.png" width="300">
Can you combine the above two tasks to come up with the solution?
```
# Cell 5
def create_circuit3():
qc = QuantumCircuit(1)
#
#
qc.x(0)
qc.h(0)
#
#
return qc
qc = create_circuit3()
state = statevec(qc)
check3(state)
plot_state_qsphere(state.data, show_state_labels=True, show_state_angles=True)
```
### I.iv) Finally, we move on to the complex numbers. The goal is to reach the state $|\circlearrowleft\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle - i|1\rangle\right)$. <img src="stateleft.png" width="300">
```
# Cell 6
def create_circuit4():
qc = QuantumCircuit(1)
#
#
qc.h(0)
qc.sdg(0)
#
#
return qc
qc = create_circuit4()
state = statevec(qc)
check4(state)
plot_state_qsphere(state.data, show_state_labels=True, show_state_angles=True)
```
## Exercise II: Quantum Circuits Using Multi-Qubit Gates
Great job! Now that you've understood the single-qubit gates, let us look at gates operating on multiple qubits. The basic gates on two qubits are given by
qc.cx(c,t) # controlled-X (= CNOT) gate with control qubit c and target qubit t
qc.cz(c,t) # controlled-Z gate with control qubit c and target qubit t
qc.swap(a,b) # SWAP gate that swaps the states of qubit a and qubit b
If you'd like to read more about the different multi-qubit gates and their relations, visit chapter 2 of our textbook: https://qiskit.org/textbook/ch-gates/introduction.html.
As before, you can use the two-qubit circuit widget below to see how the combined two-qubit state evolves when applying different gates (run *Cell 7*) and get the corresponding code that you can copy and paste into the program. Note that for two qubits a general state is of the form $a|00\rangle + b |01\rangle + c |10\rangle + d|11\rangle$, where $a$, $b$, $c$, and $d$ are complex numbers whose absolute values squared give the probability of measuring the respective state; e.g., $|a|^2$ is the probability of measuring '0' on both qubits. This means we can now have up to four points on the qsphere.
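For instance, turning a two-qubit amplitude vector $(a, b, c, d)$ into measurement probabilities is just a matter of taking absolute values squared. A tiny NumPy example with made-up amplitudes (an equal superposition of $|00\rangle$ and $|11\rangle$):
```
import numpy as np

# Amplitudes (a, b, c, d) for |00>, |01>, |10>, |11> in the ordering used above.
amplitudes = np.array([1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)])

probabilities = np.abs(amplitudes) ** 2
for label, prob in zip(['00', '01', '10', '11'], probabilities):
    print(label, prob)   # 0.5, 0.0, 0.0, 0.5
```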
```
# Cell 7
# press shift + return to run this code cell
# then, click on the gate that you want to apply followed by the qubit(s) that you want it to apply to
# for controlled gates, the first qubit you choose is the control qubit and the second one the target qubit
# click on clear to restart
minicomposer(2, dirac = True, qsphere = True)
```
We start with the canonical two-qubit gate, the controlled-NOT (also CNOT or CX) gate. Here, as with all controlled two-qubit gates, one qubit is labelled the "control", while the other is called the "target". If the control qubit is in state $|0\rangle$, the identity $I$ gate is applied to the target, i.e., no operation is performed. If, instead, the control qubit is in state $|1\rangle$, an X-gate is performed on the target qubit. Therefore, with both qubits in one of the two classical states, $|0\rangle$ or $|1\rangle$, the CNOT gate is limited to classical operations.
This situation changes dramatically when we first apply a Hadamard gate to the control qubit, bringing it into the superposition state $|+\rangle$. The action of a CNOT gate on this non-classical input can produce highly entangled states between control and target qubits. If the target qubit is initially in the $|0\rangle$ state, the resulting state is denoted by $|\Phi^+\rangle$, and is one of the so-called Bell states.
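In matrix form, writing the amplitudes in the order $(a, b, c, d)$ for $|00\rangle, |01\rangle, |10\rangle, |11\rangle$ as above and taking the left position in each ket as the control qubit, the CNOT gate is
$$\text{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix},$$
i.e., the last two basis states are swapped: the target is flipped if and only if the control is $|1\rangle$.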
### II.i) Construct the Bell state $|\Phi^+\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)$. <img src="phi+.png" width="300">
For this state we would have probability $\frac{1}{2}$ to measure "00" and probability $\frac{1}{2}$ to measure "11". Thus, the outcomes of both qubits are perfectly correlated.
```
# Cell 8
def create_circuit():
qc = QuantumCircuit(2)
#
#
qc.h(0)
qc.cx(0, 1)
#
#
return qc
qc = create_circuit()
state = statevec(qc) # determine final state after running the circuit
display(Math(vec_in_braket(state.data)))
check5(state)
qc.draw(output='mpl') # we draw the circuit
```
Next, try to create the state of perfectly anti-correlated qubits. Note the minus sign here, which indicates the relative phase between the two states.
### II.ii) Construct the Bell state $\vert\Psi^-\rangle = \frac{1}{\sqrt{2}}\left(\vert01\rangle - \vert10\rangle\right)$. <img src="psi-.png" width="300">
```
# Cell 9
def create_circuit6():
qc = QuantumCircuit(2,2) # this time, we not only want two qubits, but also
# two classical bits for the measurement later
#
#
qc.h(0)
qc.x(1)
qc.cx(0, 1)
qc.z(1)
#
#
return qc
qc = create_circuit6()
state = statevec(qc) # determine final state after running the circuit
display(Math(vec_in_braket(state.data)))
check6(state)
qc.measure(0, 0) # we perform a measurement on qubit q_0 and store the information on the classical bit c_0
qc.measure(1, 1) # we perform a measurement on qubit q_1 and store the information on the classical bit c_1
qc.draw(output='mpl') # we draw the circuit
```
As you can tell from the circuit (and the code), we have added measurement operators to the circuit. Note that in order to store the measurement results, we also need two classical bits, which we have added when creating the quantum circuit: `qc = QuantumCircuit(num_qubits, num_classicalbits)`.
In *Cell 10* we have defined a function `run_circuit()` that will run a circuit on the simulator. If the right state is prepared, we have probability $\frac{1}{2}$ of measuring each of the two outcomes, "01" and "10". However, performing the measurement with 1000 shots does not imply that we will measure exactly 500 times "01" and 500 times "10". Just like flipping a coin multiple times, it is unlikely that one will get exactly a 50/50 split between the two possible output values. Instead, there are fluctuations about this ideal distribution. You can call `run_circuit` multiple times to see the variance in the output.
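To get a feeling for the size of these fluctuations: for an ideal 50/50 state measured with $N$ shots, each count follows a binomial distribution with standard deviation $\sqrt{N \cdot 0.5 \cdot 0.5}$, which is about $\pm 16$ for $N = 1000$. You can check this with plain NumPy, independently of the simulator:
```
import numpy as np

shots = 1000
# Repeat the experiment "measure a 50/50 state `shots` times" many times
# and look at the spread of the counts for one of the two outcomes.
counts_01 = np.random.binomial(shots, 0.5, size=10000)
print(counts_01.mean())   # close to 500
print(counts_01.std())    # close to sqrt(1000 * 0.25) ~ 15.8
```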
```
# Cell 10
def run_circuit(qc):
backend = Aer.get_backend('qasm_simulator') # we choose the simulator as our backend
result = execute(qc, backend, shots = 1000).result() # we run the simulation
counts = result.get_counts() # we get the counts
return counts
counts = run_circuit(qc)
print(counts)
plot_histogram(counts) # let us plot a histogram to see the possible outcomes and corresponding probabilities
```
### II.iii) You are given the quantum circuit described in the function below. Swap the states of the first and the second qubit.
This should be your final state: <img src="stateIIiii.png" width="300">
```
# Cell 11
def create_circuit7():
qc = QuantumCircuit(2)
qc.rx(np.pi/3,0)
qc.x(1)
return qc
qc = create_circuit7()
#
#
qc.swap(0, 1)
#
#
state = statevec(qc) # determine final state after running the circuit
display(Math(vec_in_braket(state.data)))
check7(state)
plot_state_qsphere(state.data, show_state_labels=True, show_state_angles=True)
```
### II.iv) Write a program from scratch that creates the GHZ state (on three qubits), $\vert \text{GHZ}\rangle = \frac{1}{\sqrt{2}} \left(|000\rangle + |111 \rangle \right)$, performs a measurement with 2000 shots, and returns the counts. <img src="ghz.png" width="300">
If you want to track the state as it is evolving, you could use the circuit widget from above for three qubits, i.e., `minicomposer(3, dirac=True, qsphere=True)`. For how to get the counts of a measurement, look at the code in *Cell 9* and *Cell 10*.
```
# Cell 12
#
def run_circuit(qc, shots):
backend = Aer.get_backend('qasm_simulator') # we choose the simulator as our backend
result = execute(qc, backend, shots = shots).result() # we run the simulation
counts = result.get_counts() # we get the counts
return counts
qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 1)
qc.cx(1, 2)
qc.measure_all()
counts = run_circuit(qc, 2000)
#
#
#
print(counts)
check8(counts)
plot_histogram(counts)
```
Congratulations for finishing this introduction to Qiskit! Once you've reached all 8 points, the solution string will be displayed. You need to copy and paste that string on the IBM Quantum Challenge page to complete the exercise and track your progress.
Now that you have created and run your first quantum circuits, you are ready for the next exercise, where we will make use of the actual hardware and learn how to reduce the noise in the outputs.
# Using Variational Autoencoder to Generate Faces
In this example, we are going to use a VAE to generate faces. The dataset we use is [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html), which consists of more than 200K celebrity face images. You have to download the Align&Cropped Images from the above website to run this example.
```
from bigdl.nn.layer import *
from bigdl.nn.criterion import *
from bigdl.optim.optimizer import *
from bigdl.dataset import mnist
import datetime as dt
from glob import glob
import os
import numpy as np
from utils import *
import imageio
image_size = 148
Z_DIM = 128
ENCODER_FILTER_NUM = 32
# download the CelebA data; you may replace this with your own data path
DATA_PATH = os.getenv("ANALYTICS_ZOO_HOME") + "/apps/variational-autoencoder/img_align_celeba"
from zoo.common.nncontext import *
sc = init_nncontext("Variational Autoencoder Example")
sc.addFile(os.getenv("ANALYTICS_ZOO_HOME")+"/apps/variational-autoencoder/utils.py")
```
## Define the Model
Here, we define a slightly more complicated CNN network using convolution, batch normalization, and LeakyReLU layers.
```
def conv_bn_lrelu(in_channels, out_channles, kw=4, kh=4, sw=2, sh=2, pw=-1, ph=-1):
model = Sequential()
model.add(SpatialConvolution(in_channels, out_channles, kw, kh, sw, sh, pw, ph))
model.add(SpatialBatchNormalization(out_channles))
model.add(LeakyReLU(0.2))
return model
def upsample_conv_bn_lrelu(in_channels, out_channles, out_width, out_height, kw=3, kh=3, sw=1, sh=1, pw=-1, ph=-1):
model = Sequential()
model.add(ResizeBilinear(out_width, out_height))
model.add(SpatialConvolution(in_channels, out_channles, kw, kh, sw, sh, pw, ph))
model.add(SpatialBatchNormalization(out_channles))
model.add(LeakyReLU(0.2))
return model
def get_encoder_cnn():
input0 = Input()
#CONV
conv1 = conv_bn_lrelu(3, ENCODER_FILTER_NUM)(input0) # 32 * 32 * 32
conv2 = conv_bn_lrelu(ENCODER_FILTER_NUM, ENCODER_FILTER_NUM * 2)(conv1) # 16 * 16 * 64
conv3 = conv_bn_lrelu(ENCODER_FILTER_NUM * 2, ENCODER_FILTER_NUM * 4)(conv2) # 8 * 8 * 128
conv4 = conv_bn_lrelu(ENCODER_FILTER_NUM * 4, ENCODER_FILTER_NUM * 8)(conv3) # 4 * 4 * 256
view = View([4*4*ENCODER_FILTER_NUM*8])(conv4)
inter = Linear(4*4*ENCODER_FILTER_NUM*8, 2048)(view)
inter = BatchNormalization(2048)(inter)
inter = ReLU()(inter)
# fully connected to generate mean and log-variance
mean = Linear(2048, Z_DIM)(inter)
log_variance = Linear(2048, Z_DIM)(inter)
model = Model([input0], [mean, log_variance])
return model
def get_decoder_cnn():
input0 = Input()
linear = Linear(Z_DIM, 2048)(input0)
linear = Linear(2048, 4*4*ENCODER_FILTER_NUM * 8)(linear)
reshape = Reshape([ENCODER_FILTER_NUM * 8, 4, 4])(linear)
bn = SpatialBatchNormalization(ENCODER_FILTER_NUM * 8)(reshape)
# upsampling
up1 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM*8, ENCODER_FILTER_NUM*4, 8, 8)(bn) # 8 * 8 * 128
up2 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM*4, ENCODER_FILTER_NUM*2, 16, 16)(up1) # 16 * 16 * 64
up3 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM*2, ENCODER_FILTER_NUM, 32, 32)(up2) # 32 * 32 * 32
up4 = upsample_conv_bn_lrelu(ENCODER_FILTER_NUM, 3, 64, 64)(up3) # 64 * 64 * 3
output = Sigmoid()(up4)
model = Model([input0], [output])
return model
def get_autoencoder_cnn():
input0 = Input()
encoder = get_encoder_cnn()(input0)
sampler = GaussianSampler()(encoder)
decoder_model = get_decoder_cnn()
decoder = decoder_model(sampler)
model = Model([input0], [encoder, decoder])
return model, decoder_model
model, decoder = get_autoencoder_cnn()
```
## Load the Dataset
```
def get_data():
data_files = glob(os.path.join(DATA_PATH, "*.jpg"))
rdd_train_images = sc.parallelize(data_files[:100000]) \
.map(lambda path: inverse_transform(get_image(path, image_size)).transpose(2, 0, 1))
rdd_train_sample = rdd_train_images.map(lambda img: Sample.from_ndarray(img, [np.array(0.0), img]))
return rdd_train_sample
train_data = get_data()
```
## Define the Training Objective
```
criterion = ParallelCriterion()
criterion.add(KLDCriterion(), 1.0) # You may want to tweak this parameter
criterion.add(BCECriterion(size_average=False), 1.0 / 64)
```
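For reference, this combination of a KL term and a binary cross-entropy reconstruction term corresponds (up to the $1/64$ weighting used above) to minimizing the usual negative evidence lower bound (ELBO) of a VAE:

$$\mathcal{L}(x) = \underbrace{\mathbb{E}_{q(z|x)}\big[-\log p(x|z)\big]}_{\text{reconstruction (BCE) term}} + \underbrace{D_{\mathrm{KL}}\big(q(z|x)\,\|\,p(z)\big)}_{\text{KL term}}$$

Here $q(z|x)$ is the Gaussian produced by the encoder (its mean and log-variance outputs), and $p(z)$ is, in the standard setup, a unit Gaussian prior.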
## Define the Optimizer
```
batch_size = 100
# Create an Optimizer
optimizer = Optimizer(
model=model,
training_rdd=train_data,
criterion=criterion,
optim_method=Adam(0.001, beta1=0.5),
end_trigger=MaxEpoch(1),
batch_size=batch_size)
app_name='vea-'+dt.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary = TrainSummary(log_dir='/tmp/vae',
app_name=app_name)
train_summary.set_summary_trigger("LearningRate", SeveralIteration(10))
train_summary.set_summary_trigger("Parameters", EveryEpoch())
optimizer.set_train_summary(train_summary)
print ("saving logs to ",app_name)
```
## Spin Up the Training
This could take a while. It took about 2 hours on a desktop with an Intel i7-6700 CPU and 40GB of Java heap memory. You can reduce the training time by using less data (with some changes in the "Load the Dataset" section), but the performance may not be as good.
```
redire_spark_logs()
show_bigdl_info_logs()
def gen_image_row():
decoder.evaluate()
return np.column_stack([decoder.forward(np.random.randn(1, Z_DIM)).reshape(3, 64,64).transpose(1, 2, 0) for s in range(8)])
def gen_image():
return np.row_stack([gen_image_row() for i in range(8)])
for i in range(1, 6):
optimizer.set_end_when(MaxEpoch(i))
trained_model = optimizer.optimize()
image = gen_image()
if not os.path.exists("./images"):
os.makedirs("./images")
if not os.path.exists("./models"):
os.makedirs("./models")
# you may change the following directory accordingly and make sure the directory
# you are writing to exists
imageio.imwrite("./images/image_%s.png" % i , image)
decoder.saveModel("./models/decoder_%s.model" % i, over_write = True)
import matplotlib
matplotlib.use('Agg')
%pylab inline
import numpy as np
import datetime as dt
import matplotlib.pyplot as plt
loss = np.array(train_summary.read_scalar("Loss"))
plt.figure(figsize = (12,12))
plt.plot(loss[:,0],loss[:,1],label='loss')
plt.xlim(0,loss.shape[0]+10)
plt.grid(True)
plt.title("loss")
```
## Random Sample Some Images
```
from matplotlib.pyplot import imshow
img = gen_image()
imshow(img)
```
<img src="../Pics/MLSb-T.png" width="160">
<br><br>
<center><u><H1>LSTM and GRU on Sentiment Analysis</H1></u></center>
```
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.log_device_placement = True
sess = tf.Session(config=config)
set_session(sess)
import numpy as np
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing.text import Tokenizer
from keras.models import Sequential
from keras.layers import Dense, Embedding, GRU, LSTM, CuDNNLSTM, CuDNNGRU, Dropout
from keras.datasets import imdb
from keras.callbacks import EarlyStopping
from keras.optimizers import Adam
num_words = 20000
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=num_words)
print(len(X_train), 'train_data')
print(len(X_test), 'test_data')
print(X_train[0])
len(X_train[0])
```
## Hyperparameters:
```
max_len = 256
embedding_size = 10
batch_size = 128
n_epochs = 10
```
## Creating Sequences
```
pad = 'pre' #'post'
X_train_pad = pad_sequences(X_train, maxlen=max_len, padding=pad, truncating=pad)
X_test_pad = pad_sequences(X_test, maxlen=max_len, padding=pad, truncating=pad)
X_train_pad[0]
```
## Creating the model:
```
model = Sequential()
#The input is a 2D tensor: (samples, sequence_length)
# this layer will return 3D tensor: (samples, sequence_length, embedding_dim)
model.add(Embedding(input_dim=num_words,
output_dim=embedding_size,
input_length=max_len,
name='layer_embedding'))
model.add(Dropout(0.2))
#model.add(LSTM(128,dropout=0.2, recurrent_dropout=0.2))
model.add(CuDNNLSTM(128, return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid', name='classification'))
model.summary()
```
## Compiling the model:
```
#optimizer = Adam(lr=0.001, decay=1e-6)
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
```
## Callbacks:
```
callback_early_stopping = EarlyStopping(monitor='val_loss', patience=5, verbose=1)
```
## Training the model:
```
%%time
model.fit(X_train_pad, y_train,
epochs=n_epochs,
batch_size=batch_size,
validation_split=0.05,
callbacks=[callback_early_stopping]
)
```
## Testing the model:
```
%%time
eval_ = model.evaluate(X_test_pad, y_test)
print("Loss: {0:.5}".format(eval_[0]))
print("Accuracy: {0:.2%}".format(eval_[1]))
```
## Saving the model:
```
model.save("../data/models/{}".format('Sentiment-LSTM-GRU'))  # forward slashes work on both Windows and Linux
```
## GRU model:
```
model_GRU = Sequential()
model_GRU.add(Embedding(input_dim=num_words,
output_dim=embedding_size,
input_length=max_len,
name='layer_embedding'))
model_GRU.add(CuDNNGRU(units=16, return_sequences=True))
model_GRU.add(CuDNNGRU(units=8, return_sequences=True))
model_GRU.add(CuDNNGRU(units=4, return_sequences=False))
model_GRU.add(Dense(1, activation='sigmoid'))
model_GRU.summary()
model_GRU.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
%%time
model_GRU.fit(X_train_pad, y_train, validation_split=0.05, epochs=n_epochs, batch_size=batch_size)
%%time
eval_GRU = model_GRU.evaluate(X_test_pad, y_test)
print("Loss: {0:.5}".format(eval_GRU[0]))
print("Accuracy: {0:.2%}".format(eval_GRU[1]))
```
## Examples of Mis-Classified Text
```
#making predictions for the first 1000 test samples
y_pred = model.predict(X_test_pad[:1000])
y_pred = y_pred.T[0]
labels_pred = np.array([1.0 if p > 0.5 else 0.0 for p in y_pred])
true_labels = np.array(y_test[:1000])
incorrect = np.where(labels_pred != true_labels)
incorrect = incorrect[0]
print(incorrect)
len(incorrect)
idx = incorrect[1]
idx
text = X_test[idx]
print(text)
y_pred[idx]
true_labels[idx]
```
## Converting Integers Back to Text
```
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
word_index.items()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
print(reverse_word_index)
def decode_index(text):
return ' '.join([reverse_word_index.get(i) for i in text])
decode_index(X_train[0])
text_data = []
for i in range(len(X_train)):
text_data.append(decode_index(X_train[i]))
text_data[0]
```
## Embeddings
```
layer_embedding = model.get_layer('layer_embedding')
weights_embedding = layer_embedding.get_weights()[0]
weights_embedding.shape
weights_embedding[word_index.get('good')]
```
## Similar Words
```
from scipy.spatial.distance import cdist
def print_similar_words(word, metric='cosine'):
token = word_index.get(word)
embedding = weights_embedding[token]
distances = cdist(weights_embedding, [embedding],
metric=metric).T[0]
sorted_index = np.argsort(distances)
sorted_distances = distances[sorted_index]
sorted_words = [reverse_word_index[token] for token in sorted_index
if token != 0]
def print_words(words, distances):
for word, distance in zip(words, distances):
print("{0:.3f} - {1}".format(distance, word))
N = 10
print("Distance from '{0}':".format(word))
print_words(sorted_words[0:N], sorted_distances[0:N])
print("-------")
print_words(sorted_words[-N:], sorted_distances[-N:])
print_similar_words('good', metric='cosine')
```
## Reference:
https://keras.io/layers/recurrent/
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.scatter([1700, 2100, 1900, 1300, 1600, 2200], [53000, 65000, 59000, 41000, 50000, 68000])
plt.show()
x = [1300, 1400, 1600, 1900, 2100, 2300]
y = [88000, 72000, 94000, 86000, 112000, 98000]
plt.scatter(x, y, s=32, c='cyan', alpha=0.5)
plt.show()
plt.bar(x, y, width=20, alpha=0.5)
plt.show()
plt.hist([100, 400, 200, 100, 400, 100, 300, 200, 100], alpha=0.5)
plt.show()
colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral']
plt.pie([11, 13, 56, 67, 23], colors=colors, shadow=True, startangle=90)
plt.show()
height = [65.0, 59.8, 63.3, 63.2, 65.0, 63.3, 65.8, 62.8, 61.1, 64.3, 63.0, 64.2, 65.4, 64.1, 64.7, 64.0, 66.1, 64.6, 67.0, 64.0, 59.0, 65.2, 62.9, 65.4, 63.7, 65.7, 64.1, 65.4, 64.7, 65.3, 65.2, 64.8, 66.4, 65.0, 65.6, 65.5, 67.4, 65.1, 66.8, 65.5, 67.8, 65.1, 69.5, 65.5, 62.5, 66.6, 63.8, 66.4, 64.5, 66.1, 65.0, 66.0, 64.7, 66.0, 65.7, 66.5, 65.5, 65.7, 65.6, 66.0, 66.9, 65.9, 66.6, 65.9, 66.5, 66.5, 67.9, 65.8, 68.3, 66.3, 67.7, 66.1, 68.5, 66.3, 69.4, 66.3, 71.8, 66.4, 62.4, 67.2, 64.5, 67.5, 64.5, 67.0, 63.9, 66.8, 65.4, 67.0, 65.0, 66.8, 65.7, 69.3, 68.7, 69.1, 66.5, 61.7, 64.9, 65.7, 69.6, 69.0, 64.8, 67.4, 65.3, 67.2, 65.8, 67.1, 65.8, 67.3, 65.6, 67.6, 65.9, 67.5, 65.8, 66.9, 67.1, 67.6, 66.6, 67.2, 67.4, 66.8, 67.3, 67.2, 66.6, 67.5, 68.2, 67.6, 67.8, 67.2, 68.3, 67.5, 68.1, 67.4, 69.0, 67.6, 68.9, 67.3, 69.6, 66.8, 70.4, 66.7, 70.0, 66.9, 72.8, 67.6, 62.8, 68.0, 62.9, 68.5, 63.9, 68.0, 64.5, 68.3, 64.5, 68.3, 66.0, 68.3, 65.8, 68.2, 66.0, 68.5, 65.5, 68.1, 65.7, 68.3, 66.8, 68.0, 66.7, 68.6, 67.0, 67.9, 66.9, 68.1, 66.8, 68.4, 67.1, 67.9, 67.7, 68.2, 68.3, 68.0, 67.6, 68.2, 68.4, 67.9, 67.7, 68.6, 68.7, 68.0, 69.3, 68.3, 68.7, 67.9, 69.1, 68.6, 69.3, 68.2, 68.6, 68.6, 69.6, 68.1, 70.4, 68.4, 71.2, 67.8, 70.8, 68.6, 71.7, 67.9, 73.3, 67.8, 63.0, 68.8, 63.7, 69.6, 65.4, 69.7, 64.6, 69.4, 66.4, 69.7, 65.8, 69.2, 65.7, 69.5, 66.1, 69.6, 66.5, 69.3, 66.6, 69.5, 66.6, 68.7, 67.7, 69.3, 68.5, 69.2, 67.8, 69.2, 67.6, 69.5, 68.1, 69.1, 69.2, 68.9, 68.7, 69.5, 68.6, 69.3, 68.6, 69.2, 68.6, 68.7, 70.4, 69.3, 70.0, 68.9, 70.1, 69.3, 70.2, 69.2, 71.3, 69.6, 70.9, 69.1, 72.2, 69.1, 75.0, 69.0, 64.9, 69.9, 65.6, 70.1, 65.7, 69.9, 65.9, 70.3, 65.9, 70.5, 67.4, 70.5, 67.5, 69.8, 67.6, 70.4, 68.5, 70.0, 68.5, 69.8, 68.1, 70.7, 69.5, 70.2, 69.1, 70.1, 69.4, 70.0, 69.4, 70.3, 69.5, 69.8, 70.2, 70.0, 69.9, 69.9, 70.4, 69.7, 70.9, 70.1, 71.3, 70.0, 72.1, 70.7, 72.2, 70.0, 75.4, 70.1, 64.5, 71.3, 66.4, 70.8, 65.6, 71.4, 66.8, 71.2, 66.9, 71.7, 68.2, 71.4, 67.5, 70.7, 67.8, 71.3, 69.0, 71.0, 69.3, 71.3, 68.7, 70.9, 69.7, 71.3, 70.3, 71.6, 70.0, 71.2, 70.2, 71.0, 70.9, 71.4, 71.2, 71.6, 72.4, 71.1, 73.0, 70.9, 74.8, 71.7, 67.4, 72.4, 67.3, 71.9, 67.8, 72.3, 69.3, 72.2, 68.7, 72.5, 70.0, 72.0, 69.8, 72.3, 70.7, 72.5, 71.1, 72.3, 72.5, 72.0, 72.5, 72.2, 67.5, 72.8, 68.2, 73.0, 68.8, 72.9, 69.9, 73.2, 71.5, 73.6, 70.8, 72.9, 71.9, 73.2, 63.1, 74.3, 68.2, 74.4, 70.1, 73.8, 70.8, 73.9, 72.6, 73.8, 67.9, 75.6, 67.5, 75.7, 72.8, 77.2, 62.7, 61.3, 68.2, 74.3, 65.1, 70.9, 73.4, 75.3, 62.9, 61.8, 62.5, 64.0, 69.9, 62.5, 71.1, 73.7, 71.1, 66.3, 69.5, 62.2, 70.2, 65.4, 65.5, 64.0, 62.0, 62.8, 63.6, 63.5, 65.6, 63.5, 68.0, 62.9, 61.8, 63.7, 63.8, 63.7, 64.9, 64.4, 65.8, 63.7, 66.4, 64.4, 68.8, 64.3, 61.8, 65.2, 64.3, 65.1, 63.7, 65.6, 65.0, 64.9, 65.3, 65.1, 64.8, 65.2, 65.7, 65.6, 66.0, 65.6, 67.0, 64.9, 67.8, 65.4, 69.0, 64.7, 62.2, 65.8, 62.8, 65.8, 63.9, 66.7, 65.4, 66.5, 64.6, 66.4, 65.6, 66.3, 66.2, 66.2, 66.0, 66.4, 65.8, 66.7, 67.4, 66.2, 67.1, 66.4, 67.3, 66.7, 67.9, 65.8, 68.3, 66.2, 68.0, 66.3, 68.7, 65.8, 71.2, 66.3, 62.4, 66.9, 62.9, 66.8, 64.1, 67.4, 63.9, 67.7, 64.8, 67.2, 65.4, 67.3, 64.8, 67.5, 68.7, 69.0, 65.2, 62.0, 64.3, 64.1, 66.1, 66.0, 69.0, 65.8, 64.5, 66.9, 66.1, 66.8, 65.7, 67.0, 66.5, 67.4, 65.6, 66.8, 66.4, 67.3, 67.3, 67.5, 66.8, 67.2, 66.7, 67.6, 67.3, 66.9, 67.4, 66.7, 67.9, 67.2, 67.8, 67.2, 68.1, 66.8, 68.3, 66.8, 68.8, 67.5, 69.4, 67.1, 69.3, 67.1, 70.5, 66.9, 70.1, 67.5, 70.6, 66.9, 62.4, 67.7, 63.2, 67.9, 63.5, 68.7, 63.9, 68.6, 64.6, 68.4, 64.9, 68.4, 65.9, 68.4, 66.2, 68.4, 
66.5, 67.9, 65.5, 68.3, 66.9, 68.0, 67.1, 68.2, 66.8, 68.6, 67.2, 68.6, 66.5, 67.8, 67.0, 67.9, 66.6, 68.2, 68.2, 68.0, 67.6, 67.9, 68.3, 67.8, 68.0, 68.6, 69.0, 68.1, 69.3, 67.9, 68.9, 68.5, 68.9, 68.6, 69.4, 68.1, 69.5, 68.1, 70.3, 67.7, 69.9, 68.4, 70.7, 68.6, 70.6, 68.3, 72.4, 68.1, 72.5, 68.4, 62.7, 69.4, 63.9, 69.0, 64.5, 69.4, 64.8, 68.9, 65.4, 69.4, 65.8, 69.3, 66.3, 69.0, 65.8, 69.6, 66.8, 69.2, 67.2, 69.7, 67.3, 68.9, 67.5, 69.1, 67.9, 69.0, 68.4, 69.2, 67.6, 69.6, 68.5, 68.8, 68.6, 69.2, 69.1, 69.1, 68.6, 69.0, 69.2, 69.3, 68.6, 69.2, 68.5, 68.9, 69.7, 69.4, 70.4, 68.7, 70.0, 68.8, 70.3, 69.4, 71.3, 69.4, 71.3, 69.5, 72.6, 69.2, 64.4, 70.0, 64.9, 69.9, 66.3, 70.0, 66.0, 69.7, 65.7, 69.8, 66.7, 69.9, 66.6, 70.3, 68.3, 69.8, 67.9, 69.9, 68.0, 70.3, 68.3, 70.7, 68.8, 70.4, 69.1, 70.4, 69.0, 70.6, 69.4, 70.2, 69.8, 69.9, 69.6, 70.2, 70.5, 70.3, 69.9, 70.3, 71.0, 69.8, 71.2, 70.5, 71.5, 70.0, 71.6, 69.7, 73.2, 69.9, 63.9, 70.9, 66.0, 71.4, 66.0, 71.2, 67.5, 70.8, 66.7, 70.7, 68.3, 71.3, 67.7, 70.9, 68.3, 70.9, 68.4, 70.9, 69.1, 71.2, 69.1, 71.3, 69.7, 71.2, 70.0, 71.4, 69.6, 71.6, 70.0, 71.0, 70.9, 71.2, 70.6, 71.4, 72.3, 71.4, 71.5, 70.9, 73.0, 71.3, 66.2, 72.1, 67.3, 72.0, 67.8, 72.0, 69.1, 71.7, 69.4, 71.9, 69.6, 72.2, 70.1, 72.3, 70.2, 72.6, 71.3, 72.0, 72.1, 71.8, 72.3, 71.8, 67.1, 73.1, 67.9, 73.4, 69.1, 73.2, 69.6, 73.4, 69.7, 73.2, 70.5, 72.9, 72.4, 72.8, 72.8, 73.3, 68.1, 74.0, 68.6, 74.6, 71.3, 73.9, 72.1, 74.6, 74.7, 74.3, 71.2, 75.1, 68.3, 77.2, 60.4, 60.8, 63.9, 62.4, 63.1, 66.4, 64.0, 58.5, 73.9, 70.0, 72.0, 71.0, 61.0, 65.0, 65.4, 70.5, 72.0, 69.2, 71.3, 64.9, 65.2, 68.8, 68.9, 69.9, 64.5, 62.5, 64.0, 63.3, 66.5, 63.5, 67.1, 62.9, 62.3, 63.9, 63.8, 64.3, 65.4, 64.2, 65.6, 64.6, 66.2, 64.3, 67.6, 63.8, 60.2, 65.7, 63.0, 64.8, 63.6, 65.6, 65.2, 64.7, 65.1, 64.8, 64.8, 65.1, 66.2, 65.6, 66.2, 65.3, 66.6, 65.1, 68.0, 65.1, 69.0, 64.8, 62.1, 66.0, 63.2, 66.4, 64.5, 66.4, 63.8, 66.3, 64.6, 65.8, 64.6, 66.1, 66.1, 66.5, 66.0, 66.3, 65.7, 66.6, 67.1, 66.0, 67.3, 66.5, 67.2, 66.0, 68.4, 66.5, 67.6, 66.4, 67.6, 66.3, 68.5, 66.3, 70.0, 66.6, 61.1, 66.8, 62.7, 67.5, 64.3, 67.2, 64.1, 66.8, 63.7, 67.3, 64.7, 66.8, 64.7, 66.9, 68.0, 68.1, 64.6, 63.6, 66.1, 63.2, 65.3, 65.7, 69.7, 67.2, 65.3, 67.4, 65.5, 67.6, 66.1, 67.2, 66.2, 67.1, 66.0, 66.8, 66.0, 67.1, 65.9, 66.9, 67.1, 66.8, 67.0, 67.0, 67.4, 67.7, 67.4, 67.6, 67.3, 67.7, 68.3, 67.3, 68.3, 67.0, 67.6, 67.0, 68.2, 67.1, 69.1, 67.2, 68.8, 67.5, 70.5, 67.7, 70.0, 66.9, 69.5, 67.0, 61.5, 67.7, 62.9, 68.5, 64.1, 67.9, 63.9, 67.8, 65.1, 68.3, 64.6, 68.2, 65.9, 68.6, 66.2, 67.8, 65.7, 67.9, 66.1, 67.7, 66.3, 68.0, 67.2, 68.1, 66.9, 67.8, 66.9, 68.1, 66.6, 68.2, 67.2, 68.6, 67.1, 68.3, 67.9, 68.4, 67.9, 68.6, 67.8, 68.3, 67.9, 67.9, 68.6, 67.7, 68.6, 68.4, 69.0, 68.1, 69.0, 68.0, 69.1, 68.2, 68.5, 68.1, 70.2, 68.5, 69.8, 68.6, 69.9, 67.7, 71.5, 68.7, 72.4, 68.6, 71.9, 68.4, 61.0, 69.1, 63.0, 69.4, 64.6, 69.5, 65.4, 69.1, 64.8, 69.6, 65.5, 69.4, 65.6, 69.2, 66.1, 68.8, 67.4, 69.1, 66.6, 69.0, 67.4, 69.1, 67.2, 69.1, 68.2, 68.7, 67.9, 69.0, 68.0, 69.0, 67.7, 69.2, 68.9, 68.8, 68.7, 68.7, 69.1, 69.0, 68.8, 69.3, 68.8, 69.4, 69.3, 68.9, 69.6, 69.3, 69.8, 69.6, 70.2, 69.4, 70.1, 69.0, 71.1, 69.5, 71.4, 68.7, 71.8, 69.2, 64.4, 70.4, 65.2, 70.1, 66.1, 70.4, 65.8, 70.0, 65.5, 69.8, 66.7, 70.4, 67.2, 70.0, 67.2, 70.3, 68.1, 69.9, 67.9, 70.7, 68.2, 70.3, 69.3, 69.7, 68.8, 70.3, 69.2, 70.0, 68.7, 69.7, 69.5, 70.2, 70.0, 70.0, 69.7, 70.3, 70.2, 70.4, 70.8, 69.8, 70.9, 69.8, 71.3, 70.5, 72.3, 70.0, 72.8, 70.6, 64.4, 71.1, 64.9, 71.2, 65.8, 71.0, 
67.4, 70.9, 67.4, 70.8, 67.9, 71.5, 67.9, 71.6, 68.5, 70.8, 67.6, 71.0, 69.4, 71.6, 69.3, 71.1, 69.5, 71.5, 70.2, 71.4, 70.0, 71.5, 69.8, 71.0, 69.6, 71.6, 71.5, 71.4, 72.2, 71.2, 72.4, 70.9, 72.5, 71.5, 64.7, 72.4, 66.8, 72.6, 67.8, 71.8, 68.2, 72.0, 69.2, 72.0, 68.9, 72.4, 70.1, 71.9, 70.1, 72.1, 71.0, 72.6, 71.4, 72.5, 72.0, 71.8, 72.7, 72.6, 68.0, 73.1, 69.0, 73.2, 68.9, 73.4, 69.6, 73.4, 71.2, 73.7, 72.0, 72.8, 72.9, 73.0, 65.9, 74.7, 68.5, 73.9, 70.7, 74.4, 72.3, 74.0, 72.6, 73.9, 68.8, 75.7, 73.5, 76.1, 70.1, 78.2, 67.9, 61.9, 64.7, 69.9, 60.8, 62.3, 74.9, 71.4, 70.6, 71.6, 60.9, 65.5, 65.3, 71.9, 71.4, 71.2, 71.7, 64.5, 62.7, 65.3, 71.4, 69.6, 66.6, 65.1, 67.2, 61.0, 62.5, 63.1, 64.9, 63.6, 66.9, 63.5, 62.4, 63.8, 63.6, 64.2, 65.4, 64.7, 65.0, 64.1, 66.4, 64.4, 66.7, 64.6, 59.5, 64.8, 63.0, 65.2, 64.1, 65.6, 64.1, 65.6, 64.5, 64.9, 65.2, 65.6, 66.3, 65.7, 66.0, 65.6, 66.8, 65.1, 68.2, 64.9, 68.7, 65.7, 61.3, 66.5, 63.3, 66.2, 63.9, 65.9, 64.1, 66.3, 64.8, 66.3, 64.6, 66.2, 66.2, 66.1, 66.5, 66.2, 66.4, 65.8, 67.3, 66.6, 67.4, 66.5, 67.3, 66.3, 67.7, 66.5, 68.4, 66.3, 67.8, 65.9, 69.3, 66.6, 69.7, 66.6, 60.0, 67.3, 62.1, 67.4, 63.6, 67.5, 63.8, 66.9, 64.5, 67.1, 64.7, 67.0, 65.4, 66.9, 65.1, 68.1, 69.3, 67.5, 67.7, 63.0, 64.9, 67.4, 69.5, 68.8, 65.1, 66.9, 65.0, 67.3, 65.6, 67.2, 65.9, 67.1, 65.7, 67.1, 65.9, 67.2, 65.9, 66.7, 67.4, 66.8, 66.5, 67.4, 67.4, 67.4, 67.2, 67.0, 67.3, 67.4, 68.1, 66.9, 68.0, 66.9, 68.0, 66.8, 68.4, 66.8, 68.8, 67.2, 69.5, 67.5, 70.2, 67.7, 69.7, 67.1, 70.2, 67.6, 61.1, 68.0, 62.7, 68.7, 64.3, 68.4, 64.3, 68.0, 65.0, 68.6, 64.8, 68.5, 66.2, 67.8, 66.1, 68.1, 66.4, 68.2, 66.2, 68.5, 65.9, 68.6, 67.0, 68.7, 67.5, 68.4, 66.7, 68.5, 66.7, 68.6, 67.3, 67.9, 67.1, 68.0, 67.8, 68.0, 68.4, 68.6, 67.7, 68.1, 68.1, 67.9, 67.7, 68.6, 69.0, 67.9, 69.2, 68.6, 69.3, 68.7, 69.3, 68.4, 68.8, 68.2, 69.6, 68.3, 69.6, 68.1, 69.9, 67.8, 70.6, 68.2, 72.3, 68.0, 71.6, 68.0, 72.9, 68.1, 63.3, 69.2, 64.3, 69.2, 65.1, 68.9, 65.2, 69.3, 65.9, 69.5, 66.0, 69.0, 65.6, 69.3, 65.8, 69.6, 66.9, 69.4, 67.0, 69.0, 67.4, 69.3, 67.9, 69.1, 67.8, 69.3, 68.4, 69.3, 68.3, 68.7, 68.4, 69.5, 68.7, 69.5, 69.0, 68.9, 69.3, 68.8, 68.8, 69.4, 68.9, 68.8, 69.8, 69.1, 69.7, 69.2, 70.4, 69.3, 70.3, 68.8, 71.0, 69.1, 71.3, 69.4, 72.0, 68.7, 63.3, 70.4, 64.9, 70.5, 65.7, 70.1, 66.1, 69.8, 66.5, 70.3, 66.1, 70.3, 66.7, 69.7, 67.1, 70.1, 67.6, 70.4, 68.2, 69.9, 68.3, 69.8, 68.1, 69.9, 69.2, 70.3, 69.2, 70.2, 68.5, 70.4, 68.8, 70.4, 69.7, 70.7, 69.9, 69.7, 70.5, 70.5, 71.2, 70.5, 70.6, 70.5, 70.5, 70.5, 72.4, 70.3, 73.2, 70.3, 64.1, 71.4, 64.6, 71.0, 65.7, 71.6, 67.1, 70.9, 66.8, 71.4, 68.4, 71.5, 68.3, 71.2, 68.3, 71.6, 68.4, 71.6, 68.7, 71.3, 68.7, 71.1, 69.0, 71.5, 70.2, 71.0, 69.9, 71.5, 70.2, 70.9, 70.2, 71.4, 71.4, 71.3, 70.7, 71.2, 72.4, 71.5, 73.0, 70.8, 64.7, 72.7, 67.1, 72.0, 67.8, 72.3, 68.4, 72.2, 69.2, 72.4, 68.6, 72.0, 69.9, 71.8, 70.2, 72.5, 70.5, 72.6, 71.0, 71.9, 71.8, 72.6, 72.8, 72.4, 68.0, 73.1, 67.8, 73.6, 69.3, 73.3, 70.5, 73.1, 71.4, 73.7, 72.3, 73.6, 71.9, 72.9, 64.6, 73.9, 67.8, 74.3, 69.9, 73.9, 70.9, 74.2, 72.7, 74.5, 69.0, 75.1, 72.4, 76.4, 69.1, 78.4, 70.2, 61.2, 72.4, 72.6, 59.6, 64.9, 73.3, 73.0, 68.1, 71.8, 63.2, 65.3, 66.0, 60.9, 71.5, 72.1, 68.1, 71.0, 65.3, 61.7, 70.4, 67.5, 68.4, 64.4, 61.9, 63.3, 65.0, 63.5, 66.2, 63.7, 59.5, 63.9, 62.8, 63.9, 63.9, 63.9, 64.6, 64.1, 65.6, 64.7, 66.3, 64.4, 70.6, 63.9, 62.1, 64.8, 64.4, 65.3, 64.4, 65.2, 64.9, 65.5, 65.3, 65.2, 65.2, 65.4, 66.1, 65.6, 66.1, 64.7, 67.4, 64.8, 67.8, 65.6, 70.3, 65.4, 63.2, 66.5, 63.7, 65.7, 64.1, 66.1, 
64.7, 66.3, 65.1, 66.6, 65.9, 66.4, 65.7, 66.5, 66.0, 66.3, 67.3, 66.4, 66.7, 66.3, 66.6, 66.1, 66.8, 66.5, 68.2, 66.3, 67.5, 66.1, 68.4, 65.9, 69.1, 65.7, 70.8, 66.1, 61.7, 67.6, 63.0, 67.4, 64.3, 67.0, 63.9, 67.2, 65.5, 67.2, 64.7, 67.4, 64.7, 67.4, 67.5, 68.2, 66.8, 62.7, 65.0, 66.3, 69.4, 65.7, 63.7, 68.4, 64.6, 67.5, 66.0, 67.2, 66.0, 67.2, 66.5, 67.5, 66.3, 66.9, 65.7, 67.7, 66.9, 67.0, 66.7, 66.7, 66.5, 66.8, 67.1, 67.4, 67.0, 66.9, 67.6, 67.5, 67.6, 66.9, 67.5, 67.0, 68.1, 66.9, 68.6, 67.0, 68.5, 67.1, 69.2, 67.5, 69.8, 67.5, 69.9, 67.3, 70.5, 67.6, 62.8, 67.8, 63.2, 68.1, 64.4, 68.5, 64.3, 68.6, 64.7, 67.9, 64.7, 68.6, 66.0, 68.2, 65.6, 68.3, 65.8, 68.2, 65.9, 68.2, 66.6, 68.2, 66.8, 68.6, 66.8, 68.3, 67.2, 68.7, 67.1, 68.5, 66.9, 67.7, 68.0, 68.0, 68.0, 67.8, 68.3, 68.1, 68.3, 68.6, 68.2, 68.2, 68.5, 68.2, 68.5, 68.2, 69.1, 68.1, 69.2, 68.1, 69.3, 68.5, 69.4, 68.4, 70.3, 68.2, 69.7, 67.7, 70.5, 67.9, 71.3, 68.3, 72.2, 68.3, 72.3, 68.3, 62.9, 69.6, 63.9, 68.7, 64.6, 68.9, 65.5, 68.8, 65.9, 69.1, 66.3, 69.0, 65.8, 69.0, 66.5, 68.7, 67.3, 69.5, 67.4, 69.1, 67.1, 69.1, 68.5, 69.6, 67.7, 69.4, 68.1, 69.2, 67.6, 68.8, 67.8, 69.0, 69.3, 69.5, 69.1, 69.6, 68.7, 69.4, 69.0, 69.6, 68.9, 69.4, 68.8, 68.9, 69.9, 68.7, 70.5, 69.3, 70.0, 68.7, 69.8, 68.8, 71.4, 69.5, 71.1, 69.7, 72.7, 69.7, 65.3, 70.5, 66.0, 70.0, 65.5, 70.6, 65.6, 70.5, 65.9, 70.0, 67.1, 69.9, 66.9, 70.5, 67.9, 69.9, 67.7, 69.7, 67.5, 70.6, 68.0, 70.2, 68.8, 70.3, 69.4, 70.2, 68.6, 70.5, 69.1, 70.5, 70.5, 70.2, 69.9, 70.0, 70.0, 70.2, 69.6, 70.2, 71.0, 70.0, 70.6, 70.4, 71.5, 70.4, 71.6, 70.2, 73.9, 70.2, 65.0, 70.7, 66.3, 71.5, 65.9, 71.4, 66.9, 71.2, 67.2, 70.9, 68.1, 71.3, 67.6, 71.4, 67.6, 71.2, 69.4, 71.2, 68.9, 71.2, 69.1, 71.0, 69.8, 71.4, 70.0, 71.2, 69.6, 71.6, 70.3, 71.3, 70.7, 71.5, 70.9, 70.9, 72.5, 71.5, 73.0, 71.1, 74.4, 70.9, 67.4, 72.7, 66.5, 71.8, 68.0, 72.6, 68.8, 72.5, 69.3, 71.9, 70.3, 72.2, 70.2, 72.6, 70.8, 72.3, 70.7, 72.1, 72.4, 72.3, 72.4, 72.1, 67.2, 72.8, 67.8, 72.8, 68.9, 73.5, 70.4, 73.7, 71.2, 72.8, 71.4, 73.4, 71.7, 73.0, 72.6, 73.2, 67.6, 74.5, 68.6, 73.8, 71.0, 73.8, 72.0, 73.8, 75.2, 73.8, 73.1, 75.6, 69.9, 77.2, 65.5, 60.1, 72.6, 76.8, 72.2, 66.7, 63.2, 58.8, 73.3, 67.9, 65.8, 61.0, 67.7, 59.8, 67.0, 70.8, 71.3, 68.3, 71.8, 69.3, 70.7, 69.3, 70.3, 67.0, ]
plt.hist(height, alpha=0.5)
plt.show()
```
```
from pathlib import Path
import pandas as pd
import numpy as np
import xarray as xr
import gcsfs
from typing import List
import io
import hashlib
import os
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import torch
from torch import nn
import torch.nn.functional as F
import pytorch_lightning as pl
import nowcasting_dataset.time as nd_time
from nowcasting_dataset.dataset import worker_init_fn, NetCDFDataset
from nowcasting_dataset.geospatial import osgb_to_lat_lon
import tilemapbase
from neptune.new.integrations.pytorch_lightning import NeptuneLogger
from neptune.new.types import File
import logging
logging.basicConfig()
logger = logging.getLogger('nowcasting_dataset')
logger.setLevel(logging.DEBUG)
%%time
train_dataset = NetCDFDataset(12_500, 'gs://solar-pv-nowcasting-data/prepared_ML_training_data/v2/train/', '/home/jack/temp/train')
#validation_dataset = NetCDFDataset(1_000, 'gs://solar-pv-nowcasting-data/prepared_ML_training_data/v2/validation/', '/home/jack/temp/validation')
def get_batch():
"""Useful for testing."""
train_dataset.per_worker_init(0)
batch = train_dataset[1]
return batch
train_dataloader = torch.utils.data.DataLoader(
train_dataset,
pin_memory=True,
num_workers=24,
prefetch_factor=8,
worker_init_fn=worker_init_fn,
persistent_workers=True,
# Disable automatic batching because dataset
# returns complete batches.
batch_size=None,
)
```
## Define simple ML model
```
params = dict(
batch_size=32,
history_len=6, #: Number of timesteps of history, not including t0.
forecast_len=12, #: Number of timesteps of forecast.
image_size_pixels=32,
nwp_channels=('t', 'dswrf', 'prate', 'r', 'sde', 'si10', 'vis', 'lcc', 'mcc', 'hcc'),
sat_channels=(
'HRV', 'IR_016', 'IR_039', 'IR_087', 'IR_097', 'IR_108', 'IR_120',
'IR_134', 'VIS006', 'VIS008', 'WV_062', 'WV_073')
)
tilemapbase.init(create=True)
def plot_example(batch, model_output, example_i: int=0, border: int=0):
fig = plt.figure(figsize=(20, 20))
ncols=4
nrows=2
# Satellite data
extent = (
float(batch['sat_x_coords'][example_i, 0].cpu().numpy()),
float(batch['sat_x_coords'][example_i, -1].cpu().numpy()),
float(batch['sat_y_coords'][example_i, -1].cpu().numpy()),
float(batch['sat_y_coords'][example_i, 0].cpu().numpy())) # left, right, bottom, top
def _format_ax(ax):
ax.scatter(
batch['x_meters_center'][example_i].cpu(),
batch['y_meters_center'][example_i].cpu(),
s=500, color='white', marker='x')
ax = fig.add_subplot(nrows, ncols, 1) #, projection=ccrs.OSGB(approx=False))
sat_data = batch['sat_data'][example_i, :, :, :, 0].cpu().numpy()
sat_min = np.min(sat_data)
sat_max = np.max(sat_data)
ax.imshow(sat_data[0], extent=extent, interpolation='none', vmin=sat_min, vmax=sat_max)
ax.set_title('t = -{}'.format(params['history_len']))
_format_ax(ax)
ax = fig.add_subplot(nrows, ncols, 2)
ax.imshow(sat_data[params['history_len']+1], extent=extent, interpolation='none', vmin=sat_min, vmax=sat_max)
ax.set_title('t = 0')
_format_ax(ax)
ax = fig.add_subplot(nrows, ncols, 3)
ax.imshow(sat_data[-1], extent=extent, interpolation='none', vmin=sat_min, vmax=sat_max)
ax.set_title('t = {}'.format(params['forecast_len']))
_format_ax(ax)
ax = fig.add_subplot(nrows, ncols, 4)
lat_lon_bottom_left = osgb_to_lat_lon(extent[0], extent[2])
lat_lon_top_right = osgb_to_lat_lon(extent[1], extent[3])
tiles = tilemapbase.tiles.build_OSM()
lat_lon_extent = tilemapbase.Extent.from_lonlat(
longitude_min=lat_lon_bottom_left[1],
longitude_max=lat_lon_top_right[1],
latitude_min=lat_lon_bottom_left[0],
latitude_max=lat_lon_top_right[0])
plotter = tilemapbase.Plotter(lat_lon_extent, tile_provider=tiles, zoom=6)
plotter.plot(ax, tiles)
############## TIMESERIES ##################
# NWP
ax = fig.add_subplot(nrows, ncols, 5)
nwp_dt_index = pd.to_datetime(batch['nwp_target_time'][example_i].cpu().numpy(), unit='s')
pd.DataFrame(
batch['nwp'][example_i, :, :, 0, 0].T.cpu().numpy(),
index=nwp_dt_index,
columns=params['nwp_channels']).plot(ax=ax)
ax.set_title('NWP')
# datetime features
ax = fig.add_subplot(nrows, ncols, 6)
ax.set_title('datetime features')
datetime_feature_cols = ['hour_of_day_sin', 'hour_of_day_cos', 'day_of_year_sin', 'day_of_year_cos']
datetime_features_df = pd.DataFrame(index=nwp_dt_index, columns=datetime_feature_cols)
for key in datetime_feature_cols:
datetime_features_df[key] = batch[key][example_i].cpu().numpy()
datetime_features_df.plot(ax=ax)
ax.legend()
ax.set_xlabel(nwp_dt_index[0].date())
# PV yield
ax = fig.add_subplot(nrows, ncols, 7)
ax.set_title('PV yield for PV ID {:,d}'.format(batch['pv_system_id'][example_i].cpu()))
pv_actual = pd.Series(
batch['pv_yield'][example_i].cpu().numpy(),
index=nwp_dt_index,
name='actual')
pv_pred = pd.Series(
model_output[example_i].detach().cpu().numpy(),
index=nwp_dt_index[params['history_len']+1:],
name='prediction')
pd.concat([pv_actual, pv_pred], axis='columns').plot(ax=ax)
ax.legend()
# fig.tight_layout()
return fig
SAT_X_MEAN = np.float32(309000)
SAT_X_STD = np.float32(316387.42073603)
SAT_Y_MEAN = np.float32(519000)
SAT_Y_STD = np.float32(406454.17945938)
TOTAL_SEQ_LEN = params['history_len'] + params['forecast_len'] + 1
CHANNELS = 32
N_CHANNELS_LAST_CONV = 4
KERNEL = 3
EMBEDDING_DIM = 16
NWP_SIZE = 10 * 2 * 2 # channels x width x height
N_DATETIME_FEATURES = 4
CNN_OUTPUT_SIZE = N_CHANNELS_LAST_CONV * ((params['image_size_pixels'] - 6) ** 2)
FC_OUTPUT_SIZE = 8
RNN_HIDDEN_SIZE = 16
class LitAutoEncoder(pl.LightningModule):
def __init__(
self,
history_len = params['history_len'],
forecast_len = params['forecast_len'],
):
super().__init__()
self.history_len = history_len
self.forecast_len = forecast_len
self.sat_conv1 = nn.Conv2d(in_channels=len(params['sat_channels'])+5, out_channels=CHANNELS, kernel_size=KERNEL)#, groups=history_len+1)
self.sat_conv2 = nn.Conv2d(in_channels=CHANNELS, out_channels=CHANNELS, kernel_size=KERNEL) #, groups=CHANNELS//2)
self.sat_conv3 = nn.Conv2d(in_channels=CHANNELS, out_channels=N_CHANNELS_LAST_CONV, kernel_size=KERNEL) #, groups=CHANNELS)
#self.maxpool = nn.MaxPool2d(kernel_size=KERNEL)
self.fc1 = nn.Linear(
in_features=CNN_OUTPUT_SIZE,
out_features=256)
self.fc2 = nn.Linear(
in_features=256 + EMBEDDING_DIM,
out_features=128)
#self.fc2 = nn.Linear(in_features=EMBEDDING_DIM + N_DATETIME_FEATURES, out_features=128)
self.fc3 = nn.Linear(in_features=128, out_features=64)
self.fc4 = nn.Linear(in_features=64, out_features=32)
self.fc5 = nn.Linear(in_features=32, out_features=FC_OUTPUT_SIZE)
if EMBEDDING_DIM:
self.pv_system_id_embedding = nn.Embedding(
num_embeddings=940,
embedding_dim=EMBEDDING_DIM)
self.encoder_rnn = nn.GRU(
input_size=FC_OUTPUT_SIZE + N_DATETIME_FEATURES + 1 + NWP_SIZE, # plus 1 for history
hidden_size=RNN_HIDDEN_SIZE,
num_layers=2,
batch_first=True)
self.decoder_rnn = nn.GRU(
input_size=FC_OUTPUT_SIZE + N_DATETIME_FEATURES + NWP_SIZE,
hidden_size=RNN_HIDDEN_SIZE,
num_layers=2,
batch_first=True)
self.decoder_fc1 = nn.Linear(
in_features=RNN_HIDDEN_SIZE,
out_features=8)
self.decoder_fc2 = nn.Linear(
in_features=8,
out_features=1)
### EXTRA CHANNELS
# Center marker
new_batch_size = params['batch_size'] * TOTAL_SEQ_LEN
self.center_marker = torch.zeros(
(
new_batch_size,
1,
params['image_size_pixels'],
params['image_size_pixels']
),
dtype=torch.float32, device=self.device)
half_width = params['image_size_pixels'] // 2
self.center_marker[..., half_width-2:half_width+2, half_width-2:half_width+2] = 1
# pixel x & y
pixel_range = (torch.arange(params['image_size_pixels'], device=self.device) - 64) / 37
pixel_range = pixel_range.unsqueeze(0).unsqueeze(0)
self.pixel_x = pixel_range.unsqueeze(-2).expand(new_batch_size, 1, params['image_size_pixels'], -1)
self.pixel_y = pixel_range.unsqueeze(-1).expand(new_batch_size, 1, -1, params['image_size_pixels'])
def forward(self, x):
# ******************* Satellite imagery *************************
# Shape: batch_size, seq_length, width, height, channel
# TODO: Use optical flow, not actual sat images of the future!
sat_data = x['sat_data']
batch_size, seq_len, width, height, n_chans = sat_data.shape
# Stack timesteps as extra examples
new_batch_size = batch_size * seq_len
# 0 1 2 3
sat_data = sat_data.reshape(new_batch_size, width, height, n_chans)
# Conv2d expects channels to be the 2nd dim!
sat_data = sat_data.permute(0, 3, 1, 2)
# Now shape: new_batch_size, n_chans, width, height
### EXTRA CHANNELS
# geo-spatial x
x_coords = x['sat_x_coords'] # shape: batch_size, image_size_pixels
x_coords = x_coords - SAT_X_MEAN
x_coords = x_coords / SAT_X_STD
x_coords = x_coords.unsqueeze(1).expand(-1, width, -1).unsqueeze(1).repeat_interleave(repeats=TOTAL_SEQ_LEN, dim=0)
# geo-spatial y
y_coords = x['sat_y_coords'] # shape: batch_size, image_size_pixels
y_coords = y_coords - SAT_Y_MEAN
y_coords = y_coords / SAT_Y_STD
y_coords = y_coords.unsqueeze(-1).expand(-1, -1, height).unsqueeze(1).repeat_interleave(repeats=TOTAL_SEQ_LEN, dim=0)
# Concat
if sat_data.device != self.center_marker.device:
self.center_marker = self.center_marker.to(sat_data.device)
self.pixel_x = self.pixel_x.to(sat_data.device)
self.pixel_y = self.pixel_y.to(sat_data.device)
sat_data = torch.cat((sat_data, self.center_marker, x_coords, y_coords, self.pixel_x, self.pixel_y), dim=1)
del x_coords, y_coords
# Pass data through the network :)
out = F.relu(self.sat_conv1(sat_data))
#out = self.maxpool(out)
out = F.relu(self.sat_conv2(out))
#out = self.maxpool(out)
out = F.relu(self.sat_conv3(out))
out = out.reshape(new_batch_size, CNN_OUTPUT_SIZE)
out = F.relu(self.fc1(out))
# ********************** Embedding of PV system ID *********************
if EMBEDDING_DIM:
pv_embedding = self.pv_system_id_embedding(x['pv_system_row_number'].repeat_interleave(TOTAL_SEQ_LEN))
out = torch.cat(
(
out,
pv_embedding
),
dim=1)
# Fully connected layers.
out = F.relu(self.fc2(out))
out = F.relu(self.fc3(out))
out = F.relu(self.fc4(out))
out = F.relu(self.fc5(out))
# ******************* PREP DATA FOR RNN *****************************************
out = out.reshape(batch_size, TOTAL_SEQ_LEN, FC_OUTPUT_SIZE) # TODO: Double-check this does what we expect!
# The RNN encoder gets recent history: satellite, NWP, datetime features, and recent PV history.
# The RNN decoder gets what we know about the future: satellite, NWP, and datetime features.
# *********************** NWP Data **************************************
nwp_data = x['nwp'].float() # Shape: batch_size, channel, seq_length, width, height
nwp_data = nwp_data.permute(0, 2, 1, 3, 4) # RNN expects seq_len to be dim 1.
batch_size, nwp_seq_len, n_nwp_chans, nwp_width, nwp_height = nwp_data.shape
nwp_data = nwp_data.reshape(batch_size, nwp_seq_len, n_nwp_chans * nwp_width * nwp_height)
# Concat
rnn_input = torch.cat(
(
out,
nwp_data,
x['hour_of_day_sin'].unsqueeze(-1),
x['hour_of_day_cos'].unsqueeze(-1),
x['day_of_year_sin'].unsqueeze(-1),
x['day_of_year_cos'].unsqueeze(-1),
),
dim=2)
pv_yield_history = x['pv_yield'][:, :self.history_len+1].unsqueeze(-1)
encoder_input = torch.cat(
(
rnn_input[:, :self.history_len+1],
pv_yield_history
),
dim=2)
encoder_output, encoder_hidden = self.encoder_rnn(encoder_input)
decoder_output, _ = self.decoder_rnn(rnn_input[:, -self.forecast_len:], encoder_hidden)
# decoder_output is shape batch_size, seq_len, rnn_hidden_size
decoder_output = F.relu(self.decoder_fc1(decoder_output))
decoder_output = self.decoder_fc2(decoder_output)
return decoder_output.squeeze()
def _training_or_validation_step(self, batch, is_train_step):
y_hat = self(batch)
y = batch['pv_yield'][:, -self.forecast_len:]
#y = torch.rand((32, 1), device=self.device)
mse_loss = F.mse_loss(y_hat, y)
nmae_loss = (y_hat - y).abs().mean()
# TODO: Compute correlation coef using np.corrcoef(tensor with shape (2, num_timesteps))[0, 1]
# on each example, and taking the mean across the batch?
tag = "Train" if is_train_step else "Validation"
self.log_dict({f'MSE/{tag}': mse_loss}, on_step=is_train_step, on_epoch=True)
self.log_dict({f'NMAE/{tag}': nmae_loss}, on_step=is_train_step, on_epoch=True)
return nmae_loss
def training_step(self, batch, batch_idx):
return self._training_or_validation_step(batch, is_train_step=True)
def validation_step(self, batch, batch_idx):
if batch_idx == 0:
# Plot example
model_output = self(batch)
fig = plot_example(batch, model_output)
self.logger.experiment['validation/plot'].log(File.as_image(fig))
return self._training_or_validation_step(batch, is_train_step=False)
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=0.001)
return optimizer
model = LitAutoEncoder()
logger = NeptuneLogger(project='OpenClimateFix/predict-pv-yield')
logger.log_hyperparams(params)
print('logger.version =', logger.version)
trainer = pl.Trainer(gpus=1, max_epochs=10_000, logger=logger)
trainer.fit(model, train_dataloader)
```
# Exercise Set 5: Python plotting
*Morning, August 15, 2018*
In this exercise set we will work with visualizations in Python, using two powerful plotting libraries. We will also quickly touch upon using pandas for exploratory plotting.
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
iris = sns.load_dataset('iris')
titanic = sns.load_dataset('titanic')
```
## Exercise Section 5.1: Exploring the data with plots
We will work with the two datasets `iris` and `titanic` both of which you should already have loaded. The goal with the plots you produce in this section is to give yourself and your group members an improved understanding of the datasets.
> **Ex. 5.1.1:** Show the first five rows of the titanic dataset. What information is in the dataset? Use a barplot to show the probability of survival for men and women within each passenger class. Can you make a boxplot showing the same information (why/why not)? _Bonus:_ show a boxplot of the fare prices within each passenger class.
>
> Spend five minutes discussing what you can learn about the survival-selection aboard titanic from the figure(s).
>
> > _Hint:_ https://seaborn.pydata.org/generated/seaborn.barplot.html, specifically the `hue` option.
```
# [Answer to Ex. 5.1.1]
# Will be in assignment 1
```
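Ahead of the official answer, here is a minimal, hedged sketch of one possible approach; it only assumes the `titanic` frame and the imports loaded above. `sns.barplot` shows the mean of the 0/1 `survived` column, i.e. the survival probability, while a boxplot of a binary column is arguably not informative; a boxplot of `fare` within each class works fine.
```
# Peek at the data
print(titanic.head())

# Survival probability by passenger class and sex (bars show the mean of the 0/1 `survived` column)
sns.barplot(x='class', y='survived', hue='sex', data=titanic)
plt.title('Survival probability by passenger class and sex')
plt.show()

# Bonus: fare prices within each passenger class
sns.boxplot(x='class', y='fare', data=titanic)
plt.show()
```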
> **Ex. 5.1.2:** Using the iris flower dataset, draw a scatterplot of sepal length and petal length. Include a second order polynomial fitted to the data. Add a title to the plot and rename the axis labels.
> _Discuss:_ Is this a meaningful way to display the data? What could we do differently?
>
> For a better understanding of the dataset this image might be useful:
> <img src="iris_pic.png" alt="Drawing" style="width: 200px;"/>
>
>> _Hint:_ use the `.regplot` method from seaborn.
```
# [Answer to Ex. 5.1.2]
# Will be in assignment 1
```
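A rough sketch of the plot Ex. 5.1.2 asks for; the column names are those of the seaborn `iris` dataset loaded above, and the title and axis labels are illustrative.
```
ax = sns.regplot(x='sepal_length', y='petal_length', data=iris, order=2)
ax.set_title('Iris: petal length vs. sepal length (second-order fit)')
ax.set_xlabel('Sepal length (cm)')
ax.set_ylabel('Petal length (cm)')
plt.show()
```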
> **Ex. 5.1.3:** Combine two of the figures you created above into a two-panel figure similar to the one shown here:
> <img src="Example.png" alt="Drawing" style="width: 600px;"/>
>
> Save the figure as a png file on your computer.
>> _Hint:_ See [this question](https://stackoverflow.com/questions/41384040/subplot-for-seaborn-boxplot) on stackoverflow for inspiration.
```
# [Answer to Ex. 5.1.3]
# Will be in assignment 1
```
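One hedged way to build a two-panel figure for Ex. 5.1.3 is to create the axes first and pass them to seaborn explicitly (the output file name is illustrative):
```
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
sns.barplot(x='class', y='survived', hue='sex', data=titanic, ax=ax1)
sns.regplot(x='sepal_length', y='petal_length', data=iris, order=2, ax=ax2)
fig.tight_layout()
fig.savefig('two_panel_example.png')  # illustrative file name
plt.show()
```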
> **Ex. 5.1.4:** Use [pairplot with hue](https://seaborn.pydata.org/generated/seaborn.pairplot.html) to create a figure that clearly shows how the different species vary across measurements. Change the color palette and remove the shading from the density plots. _Bonus:_ Try to explain how the `diag_kws` argument works (_hint:_ [read here](https://stackoverflow.com/questions/1769403/understanding-kwargs-in-python))
```
# [Answer to Ex. 5.1.4]
# Will be in assignment 1
```
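A minimal sketch for Ex. 5.1.4. The `diag_kws` dictionary is forwarded as keyword arguments to the function drawing the diagonal plots (here the KDEs), which is how the shading is removed; the palette choice is arbitrary.
```
# In newer seaborn versions `shade` is replaced by `fill`
sns.pairplot(iris, hue='species', palette='husl', diag_kind='kde', diag_kws=dict(shade=False))
plt.show()
```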
## Exercise Section 5.2: Explanatory plotting
In this section we will only work with the titanic dataset. We will create a simple figure from the bottom using the [_grammar of graphics_](http://vita.had.co.nz/papers/layered-grammar.pdf) framework.
<br>
**_NOTE:_** Because of the way the jupyter notebooks are made, you will have to complete this exercise in a single code cell.
> **Ex. 5.2.1:** Create an empty coordinate system with the *x* axis spanning from 0 to 100 and the *y* axis spanning 0 to 0.05.
<br><br>
> **Ex. 5.2.2:** Add three KDE-curves to the existing axis. The KDEs should estimate the density of passenger age within each passenger class. Add a figure title and axis labels. Make sure the legend entries make sense. *If* you have time, change the colors.
>
> > _Hint:_ a `for` loop might be useful here.
<br><br>
The following exercises highlight some of the advanced uses of matplotlib and seaborn. These techniques allow you to create customized plots with a lot of versatility. These are **_BONUS_** questions.
> **Ex. 5.2.3:** Add a new subplot that sits within the outer one. Use `[0.55, 0.6, 0.3, 0.2]` as the subplot's size. At this point your figure should look something like this:
>
> <img src="exampleq3.png" alt="Drawing" style="width: 400px;"/>
>
>> _Hint:_ This [link](https://jakevdp.github.io/PythonDataScienceHandbook/04.08-multiple-subplots.html) has some tips for plotting subplots.
<br><br>
> **Ex. 5.2.4:** Move the legend outside the graph window, and add a barplot of survival probabilities split by class to the small subplot.
>
>> _Hint:_ [Look here](https://stackoverflow.com/questions/4700614/how-to-put-the-legend-out-of-the-plot) for examples of how to move the legend box around.
>
> In the end your figure should look similar to this one:
> <img src="final4.png" alt="Drawing" style="width: 400px;"/>
```
# [Answer to Ex. 5.2.1-5.2.4]
# Question 1
fig, ax1 = plt.subplots(1,1)
ax1.set_xlim(0, 100)
ax1.set_ylim(0, 0.05)
# Question 2
for c in set(titanic['class']):
sub_data = titanic.loc[titanic['class'] == c]
sns.kdeplot(sub_data.age, ax = ax1, label = c + ' class')
ax1.set_xlabel("Age")
ax1.set_ylabel("Density")
ax1.set_title("Age densities")
# BONUS QUESTIONS ----------------------------------------
# Question 3
ax2 = fig.add_axes([0.55, 0.6, 0.3, 0.2])
plt.savefig('exampleq3.png')
# Question 4
box = ax1.get_position()
ax1.set_position([box.x0, box.y0 + box.height * 0.1,
box.width, box.height * 0.9])
ax1.legend(loc='upper center', bbox_to_anchor=(0.5, -0.2),
fancybox=True, shadow=True, ncol=5)
# Question 5
sns.barplot(x='class', y='survived', data=titanic, ax = ax2)
plt.savefig('final4.png')
```

[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/6.Clinical_Context_Spell_Checker.ipynb)
<H1> Context Spell Checker - Medical </H1>
```
import json
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
license_keys = json.load(f)
license_keys.keys()
license_keys['JSL_VERSION']
import os
# Install java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! java -version
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID']= license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
jsl_version = license_keys['JSL_VERSION']
version = license_keys['PUBLIC_VERSION']
! pip install --ignore-installed -q pyspark==2.4.4
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
! pip install --ignore-installed -q spark-nlp==$version
import sparknlp
print (sparknlp.version())
import json
import os
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
spark = sparknlp_jsl.start(secret)
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
tokenizer = RecursiveTokenizer()\
.setInputCols(["document"])\
.setOutputCol("token")\
.setPrefixes(["\"", "(", "[", "\n"])\
.setSuffixes([".", ",", "?", ")","!", "'s"])
spellModel = ContextSpellCheckerModel\
.pretrained('spellcheck_clinical', 'en', 'clinical/models')\
.setInputCols("token")\
.setOutputCol("checked")
pipeline = Pipeline(
stages = [
documentAssembler,
tokenizer,
spellModel
])
empty_ds = spark.createDataFrame([[""]]).toDF("text")
lp = LightPipeline(pipeline.fit(empty_ds))
```
OK, at this point we have our spell-checking pipeline ready, as expected. Let's see what it can do. Take a look at these errors:
_**Witth** the **hell** of **phisical** **terapy** the patient was **imbulated** and on posoperative, the **impatient** tolerating a post **curgical** soft diet._
_With __paint__ __wel__ controlled on __orall__ pain medications, she was discharged __too__ __reihabilitation__ __facilitay__._
_She is to also call the __ofice__ if she has any __ever__ greater than 101, or __leeding__ __form__ the surgical wounds._
_Abdomen is __sort__, nontender, and __nonintended__._
_Patient not showing pain or any __wealth__ problems._
_No __cute__ distress_
Note that some of the errors are valid English words; only by considering the context can the right choice be made.
```
example = ["Witth the hell of phisical terapy the patient was imbulated and on posoperative, the impatient tolerating a post curgical soft diet.",
"With paint wel controlled on orall pain medications, she was discharged too reihabilitation facilitay.",
"She is to also call the ofice if she has any ever greater than 101, or leeding form the surgical wounds.",
"Abdomen is sort, nontender, and nonintended.",
"Patient not showing pain or any wealth problems.",
"No cute distress"
]
for pairs in lp.annotate(example):
print (list(zip(pairs['token'],pairs['checked'])))
```
<font face=楷体 size=6><b>Coffin Dance face detection:</b>
<font face=楷体 size=5><b>Background:</b>
<font face=楷体 size=3>The "Coffin Dance" (pallbearers) meme is so popular right now, so why not give it a try with PaddleHub?
<br>
<font face=楷体 size=3>Finals are approaching, I have exams to prepare for and the postgraduate entrance exam on top of that. I have good ideas but no time to build them, so for now here is just a Coffin Dance video.
<font face=楷体 size=5><b>Result:</b>
<font face=楷体 size=3>On my Bilibili channel: <a href=https://www.bilibili.com/video/BV1Sk4y1r7Zz>https://www.bilibili.com/video/BV1Sk4y1r7Zz</a>
<font face=楷体 size=5><b>Approach and steps:</b>
<font face=楷体 size=3>The approach is as simple as it gets: split the video frame by frame and run face detection on every frame.
<font face=楷体 size=3>The steps: face detection + ffmpeg for splitting and merging.
<font face=楷体 size=5><b>Summary:</b><br>
<font face=楷体 size=3>PaddleHub is really handy; when I have time some day I will definitely do something bigger with it.<br>
<font face=楷体 size=3>There is just too little time; preparing for the postgraduate entrance exam is rough.
```
from IPython.display import HTML
HTML('<iframe style="width:98%;height: 450px;" src="//player.bilibili.com/player.html?bvid=BV1Sk4y1r7Zz" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>')
# ---------------------------------------------------------------------------
# Set environment variables to use the `GPU` (this still raised an error; an issue was filed on GitHub and has since been resolved)
# ---------------------------------------------------------------------------
%set_env CUDA_VISIBLE_DEVICES = 0
# ---------------------------------------------------------------------------
# Install the video-processing dependencies
# ---------------------------------------------------------------------------
!pip install moviepy -i https://pypi.tuna.tsinghua.edu.cn/simple
!pip install ffmpeg
# ---------------------------------------------------------------------------
# Install PaddleHub and download the model
# ---------------------------------------------------------------------------
try:
import paddlehub as hub
except ImportError:
!pip install paddlehub==1.6.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
import paddlehub as hub
try:
module = hub.Module(name="ultra_light_fast_generic_face_detector_1mb_640")
# module = hub.Moudle(name="ultra_light_fast_generic_face_detector_1mb_320")
except FileNotFoundError:
!hub install ultra_light_fast_generic_face_detector_1mb_640==1.1.2
module = hub.Module(name="ultra_light_fast_generic_face_detector_1mb_640")
# module = hub.Moudle(name="ultra_light_fast_generic_face_detector_1mb_320")
```
Note:
Ultra-Light-Fast-Generic-Face-Detector-1MB provides two pretrained models: ultra_light_fast_generic_face_detector_1mb_320 and ultra_light_fast_generic_face_detector_1mb_640.
- ultra_light_fast_generic_face_detector_1mb_320 rescales the input image to 320 * 240 at prediction time, which makes inference faster. For more details on this model, see the [PaddleHub documentation](https://www.paddlepaddle.org.cn/hubdetail?name=ultra_light_fast_generic_face_detector_1mb_320&en_category=ObjectDetection)
- ultra_light_fast_generic_face_detector_1mb_640 rescales the input image to 640 * 480 at prediction time, which gives higher accuracy. For more details on this model, see the [PaddleHub documentation](https://www.paddlepaddle.org.cn/hubdetail?name=ultra_light_fast_generic_face_detector_1mb_640&en_category=ObjectDetection)
Pick whichever model fits your needs. When using these models through PaddleHub, switching between them only requires changing the `name` argument.
```
# ---------------------------------------------------------------------------
# Inspect basic information about the Coffin Dance video
# ---------------------------------------------------------------------------
import os
import cv2
import json
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
from tqdm import tqdm
video = cv2.VideoCapture("video.flv")
fps = video.get(cv2.CAP_PROP_FPS)  # frame rate
frameCount = video.get(cv2.CAP_PROP_FRAME_COUNT)  # total number of frames
width = video.get(cv2.CAP_PROP_FRAME_WIDTH)  # frame width
height = video.get(cv2.CAP_PROP_FRAME_HEIGHT)  # frame height
print('Video width: {}'.format(width))
print('Video height: {}'.format(height))
print('Frame rate: {}'.format(fps))
print('Total number of frames: {}'.format(frameCount))
cv2.__version__
# ---------------------------------------------------------------------------
# Split the video into individual frames and save them
# ---------------------------------------------------------------------------
if not os.path.exists('frame'):
os.mkdir('frame')
all_img = []
all_img_path_dict = {'image':[]}
success, frame = video.read()
i = 0
while success:
all_img.append(frame)
i += 1
# if not i % 10:print(i)
success, frame = video.read()
path = os.path.join('frame', str(i)+'.jpg')
all_img_path_dict['image'].append(path)
cv2.imwrite(path, frame)
all_img_path_dict['image'].pop()
print('Done')
# ---------------------------------------------------------------------------
# Run prediction and print the output (or load previously saved results)
# ---------------------------------------------------------------------------
# File that stores the detection results for this video
info_path = 'info.json'
# After the hub version update the detection confidences became very unstable floats; short on time (preparing for finals), so result caching is skipped for now
# if os.path.exists(info_path):
# # read the previously saved `json` data
# with open(info_path, 'r') as f:
# json_dict = json.load(f)
# results = json_dict['data']
if False:
pass
else: # if no saved `json` data was found
    # For modules that support one-click prediction, PaddleHub lets you call the module's corresponding prediction API directly.
results = module.face_detection(data=all_img_path_dict,
use_gpu=True,
visualization=True)
# save_json = {'data':results}
# with open(info_path, 'w') as f:
# f.write(json.dumps(save_json))
# ---------------------------------------------------------------------------
# Produce the intermediate files needed to build the output video
# ---------------------------------------------------------------------------
# Size of the output video
size = (int(width), int(height))
size = (int(height), int(width))
# Create a video writer object (did not work well)
# videoWriter = cv2.VideoWriter("a.avi", cv2.VideoWriter_fourcc('M','J','P','G'), fps, size)
for i, info in tqdm(enumerate(results)):
num_info = info['data']
if not len(num_info):
        # If no face is detected in this frame, set `frame` to the original image
frame = all_img[i].copy()[:,:, ::-1]
else:
        # frame = mpimg.imread(info['save_path']) # the old save_path no longer exists after the update...
frame = mpimg.imread(info['path'].replace('frame', 'face_detector_640_predict_output'))
cv2.putText(frame, 'fps: {:.2f}'.format(fps), (20, 370), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 0 ,255), 2)
cv2.putText(frame, 'count: ' + str(len(num_info)), (20, 400), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 0 ,255), 2)
cv2.putText(frame, 'frame: ' + str(i), (20, 430), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 0 ,255), 2)
# cv2.putText(frame, 'time: {:.2f}s'.format(i / fps), (20,460), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255,0,255), 2)
plt.imsave('./img_out/{}.jpg'.format(i), frame)
# ---------------------------------------------------------------------------
# Produce the output video file (no soundtrack yet, so it has no soul) (tried a bunch of Python tools; none beat ffmpeg)
# ---------------------------------------------------------------------------
if os.path.exists('temp.mp4'):
!rm -f temp.mp4
!ffmpeg -f image2 -i img_out/%d.jpg -vcodec libx264 -r 60.0 temp.mp4
# ---------------------------------------------------------------------------
# Extract the soundtrack from the source file
# ---------------------------------------------------------------------------
if os.path.exists('nb.mp3'):
!rm -f nb.mp3
!ffmpeg -i video.flv -f mp3 nb.mp3
# ---------------------------------------------------------------------------
# Combine audio and video (the video speed needs adjusting so that audio and video have the same duration; that is awkward from the command line, so I did the final merge locally)
# ---------------------------------------------------------------------------
# # strip the audio track from the temp video
# !ffmpeg -i temp.mp4 -c:v copy -an temp_new.mp4
# # add background music to the video
# !ffmpeg -i temp_new.mp4 -i nb.mp3 -t 52 -y last.mp4
```
# Monte Carlo Methods
In this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms.
While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.
### Part 0: Explore BlackjackEnv
We begin by importing the necessary packages.
```
import sys
import gym
import numpy as np
from collections import defaultdict
from plot_utils import plot_blackjack_values, plot_policy
```
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
```
env = gym.make('Blackjack-v1')
```
Each state is a 3-tuple of:
- the player's current sum $\in \{0, 1, \ldots, 31\}$,
- the dealer's face up card $\in \{1, \ldots, 10\}$, and
- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).
The agent has two potential actions:
```
STICK = 0
HIT = 1
```
Verify this by running the code cell below.
```
print(env.observation_space)
print(env.action_space)
```
Execute the code cell below to play Blackjack with a random policy.
(_The code currently plays Blackjack three times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
```
for i_episode in range(3):
state = env.reset()
while True:
print(state)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
```
### Part 1: MC Prediction
In this section, you will write your own implementation of MC prediction (for estimating the action-value function).
We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy.
The function accepts as **input**:
- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.
It returns as **output**:
- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
```
def generate_episode_from_limit_stochastic(bj_env):
episode = []
state = bj_env.reset()
while True:
probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
action = np.random.choice(np.arange(2), p=probs)
next_state, reward, done, info = bj_env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
```
Execute the code cell below to play Blackjack with the policy.
(*The code currently plays Blackjack five times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
```
for i in range(5):
print(generate_episode_from_limit_stochastic(env))
```
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.
Your algorithm takes four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `generate_episode`: This is a function that returns an episode of interaction.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).
The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
```
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
# initialize empty dictionaries of arrays
returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate an episode
episode = generate_episode(env)
# obtain the states, actions, and rewards
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
# update the sum of the returns, number of visits, and action-value
# function estimates for each state-action pair in the episode
for i, state in enumerate(states):
returns_sum[state][actions[i]] += sum(rewards[i:]*discounts[:-(1+i)])
N[state][actions[i]] += 1.0
Q[state][actions[i]] = returns_sum[state][actions[i]] / N[state][actions[i]]
return Q
```
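To make the discounting indexing above concrete, here is a small hedged example with toy values (the rewards are illustrative):
```
# Toy episode with rewards R_1, R_2, R_3 and gamma = 0.9
toy_gamma = 0.9
toy_rewards = (1.0, 0.0, 2.0)
toy_discounts = np.array([toy_gamma**i for i in range(len(toy_rewards)+1)])  # [1, 0.9, 0.81, 0.729]

# Return following the state visited at time step i = 1:
i = 1
print(sum(toy_rewards[i:] * toy_discounts[:-(1+i)]))  # 0.0*1 + 2.0*0.9 = 1.8
```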
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.
To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
```
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)
# obtain the corresponding state-value function
V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \
for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V_to_plot)
```
### Part 2: MC Control
In this section, you will write your own implementation of constant-$\alpha$ MC control.
Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).
The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.
(_Feel free to define additional functions to help you to organize your code._)
```
def generate_episode_from_Q(env, Q, epsilon, nA):
""" generates an episode from following the epsilon-greedy policy """
episode = []
state = env.reset()
while True:
action = np.random.choice(np.arange(nA), p=get_probs(Q[state], epsilon, nA)) \
if state in Q else env.action_space.sample()
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward))
state = next_state
if done:
break
return episode
def get_probs(Q_s, epsilon, nA):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
policy_s = np.ones(nA) * epsilon / nA
best_a = np.argmax(Q_s)
policy_s[best_a] = 1 - epsilon + (epsilon / nA)
return policy_s
def update_Q(env, episode, Q, alpha, gamma):
""" updates the action-value function estimate using the most recent episode """
states, actions, rewards = zip(*episode)
# prepare for discounting
discounts = np.array([gamma**i for i in range(len(rewards)+1)])
for i, state in enumerate(states):
old_Q = Q[state][actions[i]]
Q[state][actions[i]] = old_Q + alpha*(sum(rewards[i:]*discounts[:-(1+i)]) - old_Q)
return Q
def mc_control(env, num_episodes, alpha, gamma=1.0, eps_start=1.0, eps_decay=.99999, eps_min=0.05):
nA = env.action_space.n
# initialize empty dictionary of arrays
Q = defaultdict(lambda: np.zeros(nA))
epsilon = eps_start
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# set the value of epsilon
epsilon = max(epsilon*eps_decay, eps_min)
# generate an episode by following epsilon-greedy policy
episode = generate_episode_from_Q(env, Q, epsilon, nA)
# update the action-value function estimate using the episode
Q = update_Q(env, episode, Q, alpha, gamma)
# determine the policy corresponding to the final action-value function estimate
policy = dict((k,np.argmax(v)) for k, v in Q.items())
return policy, Q
```
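As a quick, hedged sanity check of the epsilon-greedy probabilities produced by `get_probs` (the action values and epsilon are chosen for illustration):
```
# With nA = 2 and epsilon = 0.1: the non-greedy action gets epsilon/nA = 0.05,
# the greedy action gets 1 - epsilon + epsilon/nA = 0.95
print(get_probs(np.array([0.1, 0.5]), epsilon=0.1, nA=2))  # -> [0.05 0.95]
```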
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
```
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 500000, 0.02)
```
Next, we plot the corresponding state-value function.
```
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function
plot_blackjack_values(V)
```
Finally, we visualize the policy that is estimated to be optimal.
```
# plot the policy
plot_policy(policy)
```
The **true** optimal policy $\pi_*$ can be found in Figure 5.2 of the [textbook](http://go.udacity.com/rl-textbook) (and appears below). Compare your final estimate to the optimal policy - how close are you able to get? If you are not happy with the performance of your algorithm, take the time to tweak the decay rate of $\epsilon$, change the value of $\alpha$, and/or run the algorithm for more episodes to attain better results.

**Aims**:
- extract the omics mentioned in multi-omics articles
**NOTE**: the articles not in PMC/with no full text need to be analysed separately, or at least highlighted.
```
%run notebook_setup.ipynb
import pandas
pandas.set_option('display.max_colwidth', 100)
%vault from pubmed_derived_data import literature, literature_subjects
literature['title_abstract_text_subjects'] = (
literature['title']
+ ' ' + literature['abstract_clean'].fillna('')
+ ' ' + literature_subjects.apply(lambda x: ' '.join(x[x == True].index), axis=1)
+ ' ' + literature['full_text'].fillna('')
)
omics_features = literature.index.to_frame().drop(columns='uid').copy()
from functools import partial
from helpers.text_processing import check_usage
from pandas import Series
check_usage_in_input = partial(
check_usage,
data=literature,
column='title_abstract_text_subjects',
limit=5 # show only first 5 results
)
TERM_IN_AT_LEAST_N_ARTICLES = 5
```
# Omics
## 1. Lookup by words which end with -ome
```
cellular_structures = {
# organelles
'peroxisome',
'proteasome',
'ribosome',
'exosome',
'nucleosome',
'polysome',
'autosome',
'autophagosome',
'endosome',
'lysosome',
# proteins and molecular complexes
'spliceosome',
'cryptochrome',
# chromosmes
'autosome',
'chromosome',
'x-chromosome',
'y-chromosome',
}
species = {
'trichome'
}
tools_and_methods = {
# dry lab
'dphenome',
'dgenome',
'reactome',
'rexposome',
'phytozome',
'rgenome',
'igenome', # iGenomes
# wet lab
'microtome'
}
not_an_ome = {
'outcome',
'middle-income',
'welcome',
'wellcome', # :)
'chrome',
'some',
'cumbersome',
'become',
'home',
'come',
'overcome',
'cytochrome',
'syndrome',
'ubiome',
    'biome', # this IS an ome, but it is more about environmental studies than molecular biology!
'fluorochrome',
'post-genome',
'ubiquitin-proteasome', # UPS
*tools_and_methods,
*cellular_structures,
*species
}
from omics import get_ome_regexp
ome_re = get_ome_regexp()
get_ome_regexp??
ome_occurrences = (
literature['title_abstract_text_subjects'].str.lower()
.str.extractall(ome_re)[0]
.to_frame('term').reset_index()
)
ome_occurrences = ome_occurrences[~ome_occurrences.term.isin(not_an_ome)]
ome_occurrences.head(3)
```
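For intuition only, a rough stand-in for the pattern (this is not the project's actual `get_ome_regexp`, which is displayed above via `get_ome_regexp??`): something along these lines picks up lowercase words ending in -ome.
```
import re

# purely illustrative; the real regexp handles boundaries and hyphenation more carefully
ome_re_sketch = re.compile(r'\b([a-z][\w-]*ome)\b')
ome_re_sketch.findall('we integrated genome, proteome and gut microbiome data')
# ['genome', 'proteome', 'microbiome']
```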
### 1.1 Harmonise hyphenation
```
from helpers.text_processing import report_hyphenation_trends, harmonise_hyphenation
hyphenation_rules = report_hyphenation_trends(ome_occurrences.term)
hyphenation_rules
ome_occurrences.term = harmonise_hyphenation(ome_occurrences.term, hyphenation_rules)
```
### 1.2 Fix typos
```
from helpers.text_processing import find_term_typos, create_typos_map
ome_counts = ome_occurrences.drop_duplicates(['uid', 'term']).term.sorted_value_counts()
potential_ome_typos = find_term_typos(ome_counts, TERM_IN_AT_LEAST_N_ARTICLES - 1)
potential_ome_typos
check_usage_in_input('1-metabolome')
check_usage_in_input('miRNAome')
check_usage_in_input('miRome')
check_usage_in_input('rexposome')
check_usage_in_input('glycol-proteome')
check_usage_in_input('rgenome')
check_usage_in_input('iGenomes')
check_usage_in_input('cancergenome')
is_typo_subset_or_variant = {
('transcritome', 'transcriptome'): True,
('transciptome', 'transcriptome'): True,
('tanscriptome', 'transcriptome'): True,
('trascriptome', 'transcriptome'): True,
('microbome', 'microbiome'): True,
('protenome', 'proteome'): True,
# (neither n- nor o- is frequent enough on its own)
('o-glycoproteome', 'glycoproteome'): True,
('n-glycoproteome', 'glycoproteome'): True,
('glycol-proteome', 'glycoproteome'): True, # note "glycol" instead of "glyco"
('mirome', 'mirnome'): True,
('1-metabolome', 'metabolome'): True
}
ome_typos_map = create_typos_map(potential_ome_typos, is_typo_subset_or_variant)
replaced = ome_occurrences.term[ome_occurrences.term.isin(ome_typos_map)]
replaced.value_counts()
len(replaced)
ome_occurrences.term = ome_occurrences.term.replace(ome_typos_map)
```
### 1.3 Replace synonymous and narrow terms
```
ome_replacements = {}
```
#### miRNAomics → miRNomics
miRNAome is the more popular name for the -ome, while miRNomics is the more popular name for the -omics.
```
ome_occurrences.term.value_counts().loc[['mirnome', 'mirnaome']]
```
As I use -omics later on, for consistency I will change miRNAome → miRNome.
```
ome_replacements['miRNAome'] = 'miRNome'
```
#### Cancer genome → genome
```
ome_occurrences.term.value_counts().loc[['genome', 'cancer-genome']]
ome_replacements['cancer-genome'] = 'genome'
```
#### Host microbiome → microbiome
```
ome_occurrences.term.value_counts().loc[['microbiome', 'host-microbiome']]
ome_replacements['host-microbiome'] = 'microbiome'
```
#### Replace the values
```
ome_occurrences.term = ome_occurrences.term.replace(
{k.lower(): v.lower() for k, v in ome_replacements.items()}
)
```
### 1.4 Summarise popular \*ome terms
```
ome_counts = ome_occurrences.drop_duplicates(['uid', 'term']).term.sorted_value_counts()
ome_common_counts = ome_counts[ome_counts >= TERM_IN_AT_LEAST_N_ARTICLES]
ome_common_counts
ome_common_terms = Series(ome_common_counts.index)
ome_common_terms[ome_common_terms.str.endswith('some')]
```
## 2. Lookup by omics and adjectives
```
from omics import get_omics_regexp
omics_re = get_omics_regexp()
get_omics_regexp??
check_usage_in_input('integromics')
check_usage_in_input('meta-omics')
check_usage_in_input('post-genomic')
check_usage_in_input('3-omics')
multi_omic = {
'multi-omic',
'muti-omic',
'mutli-omic',
'multiomic',
'cross-omic',
'panomic',
'pan-omic',
'trans-omic',
'transomic',
'four-omic',
'multiple-omic',
'inter-omic',
'poly-omic',
'polyomic',
'integromic',
'integrated-omic',
'integrative-omic',
'3-omic'
}
tools = {
# MixOmics
'mixomic',
# MetaRbolomics
'metarbolomic',
# MinOmics
'minomic',
# LinkedOmics - TCGA portal
'linkedomic',
# Mergeomics - https://doi.org/10.1186/s12864-016-3198-9
'mergeomic'
}
vague = {
'single-omic'
}
adjectives = {
'economic',
'socio-economic',
'socioeconomic',
'taxonomic',
'syndromic',
'non-syndromic',
'agronomic',
'anatomic',
'autonomic',
'atomic',
'palindromic',
# temporal
'postgenomic',
'post-genomic'
}
not_an_omic = {
'non-omic', # this on was straightforward :)
*adjectives,
*multi_omic,
*tools,
*vague
}
omic_occurrences = (
literature['title_abstract_text_subjects'].str.lower()
.str.extractall(omics_re)[0]
.to_frame('term').reset_index()
)
omic_occurrences = omic_occurrences[~omic_occurrences.term.isin(not_an_omic)]
omic_occurrences.head(2)
```
### 2.1 Harmonise hyphenation
```
hyphenation_rules = report_hyphenation_trends(omic_occurrences.term)
hyphenation_rules
omic_occurrences.term = harmonise_hyphenation(omic_occurrences.term, hyphenation_rules)
```
### 2.2 Fix typos
```
omic_counts = omic_occurrences.drop_duplicates(['uid', 'term']).term.sorted_value_counts()
potential_omic_typos = find_term_typos(omic_counts, TERM_IN_AT_LEAST_N_ARTICLES - 1)
potential_omic_typos
check_usage_in_input('non-omic')
check_usage_in_input('C-metabolomics')
```
This is not captured in the abstract text, but the full text has 13C, i.e. carbon-13, so this is a type of metabolomics.
```
check_usage_in_input('miRNAomics')
check_usage_in_input('miRomics')
check_usage_in_input('MinOmics')
check_usage_in_input('onomic', words=True)
literature.loc[omic_occurrences[omic_occurrences.term == 'onomic'].uid].title_abstract_text_subjects
check_usage_in_input(r'\bonomic', words=False, highlight=' onomic')
check_usage_in_input(' ionomic', words=False)
check_usage_in_input('integratomic', words=False)
```
Note: integratomics has literally three hits in PubMed, two because of http://www.integratomics-time.com/
```
is_typo_subset_or_variant = {
('phoshphoproteomic', 'phosphoproteomic'): True,
('transriptomic', 'transcriptomic'): True,
('transcripomic', 'transcriptomic'): True,
('transciptomic', 'transcriptomic'): True,
('trancriptomic', 'transcriptomic'): True,
('trascriptomic', 'transcriptomic'): True,
('metageonomic', 'metagenomic'): True,
('metaobolomic', 'metabolomic'): True,
('metabotranscriptomic', 'metatranscriptomic'): False,
('mirnaomic', 'mirnomic'): True,
('metranscriptomic', 'metatranscriptomic'): True,
('metranscriptomic', 'transcriptomic'): False,
('miromic', 'mirnomic'): True,
('n-glycoproteomic', 'glycoproteomic'): True,
('onomic', 'ionomic'): False,
('c-metabolomic', 'metabolomic'): True,
('integratomic', 'interactomic'): False,
('pharmacoepigenomic', 'pharmacogenomic'): False,
('metobolomic', 'metabolomic'): True,
# how to treat single-cell?
('scepigenomic', 'epigenomic'): True,
#('epitranscriptomic', 'transcriptomic'): False
('epigenomomic', 'epigenomic'): True,
}
omic_typos_map = create_typos_map(potential_omic_typos, is_typo_subset_or_variant)
replaced = omic_occurrences.term[omic_occurrences.term.isin(omic_typos_map)]
replaced.value_counts()
len(replaced)
omic_occurrences.term = omic_occurrences.term.replace(omic_typos_map)
```
### 2.3 Popular *omic(s) terms:
```
omic_counts = omic_occurrences.drop_duplicates(['uid', 'term']).term.sorted_value_counts()
omic_counts[omic_counts >= TERM_IN_AT_LEAST_N_ARTICLES].add_suffix('s')
```
### Crude overview
```
ome_terms = Series(ome_counts[ome_counts >= TERM_IN_AT_LEAST_N_ARTICLES].index)
omic_terms = Series(omic_counts[omic_counts >= TERM_IN_AT_LEAST_N_ARTICLES].index)
assert omics_features.index.name == 'uid'
for term in ome_terms:
mentioned_by_uid = set(ome_occurrences[ome_occurrences.term == term].uid)
omics_features['mentions_' + term] = omics_features.index.isin(mentioned_by_uid)
for term in omic_terms:
mentioned_by_uid = set(omic_occurrences[omic_occurrences.term == term].uid)
omics_features['mentions_' + term] = omics_features.index.isin(mentioned_by_uid)
from helpers.text_processing import prefix_remover
ome_terms_mentioned = omics_features['mentions_' + ome_terms].rename(columns=prefix_remover('mentions_'))
omic_terms_mentioned = omics_features['mentions_' + omic_terms].rename(columns=prefix_remover('mentions_'))
%R library(ComplexUpset);
%%R -i ome_terms_mentioned -w 800 -r 100
upset(ome_terms_mentioned, colnames(ome_terms_mentioned), min_size=10, width_ratio=0.1)
```
## Merge -ome and -omic terms
```
from warnings import warn
terms_associated_with_omic = {
omic + 's': [omic]
for omic in omic_terms
}
for ome in ome_terms:
assert ome.endswith('ome')
auto_generate_omic_term = ome[:-3] + 'omics'
omic = auto_generate_omic_term
if omic not in terms_associated_with_omic:
if omic in omic_counts.index:
warn(f'{omic} was removed at thresholding, but it is a frequent -ome!')
else:
print(f'Creating omic {omic}')
terms_associated_with_omic[omic] = []
terms_associated_with_omic[omic].append(ome)
from omics import add_entities_to_features
add_entities_to_omic_features = partial(
add_entities_to_features,
features=omics_features,
omics_terms=terms_associated_with_omic
)
omics = {k: [k] for k in terms_associated_with_omic}
add_entities_to_omic_features(omics, entity_type='ome_or_omic')
from omics import omics_by_entity, omics_by_entity_group
```
interactomics is a proper "omics", but by definition it is difficult to assign to a single entity
```
check_usage_in_input('interactomics')
```
phylogenomics is not an omics on its own, but when used in the context of metagenomics it can refer to actual omics data
```
check_usage_in_input('phylogenomics')
```
regulomics is the name of a tool, of a research group (@MIM UW), and of an omics:
```
check_usage_in_input('regulomics')
from functools import reduce
omics_mapped_to_entities = reduce(set.union, omics_by_entity.values())
set(terms_associated_with_omic) - omics_mapped_to_entities
assert omics_mapped_to_entities - set(terms_associated_with_omic) == set()
omics_mapped_to_entities_groups = reduce(set.union, omics_by_entity_group.values())
set(terms_associated_with_omic) - omics_mapped_to_entities_groups
add_entities_to_omic_features(omics_by_entity, entity_type='entity')
add_entities_to_omic_features(omics_by_entity_group, entity_type='entity_group')
```
### Visualize the entities & entities groups
```
omic_entities = omics_features['entity_' + Series(list(omics_by_entity.keys()))].rename(columns=prefix_remover('entity_'))
omic_entities_groups = omics_features['entity_group_' + Series(list(omics_by_entity_group.keys()))].rename(columns=prefix_remover('entity_group_'))
%%R -i omic_entities -w 800 -r 100
upset(omic_entities, colnames(omic_entities), min_size=10, width_ratio=0.1)
%%R -i omic_entities_groups -w 800 -r 100
upset(omic_entities_groups, colnames(omic_entities_groups), min_size=10, width_ratio=0.1)
```
### Number of omics mentioned in abstract vs the multi-omic term used
```
omes_or_omics_df = omics_features['ome_or_omic_' + Series(list(omics.keys()))].rename(columns=prefix_remover('ome_or_omic_'))
literature['omic_terms_detected'] = omes_or_omics_df.sum(axis=1)
lt = literature[['term', 'omic_terms_detected']]
literature.sort_values('omic_terms_detected', ascending=False)[['title', 'omic_terms_detected']].head(10)
%%R -i lt -w 800
(
ggplot(lt, aes(x=term, y=omic_terms_detected))
+ geom_violin(adjust=2)
+ geom_point()
+ theme_bw()
)
%vault store omics_features in pubmed_derived_data
```
# Current limitations
## Patchy coverage
Currently, omic-describing terms are detected in fewer than 70% of abstracts:
```
omic_entities.any(axis=1).mean()
```
Potential solution: select a random sample of 50 articles, annotate manually, calculate sensitivity and specificity.
If any omic is consistently omitted, reconsider how search terms are created.
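As a rough sketch of that audit (not part of the pipeline above; the column names and values below are made up), sensitivity and specificity could be computed from the manually annotated sample like this:
```
import pandas as pd

# Hypothetical audit table: 'manual_omic' is the human judgement for each sampled abstract,
# 'detected_omic' is whether the automatic detection above found any omic term in it.
audit = pd.DataFrame({
    'manual_omic':   [True, True, False, True, False],
    'detected_omic': [True, False, False, True, False],
})
tp = ( audit.manual_omic &  audit.detected_omic).sum()
fn = ( audit.manual_omic & ~audit.detected_omic).sum()
tn = (~audit.manual_omic & ~audit.detected_omic).sum()
fp = (~audit.manual_omic &  audit.detected_omic).sum()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
sensitivity, specificity
```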
## Apostrophes
Are we missing out on \*'omic terms, such as meta'omic as used [here](https://doi.org/10.1053/j.gastro.2014.01.049)?
```
check_usage_in_input(
r'\w+\'omic',
words=False,
highlight='\'omic'
)
```
Unlikely (but it would be nice to get it in!)
## Fields of study
```
'genetics', 'epigenetics'
```
Some authors may prefer to say "we integrated genetic and proteomic data" rather than "genomic and proteomic"
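A possible follow-up check with the notebook's own helper; the pattern below is only a guess at how such phrasing could be caught, assuming `check_usage_in_input` accepts an arbitrary regular expression as it does above:
```
check_usage_in_input(
    r'genetic and \w+omics?',
    words=False,
    highlight='genetic'
)
```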
|
github_jupyter
|
```
from django.template import Context
from django.template.base import Token
from django.template.base import Parser
from django.template.base import Template
from django.template.base import TokenType
from django.core.management import call_command
from wagtail_srcset.templatetags.wagtail_srcset_tags import srcset_image
from django.core.files.uploadedfile import SimpleUploadedFile
from wagtail.images.models import Image as WagtailImage
```
# setup db
```
call_command("migrate")
```
# create image
```
import io
from PIL import Image
def create_small_rgb():
    # a small solid-red test image (saved as JPEG below)
    img = Image.new('RGB', (200, 200), (255, 0, 0))
return img
def small_jpeg_io():
rgb = create_small_rgb()
im_io = io.BytesIO()
rgb.save(im_io, format="JPEG", quality=60, optimize=True, progressive=True)
im_io.seek(0)
im_io.name = "testimage.jpg"
return im_io
def small_uploaded_file(small_jpeg_io):
simple_png = SimpleUploadedFile(
name="test.png", content=small_jpeg_io.read(), content_type="image/png"
)
small_jpeg_io.seek(0)
return simple_png
simple_png = small_uploaded_file(small_jpeg_io())
from django.conf import settings
print(settings.DATABASES)
image = WagtailImage(file=simple_png)
image.save()
```
# render template
```
template_text = """
{% load wagtailimages_tags %}
{% load wagtail_srcset_tags %}
{% image img width-300 %}
{% srcset_image img width-300 jpegquality-90 %}
"""
t = Template(template_text)
print(t.render(Context({"img": image})))
template_text = """
{% load wagtailimages_tags %}
{% image img width-300 %}
"""
t = Template(template_text)
t.render(Context({"img": image}))
image_tag = "{% image block.value width-300 %}"
image_tag = "block.value width-300}"
token = Token(TokenType.BLOCK, image_tag)
parser = Parser(token.split_contents())
t = Template(template_text)
t.render(Context({}))
```
# Get image size in tag
```
from django import template
from django.conf import settings
from wagtail.images.templatetags.wagtailimages_tags import image
register = template.Library()
@register.tag(name="srcset_image2")
def srcset_image(parser, token):
image_node = image(parser, token)
print(image_node)
print(dir(image_node))
image_node.attrs["srcset"] = SrcSet(image_node)
return image_node
class SrcSet:
def __init__(self, image_node):
self.image_node = image_node
srcset = image_node.attrs.get("srcset", None)
print("image node attrs: ", image_node.attrs)
print("image node width: ", image_node.attrs.get("width"))
print("image node filter: ", image_node.filter.operations)
if srcset is None:
self.renditions = self.default_renditions
else:
self.renditions = self.renditions_from_srcset(srcset.token)
@property
def default_renditions(self):
if hasattr(settings, "DEFAULT_SRCSET_RENDITIONS"):
return settings.DEFAULT_SRCSET_RENDITIONS
else:
return [
"width-2200|jpegquality-60",
"width-1100|jpegquality-60",
"width-768|jpegquality-60",
"width-500|jpegquality-60",
"width-300|jpegquality-60",
]
def renditions_from_srcset(self, srcset):
srcset = srcset.strip('"').strip("'")
return srcset.split(" ")
def resolve(self, context):
image = self.image_node.image_expr.resolve(context)
out_renditions = []
for rendition in self.renditions:
rendered_image = image.get_rendition(rendition)
out_renditions.append(f"{rendered_image.url} {rendered_image.width}w")
srcset_string = ", ".join(out_renditions)
return srcset_string
template_text = """
{% load wagtailimages_tags %}
{% load wagtail_srcset_tags %}
{% image img width-300 %}
{% srcset_image2 img width-300 %}
"""
t = Template(template_text)
t.render(Context({"img": image}))
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/kylehounslow/gdg_workshop/blob/master/notebooks/hello_tensorflow.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Hello TensorFlow!
This notebook is a gentle introduction to TensorFlow.
Mostly taken from [here](https://github.com/aymericdamien/TensorFlow-Examples/tree/master/examples)
___
In this notebook we will learn about:
* How to run jupyter notebook cells
* How to build and execute a computational graph in Tensorflow
* How to visualize the computational graph in a notebook cell
```
import numpy as np
import tensorflow as tf
from IPython.display import HTML
# Create a Constant op
# The op is added as a node to the default graph.
hello = tf.constant('Hello, TensorFlow!')
# Start tf session
with tf.Session() as sess:
# Run the op
print(sess.run(hello))
# Basic constant operations
# The value returned by the constructor represents the output
# of the Constant op.
a = tf.constant(7)
b = tf.constant(6)
# Launch the default graph.
with tf.Session() as sess:
print("Addition with constants: %i" % sess.run(a+b))
print("Multiplication with constants: %i" % sess.run(a*b))
```
## Define some helper functions to render the computational graph in a notebook cell
```
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = tf.compat.as_bytes("<stripped %d bytes>"%size)
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
show_graph(tf.get_default_graph())
# Basic Operations with variable as graph input
# The value returned by the constructor represents the output
# of the Variable op. (define as input when running session)
# tf Graph input
a = tf.placeholder(tf.int16)
b = tf.placeholder(tf.int16)
# Define some operations
add = tf.add(a, b)
mul = tf.multiply(a, b)
# Launch the default graph.
with tf.Session() as sess:
# Run every operation with variable input
print("Addition with variables: %i" % sess.run(add, feed_dict={a: 2, b: 3}))
print("Multiplication with variables: %i" % sess.run(mul, feed_dict={a: 2, b: 3}))
show_graph(tf.get_default_graph())
# ----------------
# More in details:
# Matrix Multiplication from TensorFlow official tutorial
# Create a Constant op that produces a 1x2 matrix. The op is
# added as a node to the default graph.
#
# The value returned by the constructor represents the output
# of the Constant op.
matrix1 = tf.constant([[3., 3.]])
# Create another Constant that produces a 2x1 matrix.
matrix2 = tf.constant([[2.],[2.]])
# Create a Matmul op that takes 'matrix1' and 'matrix2' as inputs.
# The returned value, 'product', represents the result of the matrix
# multiplication.
product = tf.matmul(matrix1, matrix2)
# To run the matmul op we call the session 'run()' method, passing 'product'
# which represents the output of the matmul op. This indicates to the call
# that we want to get the output of the matmul op back.
#
# All inputs needed by the op are run automatically by the session. They
# typically are run in parallel.
#
# The call 'run(product)' thus causes the execution of three ops in the
# graph: the two constants and matmul.
#
# The output of the op is returned in 'result' as a numpy `ndarray` object.
with tf.Session() as sess:
result = sess.run(product)
print(result)
```
## To reset the graph, use `tf.reset_default_graph()`
```
tf.reset_default_graph()
a = tf.constant(7)
b = tf.constant(6)
op = tf.add(a, b)
show_graph(tf.get_default_graph())
```
|
github_jupyter
|
## Experiment
```
experiment_label = 'rforest01'
```
### Aim:
* compare basic random forest to best logreg
### Findings:
* ROC on training hugs the top left; overfitting.
* Next: increase min samples per leaf.
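A minimal sketch of that next step (on synthetic data only, since the NBA tables are loaded further down): increasing `min_samples_leaf` constrains tree growth and should narrow the gap between training and validation AUC.
```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# synthetic, imbalanced stand-in for the real data
Xd, yd = make_classification(n_samples=2000, n_features=20, weights=[0.17, 0.83], random_state=8)
Xd_train, Xd_val, yd_train, yd_val = train_test_split(Xd, yd, test_size=0.2, random_state=8)
for leaf in [1, 10, 50]:
    rf = RandomForestClassifier(min_samples_leaf=leaf, class_weight='balanced', random_state=8)
    rf.fit(Xd_train, yd_train)
    print(leaf,
          round(roc_auc_score(yd_train, rf.predict_proba(Xd_train)[:, 1]), 3),
          round(roc_auc_score(yd_val, rf.predict_proba(Xd_val)[:, 1]), 3))
```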
## Set up
```
import pandas as pd
import numpy as np
from joblib import dump, load # simpler than pickle!
import matplotlib.pyplot as plt
import seaborn as sns
```
## Data
```
#load data
data_path = '../data/raw/uts-advdsi-nba-career-prediction'
train_raw = pd.read_csv(data_path + '/train.csv')
test_raw = pd.read_csv(data_path + '/test.csv')
#shapes & head
print(train_raw.shape)
print(test_raw.shape)
train_raw.head()
test_raw.head()
# info
train_raw.info()
#variable descriptions
train_raw.describe()
test_raw.describe()
```
## Cleaning
```
train = train_raw.copy()
test = test_raw.copy()
cols_drop = ['Id_old', 'Id'] #, 'MIN', 'FGM', 'FGA', 'TOV', '3PA', 'FTM', 'FTA', 'REB']
train.drop(cols_drop, axis=1, inplace=True)
test.drop(cols_drop, axis=1, inplace=True)
train.head()
test.head()
train_target = train.pop('TARGET_5Yrs')
```
# Modelling
```
#transformations
# fit scaler to training data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
train = scaler.fit_transform(train)
dump(scaler, '../models/aj_' + experiment_label + '_scaler.joblib')
# transform test data
test = scaler.transform(test)
#examine shapes
print('train:' + str(train.shape))
print('test:' + str(test.shape))
# split training into train & validation
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(train, train_target, test_size=0.2, random_state=8)
# in this case we will use the Kaggle submission as our test
#X_train, y_train = train, train_target
#import models
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
# Define model
model = RandomForestClassifier(class_weight='balanced',random_state=8)
#fit model to training data
model.fit(X_train, y_train)
#save model to file
dump(model, '../models/aj_' + experiment_label + '.joblib')
#predictions for test and validation sets
y_train_preds = model.predict(X_train)
y_val_preds = model.predict(X_val)
```
## Evaluation
```
import sys
import os
sys.path.append(os.path.abspath('..'))
from src.models.aj_metrics import confusion_matrix
print("Training:")
print(confusion_matrix(y_train, y_train_preds))
print('')
print("Validation:")
print(confusion_matrix(y_val, y_val_preds))
from sklearn import metrics
print("Training:")
print(metrics.classification_report(y_train, y_train_preds))
print('')
print("Validation:")
print(metrics.classification_report(y_val, y_val_preds))
print("Training:")
print(metrics.roc_auc_score(y_train, model.decision_function(X_train)))
print('')
print("Validation:")
print(metrics.roc_auc_score(y_val, model.decision_function(X_val)))
import matplotlib.pyplot as plt
from sklearn import metrics
metrics.plot_roc_curve(model, X_train, y_train)
plt.show()
metrics.plot_roc_curve(model, X_val, y_val)
plt.show()
```
# Apply to test data for submission
```
y_test_preds = model.predict(test)
y_test_preds
y_test_probs = model.predict_proba(test)
y_test_probs
len(y_test_probs)
test_raw.shape
test_raw['Id'].shape
submission = pd.DataFrame({'Id': range(0,3799), 'TARGET_5Yrs': [p[1] for p in y_test_probs]})
submission.head()
submission.to_csv('../reports/aj_' + experiment_label + '_submission.csv',
index=False,
)
```
|
github_jupyter
|
# Train your own object detector
```
!pip install gluoncv
import gluoncv as gcv
import mxnet as mx
```
# Prepare the training dataset
```
import os
class DetectionDataset(gcv.data.VOCDetection):
CLASSES = ['cocacola', 'noodles', 'hand']
def __init__(self, root):
self._im_shapes = {}
self._root = os.path.expanduser(root)
self._transform = None
        self._items = [(self._root, os.path.splitext(x)[0]) for x in os.listdir(self._root) if x.endswith('.jpg')]  # splitext drops only the extension; str.strip('.jpg') would also eat leading/trailing j/p/g letters
self._anno_path = os.path.join('{}', '{}.xml')
self._image_path = os.path.join('{}', '{}.jpg')
self.index_map = dict(zip(self.classes, range(self.num_class)))
self._label_cache = self._preload_labels()
def __str__(self):
detail = self._root
return self.__class__.__name__ + '(' + detail + ')'
@property
def classes(self):
return self.CLASSES
@property
def num_class(self):
return len(self.classes)
train_dataset = DetectionDataset('../images/shenzhen_v1')
print('class_names:', train_dataset.classes)
print('num_images:', len(train_dataset))
```
# Visualize the data
```
from matplotlib import pyplot as plt
from gluoncv.utils import viz
sample = train_dataset[0]
train_image = sample[0]
train_label = sample[1]
ax = viz.plot_bbox(
train_image.asnumpy(),
train_label[:, :4],
labels=train_label[:, 4:5],
class_names=train_dataset.classes)
plt.show()
# for i in range(len(train_dataset)):
# sample = train_dataset[i]
# train_image = sample[0]
# train_label = sample[1]
# ax = viz.plot_bbox(
# train_image.asnumpy(),
# train_label[:, :4],
# labels=train_label[:, 4:5],
# class_names=train_dataset.classes)
# plt.show()
```
# Define the training procedure
```
import time
from datetime import datetime
from mxnet import autograd
from gluoncv.data.batchify import Tuple, Stack, Pad
def train_model(train_dataset, epochs=50):
ctx = mx.gpu(0)
# ctx = mx.cpu(0)
net = gcv.model_zoo.get_model('ssd_512_resnet50_v1_custom', classes=train_dataset.classes, transfer='coco')
# net.load_parameters('object_detector_epoch200_10_22_2019_20_28_41.params') # TODO continue training
net.collect_params().reset_ctx(ctx)
width, height = 512, 512 # suppose we use 512 as base training size
train_transform = gcv.data.transforms.presets.ssd.SSDDefaultTrainTransform(width, height)
gcv.utils.random.seed(233)
# batch_size = 4
batch_size = 32 # 32 for p3.2xlarge, 16 for p2.2xlarge
# you can make it larger(if your CPU has more cores) to accelerate data loading
num_workers = 4
with autograd.train_mode():
_, _, anchors = net(mx.nd.zeros((1, 3, height, width), ctx))
anchors = anchors.as_in_context(mx.cpu())
train_transform = gcv.data.transforms.presets.ssd.SSDDefaultTrainTransform(width, height, anchors)
batchify_fn = Tuple(Stack(), Stack(), Stack())
train_loader = mx.gluon.data.DataLoader(
train_dataset.transform(train_transform),
batch_size,
shuffle=True,
batchify_fn=batchify_fn,
last_batch='rollover',
num_workers=num_workers)
mbox_loss = gcv.loss.SSDMultiBoxLoss()
ce_metric = mx.metric.Loss('CrossEntropy')
smoothl1_metric = mx.metric.Loss('SmoothL1')
for k, v in net.collect_params().items():
if 'convpredictor' not in k:
# freeze upper layers
v.grad_req = 'null'
trainer = mx.gluon.Trainer(
net.collect_params(), 'sgd',
{'learning_rate': 0.001, 'wd': 0.0005, 'momentum': 0.9})
net.hybridize(static_alloc=True, static_shape=True)
for epoch in range(epochs):
tic = time.time()
btic = time.time()
for i, batch in enumerate(train_loader):
data = mx.gluon.utils.split_and_load(batch[0], ctx_list=[ctx], batch_axis=0)
cls_targets = mx.gluon.utils.split_and_load(batch[1], ctx_list=[ctx], batch_axis=0)
box_targets = mx.gluon.utils.split_and_load(batch[2], ctx_list=[ctx], batch_axis=0)
with autograd.record():
cls_preds = []
box_preds = []
for x in data:
cls_pred, box_pred, _ = net(x)
cls_preds.append(cls_pred)
box_preds.append(box_pred)
sum_loss, cls_loss, box_loss = mbox_loss(
cls_preds, box_preds, cls_targets, box_targets)
autograd.backward(sum_loss)
# since we have already normalized the loss, we don't want to normalize
# by batch-size anymore
trainer.step(1)
ce_metric.update(0, [l * batch_size for l in cls_loss])
smoothl1_metric.update(0, [l * batch_size for l in box_loss])
name1, loss1 = ce_metric.get()
name2, loss2 = smoothl1_metric.get()
print('[Epoch {}][Batch {}], Speed: {:.3f} samples/sec, {}={:.3f}, {}={:.3f}'.format(
epoch, i, batch_size/(time.time()-btic), name1, loss1, name2, loss2))
btic = time.time()
return net
```
# Start training
```
epochs = 300
net = train_model(train_dataset, epochs=epochs)
save_file = 'object_detector_epoch{}_{}.params'.format(epochs, datetime.now().strftime("%m_%d_%Y_%H_%M_%S"))
net.save_parameters(save_file)
print('Saved model to disk: ' + save_file)
```
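A minimal sketch (not part of the original notebook) of loading the saved parameters back for inference; `'test.jpg'` is a placeholder path, and `save_file` and the class list are reused from the training cells above.
```
import gluoncv as gcv
import mxnet as mx

# Rebuild the same custom SSD architecture and load the weights saved above.
classes = ['cocacola', 'noodles', 'hand']
net = gcv.model_zoo.get_model('ssd_512_resnet50_v1_custom', classes=classes, pretrained_base=False)
net.load_parameters(save_file)  # 'save_file' comes from the training cell above
# Run the detector on a single placeholder image.
x, image = gcv.data.transforms.presets.ssd.load_test('test.jpg', short=512)
class_ids, scores, bboxes = net(x)
```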
|
github_jupyter
|
<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# Hugging Face - Ask boolean question to T5
<a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Hugging%20Face/Hugging_Face_Ask_boolean_question_to_T5.ipynb" target="_parent"><img src="https://img.shields.io/badge/-Open%20in%20Naas-success?labelColor=000000&logo=data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHN2ZyB3aWR0aD0iMTAyNHB4IiBoZWlnaHQ9IjEwMjRweCIgdmlld0JveD0iMCAwIDEwMjQgMTAyNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayIgdmVyc2lvbj0iMS4xIj4KIDwhLS0gR2VuZXJhdGVkIGJ5IFBpeGVsbWF0b3IgUHJvIDIuMC41IC0tPgogPGRlZnM+CiAgPHRleHQgaWQ9InN0cmluZyIgdHJhbnNmb3JtPSJtYXRyaXgoMS4wIDAuMCAwLjAgMS4wIDIyOC4wIDU0LjUpIiBmb250LWZhbWlseT0iQ29tZm9ydGFhLVJlZ3VsYXIsIENvbWZvcnRhYSIgZm9udC1zaXplPSI4MDAiIHRleHQtZGVjb3JhdGlvbj0ibm9uZSIgZmlsbD0iI2ZmZmZmZiIgeD0iMS4xOTk5OTk5OTk5OTk5ODg2IiB5PSI3MDUuMCI+bjwvdGV4dD4KIDwvZGVmcz4KIDx1c2UgaWQ9Im4iIHhsaW5rOmhyZWY9IiNzdHJpbmciLz4KPC9zdmc+Cg=="/></a>
## T5-base finetuned on BoolQ (superglue task)
This notebook is for demonstrating the training and use of the text-to-text-transfer-transformer (better known as T5) on boolean questions (BoolQ). The example use case is a validator indicating if an idea is environmentally friendly. Nearly any question can be passed into the `query` function (see below) as long as a context to a question is given.
Author: Maximilian Frank ([script4all.com](//script4all.com)) - Copyleft license
Notes:
- The model from [huggingface.co/mrm8488/t5-base-finetuned-boolq](//huggingface.co/mrm8488/t5-base-finetuned-boolq) is used in this example as it is an already trained t5-base model on boolean questions (BoolQ task of superglue).
- Documentation references on [huggingface.co/transformers/model_doc/t5.html#training](//huggingface.co/transformers/model_doc/t5.html#training), template script on [programming-review.com/machine-learning/t5](//programming-review.com/machine-learning/t5)
- The larger the model, the higher the accuracy on BoolQ (see [arxiv.org/pdf/1910.10683.pdf](//arxiv.org/pdf/1910.10683.pdf)):
t5-small|t5-base|t5-large|t5-3B|t5-11B
-|-|-|-|-
76.4%|81.4%|85.4%|89.9%|91.2%
## Loading the model
If an error occurs here, install the packages via `python3 -m pip install … --user`.
You can also load a T5 plain model (not finetuned). Just replace the following code
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained('mrm8488/t5-base-finetuned-boolq')
model = AutoModelForSeq2SeqLM.from_pretrained('mrm8488/t5-base-finetuned-boolq')…
```
with
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small')
```
where `t5-small` is one of the names in the table above.
```
!pip install transformers
!pip install sentencepiece
import json
import torch
from operator import itemgetter
from distutils.util import strtobool
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# load model
tokenizer = AutoTokenizer.from_pretrained('mrm8488/t5-base-finetuned-boolq')
model = AutoModelForSeq2SeqLM.from_pretrained('mrm8488/t5-base-finetuned-boolq').to(torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
try:
    model.parallelize()
except:  # model parallelism is optional; ignore if it is not available on this setup
    pass
```
## Training
> **Optional:** You can leave the following out, if you don't have custom datasets. By default the number of training epochs equals 0, so nothing is trained.
> **Warning:** This option consumes a lot of runtime and thus *naas.ai* credits. Make sure to have enough credits on your account.
For each dataset, a stream opener has to be provided that can be read line by line (e.g. a file or database cursor). The array under the key `keys` lists the dictionary keys that exist in each jsonl line. So in this example the first training dataset has the key `question` for the questions (string), `passage` for the contexts (string) and `answer` for the answers (boolean). Adjust these keys to your dataset.
Finally, adjust the number of training epochs (see the comment `# epochs`).
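A hypothetical example of what one such jsonl line is assumed to look like (the real BoolQ files follow this key layout):
```
import json
from operator import itemgetter  # both already imported above

example_line = '{"question": "is the sky blue", "passage": "The sky usually appears blue during the day.", "answer": true}'
q, p, a = itemgetter('question', 'passage', 'answer')(json.loads(example_line))
print(q, '->', a)
```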
```
srcs = [
{ 'stream': lambda:open('boolq/train.jsonl', 'r'),
'keys': ['question', 'passage', 'answer'] },
{ 'stream': lambda:open('boolq/dev.jsonl', 'r'),
'keys': ['question', 'passage', 'answer'] },
{ 'stream': lambda:open('boolq-nat-perturb/train.jsonl', 'r'),
'keys': ['question', 'passage', 'roberta_hard'] }
]
model.train()
# A simple AdamW optimizer is assumed here so that the gradients computed below are actually applied.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
for _ in range(0): # epochs
    for src in srcs:
        with src['stream']() as s:
            for d in s:
                q, p, a = itemgetter(src['keys'][0], src['keys'][1], src['keys'][2])(json.loads(d))
                tokens = tokenizer('question:'+q+'\ncontext:'+p, return_tensors='pt').to(model.device)
                if len(tokens.input_ids[0]) > model.config.n_positions:
                    continue
                loss = model(input_ids=tokens.input_ids,
                             labels=tokenizer(str(a), return_tensors='pt').input_ids.to(model.device),
                             attention_mask=tokens.attention_mask,
                             use_cache=True
                             ).loss
                loss.backward()
                optimizer.step()       # apply this sample's gradients
                optimizer.zero_grad()
model.eval(); # ; suppresses long output on jupyter
```
## Define query function
As the model is ready, define the querying function.
```
def query(q='question', c='context'):
    # build the prompt, generate at most 3 tokens ('True'/'False') and map the text to a boolean
    input_ids = tokenizer.encode('question:'+q+'\ncontext:'+c, return_tensors='pt').to(model.device)
    return strtobool(
        tokenizer.decode(
            token_ids=model.generate(input_ids=input_ids, max_length=3)[0],
            skip_special_tokens=True
        )
    )
```
## Querying on the task
Now the actual task begins: Query the model with your ideas (see list `ideas`).
```
if __name__ == '__main__':
ideas = [ 'The idea is to pollute the air instead of riding the bike.', # should be false
'The idea is to go cycling instead of driving the car.', # should be true
'The idea is to put your trash everywhere.', # should be false
'The idea is to reduce transport distances.', # should be true
'The idea is to put plants on all the roofs.', # should be true
'The idea is to forbid opensource vaccines.', # should be true
'The idea is to go buy an Iphone every five years.', # should be false
'The idea is to walk once every week in the nature.', # should be true
'The idea is to go buy Green bonds.', # should be true
'The idea is to go buy fast fashion.', # should be false
'The idea is to buy single-use items.', # should be false
'The idea is to drink plastic bottled water.', # should be false
              'The idea is to use imported goods.', # should be false
              'The idea is to buy more food than you need.', # should be false
'The idea is to eat a lot of meat.', # should be false
'The idea is to eat less meat.', # should be false
'The idea is to always travel by plane.', # should be false
'The idea is to opensource vaccines.' # should be false
]
for idea in ideas:
print('🌏 Idea:', idea)
print('\t✅ Good idea' if query('Is the idea environmentally friendly?', idea) else '\t❌ Bad idea' )
```
|
github_jupyter
|
## This notebook goes over modeling, starting from the modeling tables.
### We're using modeling tables prepared from 12 hours' worth of vital-sign data for each patient, as well as medication history during the stay and patient characteristics.
### The model predicts the probability of a rapid response team (RRT) event one hour after the time of prediction. An RRT event is called when personnel identify that a patient has an urgent need for medical attention.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import scipy as sp
# import datetime as datetime
import cPickle as pickle
%matplotlib inline
plt.style.use('ggplot')
from sklearn.preprocessing import StandardScaler
from sklearn.cross_validation import train_test_split, KFold
from sklearn.metrics import confusion_matrix, roc_auc_score, precision_score, recall_score, classification_report
from sklearn.ensemble import GradientBoostingClassifier #, RandomForestClassifier,
from sklearn.ensemble.partial_dependence import plot_partial_dependence, partial_dependence
from sklearn.grid_search import GridSearchCV
```
### function definitions
```
def score_printout(X_test, y_test, fittedModel):
print "AUC-ROC Score of model: ", roc_auc_score(y_test, fittedModel.predict_proba(X_test)[:,1])
print "Precision Score of model: ", precision_score(y_test, fittedModel.predict(X_test))
print "Recall Score of model: ", recall_score(y_test, fittedModel.predict(X_test))
def make_feature_importance_plot(featuresAndImportances, numFeatures):
topN = featuresAndImportances[:numFeatures]
labels = [pair[0] for pair in topN]
values = [pair[1] for pair in topN]
ind = np.arange(len(values)+2)
width = 0.35
plt.barh(range(numFeatures),values)
ax = plt.subplot(111)
ax.set_yticks(ind+width)
ax.set_yticklabels(labels, rotation=0, size=12)
plt.ylabel('Feature', size=20)
plt.xlabel('Importance', size=20)
plt.show()
```
### Read in data
We did not share our modeling data, so you will have to create your own. The pipeline tool can help you do this. If you save the results to a csv, `masterdf_rrt` and `masterdf_nonrrt` are dataframes with the modeling data for each of the positive and negative classes, respectively.
```
masterdf_rrt = pd.read_csv('RRT_modeling_table_13hr_raw.csv')
masterdf_nonrrt = pd.read_csv('NonRRT_modeling_table_13hr_raw.csv')
```
### Look at summary statistics for numeric columns for rrt & non-rrt tables (35 cols)
```
masterdf_rrt.columns
masterdf_rrt.describe().T
masterdf_nonrrt.describe().T
```
### We have a good number of NaN values in some columns. Let's plot them to get a sense of how many there are.
```
def show_df_nans(masterdf, collist=None):
'''
Create a data frame for features which may be nan.
Make nan values be 1, numeric values be 0
A heat map where dark squares/lines show where data is missing.
'''
if not collist:
plot_cols = ['obese','DBP_mean', 'DBP_recent', 'SBP_mean', 'SBP_recent', 'HR_mean', 'HR_recent',
'MAP_mean', 'MAP_recent', 'temp_mean', 'temp_recent', 'SPO2_mean',
'SPO2_recent', 'RR_mean', 'RR_recent', 'pulse_mean', 'pulse_recent',
'CO2_mean', 'CO2_recent', 'GCS_mean', 'GCS_recent']
else:
plot_cols = collist
df_viznan = pd.DataFrame(data = 1,index=masterdf.index,columns=plot_cols)
df_viznan[~pd.isnull(masterdf[plot_cols])] = 0
plt.figure(figsize=(10,8))
plt.title('Dark values are nans')
return sns.heatmap(df_viznan.astype(float))
# subset of numeric columns we'll use in modeling (sufficient data available)
plot_cols_good = ['obese','DBP_mean', 'DBP_recent', 'SBP_mean', 'SBP_recent',
'MAP_mean', 'MAP_recent', 'temp_mean', 'temp_recent', 'SPO2_mean',
'SPO2_recent', 'RR_mean', 'RR_recent', 'pulse_mean', 'pulse_recent']
show_df_nans(masterdf_nonrrt) # show all columns that may have nans
# show_df_nans(masterdf_nonrrt, plot_cols_good) # show the columns whch we plan to use for modeling
show_df_nans(masterdf_rrt)
# show_df_nans(masterdf_rrt, plot_cols_good)
```
### Let's not use those columns where there are significant nans: drop HR (heart rate; we have pulse rate instead), CO2, and GCS, which leaves us with 28 features.
```
col_use = ['age', 'sex', 'obese', 'smoker', 'prev_rrt', 'on_iv', 'bu-nal', 'DBP_mean',
'DBP_recent', 'SBP_mean', 'SBP_recent',
'MAP_mean', 'MAP_recent', 'temp_mean', 'temp_recent', 'SPO2_mean',
'SPO2_recent', 'RR_mean', 'RR_recent', 'pulse_mean', 'pulse_recent',
'anticoagulants', 'narcotics', 'narc-ans', 'antipsychotics',
'chemo', 'dialysis', 'race']
X_rrt = masterdf_rrt[col_use]
X_notrrt = masterdf_nonrrt[col_use]
```
### We need to deal with these nans before we can start modeling. (There should not be any nans in the modeling table)
```
# let's look at getting rid of the data rows where vitals signs are all nans
vitals_cols = ['DBP_mean', 'DBP_recent', # take the mean of all the measurements & the most recently observed point
'SBP_mean', 'SBP_recent',
'MAP_mean', 'MAP_recent', # mean arterial pressure
'temp_mean', 'temp_recent',# temperature
'SPO2_mean', 'SPO2_recent',
'RR_mean', 'RR_recent', # respiratory rate
'pulse_mean', 'pulse_recent']
# Write out rows that are not all 0/NaNs across. (if all nans, remove this sample)
X_rrt = X_rrt.loc[np.where(X_rrt.ix[:, vitals_cols].sum(axis=1, skipna=True)!=0)[0]]
X_rrt = X_rrt.reset_index(drop=True)
X_notrrt = X_notrrt.loc[np.where(X_notrrt.ix[:, vitals_cols].sum(axis=1, skipna=True)!=0)[0]]
X_notrrt = X_notrrt.reset_index(drop=True)
# if 'obese' is Nan, then set the patient to be not obese.
X_rrt.loc[np.where(pd.isnull(X_rrt['obese']))[0], 'obese'] = 0
X_notrrt.loc[np.where(pd.isnull(X_notrrt['obese']))[0], 'obese'] = 0
```
### Let's see how X_rrt & X_notrrt look
```
show_df_nans(X_rrt, vitals_cols)
show_df_nans(X_notrrt, vitals_cols)
```
### Some columns have significant missing values.
```
print X_rrt[['pulse_mean', 'pulse_recent']].describe().T
print "size of X_rrt: "+str(len(X_rrt))
print
print X_notrrt[['pulse_mean', 'pulse_recent']].describe().T
print "size of X_notrrt: " + str(len(X_notrrt))
```
### We have plenty of samples for the non-RRT case, so we can drop rows with missing values without worrying that we'll lose negative examples of RRT events for modeling.
```
# DROP THE ROWS WHERE PULSE IS NAN
X_notrrt = X_notrrt.ix[np.where(pd.isnull(X_notrrt['pulse_mean'])!=True)[0]]
X_notrrt = X_notrrt.reset_index(drop=True)
# And similarly for all rows with significant nans:
X_notrrt = X_notrrt.ix[np.where(pd.isnull(X_notrrt['RR_mean'])!=True)[0]]
X_notrrt = X_notrrt.reset_index(drop=True)
X_notrrt = X_notrrt.ix[np.where(pd.isnull(X_notrrt['MAP_mean'])!=True)[0]]
X_notrrt = X_notrrt.reset_index(drop=True)
X_notrrt = X_notrrt.ix[np.where(pd.isnull(X_notrrt['temp_mean'])!=True)[0]]
X_notrrt = X_notrrt.reset_index(drop=True)
X_notrrt = X_notrrt.ix[np.where(pd.isnull(X_notrrt['SPO2_mean'])!=True)[0]]
X_notrrt = X_notrrt.reset_index(drop=True)
all_cols = ['age', 'sex', 'obese', 'smoker', 'prev_rrt', 'on_iv', 'bu-nal',
'DBP_mean', 'DBP_recent', 'SBP_mean', 'SBP_recent', 'MAP_mean',
'MAP_recent', 'temp_mean', 'temp_recent', 'SPO2_mean',
'SPO2_recent', 'RR_mean', 'RR_recent', 'pulse_mean', 'pulse_recent',
'anticoagulants', 'narcotics', 'narc-ans', 'antipsychotics',
'chemo', 'dialysis', 'race']
show_df_nans(X_notrrt, all_cols)
```
### We still need to deal with NaNs in X_rrt. Temp & pulse are of most concern.
```
X_rrt[['temp_mean', 'pulse_mean']].describe().T
```
### We'll impute missing values in X_rrt after combining that data with X_notrrt, and use the mean from each column after merging to fill the values.
```
# add labels to indicate positive or negative class
X_rrt['label'] = 1
X_notrrt['label'] = 0
# Combine the tables
XY = pd.concat([X_rrt, X_notrrt])
XY = XY.reset_index(drop=True)
y = XY.pop('label')
X = XY
# Fill nans with mean of columns
X = X.fillna(X.mean())
# map genders to 1/0
X['is_male'] = X['sex'].map({'M': 1, 'F': 0})
X.pop('sex')
X.race.value_counts()
# we won't use race in modeling
X.pop('race')
show_df_nans(X, vitals_cols)
X.columns
X.describe().T
```
# Modeling
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
print len(y_train)
print len(y_train[y_train==1])
len(y_test[y_test==1])
Xscaled = StandardScaler().fit_transform(X)
Xs_train, Xs_test, ys_train, ys_test = train_test_split(Xscaled, y, test_size=0.3)
```
## Gradient Boosting Classifier - Unscaled (with partial dependence plots below)
```
paramGrid = {'n_estimators': [100, 200, 300],
'learning_rate': [0.1, 0.05, 0.01, 0.2],
'max_depth': [3, 4, 5, 6],
'min_samples_leaf': [1, 2],
'subsample': [0.75, 1.0, 0.85],
'loss': ['deviance'],
'max_features': [None, 'auto']
}
gs = GridSearchCV(GradientBoostingClassifier(),
param_grid=paramGrid,
scoring='roc_auc',
n_jobs=-1,
cv=5,
verbose=10)
gs.fit(X_train, y_train)
# Result:
# GradientBoostingClassifier(init=None, learning_rate=0.05, loss='deviance',
# max_depth=3, max_features=None, max_leaf_nodes=None,
# min_samples_leaf=2, min_samples_split=2,
# min_weight_fraction_leaf=0.0, n_estimators=300,
# presort='auto', random_state=None, subsample=0.75, verbose=0,
# warm_start=False)
```
## Grid search for best GBC - Scaled (with partial dependence plots below)
```
paramGrid = {'n_estimators': [100, 200, 300],
'learning_rate': [0.1, 0.05, 0.01, 0.2],
'max_depth': [3, 4, 5, 6],
'min_samples_leaf': [1, 2],
'subsample': [0.75, 1.0, 0.85],
'loss': ['deviance'],
'max_features': [None, 'auto']
}
gss = GridSearchCV(GradientBoostingClassifier(),
param_grid=paramGrid,
scoring='roc_auc',
n_jobs=-1,
cv=5,
verbose=10)
gss.fit(Xs_train, ys_train)
# Result:
# GradientBoostingClassifier(init=None, learning_rate=0.05, loss='deviance',
# max_depth=3, max_features='auto', max_leaf_nodes=None,
# min_samples_leaf=1, min_samples_split=2,
# min_weight_fraction_leaf=0.0, n_estimators=300,
# presort='auto', random_state=None, subsample=0.75, verbose=0,
# warm_start=False)
```
## How different are best estimators for scaled & unscaled data?
```
gbc = GradientBoostingClassifier(init=None, learning_rate=0.05, loss='deviance',
max_depth=3, max_features=None, max_leaf_nodes=None,
min_samples_leaf=2, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=300,
presort='auto', random_state=None, subsample=0.75, verbose=0,
warm_start=False)
gbc.fit(X_train, y_train)
score_printout(X_test, y_test, gbc)
print classification_report(y_test, gbc.predict(X_test))
confusion_matrix(y_test, gbc.predict(X_test))
# gbcs = gss.best_estimator_
# gbcs.fit(Xs_train, ys_train)
# score_printout(Xs_test, ys_test, gbc)
# print classification_report(ys_test, gbcs.predict(Xs_test))
# confusion_matrix(ys_test, gbcs.predict(Xs_test))
```
### Use unscaled data -- better results & easier interpretability
```
# Let's plot the confusion matrix so it's a little clearer
plt.figure()
sns.set(font_scale=1.5)
sns.heatmap(confusion_matrix(y_test, gbc.predict(X_test)), annot=True, fmt='d')
```
## Let's look at the most important features in this model
```
gbcRankedFeatures = sorted(zip(X.columns, gbc.feature_importances_),
key=lambda pair: pair[1],
reverse=False)
plt.figure()
make_feature_importance_plot(gbcRankedFeatures, 27) # note - we have 27 features currently
```
### Let's look at partial dependence plots
#### If the partial dependence is high for a given value of a feature, the model is more likely to predict an RRT event at that value.
#### These plots will not show more complex interactions -- if a feature's importance is high but its partial dependence is marginal, interactions may be the cause.
```
fig, axs = plot_partial_dependence(gbc, X_train, range(0, 6, 1), feature_names=X.columns.get_values(), n_jobs=-1, grid_resolution=50)
plt.subplots_adjust(top=0.9)
fig, axs = plot_partial_dependence(gbc, X_train, range(6, 12, 1), feature_names=X.columns.get_values(), n_jobs=-1, grid_resolution=50)
plt.subplots_adjust(top=0.9)
fig, axs = plot_partial_dependence(gbc, X_train, range(12, 18, 1), feature_names=X.columns.get_values(), n_jobs=-1, grid_resolution=50)
plt.subplots_adjust(top=0.9)
fig, axs = plot_partial_dependence(gbc, X_train, range(18, 24, 1), feature_names=X.columns.get_values(), n_jobs=-1, grid_resolution=50)
plt.subplots_adjust(top=0.9)
fig, axs = plot_partial_dependence(gbc, X_train, range(24, 27, 1), feature_names=X.columns.get_values(), n_jobs=-1, grid_resolution=50)
plt.subplots_adjust(top=0.9)
```
## Use a 3-D plot to investigate feature interactions behind weak partial dependence plots (a weak effect may be masked by a stronger interaction with other features)
```
names = X_train.columns
zip(range(len(names)), names)
from mpl_toolkits.mplot3d import Axes3D
# not all features may work for this viz
fig = plt.figure(figsize=(10,8))
target_feature = (16, 18) # <-- change the two numbers here to determine what to plot up
pdp, (x_axis, y_axis) = partial_dependence(gbc, target_feature, X=X_train, grid_resolution=50)
XX, YY = np.meshgrid(x_axis, y_axis)
Z = pdp.T.reshape(XX.shape).T
ax = Axes3D(fig)
surf = ax.plot_surface(XX, YY, Z, rstride=1, cstride=1, cmap=plt.cm.BuPu)
ax.set_xlabel(names[target_feature[0]])
ax.set_ylabel(names[target_feature[1]])
ax.set_zlabel('Partial dependence')
# pretty init view
ax.view_init(elev=22, azim=122)
plt.colorbar(surf)
plt.suptitle('')
plt.subplots_adjust(top=0.9)
plt.show()
```
## From Model to Risk Score
```
# Return probabilities from the model, rather than predictions
y_proba = gbc.predict_proba(X_test)
# note - y_proba contains probabilities for class 0 in column 0 & probabilities for class 1 in column 1.
# we're only interested in the probability for class 1
y_proba
pred_probs = pd.DataFrame(data=y_proba[:,1], columns =["model_probability_of_rrt"], index = X_test.index)
X_test.head()
y_test.head()
pred_probs['model_probability_of_rrt'] = pd.to_numeric(pred_probs.model_probability_of_rrt)
pred_probs.hist(bins = 20, xlabelsize = 16, ylabelsize=16)
plt.tick_params(labelsize=14)
plt.title("Model output probabilities")
plt.ylabel('Count', fontsize=14)
```
### Although more values fall close to 0 and 1, the model outputs a full range of probabilities, which would translate well into risk scores.
### Patient Risk Score = model probability * 10
The score should be rounded to whole values to give the sense that this is not an exact measure.
```
pred_probs['score'] = pred_probs['model_probability_of_rrt'].apply(lambda x: int(round(x*10.0, 0)))
pred_probs.head()
pred_probs.score.value_counts()
```
### Save model
```
from sklearn.externals import joblib
# joblib.dump(gbc, 'gbc_base.pkl') # note - if left uncompressed, this writes a whole lot of supporting numpy files.
joblib.dump(gbc, 'my_trained_model.compressed', compress=True)
# to unpack: joblib.load(filename)
```
### Save modeling table
```
# Create combined data frame including modeling table, rrt label, and proability associated with result
df = pd.concat([X_test, pred_probs, y_test],axis=1, join_axes=[X_test.index])
df.head()
# May need to rename columns to get rid of dash in name...
df.rename(columns={'bu-nal': 'bu_nal', 'narc-ans': 'narc_ans'}, inplace=True)
df.to_csv('ModelingTable_with_results.csv')
```
|
github_jupyter
|
```
import numpy as np
import pickle
import scipy
import combo
import os
import urllib.request
import ssl
import matplotlib.pyplot as plt
%matplotlib inline
ssl._create_default_https_context = ssl._create_unverified_context
def download():
if not os.path.exists('data/s5-210.csv'):
if not os.path.exists('data'):
os.mkdir('data')
print('Downloading...')
with urllib.request.urlopen("http://www.tsudalab.org/files/s5-210.csv") as response, open('data/s5-210.csv', 'wb') as out_file:
out_file.write(response.read())
print('Done')
def load_data():
download()
A = np.asarray(np.loadtxt('data/s5-210.csv',skiprows=1,delimiter=',') )
X = A[:,0:3]
t = -A[:,3]
return X, t
# Load the data.
# X is the N x d dimensional matrix. Each row of X denotes the d-dimensional feature vector of search candidate.
# t is the N-dimensional vector that represents the corresponding negative energy of search candidates.
# ( It is of course unknown in practice. )
X, t = load_data()
# Normalize the mean and standard deviation along each column of X to 0 and 1, respectively
X = combo.misc.centering( X )
# Declare the class for calling the simulator.
# In this tutorial, we simply refer to the value of t.
# If you want to apply combo to other problems, you have to customize this class.
class simulator:
def __init__( self ):
_, self.t = load_data()
def __call__( self, action ):
return self.t[action]
# Design of policy
# Declaring the policy by
policy = combo.search.discrete.policy(test_X=X)
# test_X is the set of candidates which is represented by numpy.array.
# Each row vector represents the feature vector of the corresponding candidate
# set the seed parameter
policy.set_seed( 0 )
# If you want to perform the initial random search before starting the Bayesian optimization,
# the random sampling is performed by
res = policy.random_search(max_num_probes=20, simulator=simulator())
# Input:
# max_num_probes: number of random search
# simulator = simulator
# output: combo.search.discrete.results (class)
# single query Bayesian search
# The single query version of COMBO is performed by
res = policy.bayes_search(max_num_probes=80, simulator=simulator(), score='TS',
interval=20, num_rand_basis=5000)
# Input
# max_num_probes: number of searching by Bayesian optimization
# simulator: the class of simulator which is defined above
# score: the type of acquisition function. TS, EI and PI are available
# interval: the timing for learning the hyper parameter.
# In this case, the hyper parameter is learned at each 20 steps
# If you set the negative value to interval, the hyper parameter learning is not performed
# If you set zero to interval, the hyper parameter learning is performed only at the first step
# num_rand_basis: the number of basis function. If you choose 0, ordinary Gaussian process runs
# The result of searching is summarized in the class combo.search.discrete.results.history()
# res.fx: observed negative energy at each step
# res.chosed_actions: history of chosen actions
# fbest, best_action= res.export_all_sequence_best_fx(): current best fx and current best action
# that has been observed until each step
# res.total_num_search: total number of search
print('f(x)=')
print(res.fx[0:res.total_num_search])
best_fx, best_action = res.export_all_sequence_best_fx()
print('current best')
print (best_fx)
print ('current best action=')
print (best_action)
print ('history of chosed actions=')
print (res.chosed_actions[0:res.total_num_search])
# save the results
res.save('test.npz')
del res
# load the results
res = combo.search.discrete.results.history()
res.load('test.npz')
```
|
github_jupyter
|
# Multipitch tracking using Echo State Networks
## Introduction
In this notebook, we demonstrate how the ESN can deal with multipitch tracking, a challenging multilabel classification problem in music analysis.
As this is a computationally expensive task, we have pre-trained models to serve as an entry point.
At first, we import all packages required for this task. You can find the import statements below.
```
import time
import numpy as np
import os
import csv
from sklearn.base import clone
from sklearn.metrics import make_scorer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from joblib import dump, load
import librosa
from madmom.processors import SequentialProcessor, ParallelProcessor
from madmom.audio import SignalProcessor, FramedSignalProcessor
from madmom.audio.stft import ShortTimeFourierTransformProcessor
from madmom.audio.filters import LogarithmicFilterbank
from madmom.audio.spectrogram import FilteredSpectrogramProcessor, LogarithmicSpectrogramProcessor, SpectrogramDifferenceProcessor
from pyrcn.util import FeatureExtractor
from pyrcn.echo_state_network import SeqToSeqESNClassifier
from pyrcn.datasets import fetch_maps_piano_dataset
from pyrcn.metrics import accuracy_score
from pyrcn.model_selection import SequentialSearchCV
from matplotlib import pyplot as plt
from matplotlib import ticker
plt.rcParams["font.family"] = "Times New Roman"
plt.rcParams["font.size"] = 10
%matplotlib inline
import pandas as pd
import seaborn as sns
from mir_eval import multipitch
```
## Feature extraction
The acoustic features extracted from the input signal are obtained by filtering short-term spectra (window length 4096 samples and hop size 10 ms) with a bank of triangular filters in the frequency domain with log-spaced frequencies. The frequency range was 30 Hz to 17 000 Hz and we used 12 filters per octave. We used logarithmic magnitudes and added 1 inside the logarithm to ensure a minimum value of 0 for a frame without energy. The first derivative between adjacent frames was added in order to enrich the features by temporal information. Binary labels indicating absent (value 0) or present (value 1) pitches for each frame are assigned to each frame. Note that this task is a multilabel classification. Each MIDI pitch is a separate class, and multiple or no classes can be active at a discrete frame index.
For a more detailed description, please have a look in our repository ([https://github.com/TUD-STKS/Automatic-Music-Transcription](https://github.com/TUD-STKS/Automatic-Music-Transcription)) with several detailed examples for music analysis tasks.
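As a small illustration (not taken from the pipeline below), a multilabel target for a few frames is simply a binary matrix with one column per MIDI pitch:
```
import numpy as np

# Illustrative only: 4 frames x 128 MIDI pitches, each entry is 0 (absent) or 1 (present).
y_frames = np.zeros((4, 128), dtype=int)
y_frames[0, [60, 64, 67]] = 1   # a C major triad sounding in frame 0
y_frames[1, [60, 64, 67]] = 1   # still sounding in frame 1
# frames 2 and 3 stay silent: an all-zero row is a valid multilabel target
print(y_frames.sum(axis=1))     # number of active pitches per frame
```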
```
def create_feature_extraction_pipeline(sr=44100, frame_sizes=[1024, 2048, 4096], fps_hz=100.):
audio_loading = Pipeline([("load_audio", FeatureExtractor(librosa.load, sr=sr, mono=True)),
("normalize", FeatureExtractor(librosa.util.normalize, norm=np.inf))])
sig = SignalProcessor(num_channels=1, sample_rate=sr)
multi = ParallelProcessor([])
for frame_size in frame_sizes:
frames = FramedSignalProcessor(frame_size=frame_size, fps=fps_hz)
stft = ShortTimeFourierTransformProcessor() # caching FFT window
filt = FilteredSpectrogramProcessor(filterbank=LogarithmicFilterbank, num_bands=12, fmin=30, fmax=17000,
norm_filters=True, unique_filters=True)
spec = LogarithmicSpectrogramProcessor(log=np.log10, mul=5, add=1)
diff = SpectrogramDifferenceProcessor(diff_ratio=0.5, positive_diffs=True, stack_diffs=np.hstack)
# process each frame size with spec and diff sequentially
multi.append(SequentialProcessor([frames, stft, filt, spec, diff]))
feature_extractor = FeatureExtractor(SequentialProcessor([sig, multi, np.hstack]))
feature_extraction_pipeline = Pipeline([("audio_loading", audio_loading),
("feature_extractor", feature_extractor)])
return feature_extraction_pipeline
```
## Load and preprocess the dataset
This might require a large amount of time and memory.
```
# Load and preprocess the dataset
feature_extraction_pipeline = create_feature_extraction_pipeline(sr=44100, frame_sizes=[2048], fps_hz=100)
# New object -> PyTorch dataloader / Matlab datastore
X_train, X_test, y_train, y_test = fetch_maps_piano_dataset(data_origin="/projects/p_transcriber/MAPS",
data_home=None, preprocessor=feature_extraction_pipeline,
force_preprocessing=False, label_type="pitch")
def tsplot(ax, data,**kw):
x = np.arange(data.shape[1])
est = np.mean(data, axis=0)
sd = np.std(data, axis=0)
cis = (est - sd, est + sd)
ax.fill_between(x,cis[0],cis[1],alpha=0.2, **kw)
ax.plot(x,est,**kw)
ax.margins(x=0)
fig, ax = plt.subplots()
fig.set_size_inches(4, 1.25)
tsplot(ax, np.concatenate(np.hstack((X_train, X_test))))
ax.set_xlabel('Feature Index')
ax.set_ylabel('Magnitude')
plt.grid()
plt.savefig('features_statistics.pdf', bbox_inches='tight', pad_inches=0)
```
## Set up a ESN
To develop an ESN model for multipitch tracking, we need to tune several hyper-parameters, e.g., input_scaling, spectral_radius, bias_scaling and leaky integration.
We follow the way proposed in the paper for multipitch tracking and for acoustic modeling of piano music to optimize hyper-parameters sequentially.
We define the search spaces for each step together with the type of search (a grid search in this context).
At last, we initialize a SeqToSeqESNClassifier with the desired output strategy and with the initially fixed parameters.
```
initially_fixed_params = {'hidden_layer_size': 500,
'input_activation': 'identity',
'k_in': 10,
'bias_scaling': 0.0,
'reservoir_activation': 'tanh',
'leakage': 1.0,
'bi_directional': False,
'k_rec': 10,
'wash_out': 0,
'continuation': False,
'alpha': 1e-5,
'random_state': 42}
step1_esn_params = {'leakage': np.linspace(0.1, 1.0, 10)}
kwargs_1 = {'random_state': 42, 'verbose': 2, 'n_jobs': 70, 'pre_dispatch': 70, 'n_iter': 14,
'scoring': make_scorer(accuracy_score)}
step2_esn_params = {'input_scaling': np.linspace(0.1, 1.0, 10),
'spectral_radius': np.linspace(0.0, 1.5, 16)}
step3_esn_params = {'bias_scaling': np.linspace(0.0, 2.0, 21)}
kwargs_2_3 = {'verbose': 2, 'pre_dispatch': 70, 'n_jobs': 70,
'scoring': make_scorer(accuracy_score)}
# The searches are defined similarly to the steps of a sklearn.pipeline.Pipeline:
searches = [('step1', GridSearchCV, step1_esn_params, kwargs_1),
('step2', GridSearchCV, step2_esn_params, kwargs_2_3),
('step3', GridSearchCV, step3_esn_params, kwargs_2_3)]
base_esn = SeqToSeqESNClassifier(**initially_fixed_params)
```
## Optimization
We provide a SequentialSearchCV that basically iterates through the list of searches that we have defined before. It can be combined with any model selection tool from scikit-learn.
```
try:
sequential_search = load("sequential_search_ll.joblib")
except FileNotFoundError:
print(FileNotFoundError)
sequential_search = SequentialSearchCV(base_esn, searches=searches).fit(X_train, y_train)
dump(sequential_search, "sequential_search_ll.joblib")
```
## Visualize hyper-parameter optimization
```
df = pd.DataFrame(sequential_search.all_cv_results_["step1"])
fig = plt.figure()
fig.set_size_inches(2, 1.25)
ax = sns.lineplot(data=df, x="param_leakage", y="mean_test_score")
plt.xlabel("Leakage")
plt.ylabel("Score")
# plt.xlim((0, 1))
tick_locator = ticker.MaxNLocator(5)
ax.xaxis.set_major_locator(tick_locator)
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%.4f'))
plt.grid()
plt.savefig('optimize_leakage.pdf', bbox_inches='tight', pad_inches=0)
df = pd.DataFrame(sequential_search.all_cv_results_["step2"])
pvt = pd.pivot_table(df,
values='mean_test_score', index='param_input_scaling', columns='param_spectral_radius')
pvt.columns = pvt.columns.astype(float)
pvt2 = pd.DataFrame(pvt.loc[pd.IndexSlice[0:1], pd.IndexSlice[0.0:1.0]])
fig = plt.figure()
ax = sns.heatmap(pvt2, xticklabels=pvt2.columns.values.round(2), yticklabels=pvt2.index.values.round(2), cbar_kws={'label': 'Score'})
ax.invert_yaxis()
plt.xlabel("Spectral Radius")
plt.ylabel("Input Scaling")
fig.set_size_inches(4, 2.5)
tick_locator = ticker.MaxNLocator(10)
ax.yaxis.set_major_locator(tick_locator)
ax.xaxis.set_major_locator(tick_locator)
plt.savefig('optimize_is_sr.pdf', bbox_inches='tight', pad_inches=0)
df = pd.DataFrame(sequential_search.all_cv_results_["step3"])
fig = plt.figure()
fig.set_size_inches(2, 1.25)
ax = sns.lineplot(data=df, x="param_bias_scaling", y="mean_test_score")
plt.xlabel("Bias Scaling")
plt.ylabel("Score")
plt.xlim((0, 2))
tick_locator = ticker.MaxNLocator(5)
ax.xaxis.set_major_locator(tick_locator)
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter('%.5f'))
plt.grid()
plt.savefig('optimize_bias_scaling.pdf', bbox_inches='tight', pad_inches=0)
```
## Test the ESN
Finally, we test the ESN on unseen data.
```
def _midi_to_frequency(p):
return 440. * (2 ** ((p-69)/12))
def get_mir_eval_rows(y, fps=100.):
time_t = np.arange(len(y)) / fps
freq_hz = [_midi_to_frequency(np.asarray(np.nonzero(row))).ravel() for row in y]
return time_t, freq_hz
esn = sequential_search.best_estimator_
y_test_pred = esn.predict_proba(X=X_test)
scores = np.zeros(shape=(9, 14))  # one row per threshold, 14 mir_eval multipitch metrics
for k, thr in enumerate(np.linspace(0.1, 0.9, 9)):
res = []
for y_true, y_pred in zip(y_test, y_test_pred):
times_res, freqs_hz_res = get_mir_eval_rows(y_pred[:, 1:]>thr, fps=100.)
times_ref, freqs_hz_ref = get_mir_eval_rows(y_true[:, 1:]>thr, fps=100.)
res.append(multipitch.metrics(ref_time=times_ref, ref_freqs=freqs_hz_ref, est_time=times_res, est_freqs=freqs_hz_res))
scores[k, :] = np.mean(res, axis=0)
plt.plot(np.linspace(0.1, 0.9, 9), scores[:, :3])
plt.plot(np.linspace(0.1, 0.9, 9), 2*scores[:, 0]*scores[:, 1] / (scores[:, 0] + scores[:, 1]))
plt.xlabel("Threshold")
plt.ylabel("Scores")
plt.xlim((0.1, 0.9))
plt.legend(("Precision", "Recall", "Accuracy", "F1-Score"))
np.mean(list(sequential_search.all_refit_time_.values()))
t1 = time.time()
esn = clone(sequential_search.best_estimator_).fit(X_train, y_train, n_jobs=8)
print("Fitted in {0} seconds".format(time.time() - t1))
t1 = time.time()
esn = clone(sequential_search.best_estimator_).fit(X_train, y_train)
print("Fitted in {0} seconds".format(time.time() - t1))
```
|
github_jupyter
|
```
%pylab inline
import pandas as pd
import plotnine as p
p.theme_set(p.theme_classic())
plt.rcParams['axes.spines.top'] = False
plt.rcParams['axes.spines.right'] = False
counts = pd.read_parquet('mca_brain_counts.parquet')
sample_info = pd.read_parquet('mca_brain_cell_info.parquet')
```
### Differential expression
Now let us investigate how this count depth effect plays in to a differential expression analysis. With all published large scale experiments cataloging cell types, it is getting increasingly easy to simply fetch some data and do quick comparisons. We will use data from the recent [single cell Mouse Cell Atlas][paper link]. To get something easy to compare, we use the samples called "Brain" and focus on the cells annotated as "Microglia" and "Astrocyte". Out of the ~400,000 cells in the study, these two cell types have 338 and 199 representative cells. On average they have about 700 total UMI counts each, so while the entire study is a pretty large scale, the individual cell types and cells are on a relatively small scale. The final table has 537 cells and 21,979 genes.
[paper link]: http://www.cell.com/cell/abstract/S0092-8674(18)30116-8
```
sample_info['super_cell_type'].value_counts()
sub_samples = sample_info.query('super_cell_type in ["Microglia", "Astrocyte"]').copy()
sub_counts = counts.reindex(index=sub_samples.index)
sub_counts.shape
sub_samples['is_astrocyte'] = sub_samples['super_cell_type'] == 'Astrocyte'
import NaiveDE
sub_samples['total_count'] = sub_counts.sum(1)
figsize(11, 3)
sub_samples.total_count.hist(grid=False, fc='w', ec='k')
sub_samples.total_count.median(), sub_samples.total_count.mean()
print(sub_samples.head())
```
In a differential expression test you simply include a covariate in the design matrix that informs the linear model about the different conditions you want to compare. Here we are comparing microglia and astrocytes.
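To make the design-matrix idea concrete, here is a small sketch (using patsy directly on a made-up three-cell table) of the covariates that the alternative model below sees; it only illustrates the formula and is not part of the analysis:
```
import numpy as np
import pandas as pd
import patsy

toy = pd.DataFrame({'is_astrocyte': [True, False, True],
                    'total_count': [700., 650., 720.]})
dm = patsy.dmatrix('C(is_astrocyte) + np.log(total_count) + 1', toy)
print(dm.design_info.column_names)  # Intercept, C(is_astrocyte)[T.True], np.log(total_count)
print(np.asarray(dm))
```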
```
%%time
lr_results = NaiveDE.lr_tests(sub_samples, np.log1p(sub_counts.T),
alt_model='C(is_astrocyte) + np.log(total_count) + 1',
null_model='np.log(total_count) + 1')
lr_results.pval = lr_results.pval.clip_lower(lr_results.query('pval != 0')['pval'].min())
lr_results.qval = lr_results.qval.clip_lower(lr_results.query('qval != 0')['qval'].min())
print(lr_results.sort_values('pval').head())
example_genes = ['Apoe', 'Sparcl1', 'Tmsb4x', 'C1qa']
examples = lr_results.loc[example_genes]
img = \
p.qplot('C(is_astrocyte)[T.True]', '-np.log10(pval)', lr_results) \
+ p.annotate('text',
x=examples['C(is_astrocyte)[T.True]'] + 0.33,
y=-np.log10(examples['pval']),
label=examples.index) \
+ p.labs(title='Brain cell data')
img.save('4.png', verbose=False)
img
img = \
p.qplot('C(is_astrocyte)[T.True]', 'np.log(total_count)', lr_results) \
+ p.annotate('text',
x=examples['C(is_astrocyte)[T.True]'] + 0.33,
y=examples['np.log(total_count)'],
label=examples.index) \
+ p.labs(title='Brain cell data')
img.save('5.png', verbose=False)
img
print(lr_results.sort_values('C(is_astrocyte)[T.True]').head())
print(lr_results.sort_values('C(is_astrocyte)[T.True]').tail())
```
Also in this case we can see that the count depth weights are deflated for lowly abundant genes.
```
img = \
p.qplot(sub_counts.sum(0).clip_lower(1), lr_results['np.log(total_count)'],
log='x') \
+ p.labs(x='Gene count across dataset', y='np.log(total_count)',
title='Brain cell data')
img.save('6.png', verbose=False)
img
xx = np.linspace(np.log(sub_samples.total_count.min()),
np.log(sub_samples.total_count.max()))
def linres(gene):
yy = \
lr_results.loc[gene, 'np.log(total_count)'] * xx \
+ lr_results.loc[gene, 'Intercept']
yy1 = np.exp(yy)
yy2 = np.exp(yy + lr_results.loc[gene, 'C(is_astrocyte)[T.True]'])
return yy1, yy2
```
Similar to above, we can look at the relation between count depth and observed counts for a few genes, but here we also plot the stratification into the two cell types and how the regression models predict the counts.
```
figsize(11, 3)
ax = plt.gca()
for i, gene in enumerate(['Apoe', 'Sparcl1', 'Tmsb4x', 'C1qa']):
sub_samples['gene'] = counts[gene]
plt.subplot(1, 4, i + 1, sharey=ax)
if i == 0:
plt.ylabel('Counts + 1')
plt.loglog()
plt.scatter(sub_samples.loc[~sub_samples.is_astrocyte]['total_count'],
sub_samples.loc[~sub_samples.is_astrocyte]['gene'] + 1,
c='grey', marker='o', label='Microglia')
plt.scatter(sub_samples.loc[sub_samples.is_astrocyte]['total_count'],
sub_samples.loc[sub_samples.is_astrocyte]['gene'] + 1,
c='k', marker='x', label='Astrocyte')
yy1, yy2 = linres(gene)
plt.plot(np.exp(xx), yy1, c='w', lw=5)
plt.plot(np.exp(xx), yy1, c='r', lw=3, ls=':')
plt.plot(np.exp(xx), yy2, c='w', lw=5)
plt.plot(np.exp(xx), yy2, c='r', lw=3)
plt.title(gene)
plt.xlabel('Total counts')
plt.legend(scatterpoints=3);
plt.tight_layout()
plt.savefig('7.png', bbox_inches='tight')
```
Again we can see that the overall abundance is related to the slope of the lines. Another thing which seems to pop out in these plots is an interaction between cell type and slope. For example, looking at C1qa the slope for the microglia seems underestimated. This makes sense if this is an effect of count noise at low abundances.
My takeaway from this is that OLS regression might be OK if counts are large, but at lower levels model parameters are not estimated correctly due to the count nature of the data.
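One way to act on this takeaway (not something done in the original post) is to model the counts directly, for example with a Poisson GLM in which `np.log(total_count)` enters as an offset with a fixed coefficient of 1 rather than as an estimated covariate. A minimal sketch, assuming the `sub_samples` and `sub_counts` tables built above and picking `Apoe` as an arbitrary example gene:
```
# Hedged sketch: Poisson GLM with a count depth offset for a single gene.
# Assumes `sub_samples` (with 'is_astrocyte' and 'total_count' columns) and
# `sub_counts` (cells x genes) exist as constructed above; 'Apoe' is an
# arbitrary example gene.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

gene = 'Apoe'
df = sub_samples[['is_astrocyte', 'total_count']].copy()
df['gene_count'] = sub_counts[gene]

poisson_fit = smf.glm('gene_count ~ C(is_astrocyte)',
                      data=df,
                      family=sm.families.Poisson(),
                      offset=np.log(df['total_count'])).fit()
print(poisson_fit.summary())
```
With the offset formulation the count depth coefficient is not a free parameter, so it cannot be deflated for lowly abundant genes the way the OLS coefficient is.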
Notebooks of the analysis in this post are available [here](https://github.com/vals/Blog/tree/master/180226-count-offsets).
|
github_jupyter
|
<table> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="../images/qworld.jpg" width="50%" align="left"> </a></td>
<td width="70%" style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by Maksim Dimitrijev(<a href="http://qworld.lu.lv/index.php/qlatvia/">QLatvia</a>)
and Özlem Salehi (<a href="http://qworld.lu.lv/index.php/qturkey/">QTurkey</a>)
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
<h2> <font color="blue"> Solutions for </font>Grover's Search: Implementation</h2>
<a id="task2"></a>
<h3>Task 2</h3>
Let $N=4$. Implement the query phase and check the unitary matrix for the query operator. Note that we are interested in the top-left $4 \times 4$ part of the matrix since the remaining parts are due to the ancilla qubit.
You are given a function $f$ and its corresponding quantum operator $U_f$. First run the following cell to load operator $U_f$. Then you can make queries to $f$ by applying the operator $U_f$ via the following command:
<pre>Uf(circuit,qreg)</pre>
```
%run ../include/quantum.py
```
Now use phase kickback to flip the sign of the marked element:
<ul>
<li>Set output qubit (qreg[2]) to $\ket{-}$ by applying X and H.</li>
<li>Apply operator $U_f$.</li>
<li>Set output qubit (qreg[2]) back.</li>
</ul>
(Can you guess the marked element by looking at the unitary matrix?)
<h3>Solution</h3>
```
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg = QuantumRegister(3)
#No need to define classical register as we are not measuring
mycircuit = QuantumCircuit(qreg)
#set ancilla
mycircuit.x(qreg[2])
mycircuit.h(qreg[2])
Uf(mycircuit,qreg)
#set ancilla back
mycircuit.h(qreg[2])
mycircuit.x(qreg[2])
job = execute(mycircuit,Aer.get_backend('unitary_simulator'))
u=job.result().get_unitary(mycircuit,decimals=3)
#We are interested in the top-left 4x4 part
for i in range(4):
s=""
for j in range(4):
val = str(u[i][j].real)
while(len(val)<5): val = " "+val
s = s + val
print(s)
mycircuit.draw(output='mpl')
```
<a id="task3"></a>
<h3>Task 3</h3>
Let $N=4$. Implement the inversion operator and check whether you obtain the following matrix:
$\mymatrix{cccc}{-0.5 & 0.5 & 0.5 & 0.5 \\ 0.5 & -0.5 & 0.5 & 0.5 \\ 0.5 & 0.5 & -0.5 & 0.5 \\ 0.5 & 0.5 & 0.5 & -0.5}$.
<h3>Solution</h3>
```
def inversion(circuit,quantum_reg):
#step 1
circuit.h(quantum_reg[1])
circuit.h(quantum_reg[0])
#step 2
circuit.x(quantum_reg[1])
circuit.x(quantum_reg[0])
#step 3
circuit.ccx(quantum_reg[1],quantum_reg[0],quantum_reg[2])
#step 4
circuit.x(quantum_reg[1])
circuit.x(quantum_reg[0])
#step 5
circuit.x(quantum_reg[2])
#step 6
circuit.h(quantum_reg[1])
circuit.h(quantum_reg[0])
```
Below you can check the matrix of your inversion operator and what the circuit looks like. We are interested in the top-left $4 \times 4$ part of the matrix; the remaining parts are due to the ancilla qubit.
```
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg1 = QuantumRegister(3)
mycircuit1 = QuantumCircuit(qreg1)
#set ancilla qubit
mycircuit1.x(qreg1[2])
mycircuit1.h(qreg1[2])
inversion(mycircuit1,qreg1)
#set ancilla qubit back
mycircuit1.h(qreg1[2])
mycircuit1.x(qreg1[2])
job = execute(mycircuit1,Aer.get_backend('unitary_simulator'))
u=job.result().get_unitary(mycircuit1,decimals=3)
for i in range(4):
s=""
for j in range(4):
val = str(u[i][j].real)
while(len(val)<5): val = " "+val
s = s + val
print(s)
mycircuit1.draw(output='mpl')
```
<a id="task4"></a>
<h3>Task 4: Testing Grover's search</h3>
Now we are ready to test our operations and run Grover's search. Suppose that there are 4 elements in the list and try to find the marked element.
You are given the operator $U_f$. First run the following cell to load it. You can access it via <pre>Uf(circuit,qreg)</pre>
qreg[2] is the ancilla qubit and it is shared by the query and the inversion operators.
Which state do you observe the most?
```
%run ../include/quantum.py
```
<h3>Solution</h3>
```
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg = QuantumRegister(3)
creg = ClassicalRegister(2)
mycircuit = QuantumCircuit(qreg,creg)
#Grover
#initial step - equal superposition
for i in range(2):
mycircuit.h(qreg[i])
#set ancilla
mycircuit.x(qreg[2])
mycircuit.h(qreg[2])
mycircuit.barrier()
#change the number of iterations
iterations=1
#Grover's iterations.
for i in range(iterations):
#query
Uf(mycircuit,qreg)
mycircuit.barrier()
#inversion
inversion(mycircuit,qreg)
mycircuit.barrier()
#set ancilla back
mycircuit.h(qreg[2])
mycircuit.x(qreg[2])
mycircuit.measure(qreg[0],creg[0])
mycircuit.measure(qreg[1],creg[1])
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(mycircuit)
# print the outcome
for outcome in counts:
print(outcome,"is observed",counts[outcome],"times")
mycircuit.draw(output='mpl')
```
<a id="task5"></a>
<h3>Task 5 (Optional, challenging)</h3>
Implement the inversion operation for $n=3$ ($N=8$). This time you will need 5 qubits - 3 for the operation, 1 for the ancilla, and one more qubit to implement the NOT gate controlled by three qubits.
In the implementation the ancilla qubit will be qubit 3, while qubits for control are 0, 1 and 2; qubit 4 is used for the multiple control operation. As a result you should obtain the following values in the top-left $8 \times 8$ entries:
$\mymatrix{cccccccc}{-0.75 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & -0.75 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & -0.75 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & -0.75 & 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 & -0.75 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & -0.75 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & -0.75 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & -0.75}$.
<h3>Solution</h3>
```
def big_inversion(circuit,quantum_reg):
for i in range(3):
circuit.h(quantum_reg[i])
circuit.x(quantum_reg[i])
circuit.ccx(quantum_reg[1],quantum_reg[0],quantum_reg[4])
circuit.ccx(quantum_reg[2],quantum_reg[4],quantum_reg[3])
circuit.ccx(quantum_reg[1],quantum_reg[0],quantum_reg[4])
for i in range(3):
circuit.x(quantum_reg[i])
circuit.h(quantum_reg[i])
circuit.x(quantum_reg[3])
```
Below you can check the matrix of your inversion operator. We are interested in the top-left $8 \times 8$ part of the matrix; the remaining parts are due to the additional qubits.
```
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
big_qreg2 = QuantumRegister(5)
big_mycircuit2 = QuantumCircuit(big_qreg2)
#set ancilla
big_mycircuit2.x(big_qreg2[3])
big_mycircuit2.h(big_qreg2[3])
big_inversion(big_mycircuit2,big_qreg2)
#set ancilla back
big_mycircuit2.h(big_qreg2[3])
big_mycircuit2.x(big_qreg2[3])
job = execute(big_mycircuit2,Aer.get_backend('unitary_simulator'))
u=job.result().get_unitary(big_mycircuit2,decimals=3)
for i in range(8):
s=""
for j in range(8):
val = str(u[i][j].real)
while(len(val)<6): val = " "+val
s = s + val
print(s)
```
<a id="task6"></a>
<h3>Task 6: Testing Grover's search for 8 elements (Optional, challenging)</h3>
Now we will test Grover's search on 8 elements.
You are given the operator $U_{f_8}$. First run the following cell to load it. You can access it via:
<pre>Uf_8(circuit,qreg)</pre>
Which state do you observe the most?
```
%run ../include/quantum.py
```
<h3>Solution</h3>
```
def big_inversion(circuit,quantum_reg):
for i in range(3):
circuit.h(quantum_reg[i])
circuit.x(quantum_reg[i])
circuit.ccx(quantum_reg[1],quantum_reg[0],quantum_reg[4])
circuit.ccx(quantum_reg[2],quantum_reg[4],quantum_reg[3])
circuit.ccx(quantum_reg[1],quantum_reg[0],quantum_reg[4])
for i in range(3):
circuit.x(quantum_reg[i])
circuit.h(quantum_reg[i])
circuit.x(quantum_reg[3])
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg8 = QuantumRegister(5)
creg8 = ClassicalRegister(3)
mycircuit8 = QuantumCircuit(qreg8,creg8)
#set ancilla
mycircuit8.x(qreg8[3])
mycircuit8.h(qreg8[3])
#Grover
for i in range(3):
mycircuit8.h(qreg8[i])
mycircuit8.barrier()
#Try 1, 2, 6, 12 iterations of Grover
for i in range(2):
Uf_8(mycircuit8,qreg8)
mycircuit8.barrier()
big_inversion(mycircuit8,qreg8)
mycircuit8.barrier()
#set ancilla back
mycircuit8.h(qreg8[3])
mycircuit8.x(qreg8[3])
for i in range(3):
mycircuit8.measure(qreg8[i],creg8[i])
job = execute(mycircuit8,Aer.get_backend('qasm_simulator'),shots=10000)
counts8 = job.result().get_counts(mycircuit8)
# print the outcome
for outcome in counts8:
print(outcome,"is observed",counts8[outcome],"times")
mycircuit8.draw(output='mpl')
```
<a id="task8"></a>
<h3>Task 8</h3>
Implement an oracle function which marks the element 00. Run Grover's search with the oracle you have implemented.
```
def oracle_00(circuit,qreg):
```
<h3>Solution</h3>
```
def oracle_00(circuit,qreg):
circuit.x(qreg[0])
circuit.x(qreg[1])
circuit.ccx(qreg[0],qreg[1],qreg[2])
circuit.x(qreg[0])
circuit.x(qreg[1])
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
qreg = QuantumRegister(3)
creg = ClassicalRegister(2)
mycircuit = QuantumCircuit(qreg,creg)
#Grover
#initial step - equal superposition
for i in range(2):
mycircuit.h(qreg[i])
#set ancilla
mycircuit.x(qreg[2])
mycircuit.h(qreg[2])
mycircuit.barrier()
#change the number of iterations
iterations=1
#Grover's iterations.
for i in range(iterations):
#query
oracle_00(mycircuit,qreg)
mycircuit.barrier()
#inversion
inversion(mycircuit,qreg)
mycircuit.barrier()
#set ancilla back
mycircuit.h(qreg[2])
mycircuit.x(qreg[2])
mycircuit.measure(qreg[0],creg[0])
mycircuit.measure(qreg[1],creg[1])
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts(mycircuit)
# print the reverse of the outcome
for outcome in counts:
reverse_outcome = ''
for i in outcome:
reverse_outcome = i + reverse_outcome
print(reverse_outcome,"is observed",counts[outcome],"times")
mycircuit.draw(output='mpl')
```
|
github_jupyter
|
Foreign Function Interface
====
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import numpy as np
```
Wrapping functions written in C
----
### Steps
- Write the C header and implementation files
- Write the Cython `.pxd` file to declare C function signatures
- Write the Cython `.pyx` file to wrap the C functions for Python
- Write `setup.py` to automate building of the Python extension module
- Run `python setup.py build_ext --inplace` to build the module
- Import module in Python like any other Python module
### C header file
```
%%file c_math.h
#pragma once
double plus(double a, double b);
double mult(double a, double b);
double square(double a);
double acc(double *xs, int size);
```
### C implementation file
```
%%file c_math.c
#include <math.h>
#include "c_math.h"
double plus(double a, double b) {
return a + b;
};
double mult(double a, double b) {
return a * b;
};
double square(double a) {
return pow(a, 2);
};
double acc(double *xs, int size) {
double s = 0;
for (int i=0; i<size; i++) {
s += xs[i];
}
return s;
};
```
### Cython "header" file
The `.pxd` file is similar to a header file for Cython. In other words, we can `cimport <filename>.pxd` in the regular Cython `.pyx` files to get access to functions declared in the `.pxd` files.
```
%%file cy_math.pxd
cdef extern from "c_math.h":
double plus(double a, double b)
double mult(double a, double b)
double square(double a)
double acc(double *xs, int size)
```
### Cython "implementation" file
Here is where we actually wrap the C code for use in Python. Note especially how we handle passing arrays to a C function expecting a pointer to double using `typed memoryviews`.
```
%%file cy_math.pyx
cimport cy_math
def py_plus(double a, double b):
return cy_math.plus(a, b)
def py_mult(double a, double b):
return cy_math.mult(a, b)
def py_square(double a):
return cy_math.square(a)
def py_sum(double[::1] xs):
cdef int size = len(xs)
return cy_math.acc(&xs[0], size)
```
### Build script `setup.py`
This is a build script for Python, similar to a Makefile
```
%%file setup.py
from distutils.core import setup, Extension
from Cython.Build import cythonize
import numpy as np
ext = Extension("cy_math",
sources=["cy_math.pyx", "c_math.c"],
libraries=["m"],
extra_compile_args=["-w", "-std=c99"])
setup(name = "Math Funcs",
ext_modules = cythonize(ext))
```
### Building an extension module
```
! python setup.py clean
! python setup.py -q build_ext --inplace
! ls cy_math*
```
### Using the extension module in Python
```
import cy_math
import numpy as np
print(cy_math.py_plus(3, 4))
print(cy_math.py_mult(3, 4))
print(cy_math.py_square(3))
xs = np.arange(10, dtype='float')
print(cy_math.py_sum(xs))
```
### Confirm that we are getting C speedups by comparing with pure Python accumulator
```
def acc(xs):
s = 0
for x in xs:
s += x
return s
import cy_math
xs = np.arange(1000000, dtype='float')
%timeit -r3 -n3 acc(xs)
%timeit -r3 -n3 cy_math.py_sum(xs)
```
C++
----
This is similar to C. We will use Cython to wrap a simple function.
```
%%file add.hpp
#pragma once
int add(int a, int b);
%%file add.cpp
int add(int a, int b) {
return a+b;
}
%%file plus.pyx
cdef extern from 'add.cpp':
int add(int a, int b)
def plus(a, b):
return add(a, b)
```
#### Note that essentially the only difference from C is `language="C++"` and the flag `-std=c++11`
```
%%file setup.py
from distutils.core import setup, Extension
from Cython.Build import cythonize
ext = Extension("plus",
sources=["plus.pyx", "add.cpp"],
extra_compile_args=["-w", "-std=c++11"])
setup(
ext_modules = cythonize(
ext,
language="c++",
))
%%bash
python setup.py -q build_ext --inplace
import plus
plus.plus(3, 4)
```
Wrap an R function from libRmath using `ctypes`
----
R comes with a standalone C library of special functions and distributions, as described in the official documentation. These functions can be wrapped for use in Python.
### Building the Rmath standalone library
```bash
git clone https://github.com/JuliaLang/Rmath-julia.git
cd Rmath-julia/src
make
cd ../..
```
#### Functions to wrap
```
! grep "\s.norm(" Rmath-julia/include/Rmath.h
from ctypes import CDLL, c_int, c_double
%%bash
ls Rmath-julia/src/*so
lib = CDLL('Rmath-julia/src/libRmath-julia.so')
def rnorm(mu=0, sigma=1):
lib.rnorm.argtypes = [c_double, c_double]
lib.rnorm.restype = c_double
return lib.rnorm(mu, sigma)
def dnorm(x, mean=0, sd=1, log=0):
lib.dnorm4.argtypes = [c_double, c_double, c_double, c_int]
lib.dnorm4.restype = c_double
return lib.dnorm4(x, mean, sd, log)
def pnorm(q, mu=0, sd=1, lower_tail=1, log_p=0):
lib.pnorm5.argtypes = [c_double, c_double, c_double, c_int, c_int]
lib.pnorm5.restype = c_double
return lib.pnorm5(q, mu, sd, lower_tail, log_p)
def qnorm(p, mu=0, sd=1, lower_tail=1, log_p=0):
lib.qnorm5.argtypes = [c_double, c_double, c_double, c_int, c_int]
lib.qnorm5.restype = c_double
return lib.qnorm5(p, mu, sd, lower_tail, log_p)
pnorm(0, mu=2)
qnorm(0.022750131948179212, mu=2)
plt.hist([rnorm() for i in range(100)])
pass
xs = np.linspace(-3,3,100)
plt.plot(xs, list(map(dnorm, xs)))
pass
```
### Using Cython to wrap standalone library
```
%%file rmath.pxd
cdef extern from "Rmath-julia/include/Rmath.h":
double dnorm(double, double, double, int)
double pnorm(double, double, double, int, int)
double qnorm(double, double, double, int, int)
double rnorm(double, double)
%%file rmath.pyx
cimport rmath
def rnorm_(mu=0, sigma=1):
return rmath.rnorm(mu, sigma)
def dnorm_(x, mean=0, sd=1, log=0):
return rmath.dnorm(x, mean, sd, log)
def pnorm_(q, mu=0, sd=1, lower_tail=1, log_p=0):
return rmath.pnorm(q, mu, sd, lower_tail, log_p)
def qnorm_(p, mu=0, sd=1, lower_tail=1, log_p=0):
return rmath.qnorm(p, mu, sd, lower_tail, log_p)
%%file setup.py
from distutils.core import setup, Extension
from Cython.Build import cythonize
ext = Extension("rmath",
sources=["rmath.pyx"],
include_dirs=["Rmath-julia/include"],
library_dirs=["Rmath-julia/src"],
libraries=["Rmath-julia"],
runtime_library_dirs=["Rmath-julia/src"],
extra_compile_args=["-w", "-std=c99", "-DMATHLIB_STANDALONE"],
extra_link_args=[],
)
setup(
ext_modules = cythonize(
ext
))
! python setup.py build_ext --inplace
import rmath
plt.hist([rmath.rnorm_() for i in range(100)])
pass
xs = np.linspace(-3,3,100)
plt.plot(xs, list(map(rmath.dnorm_, xs)))
pass
```
### `Cython` wrappers are faster than `ctypes`
```
%timeit pnorm(0, mu=2)
%timeit rmath.pnorm_(0, mu=2)
```
Fortran
----
```
! pip install fortran-magic
%load_ext fortranmagic
%%fortran
subroutine fort_sum(N, s)
integer*8, intent(in) :: N
integer*8, intent(out) :: s
integer*8 i
s = 0
do i = 1, N
s = s + i*i
end do
end
fort_sum(10)
```
#### Another example from the [documentation](http://nbviewer.ipython.org/github/mgaitan/fortran_magic/blob/master/documentation.ipynb)
```
%%fortran --link lapack
subroutine solve(A, b, x, n)
! solve the matrix equation A*x=b using LAPACK
implicit none
real*8, dimension(n,n), intent(in) :: A
real*8, dimension(n), intent(in) :: b
real*8, dimension(n), intent(out) :: x
integer :: pivot(n), ok
integer, intent(in) :: n
x = b
! find the solution using the LAPACK routine DGESV
call DGESV(n, 1, A, n, pivot, x, n, ok)
end subroutine
A = np.array([[1, 2.5], [-3, 4]])
b = np.array([1, 2.5])
solve(A, b)
```
Interfacing with R
----
```
%load_ext rpy2.ipython
%%R
library(ggplot2)
suppressPackageStartupMessages(
ggplot(mtcars, aes(x=wt, y=mpg)) + geom_point() + geom_smooth(method=loess)
)
```
#### Converting between Python and R
```
%R -o mtcars
```
#### `mtcars` is now a Python dataframe
```
mtcars.head(n=3)
```
#### We can also pass data from Python to R
```
x = np.linspace(0, 2*np.pi, 100)
y = np.sin(x)
%%R -i x,y
plot(x, y, main="Sine curve in R base graphics")
```
|
github_jupyter
|
```
# from google.colab import drive
# drive.mount('/content/drive')
# path = "/content/drive/MyDrive/Research/cods_comad_plots/sdc_task/mnist/"
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
from matplotlib import pyplot as plt
import copy
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5), (0.5))])
trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform)
classes = ('zero','one','two','three','four','five','six','seven','eight','nine')
foreground_classes = {'zero','one'}
fg_used = '01'
fg1, fg2 = 0,1
all_classes = {'zero','one','two','three','four','five','six','seven','eight','nine'}
background_classes = all_classes - foreground_classes
background_classes
trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle = False)
testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle = False)
dataiter = iter(trainloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=10
for i in range(6000):
images, labels = dataiter.next()
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
background_data.append(img)
background_label.append(labels[j])
else:
img = images[j].tolist()
foreground_data.append(img)
foreground_label.append(labels[j])
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img#.numpy()
plt.imshow(np.reshape(npimg, (28,28)))
plt.show()
foreground_data.shape, foreground_label.shape, background_data.shape, background_label.shape
val, idx = torch.max(background_data, dim=0, keepdims= True,)
# torch.abs(val)
mean_bg = torch.mean(background_data, dim=0, keepdims= True)
std_bg, _ = torch.max(background_data, dim=0, keepdims= True)  # note: despite the name, this holds the per-pixel max of the background images, not a standard deviation
mean_bg.shape, std_bg.shape
foreground_data = (foreground_data - mean_bg) / std_bg
background_data = (background_data - mean_bg) / torch.abs(std_bg)
foreground_data.shape, foreground_label.shape, background_data.shape, background_label.shape
torch.sum(torch.isnan(foreground_data)), torch.sum(torch.isnan(background_data))
imshow(foreground_data[0])
imshow(background_data[0])
```
## generating CIN train and test data
```
m = 5
desired_num = 1000
np.random.seed(0)
bg_idx = np.random.randint(0,47335,m-1)
fg_idx = np.random.randint(0,12665)
bg_idx, fg_idx
for i in background_data[bg_idx]:
imshow(i)
imshow(torch.sum(background_data[bg_idx], axis = 0))
imshow(foreground_data[fg_idx])
tr_data = ( torch.sum(background_data[bg_idx], axis = 0) + foreground_data[fg_idx] )/m
tr_data.shape
imshow(tr_data)
foreground_label[fg_idx]
train_images =[] # list of training images: each is (sum of m-1 background images + 1 foreground image)/m
train_label=[] # label of each training image = foreground class present in it
for i in range(desired_num):
np.random.seed(i)
bg_idx = np.random.randint(0,47335,m-1)
fg_idx = np.random.randint(0,12665)
tr_data = ( torch.sum(background_data[bg_idx], axis = 0) + foreground_data[fg_idx] ) / m
label = (foreground_label[fg_idx].item())
train_images.append(tr_data)
train_label.append(label)
train_images = torch.stack(train_images)
train_images.shape, len(train_label)
imshow(train_images[0])
test_images =[] # list of test images: each contains only the foreground image scaled by 1/m
test_label=[] # label of each test image = foreground class of that image
for i in range(10000):
np.random.seed(i)
fg_idx = np.random.randint(0,12665)
tr_data = ( foreground_data[fg_idx] ) / m
label = (foreground_label[fg_idx].item())
test_images.append(tr_data)
test_label.append(label)
test_images = torch.stack(test_images)
test_images.shape, len(test_label)
imshow(test_images[0])
torch.sum(torch.isnan(train_images)), torch.sum(torch.isnan(test_images))
np.unique(train_label), np.unique(test_label)
```
## creating dataloader
```
class CIN_Dataset(Dataset):
"""CIN_Dataset dataset."""
def __init__(self, list_of_images, labels):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.image = list_of_images
self.label = labels
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.image[idx] , self.label[idx]
batch = 250
train_data = CIN_Dataset(train_images, train_label)
train_loader = DataLoader( train_data, batch_size= batch , shuffle=True)
test_data = CIN_Dataset( test_images , test_label)
test_loader = DataLoader( test_data, batch_size= batch , shuffle=False)
train_loader.dataset.image.shape, test_loader.dataset.image.shape
```
## model
```
class Classification(nn.Module):
def __init__(self):
super(Classification, self).__init__()
self.fc1 = nn.Linear(28*28, 2)
torch.nn.init.xavier_normal_(self.fc1.weight)
torch.nn.init.zeros_(self.fc1.bias)
def forward(self, x):
x = x.view(-1, 28*28)
x = self.fc1(x)
return x
```
## training
```
torch.manual_seed(12)
classify = Classification().double()
classify = classify.to("cuda")
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer_classify = optim.Adam(classify.parameters(), lr=0.001 ) #, momentum=0.9)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in train_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %f %%' % ( desired_num , 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in test_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d test images: %f %%' % ( 10000 , 100 * correct / total))
print("total correct", correct)
print("total test set images", total)
nos_epochs = 200
tr_loss = []
for epoch in range(nos_epochs): # loop over the dataset multiple times
epoch_loss = []
cnt=0
iteration = desired_num // batch
running_loss = 0
#training data set
for i, data in enumerate(train_loader):
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
inputs = inputs.double()
# zero the parameter gradients
optimizer_classify.zero_grad()
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
# print(outputs)
# print(outputs.shape,labels.shape , torch.argmax(outputs, dim=1))
loss = criterion(outputs, labels)
loss.backward()
optimizer_classify.step()
running_loss += loss.item()
mini = 1
if cnt % mini == mini-1: # print every 'mini' mini-batches
# print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini))
epoch_loss.append(running_loss/mini)
running_loss = 0.0
cnt=cnt+1
tr_loss.append(np.mean(epoch_loss))
if(np.mean(epoch_loss) <= 0.001):
break;
else:
print('[Epoch : %d] loss: %.3f' %(epoch + 1, np.mean(epoch_loss) ))
print('Finished Training')
plt.plot(tr_loss)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in train_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %f %%' % ( desired_num , 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in test_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d test images: %f %%' % ( 10000 , 100 * correct / total))
print("total correct", correct)
print("total test set images", total)
```
|
github_jupyter
|
# Let's post a message to Slack
In this session, we're going to use Python to post a message to Slack. I set up [a team for us](https://ire-cfj-2017.slack.com/) so we can mess around with the [Slack API](https://api.slack.com/).
We're going to use a simple [_incoming webhook_](https://api.slack.com/incoming-webhooks) to accomplish this.
### Hello API
API stands for "Application Programming Interface." An API is a way to interact programmatically with a software application.
If you want to post a message to Slack, you could open a browser and navigate to your URL and sign in with your username and password (or open the app), click on the channel you want, and start typing.
OR ... you could post your Slack message with a Python script.
### Hello environmental variables
The code for this boot camp [is on the public internet](https://github.com/ireapps/cfj-2017). We don't want anyone on the internet to be able to post messages to our Slack channels, so we're going to use an [environmental variable](https://en.wikipedia.org/wiki/Environment_variable) to store our webhook.
The environmental variable we're going to use -- `IRE_CFJ_2017_SLACK_HOOK` -- should already be stored on your computer.
Python has a standard library module for working with the operating system called [`os`](https://docs.python.org/3/library/os.html). The `os` module has a data attribute called `environ`, a dictionary of environmental variables stored on your computer.
(Here is a new thing: Instead of using brackets to access items in a dictionary, you can use the `get()` method. The advantage to doing it this way: If the item you're trying to get doesn't exist in your dictionary, it'll return `None` instead of throwing an exception, which is sometimes a desired behavior.)
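To make the difference concrete, here is a tiny illustration (not part of the original notebook) with a throwaway dictionary:
```
# brackets raise an exception for missing keys; get() does not
prefs = {'editor': 'vim'}

print(prefs.get('editor'))          # vim
print(prefs.get('shell'))           # None -- missing key, no exception
print(prefs.get('shell', 'bash'))   # bash -- optional default value
# print(prefs['shell'])             # this line would raise a KeyError
```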
```
from os import environ
slack_hook = environ.get('IRE_CFJ_2017_SLACK_HOOK', None)
```
### Hello JSON
So far we've been working with tabular data -- CSVs with columns and rows. Most modern web APIs prefer to shake hands with a data structure called [JSON](http://www.json.org/) (**J**ava**S**cript **O**bject **N**otation), which is more like a matryoshka doll.
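For instance (an illustrative structure, not taken from the original notebook), a single JSON document can nest objects and lists inside each other:
```
# a nested, matryoshka-style structure: dictionaries inside a list inside a dictionary
story = {
    'reporter': 'Frank',
    'sources': [
        {'name': 'Alice', 'on_record': True},
        {'name': 'Bob', 'on_record': False}
    ]
}
```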

Python has a standard library module for working with JSON data called [`json`](https://docs.python.org/3/library/json.html). Let's import it.
```
import json
```
### Using `requests` to post data
We're also going to use the `requests` library again, except this time, instead of using the `get()` method to get something off the web, we're going to use the `post()` method to send data _to_ the web.
```
import requests
```
### Formatting the data correctly
The JSON data we're going to send to the Slack webhook will start its life as a Python dictionary. Then we'll use the `json` module's `dumps()` method to turn it into a string of JSON.
```
# build a dictionary of payload data
payload = {
'channel': '#general',
'username': 'IRE Python Bot',
'icon_emoji': ':ire:',
'text': 'helllllllo!'
}
# turn it into a string of JSON
payload_as_json = json.dumps(payload)
```
### Send it off to Slack
```
# check to see if you have the webhook URL
if slack_hook:
# send it to slack!
requests.post(slack_hook, data=payload_as_json)
else:
# if you don't have the webhook env var, print a message to the terminal
print("You don't have the IRE_CFJ_2017_SLACK_HOOK"
" environmental variable")
```
### _Exercise_
Read through the [Slack documentation](https://api.slack.com/incoming-webhooks) and post a message to a Slack channel ...
- with a different emoji
- with an image URL instead of an emoji
- with a link in it
- with an attachment
- with other kinds of fancy formatting
### _Extra credit: Slack alert_
Scenario: You cover the Fort Calhoun Nuclear Power Station outside of Omaha, Nebraska. Every day, you'd like to check [an NRC website](https://www.nrc.gov/reading-rm/doc-collections/event-status/event/) to see if your plant had any "Event Notifications" in the agency's most recent report. You decide to write a Slack script to do this for you. (Ignore, for now, the problem of setting up the script to run daily.)
Breaking down your problem, you need to:
- Fetch [the page with the latest reports](https://www.nrc.gov/reading-rm/doc-collections/event-status/event/en.html) using `requests`
- Look through the text and see if your reactor's name appears in the page text (you could just use an `if` statement with `in`)
- If it's there, use `requests` to send a message to Slack
Notice that we don't need to parse the page with BeautifulSoup -- we're basically just checking for the presence of a string inside a bigger string.
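Here is one possible sketch of that alert (this is not an official solution; it reuses the `slack_hook`, `json`, and `requests` setup from earlier in this notebook and assumes a plain substring check on the page text is enough):
```
# hedged sketch: check the latest NRC event report for the plant name and post to Slack
import json
import requests

NRC_URL = 'https://www.nrc.gov/reading-rm/doc-collections/event-status/event/en.html'
PLANT_NAME = 'Fort Calhoun'

r = requests.get(NRC_URL)

if PLANT_NAME in r.text and slack_hook:
    alert = {
        'channel': '#general',
        'username': 'IRE Python Bot',
        'text': '{} appears in the latest NRC event report: {}'.format(PLANT_NAME, NRC_URL)
    }
    requests.post(slack_hook, data=json.dumps(alert))
```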
### _Extra, extra credit_
Let's extend the script you just wrote with a function that would allow you to check for the presence of _any string_ on an NRC page for _any date_ of reports -- most days have their own page, though I think weekends are grouped together.
Let's break it down. Inside our function, we need to:
- Figure out the URL pattern for each day's report page. [Here's the page for Sept. 29, 2017](https://www.nrc.gov/reading-rm/doc-collections/event-status/event/2017/20170929en.html)
- Decide how you want to accept the two arguments in your function -- one for the date and one for the string to search for (me, I'd use a date object for the default date argument to keep things explicit, but you could also pass a string)
- Fill in the URL using `format()` and the date being passed to the function
- Fetch the page using `requests`
- Not every day has a page, so you'll need to check to see if the request was successful (hint: use the requests [`status_code` attribute](http://docs.python-requests.org/en/master/user/quickstart/#response-status-codes) -- 200 means success)
- If the request was successful, check for the presence of the string in the page text
- If the text we're looking for is there, send a message to Slack
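One possible sketch of such a function (again not an official solution; it assumes the date-stamped pages follow the `.../event/YYYY/YYYYMMDDen.html` pattern shown above and reuses `slack_hook` from earlier in the notebook):
```
# hedged sketch: check any NRC daily report page for any search string
import datetime
import json
import requests

def check_nrc_report(search_text, report_date=datetime.date(2017, 9, 29)):
    # build the date-stamped URL, e.g. .../event/2017/20170929en.html
    url = ('https://www.nrc.gov/reading-rm/doc-collections/'
           'event-status/event/{}/{}en.html').format(report_date.year,
                                                     report_date.strftime('%Y%m%d'))
    r = requests.get(url)
    # not every day has a page -- only continue on a successful request
    if r.status_code == 200 and search_text in r.text:
        payload = {
            'channel': '#general',
            'username': 'IRE Python Bot',
            'text': '"{}" found in the NRC report for {}: {}'.format(search_text,
                                                                     report_date, url)
        }
        if slack_hook:
            requests.post(slack_hook, data=json.dumps(payload))
```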
|
github_jupyter
|
```
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns
sns.set(color_codes=True)
%matplotlib inline
# the covariance of the first term always comes out as 1 (e.g. -2 with -2)
x = np.array([-2, -1, 0, 3.5, 4,]);
y = np.array([4.1, 0.9, 2, 12.3, 15.8])
N = len(x)
m = np.zeros((N))
print(x)
print(y)
print(m)
sigma = np.cov(x, y)
print(sigma)
def cov(x1, x2):
return np.exp(-1.0/2 * np.power(x1 - x2, 2))
K = np.zeros((N, N))
print(K)
for i in range(N):
for j in range(i, N):
K[i][j] = cov(x[i], x[j])
K[j][i] = K[i][j]
print(K)
plt.scatter(x, y)
cov(4, 4) # expected to be 1
def pred(xp):
K2 = np.zeros((N+1, N+1))
N2 = N + 1
sigma22 = cov(xp, xp)
K2[N2-1][N2-1] = sigma22
for i in range(N):
for j in range(i, N):
K2[i][j] = cov(x[i], x[j])
K2[j][i] = K2[i][j]
for i in range(N):
K2[N2-1][i]= cov(xp, x[i])
K2[i][N2-1]= cov(xp, x[i])
sigma12 = np.array(K2[:N2-1,N2-1])
sigma11 = np.mat(K)
sigma21 = K[N-1:]
print(sigma12)
mp = (sigma12.T * sigma11.I) * np.mat(y).T
# sigmap = sigma22 - np.mat(sigma12).T * sigma11.I * np.mat(sigma12)
# return mp, sigmap
return mp, sigma22
pred(4)
plt.scatter(x, y)
def p():
x = np.linspace(-5, 20, 200)
y = np.zeros(200)
yu = np.zeros(200)
yb = np.zeros(200)
for i in range(len(x)):
yp, sigmap = pred(x[i])
yp = np.asarray(yp)[0][0]
yu[i] = yp - np.sqrt(sigmap)
y[i] = yp
# plt.plot(x, yu)
plt.plot(x, y)
p()
np.asarray(np.mat([[ 9.11765304e+27]]))[0][0]
def cov(x1, x2):
return np.exp(-1.0/2 * np.abs(x1 - x2))
def pred(xp):
K2 = np.zeros((N+1, N+1))
N2 = N + 1
sigma22 = cov(xp, xp)
K2[N2-1][N2-1] = sigma22
for i in range(N):
for j in range(i, N):
K2[i][j] = cov(x[i], x[j])
K2[j][i] = K2[i][j]
for i in range(N):
K2[N2-1][i]= cov(xp, x[i])
K2[i][N2-1]= cov(xp, x[i])
sigma12 = np.array(K2[:N2-1,N2-1])
sigma11 = np.mat(K)
sigma21 = K[N-1:]
# print(sigma12)
# print(sigma11)
mp = (sigma12.T * sigma11.I) * np.mat(y).T
# sigmap = sigma11 - np.mat(sigma12) * sigma21.T
return mp, sigma22
plt.scatter(x, y)
def p():
x = np.linspace(-10, 10, 200)
y = np.zeros(200)
yu = np.zeros(200)
yb = np.zeros(200)
for i in range(len(x)):
yp, sigmap = pred(x[i])
yp = np.asarray(yp)[0][0]
yu[i] = yp - np.sqrt(sigmap) * 3
y[i] = yp
# plt.plot(x, yu)
plt.plot(x, y)
p()
K
K[N-1:]
```
|
github_jupyter
|
```
import numpy as np
import matplotlib.pyplot as plt
from hfnet.datasets.hpatches import Hpatches
from hfnet.evaluation.loaders import sift_loader, export_loader, fast_loader, harris_loader
from hfnet.evaluation.local_descriptors import evaluate
from hfnet.utils import tools
%load_ext autoreload
%autoreload 2
%matplotlib inline
data_config = {'make_pairs': True, 'shuffle': True}
dataset = Hpatches(**data_config)
all_configs = {
'sift': {
'predictor': sift_loader,
'root': True,
},
'superpoint': {
'experiment': 'super_point_pytorch/hpatches',
'predictor': export_loader,
'do_nms': True,
'nms_thresh': 4,
'remove_borders': 4,
},
'superpoint_harris-kpts': {
'experiment': 'super_point_pytorch/hpatches',
'predictor': export_loader,
'keypoint_predictor': harris_loader,
'keypoint_config': {
'do_nms': True,
'nms_thresh': 4,
},
},
'netvlad_conv3-3': {
'experiment': 'netvlad/hpatches',
'predictor': export_loader,
'keypoint_predictor': export_loader,
'keypoint_config': {
'experiment': 'super_point_pytorch/hpatches',
'do_nms': True,
'nms_thresh': 4,
'remove_borders': 4,
},
'binarize': False,
},
'lfnet': {
'experiment': 'lfnet/hpatches_kpts-500',
'predictor': export_loader,
},
}
eval_config = {
'num_features': 300,
'do_ratio_test': True,
'correct_match_thresh': 3,
'correct_H_thresh': 3,
}
methods = ['sift', 'lfnet', 'superpoint', 'superpoint_harris-kpts', 'netvlad_conv3-3']
configs = {m: all_configs[m] for m in methods}
pose_recalls, nn_pr = {}, {}
for method, config in configs.items():
config = tools.dict_update(config, eval_config)
data_iter = dataset.get_test_set()
metrics, nn_precision, nn_recall, distances, pose_recall = evaluate(data_iter, config, is_2d=True)
print('> {}'.format(method))
for k, v in metrics.items():
print('{:<25} {:.3f}'.format(k, v))
print(config)
pose_recalls[method] = pose_recall
nn_pr[method] = (nn_precision, nn_recall, distances)
# NMS=4, N=300
# NMS=8, N=500
error_names = list(list(pose_recalls.values())[0].keys())
expers = list(pose_recalls.keys())
lim = {'homography': 7}
thresh = {'homography': [1, 3, 5]}
f, axes = plt.subplots(1, len(error_names), figsize=(8, 4), dpi=150)
if len(error_names) == 1:
axes = [axes]
for error_name, ax in zip(error_names, axes):
for exper in expers:
steps, recall = pose_recalls[exper][error_name]
ax.set_xlim([0, lim[error_name]])
ax.plot(steps, recall*100, label=exper);
s = f'{error_name:^15} {exper:^25}'
s += ''.join([f' {t:^5}: {recall[np.where(steps>t)[0].min()]:.2f} ' for t in thresh[error_name]])
print(s)
ax.grid(color=[0.85]*3);
ax.set_xlabel(error_name+' error');
ax.set_ylabel('Correctly localized queries (%)');
ax.legend(loc=4);
plt.tight_layout()
plt.gcf().subplots_adjust(left=0);
# NMS=4, N=300
# NMS=8, N=500
```
|
github_jupyter
|
# COCO Reader
Reader operator that reads a COCO dataset (or subset of COCO), which consists of an annotation file and the images directory.
`DALI_EXTRA_PATH` environment variable should point to the place where data from [DALI extra repository](https://github.com/NVIDIA/DALI_extra) is downloaded. Please make sure that the proper release tag is checked out.
```
from __future__ import print_function
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops
import nvidia.dali.types as types
import numpy as np
from time import time
import os.path
test_data_root = os.environ['DALI_EXTRA_PATH']
file_root = os.path.join(test_data_root, 'db', 'coco', 'images')
annotations_file = os.path.join(test_data_root, 'db', 'coco', 'instances.json')
num_gpus = 1
batch_size = 16
class COCOPipeline(Pipeline):
def __init__(self, batch_size, num_threads, device_id):
super(COCOPipeline, self).__init__(batch_size, num_threads, device_id, seed = 15)
self.input = ops.COCOReader(file_root = file_root, annotations_file = annotations_file,
shard_id = device_id, num_shards = num_gpus, ratio=True)
self.decode = ops.ImageDecoder(device = "mixed", output_type = types.RGB)
def define_graph(self):
inputs, bboxes, labels = self.input()
images = self.decode(inputs)
return (images, bboxes, labels)
start = time()
pipes = [COCOPipeline(batch_size=batch_size, num_threads=2, device_id = device_id) for device_id in range(num_gpus)]
for pipe in pipes:
pipe.build()
total_time = time() - start
print("Computation graph built and dataset loaded in %f seconds." % total_time)
pipe_out = [pipe.run() for pipe in pipes]
images_cpu = pipe_out[0][0].as_cpu()
bboxes_cpu = pipe_out[0][1]
labels_cpu = pipe_out[0][2]
```
Bounding boxes returned by the operator are lists of floats composed of **\[x, y, width, height]** (`ltrb` is set to `False` by default).
```
bboxes = bboxes_cpu.at(4)
bboxes
```
Let's see the ground truth bounding boxes drawn on the image.
```
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import random
img_index = 4
img = images_cpu.at(img_index)
H = img.shape[0]
W = img.shape[1]
fig,ax = plt.subplots(1)
ax.imshow(img)
bboxes = bboxes_cpu.at(img_index)
labels = labels_cpu.at(img_index)
categories_set = set()
for label in labels:
categories_set.add(label[0])
category_id_to_color = dict([ (cat_id , [random.uniform(0, 1) ,random.uniform(0, 1), random.uniform(0, 1)]) for cat_id in categories_set])
for bbox, label in zip(bboxes, labels):
rect = patches.Rectangle((bbox[0]*W,bbox[1]*H),bbox[2]*W,bbox[3]*H,linewidth=1,edgecolor=category_id_to_color[label[0]],facecolor='none')
ax.add_patch(rect)
plt.show()
```
|
github_jupyter
|
```
# The magic commands below allow reflecting the changes in an imported module without restarting the kernel.
%load_ext autoreload
%autoreload 2
import sys
print(f'Python version: {sys.version.splitlines()[0]}')
print(f'Environment: {sys.exec_prefix}')
```
Shell commands are prefixed with `!`
```
!pwd
!echo hello jupyter
hello = !echo hello jupyter
type(hello)
```
More info at IPython [docs](https://ipython.readthedocs.io/en/stable/api/generated/IPython.utils.text.html#IPython.utils.text.SList)
Alternatively, try inline help or `?`
```
from IPython.utils.text import SList
help(SList)
SList??
!catt
readme = !cat /home/keceli/README
print(readme)
files = !ls ./ezHPC/
files
files = !ls -l ./ezHPC/
files
files.fields(0)
files.fields(-1)
pyvar = 'demo'
!echo $pyvar
!echo $HOSTNAME
!echo $(whoami)
!echo ~
```
* So far we have used basic linux commands.
* Now, we will demonstrate job submission on JupyterHub.
* Job submission on Theta is handled with [Cobalt](https://trac.mcs.anl.gov/projects/cobalt/).
* We need to write an executable script to submit a job.
* Check ALCF Theta [documentation](https://alcf.anl.gov/support-center/theta/submit-job-theta) for more details.
* "Best Practices for Queueing and Running Jobs on Theta" [webinar](https://alcf.anl.gov/events/best-practices-queueing-and-running-jobs-theta)
```
jobscript="""#!/bin/bash -x
aprun -n 1 -N 1 echo hello jupyter"""
!echo -e "$jobscript" > testjob.sh
!chmod u+x testjob.sh
!cat testjob.sh
!echo -e "#!/bin/bash -x \n aprun -n 1 -N 1 echo hello jupyter" > testjob.sh
!chmod u+x testjob.sh
!cat testjob.sh
#!man qsub
!qsub -n 1 -t 10 -A datascience -q debug-flat-quad testjob.sh
# Earlier job id: 475482
!qstat -u `whoami`
jobid = 475482
!ls "$jobid".*
!head "$jobid".*
# %load https://raw.githubusercontent.com/keceli/ezHPC/main/ez_cobalt.py
#%%writefile ezHPC/ez_cobalt.py #Uncomment to write the file
#%load https://raw.githubusercontent.com/keceli/ezHPC/main/ez_theta.py #Uncomment to load the file
def qstat(user='', jobid='',
header='JobID:User:Score:WallTime:RunTime:Nodes:Queue:Est_Start_Time',
extra='',
verbose=False):
"""
Query about jobs submitted to queue manager with `qstat`.
Parameters:
------------
user: str, username
jobid: str, cobalt job id, if more than one, seperate with a space
header: str, customize info using headers
other header options: QueuedTime:TimeRemaining:State:Location:Mode:Procs:Preemptable:Index
"""
import os
import getpass
cmd = ''
if jobid:
cmd = f'--header={header} {jobid}'
else:
if user == '':
user = getpass.getuser() #user = '$(whoami)'
cmd = f'-u {user} --header={header}'
elif user.lower() == 'all':
cmd = f'--header={header}'
else:
cmd = f'-u {user} --header={header}'
if verbose:
cmd = f'qstat -f -l {cmd}'
else:
cmd = f'qstat {cmd}'
if extra:
cmd += ' ' + extra
print(f'Running...\n {cmd}\n')
stream = os.popen(cmd).read()
if stream:
print(stream)
else:
print('No active jobs')
return
def i_qstat():
"""
Query about jobs submitted to queue manager with `qstat`.
"""
from ipywidgets import interact_manual, widgets
import getpass
im = interact_manual(qstat, user=getpass.getuser())
app_button = im.widget.children[5]
app_button.description = 'qstat'
return
def qdel(jobid=''):
"""
Delete job(s) with the given id(s).
"""
cmd = f'qdel {jobid}'
process = Popen(cmd.split(), stdout=PIPE, stderr=PIPE)
out, err = process.communicate()
print(f'stdout: {out}')
print(f'stderr: {err}')
return
def i_qdel():
"""
Delete job(s) with the given id(s).
"""
from ipywidgets import interact_manual, widgets
im = interact_manual(qdel)
app_button = im.widget.children[1]
app_button.description = 'qdel'
return
def i_show_logs(job_prefix):
"""
"""
from ipywidgets import widgets, Layout
from IPython.display import display, clear_output
from os.path import isfile
outfile = f'{job_prefix}.output'
errfile = f'{job_prefix}.error'
logfile = f'{job_prefix}.cobaltlog'
if (isfile(outfile)):
with open(outfile, 'r') as f:
out = f.read()
with open(errfile, 'r') as f:
err = f.read()
with open(logfile, 'r') as f:
log = f.read()
children = [widgets.Textarea(value=val, layout=Layout(flex= '1 1 auto', width='100%',height='400px'))
for name,val in [(outfile,out), (errfile,err), (logfile,log)]]
tab = widgets.Tab(children=children,layout=Layout(flex= '1 1 auto', width='100%',height='auto'))
#ow = widgets.Textarea(value=out,description=outfile)
#ew = widgets.Textarea(value=err,description=errfile)
#lw = widgets.Textarea(value=log,description=logfile)
tab.set_title(0, outfile)
tab.set_title(1, errfile)
tab.set_title(2, logfile)
display(tab)
return
def parse_cobaltlog(prefix='', verbose=True):
"""
Return a dictionary with the content parsed from <prefix>.cobaltlog file
"""
from os.path import isfile
from dateutil.parser import parse
from pprint import pprint
logfile = f'{prefix}.cobaltlog'
d = {}
if isfile(logfile):
with open(logfile, 'r') as f:
lines = f.readlines()
for line in lines:
if line.startswith('Jobid'):
jobid = line.split()[-1].strip()
d['jobid'] = jobid
elif line.startswith('qsub'):
cmd = line.strip()
d['qsub_cmd'] = cmd
elif 'submitted with cwd set to' in line:
d['work_dir'] = line.split()[-1].strip()
d['submit_time'] = parse(line.split('submitted')[0].strip())
elif 'INFO: Starting Resource_Prologue' in line:
d['init_time'] = parse(line.split('INFO:')[0].strip())
d['queue_time'] = d['init_time'] - d['submit_time'].replace(tzinfo=None)
d['queue_seconds'] = d['queue_time'].seconds
elif 'Command:' in line:
d['script'] = line.split(':')[-1].strip()
d['start_time'] = parse(line.split('Command:')[0].strip())
d['boot_time'] = d['start_time'].replace(tzinfo=None) - d['init_time']
d['boot_seconds'] = d['boot_time'].seconds
elif 'COBALT_PARTCORES' in line:
d['partcores'] = line.split('=')[-1].strip()
elif 'SHELL=' in line:
d['shell'] = line.split('=')[-1].strip()
elif 'COBALT_PROJECT' in line:
d['project'] = line.split('=')[-1].strip()
elif 'COBALT_PARTNAME' in line:
d['partname'] = line.split('=')[-1].strip()
elif 'LOGNAME=' in line:
d['logname'] = line.split('=')[-1].strip()
elif 'USER=' in line:
d['user'] = line.split('=')[-1].strip()
elif 'COBALT_STARTTIME' in line:
d['cobalt_starttime'] = line.split('=')[-1].strip()
elif 'COBALT_ENDTIME' in line:
d['cobalt_endtime'] = line.split('=')[-1].strip()
elif 'COBALT_PARTSIZE' in line:
d['partsize'] = line.split('=')[-1].strip()
elif 'HOME=' in line:
d['home'] = line.split('=')[-1].strip()
elif 'COBALT_JOBSIZE' in line:
d['jobsize'] = line.split('=')[-1].strip()
elif 'COBALT_QUEUE' in line:
d['queue'] = line.split('=')[-1].strip()
elif 'Info: stdin received from' in line:
d['stdin'] = line.split()[-1].strip()
elif 'Info: stdout sent to' in line:
d['stdout'] = line.split()[-1].strip()
elif 'Info: stderr sent to' in line:
d['stderr'] = line.split()[-1].strip()
elif 'with an exit code' in line:
d['exit_code'] = line.split(';')[-1].split()[-1]
d['end_time'] = parse(line.split('Info:')[0].strip())
d['job_time'] = d['end_time'] - d['start_time']
d['wall_seconds'] = d['job_time'].seconds
else:
print(f'{logfile} is not found.')
if verbose:
pprint(d)
return d
def print_cobalt_times(prefix=''):
"""
Print timings from Cobalt logfile
"""
d = parse_cobaltlog(prefix=prefix, verbose=False)
for key, val in d.items():
if '_time' in key or 'seconds' in key:
print(f'{key}: {val}')
def get_job_script(nodes=1, ranks_per_node=1, affinity='-d 1 -j 1 --cc depth', command='',verbose=True):
"""
Returns Cobalt job script with the given parameters
TODO: add rules for affinity
"""
script = '#!/bin/bash -x \n'
ranks = ranks_per_node * nodes
script += f'aprun -n {ranks} -N {ranks_per_node} {affinity} {command}'
if verbose: print(script)
return script
def i_get_job_script_manual():
from ipywidgets import widgets, Layout, interact_manual
from IPython.display import display, clear_output
from os.path import isfile
inodes = widgets.BoundedIntText(value=1, min=1, max=4394, step=1, description='nodes', disabled=False)
iranks_per_node = widgets.BoundedIntText(value=1, min=1, max=64, step=1, description='rank_per_nodes', disabled=False)
im = interact_manual(get_job_script, nodes=inodes, ranks_per_node=iranks_per_node)
get_job_script_button = im.widget.children[4]
get_job_script_button.description = 'get_job_script'
return
def i_get_job_script():
from ipywidgets import widgets, Layout, interact_manual
from IPython.display import display, clear_output
from os.path import isfile
inodes = widgets.BoundedIntText(value=1, min=1, max=4394, step=1, description='nodes', disabled=False)
iranks_per_node = widgets.BoundedIntText(value=1, min=1, max=64, step=1, description='ranks/node', disabled=False)
iaffinity = widgets.Text(value='-d 1 -j 1 --cc depth',description='affinity')
icommand = widgets.Text(value='',description='executable and args')
out = widgets.interactive_output(get_job_script, {'nodes': inodes,
'ranks_per_node': iranks_per_node,
'affinity': iaffinity,
'command': icommand})
box = widgets.VBox([widgets.VBox([inodes, iranks_per_node, iaffinity, icommand]), out])
display(box)
return
def validate_theta_job(queue='', nodes=1, wall_minutes=10):
"""
Return True if given <queue> <nodes> <wall_minutes> are valid for a job on Theta,
Return False and print the reason otherwise.
See https://alcf.anl.gov/support-center/theta/job-scheduling-policy-theta
Parameters
----------
queue: str, queue name, can be: 'default', 'debug-cache-quad', 'debug-flat-quad', 'backfill'
nodes: int, Number of nodes, can be an integer from 1 to 4096 depending on the queue.
wall_minutes: int, max wall time in minutes, depends on the queue and the number of nodes, max 1440 minutes
"""
isvalid = True
if queue.startswith('debug'):
if wall_minutes > 60:
print(f'Max wall time for {queue} queue is 60 minutes')
isvalid = False
if nodes > 8:
print(f'Max number of nodes for {queue} queue is 8')
isvalid = False
else:
if nodes < 128:
print(f'Min number of nodes for {queue} queue is 128')
isvalid = False
else:
if wall_minutes < 30:
print(f'Min wall time for {queue} queue is 30 minutes')
isvalid = False
if nodes < 256 and wall_minutes > 180:
print(f'Max wall time for {queue} queue is 180 minutes')
isvalid = False
elif nodes < 384 and wall_minutes > 360:
print(f'Max wall time for {queue} queue is 360 minutes')
isvalid = False
elif nodes < 640 and wall_minutes > 540:
print(f'Max wall time for {queue} queue is 540 minutes')
isvalid = False
elif nodes < 902 and wall_minutes > 720:
print(f'Max wall time for {queue} queue is 720 minutes')
isvalid = False
elif wall_minutes > 1440:
print('Max wall time on Theta is 1440 minutes')
isvalid = False
else:
isvalid = True
return isvalid
def qsub(project='',
script='',
script_file='',
queue='debug-cache-quad',
nodes=1,
wall_minutes=30,
attrs='ssds=required:ssd_size=128',
workdir='',
jobname='',
stdin='',
stdout=''):
"""
Submits a job to the queue with the given parameters.
Returns Cobalt Job Id if submitted succesfully.
Returns 0 otherwise.
Parameters
----------
project: str, name of the project to be charged
queue: str, queue name, can be: 'default', 'debug-cache-quad', 'debug-flat-quad', 'backfill'
nodes: int, Number of nodes, can be an integer from 1 to 4096 depending on the queue.
wall_minutes: int, max wall time in minutes, depends on the queue and the number of nodes, max 1440 minutes
"""
import os
import stat
import time
from subprocess import Popen, PIPE
from os.path import isfile
valid = validate_theta_job(queue=queue, nodes=nodes, wall_minutes=wall_minutes)
if not valid:
print('Job is not valid, change queue, nodes, or wall_minutes.')
return 0
with open(script_file, 'w') as f:
f.write(script)
time.sleep(1)
exists = isfile(script_file)
if exists:
print(f'Created {script_file} on {os.path.abspath(script_file)}.')
st = os.stat(script_file)
os.chmod(script_file, st.st_mode | stat.S_IEXEC)
time.sleep(1)
cmd = f'qsub -A {project} -q {queue} -n {nodes} -t {wall_minutes} --attrs {attrs} '
if workdir:
cmd += f' --cwd {workdir}'
if jobname:
cmd += f' --jobname {jobname}'
if stdin:
cmd += f' -i {stdin}'
if stdout:
cmd += f' -o {stdout}'
cmd += f' {script_file}'
print(f'Submitting: \n {cmd} ...\n')
process = Popen(cmd.split(), stdout=PIPE, stderr=PIPE)
out, err = process.communicate()
print(f'job id: {out.decode("utf-8")}')
print(f'stderr: {err.decode("utf-8")}')
return out.decode("utf-8")
def i_qsub():
"""
Submits a job to the queue with the given parameters.
"""
from ipywidgets import widgets, Layout, interact_manual
from IPython.display import display, clear_output
from os.path import isfile
inodes = widgets.BoundedIntText(value=1, min=1, max=4394, step=1, description='nodes', disabled=False)
iranks_per_node = widgets.BoundedIntText(value=1, min=1, max=64, step=1, description='rank/nodes', disabled=False)
iqueue = widgets.Dropdown(options=['debug-flat-quad','debug-cache-quad','default', 'backfill'],
description='queue',
value='debug-cache-quad')
iwall_minutes = widgets.BoundedIntText(value=10, min=10, max=1440, step=10, description='wall minutes', disabled=False)
iscript = widgets.Textarea(value='#!/bin/bash -x \n',
description='job script',
layout=Layout(flex= '0 0 auto', width='auto',height='200px'))
iscript_file= widgets.Text(value='',description='job script file name')
iproject= widgets.Text(value='',description='project')
isave = widgets.Checkbox(value=False,description='save', indent=True)
isubmit = widgets.Button(
value=False,
description='submit',
disabled=False,
button_style='success',
tooltip='submit job',
icon='')
output = widgets.Output()
display(iproject, inodes, iqueue, iwall_minutes, iscript_file, iscript, isubmit, output)
jobid = ''
def submit_clicked(b):
with output:
clear_output()
jobid = qsub(project=iproject.value,
script=iscript.value,
script_file=iscript_file.value,
queue=iqueue.value,
nodes=inodes.value,
wall_minutes=iwall_minutes.value)
isubmit.on_click(submit_clicked)
return
qstat??
i_qstat()
i_get_job_script()
i_qsub()
jobid=475487
i_show_logs(job_prefix=jobid)
jobinfo = parse_cobaltlog(prefix=jobid)
print_cobalt_times(prefix=jobid)
```
* There is about 1 minute of overhead for a single-node job, and more as the number of nodes increases
|
github_jupyter
|
```
#week-4,l-10
#DICTIONARY:-
# A Simple dictionary
alien_0={'color': 'green','points': 5}
print(alien_0['color'])
print(alien_0['points'])
#accessing value in a dictionary:
alien_0={'color':'green','points': 5}
new_points=alien_0['points']
print(f"you just eand {new_points} points")
#adding new key-value pairs:-
alien_0={'color':'green','points': 5}
print(alien_0)
alien_0['x_position']=0
alien_0['y_position']=25
print(alien_0)
# empty dictionary:-
alien_0={}
alien_0['color']='green'
alien_0['points']=5
print(alien_0)
# Modify a value in a dictionary:-
alien_0={'color': 'green','points': 5}
print(f"the alien is {alien_0['color']}")
alien_0['color']='yellow'
print(f"the alien is new {alien_0['color']}")
# Example:-
alien_0={'x_position': 0,'y_position': 25,'speed': 'medium'}
print(f"original position {alien_0['x_position']}")
if alien_0['speed']=='slow':
x_increment=1
elif alien_0['speed']=='medium':
x_increment=2
else:
x_increment=3
alien_0['x_position']=alien_0['x_position']+x_increment
print(f"new position {alien_0['x_position']}")
alien_0={'color': 'green','points': 5}
print(alien_0)
del alien_0['points']
print(alien_0)
# Example-
favorite_language={
'jen': 'python',
'sarah': 'c',
'edward': 'ruby',
'phil': 'python'
}
language=favorite_language['jen']
print(f"jen's favorite language is {language}")
# KeyError when a key is not in the dictionary:-
alien_0={'color': 'green','speed': 'slow'}
print(alien_0['points'])
# Example:-
alien_0={'color': 'green','speed': 'slow'}
points_value=alien_0.get('points','no points value assigned.')
print(points_value)
# Loop through a dictionary:-
# Example:- 1
user_0={
'username': 'efermi',
'first': 'enrico',
'last': 'fermi'
}
for key,value in user_0.items():
print(f"\nkey: {key}")
print(f"value: {value}")
#Example:-2
favorite_language={
'jen': 'python',
'sarah': 'c',
'edward': 'ruby',
'phil': 'python'
}
for name,language in favorite_language.items():
print(f"\n{name.title()}'s favrote language is {language.title()}")
# if you print only keys:-
favorite_language={
'jen': 'python',
'sarah': 'c',
'edward': 'ruby',
'phil': 'python'
}
for name in favorite_language.keys():
print(name.title())
# print a message for specific people (keys) in the dictionary:-
favorite_languages={
'jen': 'python',
'sarah': 'c',
'edward': 'ruby',
'phil': 'python'
}
friends=['phil','sarah']
for name in favorite_languages.keys():
print(name.title())
if name in friends:
language=(favorite_languages[name].title())
print(f"{name.title()},i see you love {language}")
# print a message if a key is not in the dict:-
favorite_languages={
'jen': 'python',
'sarah': 'c',
'edward': 'ruby',
'phil': 'python'
}
if 'earimed' not in favorite_languages.keys():
print("earimed, pleaseb take our poll.")
# if you print the keys in sorted order:-
favorite_languages={
'jen': 'python',
'sarah': 'c',
'edward': 'ruby',
'phil': 'python'
}
for name in sorted(favorite_languages.keys()):
print(f"{name.title()}, thank you for taking the poll.")
# if you print only the values:-
favorite_languages={
'jen': 'python',
'sarah': 'c',
'edward': 'ruby',
'phil': 'python'
}
print("The following language have been mentioned")
for language in favorite_languages.values():
print(language.title())
# use the set() method and print unique languages:-
favorite_languages={
'jen': 'python',
'sarah': 'c',
'edward': 'ruby',
'phil': 'python'
}
print("The following language have been mentioned")
for language in set(favorite_languages.values()):
print(language.title())
# NESTING(MULTIPLE DICT):-
# A list of Dictionaries-
alien_0={'color': 'green','points': 5}
alien_1={'color': 'yellow','points': 10}
alien_2={'color':'red','points': 15}
aliens=[alien_0,alien_1,alien_2]
for alien in aliens:
print(alien)
# Build a list of aliens starting from an empty list:-
aliens=[]
for alien_number in range(30):
new_alien={'color':'green','points': 5,'speed': 'slow'}
aliens.append(new_alien)
for alien in aliens[:5]:
print(alien)
print(f"\nTotal no of alien {len(aliens)}")
# if you want to prin 1st 3 aliens is yellow:-
liens=[]
for alien_number in range(30):
new_alien={'color':'green','points': 5,'speed': 'slow'}
aliens.append(new_alien)
for alien in aliens[:3]:
if alien['color']=='green':
alien['color']='yellow'
alien['speed']='medium'
alien['points']='10'
for alien in aliens[:5]:
print(alien)
# if you want to print the first 3 aliens as yellow:-
aliens=[]
for alien_number in range(30):
new_alien={'color':'green','points': 5,'speed': 'slow'}
aliens.append(new_alien)
for alien in aliens[:3]:
if alien['color']=='green':
alien['color']='yellow'
alien['speed']='medium'
alien['points']=10
for alien in aliens[:5]:
print(alien)
# store information about a pizza being ordered:-
pizza={
'crust': 'thick',
'topping': ['mushroom','extra cheese']
}
print(f"you ordered a {pizza['crust']}-crust pizza with the following topping" )
for topping in pizza['topping']:
print("\t" + topping)
# for multiple favorite languages:-
favorite_languages={
'jen': ['python','ruby'],
'sarah': ['c'],
'edward': ['ruby','go'],
'phil': ['python','haskell']
}
for name,languages in favorite_languages.items():
print(f"{name}'s favorite languages are:")
for language in languages:
print(language)
# A DICTIONARY IN A DICTIONARY:-
user={
'aeinstein':{
'first': 'albert',
'last': 'einstein',
'location': 'princeton',
},
'mcurie':{
'first': 'marie',
'last': 'curie',
'location': 'paris',
}
}
for username,user_info in user.items():
print(f"\nusername:{username}")
full_name=(f"{user_info['first']}{user_info['last']}")
print(f"full_name: {full_name.title()}")
print(f"location: {location.title()}")
#OPERATION ON DICTIONARY:-
capital={'India': 'New Delhi','Usa': 'Washington DC','France': 'Paris','Sri lanka': 'Colombo'}
print(capital['India'])
print(capital.get('Uk', 'unknown'))
capital['Uk']='London'
print(capital['Uk'])
print(capital.keys())
print(capital.values())
print(len(capital))
print('Usa' in capital)
print('russia' in capital)
del capital['Usa']
print(capital)
capital['Sri lanka']='Sri Jayawardenepura Kotte'
print(capital)
countries=[]
for k in capital:
countries.append(k)
countries.sort()
print(countries)
# L-12.
# USER INPUT AND WHILE LOOPS:-
# How the input() function works:-
message=input("Tell me something,and i will repeat it back to you:")
name=input("please enter your name:")
print(f"\nHello,{name}!")
prompt="if you tell us who are you,we can personalize the message you se."
prompt="\nwhat is you name?"
name=input(prompt)
print(f"\nHello,{name}!")
#accept numerical input
age=input("how old are ypu?.")
age=int(age)
print(age>=18)
# example:-
height=input("How tall are you, in inches")
height=int(height)
if height>=48:
print("\nyou are tall enough to ride")
else:
print("\nyou'll be able to ride when you're a little older")
# print even number or odd:-
number=input("Enter the number,and i'll tell you if it's even or odd number")
number=int(number)
if number%2==0:
print(f"\nThe number {number} is even.")
else:
print(f"\nThe number {number} is odd")
#INTRODUCING WHILE LOOPS-
# The while loop in action:
current_number=1
while current_number<=5:
print(current_number)
current_number +=1
# example:-
prompt="Tell me something, and i will repeat it back to you:"
prompt+="\nEnter 'quit' to end the program."
message=""
while message!='quit':
message=input(prompt)
if message!='quit':
print(message)
#Using break to exit a loop
prompt="\nPlease enter the name of a city you have visited."
prompt+="\nenter 'quit' when you are finished"
while True:
city=input(prompt)
if city=='quit':
break
else:
print(f"i'd love to go to {city.title()}")
#Example:-
current_number=0
while current_number<10:
current_number+=1
if current_number%2==0:
continue
print(current_number)
x=1
while x<=5:
print(x)
x+=1
```
|
github_jupyter
|
```
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
%matplotlib inline
df = pd.read_csv('boston_house_prices.csv')
```
<b>Explanation of Features</b>
* CRIM: per capita crime rate per town (assumption: if CRIM high, target small)
* ZN: proportion of residential land zoned for lots over 25,000 sq. ft (assumption: if ZN high, target big)
* INDUS: proportion of non-retail business acres per town (assumption: if INDUS high, target small)
* CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) (categorical! assumption: if 1, target high)
* NOX: nitrogen oxides concentration (parts per 10 million) (assumption: if NOX high, target small)
* RM: average number of rooms per dwelling.(assumption: if RM high, target big)
* AGE: proportion of owner-occupied units built prior to 1940. (assumption: if AGE high, target big)
* DIS: weighted mean of distances to five Boston employment centres. (assumption: if DIS high, target small)
* RAD: index of accessibility to radial highways. (assumption: if RAD high, target big)
* TAX: full-value property-tax rate per \$10,000. (assumption: if TAX high, target big)
* PTRATIO: pupil-teacher ratio by town. (assumption: if PTRATIO high, target big)
* B: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town. (assumption: if B high, target small)
* LSTAT: lower status of the population (percent). (assumption: if LSTAT high, target small)
* MEDV: median value of owner-occupied homes in \$1000s. (target)
```
df.head()
#get number of rows and columns
df.shape
#get overview of dataset values
df.describe()
df.info()
df.isnull().sum()
#check distribution of target variable
#looks like normal distribution, no need to do logarithm
sns.distplot(df.MEDV, kde=False)
#get number of rows in df
n = len(df)
#calculate proportions for training, validation and testing datasets
n_val = int(0.2 * n)
n_test = int(0.2 * n)
n_train = n - (n_val + n_test)
#fix the random seed, so that results are reproducible
np.random.seed(2)
#create a numpy array with indices from 0 to (n-1) and shuffle it
idx = np.arange(n)
np.random.shuffle(idx)
#use the array with indices 'idx' to get a shuffled dataframe
#idx now becomes the index of the df,
#and order of rows in df is according to order of rows in idx
df_shuffled = df.iloc[idx]
#split shuffled df into train, validation and test
#e.g. for train: program starts from index 0
#until the index, that is defined by variable (n_train -1)
df_train = df_shuffled.iloc[:n_train].copy()
df_val = df_shuffled.iloc[n_train:n_train+n_val].copy()
df_test = df_shuffled.iloc[n_train+n_val:].copy()
#keep df's with target value
df_train_incl_target = df_shuffled.iloc[:n_train].copy()
df_val_incl_target = df_shuffled.iloc[n_train:n_train+n_val].copy()
df_test_incl_target = df_shuffled.iloc[n_train+n_val:].copy()
#create target variable arrays
y_train = df_train.MEDV.values
y_val = df_val.MEDV.values
y_test = df_test.MEDV.values
#remove target variable form df's
del df_train['MEDV']
del df_val['MEDV']
del df_test['MEDV']
#define first numerical features
#new training set only contains the selected base columns
#training set is transformed to matrix array with 'value' method
base = ['CRIM', 'ZN', 'INDUS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD']
df_num = df_train[base]
X_train = df_num.values
#return the weights
def linear_regression(X, y):
ones = np.ones(X.shape[0])
X = np.column_stack([ones, X])
XTX = X.T.dot(X)
XTX_inv = np.linalg.inv(XTX)
w = XTX_inv.dot(X.T).dot(y)
return w[0], w[1:]
w_0, w = linear_regression(X_train, y_train)
#prediction of target variable, based on training set
y_pred = w_0 + X_train.dot(w)
#the plot shows difference between distribution of
#real target variable and predicted target variable
sns.distplot(y_pred, label='pred')
sns.distplot(y_train, label='target')
plt.legend()
#calculation of root mean squared error
#based on difference between distribution of
#real target variable and predicted target variable
def rmse(y, y_pred):
error = y_pred - y
mse = (error ** 2).mean()
return np.sqrt(mse)
rmse(y_train, y_pred)
```
Validating the Model
```
#create X_val matrix array
df_num = df_val[base]
X_val = df_num.values
#take the bias and the weights (w_0 and w) that we got from the linear regression
#and get the prediction of the target variable for the validation dataset
y_pred = w_0 + X_val.dot(w)
#compare y_pred with real target values 'y_val'
#that number should be used for comparing models
rmse(y_val, y_pred)
```
<b>prepare_X</b> function converts dataframe to matrix array
```
#this function takes in feature variables (base),
#and returns a matrix array with 'values' method
def prepare_X(df):
df_num = df[base]
X = df_num.values
return X
#train the model by calculating the weights
X_train = prepare_X(df_train)
w_0, w = linear_regression(X_train, y_train)
#apply model to validation dataset
X_val = prepare_X(df_val)
y_pred = w_0 + X_val.dot(w)
#compute RMSE on validation dataset
print('validation', rmse(y_val, y_pred))
```
Feature engineering: Add more features to the model<br>
We use the validation framework to see whether more features improve the model
```
#use prepare_X function to add more features
def prepare_X(df):
df = df.copy()
base_02 = ['CRIM', 'ZN', 'INDUS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD',
'TAX', 'PTRATIO', 'B', 'LSTAT']
df_num = df[base_02]
X = df_num.values
return X
#check if adding 4 more numerical features can improve the model
#X_train should now be a matrix array with totally 12 numerical features
#train the model
X_train = prepare_X(df_train)
w_0, w = linear_regression(X_train, y_train)
#apply model to validation dataset
X_val = prepare_X(df_val)
y_pred = w_0 + X_val.dot(w)
#compute RMSE on validation dataset
print('validation:', rmse(y_val, y_pred))
#above we can see that the RMSE decreased a bit
#plot distribution of real target values (target)
#and the predicted target values (pred)
#after we considered 12 feature variables
sns.distplot(y_pred, label='pred')
sns.distplot(y_val, label='target')
plt.legend()
```
Feature engineering: Add the CHAS feature to the model <br>
Actually it is a categorical variable, but it has only 2 values (0 and 1) <br>
So there is no need to do one-hot encoding <br>
We use the validation framework to see whether this additional feature improves the model
```
base_02 = ['CRIM', 'ZN', 'INDUS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD',
'TAX', 'PTRATIO', 'B', 'LSTAT']
#use prepare_X function to add CHAS as a feature
def prepare_X(df):
df = df.copy()
features = base_02.copy()
features.append('CHAS')
df_num = df[features]
X = df_num.values
return X
#check if adding 'CHAS' as a feature can improve the model
#X_train should now be a matrix array with totally 12 numerical features and 1 categorical feature
#train the model
X_train = prepare_X(df_train)
w_0, w = linear_regression(X_train, y_train)
#apply model to validation dataset
X_val = prepare_X(df_val)
y_pred = w_0 + X_val.dot(w)
#compute RMSE on validation dataset
print('validation:', rmse(y_val, y_pred))
#above we can see that the RMSE decreased a bit
#compared to the plot above, the amount of predicted values for '30'
#gets closer to the amount of real values for '30'
#plot distribution of real target values (target)
#and the predicted target values (pred)
#after we considered 12 feature variables
sns.distplot(y_pred, label='pred')
sns.distplot(y_val, label='target')
plt.legend()
#we could try regularization in case the data is 'noisy'
#regularize with the parameter r
def linear_regression_reg(X, y, r=0.01):
ones = np.ones(X.shape[0])
X = np.column_stack([ones, X])
XTX = X.T.dot(X)
#add r to main diagonal of XTX
reg = r * np.eye(XTX.shape[0])
XTX = XTX + reg
XTX_inv = np.linalg.inv(XTX)
w = XTX_inv.dot(X.T).dot(y)
return w[0], w[1:]
#the bigger r (alpha), the smaller the weights (the denominator becomes bigger)
#in the left column, you can see r, which grows with each step
#the other columns show the corresponding weights
for r in [0, 0.001, 0.01, 0.1, 1, 10]:
w_0, w = linear_regression_reg(X_train, y_train, r=r)
print('%5s, %.2f, %.2f, %.2f' % (r, w_0, w[3], w[5]))
#calculate the RMSE after we used ridge regression
X_train = prepare_X(df_train)
w_0, w = linear_regression_reg(X_train, y_train, r=0.001)
X_val = prepare_X(df_val)
y_pred = w_0 + X_val.dot(w)
print('validation:', rmse(y_val, y_pred))
#run a grid search to identify the best value of r
X_train = prepare_X(df_train)
X_val = prepare_X(df_val)
for r in [0.000001, 0.0001, 0.001, 0.01, 0.1, 1, 5, 10]:
w_0, w = linear_regression_reg(X_train, y_train, r=r)
y_pred = w_0 + X_val.dot(w)
print('%6s' %r, rmse(y_val, y_pred))
```
As we can see from the new RMSE, ridge regression has no positive effect here.
Now we can help the user predict the price of a real estate property in Boston
```
df_test_incl_target.head(10)
#create a dictionary from rows
#delete target value
pred_price_list = []
z = 0
while z < 10:
ad = df_test_incl_target.iloc[z].to_dict()
del ad['MEDV']
#df_one_row is a dataframe with one row (contains the above dict info)
#note: we keep df_test untouched here so the final test-set check below still uses the real split
df_one_row = pd.DataFrame([ad])
X_test = prepare_X(df_one_row)
#train model without ridge regression
w_0, w = linear_regression(X_train, y_train)
#prediction of the price
y_pred = w_0 + X_test.dot(w)
pred_price_list.append(y_pred)
z = z + 1
pred_price_list
real_price = df_test_incl_target.MEDV.tolist()
#get average of difference between real price and predicted price
y = 0
diff_list = []
while y < 10:
diff = real_price[y] - pred_price_list[y]
diff_list.append(diff)
y += 1
sum(diff_list) / len(diff_list)
```
later on, we can also try other models and see if the RMSE can be further reduced<br>
Lastly, I want to check how increasing or decreasing feature variables influences the target variable
```
ad = df_test_incl_target.iloc[0].to_dict()
ad
ad_test = {'CRIM': 0.223,
'ZN': 0,
'INDUS': 9.69,
'CHAS': 0,
'NOX': 0.585,
'RM': 6.025,
'AGE': 79.9,
'DIS': 2.4982,
'RAD': 6.0,
'TAX': 391.0,
'PTRATIO': 19.2,
'B': 396.9,
'LSTAT': 14.33}
#df_one_row is a dataframe with one row (contains the above dict info)
df_one_row = pd.DataFrame([ad_test])
X_test = prepare_X(df_one_row)
#train model without ridge regression
w_0, w = linear_regression(X_train, y_train)
#prediction of the price
y_pred = w_0 + X_test.dot(w)
y_pred
```
<b>Explanation of Features</b>
* CRIM: per capita crime rate per town (assumption: if CRIM high, target small --> correct)
* ZN: proportion of residential land zoned for lots over 25,000 sq. ft (assumption: if ZN high, target big --> correct)
* INDUS: proportion of non-retail business acres per town (assumption: if INDUS high, target small --> correct)
* CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) (categorical! assumption: if 1, target high --> correct)
* NOX: nitrogen oxides concentration (parts per 10 million) (assumption: if NOX high, target small --> correct)
* RM: average number of rooms per dwelling.(assumption: if RM high, target big --> correct)
* AGE: proportion of owner-occupied units built prior to 1940. (assumption: if AGE high, target big --> not clear)
* DIS: weighted mean of distances to five Boston employment centres. (assumption: if DIS high, target small --> correct)
* RAD: index of accessibility to radial highways. (assumption: if RAD high, target big --> correct)
* TAX: full-value property-tax rate per \$10,000. (assumption: if TAX high, target big --> not correct)
* PTRATIO: pupil-teacher ratio by town. (assumption: if PTRATIO high, target small--> correct)
* B: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town. (assumption: if B high, target small--> not correct)
* LSTAT: lower status of the population (percent). (assumption: if LSTAT high, target small --> correct)
* MEDV: median value of owner-occupied homes in \$1000s. (target)
```
#check against the test dataset to see if the model works
X_train = prepare_X(df_train)
w_0, w = linear_regression(X_train, y_train)
X_val = prepare_X(df_val)
y_pred = w_0 + X_val.dot(w)
print('validation:', rmse(y_val, y_pred))
X_test = prepare_X(df_test)
y_pred = w_0 + X_test.dot(w)
print('test:', rmse(y_test, y_pred))
```
|
github_jupyter
|
Let's go through the known systems in [Table 1](https://www.aanda.org/articles/aa/full_html/2018/01/aa30655-17/T1.html) of Jurysek+(2018)
```
# 11 systems listed in their Table 1
systems = ['RW Per', 'IU Aur', 'AH Cep', 'AY Mus',
'SV Gem', 'V669 Cyg', 'V685 Cen',
'V907 Sco', 'SS Lac', 'QX Cas', 'HS Hya']
P_EB = [13.1989, 1.81147, 1.7747, 3.2055, 4.0061, 1.5515,
1.19096, 3.77628, 14.4161, 6.004709, 1.568024]
```
I already know about some...
- [HS Hya](https://github.com/jradavenport/HS-Hya) (yes, the final eclipses!)
- [IU Aur](IU_Aur.ipynb) (yes, still eclipsing)
- [QX Cas](https://github.com/jradavenport/QX-Cas) (yes, but not eclipsing, though new eclipses present...)
- V907 Sco (yes, not sure if eclipsing still)
1. Go through each system. Check [MAST](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html) (which has 2-min data); data can be pulled with [lightkurve](https://docs.lightkurve.org/tutorials/) (see the short sketch after this list),
2. if not, check for general coverage with the [Web Viewing Tool](https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py)
3. and try to generate a 30-min lightcurve from pixel-level data with [Eleanor](https://adina.feinste.in/eleanor/getting_started/tutorial.html)
4. For every system w/ TESS data, make some basic light curves. Is eclipse still there? Is there rotation?
5. For each, find best paper(s) that characterize the system. Start w/ references in Table 1
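As a sketch of step 1, pulling and folding the 2-min MAST data for one target with `lightkurve` could look roughly like this (assuming a recent `lightkurve` 2.x install; the target and period are just the HS Hya entry from Table 1 above):
```
import lightkurve as lk

# search MAST for 2-minute TESS light curves of one target (HS Hya, P=1.568024 d from Table 1)
search = lk.search_lightcurve('HS Hya', mission='TESS', exptime=120)
if len(search) > 0:
    lc = search.download().remove_nans().normalize()
    lc.plot()                           # full light curve
    lc.fold(period=1.568024).scatter()  # phase-folded on the known eclipse period
```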
```
from IPython.display import Image
import warnings
warnings.filterwarnings('ignore')
import eleanor
import numpy as np
from astropy import units as u
import matplotlib.pyplot as plt
from astropy.coordinates import SkyCoord
import matplotlib
matplotlib.rcParams.update({'font.size':18})
matplotlib.rcParams.update({'font.family':'serif'})
for k in range(len(systems)):
try:
star = eleanor.Source(name=systems[k])
print(star.name, star.tic, star.gaia, star.tess_mag)
data = eleanor.TargetData(star)
q = (data.quality == 0)
plt.figure()
plt.plot(data.time[q], data.raw_flux[q]/np.nanmedian(data.raw_flux[q]), 'k')
# plt.plot(data.time[q], data.corr_flux[q]/np.nanmedian(data.corr_flux[q]) + 0.03, 'r')
plt.ylabel('Normalized Flux')
plt.xlabel('Time [BJD - 2457000]')
plt.title(star.name)
plt.show()
plt.figure()
plt.scatter((data.time[q] % P_EB[k])/P_EB[k], data.raw_flux[q]/np.nanmedian(data.raw_flux[q]))
# plt.plot(data.time[q], data.corr_flux[q]/np.nanmedian(data.corr_flux[q]) + 0.03, 'r')
plt.ylabel('Normalized Flux')
plt.xlabel('Phase (P='+str(P_EB[k])+')')
plt.title(star.name)
plt.show()
except:
print('Sorry '+systems[k])
```
|
github_jupyter
|
# Parameters in QCoDeS
A `Parameter` is the basis of measurements and control within QCoDeS. Anything that you want to either measure or control within QCoDeS should satisfy the `Parameter` interface. You may read more about the `Parameter` [here](http://qcodes.github.io/Qcodes/user/intro.html#parameter).
```
import numpy as np
from qcodes.instrument.parameter import Parameter, ArrayParameter, MultiParameter, ManualParameter
from qcodes.utils import validators
```
QCoDeS provides the following classes of built-in parameters:
- `Parameter` represents a single value at a time
- Example: voltage
- `ParameterWithSetpoints` is intended for array-valued parameters.
This Parameter class is intended for anything where a call to the instrument
returns an array of values. [This notebook](Simple-Example-of-ParameterWithSetpoints.ipynb)
gives more detailed examples of how this parameter can be used.
- `ArrayParameter` represents an array of values of all the same type that are returned all at once.
- Example: voltage vs time waveform
- **NOTE:** This is an older base class for array-valued parameters. For any new driver we strongly recommend using `ParameterWithSetpoints` class which is both more flexible and significantly easier to use. Refer to notebook on [writing drivers with ParameterWithSetpoints](Simple-Example-of-ParameterWithSetpoints.ipynb)
- `MultiParameter` represents a collection of values with different meaning and possibly different dimension
- Example: I and Q, or I vs time and Q vs time
Parameters are described in detail in the [Creating Instrument Drivers](../writing_drivers/Creating-Instrument-Drivers.ipynb) tutorial.
## Parameter
Most of the time you can use `Parameter` directly, even if you have custom `get`/`set` functions, but sometimes it's useful to subclass `Parameter`. Note that since the superclass `Parameter` actually wraps these functions (to include some extra nice-to-have functionality), your subclass should define `get_raw` and `set_raw` rather than `get` and `set`.
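For instance, wrapping existing getter/setter callables without subclassing might look roughly like the sketch below (this assumes the `get_cmd`/`set_cmd` keyword arguments of `Parameter`; the parameter name and the fake instrument state are purely illustrative):
```
from qcodes.instrument.parameter import Parameter
from qcodes.utils import validators

# a plain dict standing in for some instrument state
_state = {'frequency': 1.0e9}

frequency = Parameter('frequency',
                      label='Frequency',
                      unit='Hz',
                      get_cmd=lambda: _state['frequency'],
                      set_cmd=lambda value: _state.update(frequency=value),
                      vals=validators.Numbers(min_value=0))

frequency(2.5e9)     # equivalent to frequency.set(2.5e9)
print(frequency())   # equivalent to frequency.get()
```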
```
class MyCounter(Parameter):
def __init__(self, name):
# only name is required
super().__init__(name, label='Times this has been read',
vals=validators.Ints(min_value=0),
docstring='counts how many times get has been called '
'but can be reset to any integer >= 0 by set')
self._count = 0
# you must provide a get method, a set method, or both.
def get_raw(self):
self._count += 1
return self._count
def set_raw(self, val):
self._count = val
c = MyCounter('c')
c2 = MyCounter('c2')
# c() is equivalent to c.get()
print('first call:', c())
print('second call:', c())
# c2(val) is equivalent to c2.set(val)
c2(22)
```
## ArrayParameter
**NOTE:** This is an older base class for array-valued parameters. For any new driver we strongly recommend using `ParameterWithSetpoints` class which is both more flexible and significantly easier to use. Refer to notebook on [writing drivers with ParameterWithSetpoints](Simple-Example-of-ParameterWithSetpoints.ipynb).
We have kept the documentation shown below of `ArrayParameter` for the legacy purpose.
For actions that create a whole array of values at once. When you use it in a `Loop`, it makes a single `DataArray` with the array returned by `get` nested inside extra dimension(s) for the loop.
`ArrayParameter` is, for now, only gettable.
```
class ArrayCounter(ArrayParameter):
def __init__(self):
# only name and shape are required
# the setpoints I'm giving here are identical to the defaults
# this param would get but I'll give them anyway for
# demonstration purposes
super().__init__('array_counter', shape=(3, 2),
label='Total number of values provided',
unit='',
# first setpoint array is 1D, second is 2D, etc...
setpoints=((0, 1, 2), ((0, 1), (0, 1), (0, 1))),
setpoint_names=('index0', 'index1'),
setpoint_labels=('Outer param index', 'Inner param index'),
docstring='fills a 3x2 array with increasing integers')
self._val = 0
def get_raw(self):
# here I'm returning a nested list, but any sequence type will do.
# tuple, np.array, DataArray...
out = [[self._val + 2 * i + j for j in range(2)] for i in range(3)]
self._val += 6
return out
array_counter = ArrayCounter()
# simple get
print('first call:', array_counter())
```
## MultiParameter
Return multiple items at once, where each item can be a single value or an array.
NOTE: Most of the kwarg names here are the plural of those used in `Parameter` and `ArrayParameter`. In particular, `MultiParameter` is the ONLY one that uses `units`, all the others use `unit`.
`MultiParameter` is, for now, only gettable.
```
class SingleIQPair(MultiParameter):
def __init__(self, scale_param):
# only name, names, and shapes are required
# this version returns two scalars (shape = `()`)
super().__init__('single_iq', names=('I', 'Q'), shapes=((), ()),
labels=('In phase amplitude', 'Quadrature amplitude'),
units=('V', 'V'),
# including these setpoints is unnecessary here, but
# if you have a parameter that returns a scalar alongside
# an array you can represent the scalar as an empty sequence.
setpoints=((), ()),
docstring='param that returns two single values, I and Q')
self._scale_param = scale_param
def get_raw(self):
scale_val = self._scale_param()
return (scale_val, scale_val / 2)
scale = ManualParameter('scale', initial_value=2)
iq = SingleIQPair(scale_param=scale)
# simple get
print('simple get:', iq())
class IQArray(MultiParameter):
def __init__(self, scale_param):
# names, labels, and units are the same
super().__init__('iq_array', names=('I', 'Q'), shapes=((5,), (5,)),
labels=('In phase amplitude', 'Quadrature amplitude'),
units=('V', 'V'),
# note that EACH item needs a sequence of setpoint arrays
# so a 1D item has its setpoints wrapped in a length-1 tuple
setpoints=(((0, 1, 2, 3, 4),), ((0, 1, 2, 3, 4),)),
docstring='param that returns two single values, I and Q')
self._scale_param = scale_param
self._indices = np.array([0, 1, 2, 3, 4])
def get_raw(self):
scale_val = self._scale_param()
return (self._indices * scale_val, self._indices * scale_val / 2)
iq_array = IQArray(scale_param=scale)
# simple get
print('simple get', iq_array())
```
|
github_jupyter
|
# Neural machine translation with attention
This notebook trains a sequence-to-sequence (seq2seq) model that translates Kabyle to English. It is an advanced example that assumes some knowledge of sequence-to-sequence models.
After training the model in this notebook, you will be able to input a Kabyle sentence, such as *"Times!"*, and get back its English translation: *"Fire!"*
The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting: it shows which parts of the input sentence had the model's attention while translating.
<img src="https://tensorflow.google.cn/images/spanish-english.png" alt="spanish-english attention plot">
Note: this example takes approximately 10 minutes to run on a single P100 GPU.
```
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import io
import time
```
## Download and prepare the dataset
We will use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:
```
May I borrow this book? ¿Puedo tomar prestado este libro?
```
There are a variety of languages available in this dataset. We will use the English-Kabyle dataset. For convenience, a copy of this dataset is hosted on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we take to prepare the data:
1. Add a *start* and *end* token to each sentence.
2. Clean the sentences by removing special characters.
3. Create a word index and a reverse word index (dictionaries mapping from word to id and from id to word).
4. Pad each sentence to the maximum length.
```
'''
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
'''
path_to_file = "./lan/kab.txt"
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# Insert a space between a word and the punctuation following it
# e.g.: "he is a boy." => "he is a boy ."
# Reference: https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# Replace everything except (a-z, A-Z, ".", "?", "!", ",") with a space
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.rstrip().strip()
# Add a start and an end token to the sentence
# so that the model knows when to start and stop predicting
w = '<start> ' + w + ' <end>'
return w
en_sentence = u"May I borrow this book?"
sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence))
print(preprocess_sentence(sp_sentence).encode('utf-8'))
# 1. Remove the accents
# 2. Clean the sentences
# 3. Return word pairs in the format: [ENGLISH, SPANISH]
def create_dataset(path, num_examples):
lines = io.open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return zip(*word_pairs)
en, sp = create_dataset(path_to_file, None)
print(en[-1])
print(sp[-1])
def max_length(tensor):
return max(len(t) for t in tensor)
def tokenize(lang):
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(
filters='')
lang_tokenizer.fit_on_texts(lang)
tensor = lang_tokenizer.texts_to_sequences(lang)
tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,
padding='post')
return tensor, lang_tokenizer
def load_dataset(path, num_examples=None):
# Create cleaned input, output pairs
targ_lang, inp_lang = create_dataset(path, num_examples)
input_tensor, inp_lang_tokenizer = tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
```
### Limit the size of the dataset to experiment faster (optional)
Training on the complete dataset of more than 100,000 sentences takes a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data):
```
# Try experimenting with the size of the dataset
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file, num_examples)
# Calculate the max_length of the target tensors
max_length_targ, max_length_inp = max_length(target_tensor), max_length(input_tensor)
# Create training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# Show lengths
print(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val))
def convert(lang, tensor):
for t in tensor:
if t!=0:
print ("%d ----> %s" % (t, lang.index_word[t]))
print ("Input Language; index to word mapping")
convert(inp_lang, input_tensor_train[0])
print ()
print ("Target Language; index to word mapping")
convert(targ_lang, target_tensor_train[0])
```
### Create a tf.data dataset
```
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
steps_per_epoch = len(input_tensor_train)//BATCH_SIZE
embedding_dim = 256
units = 1024
vocab_inp_size = len(inp_lang.word_index)+1
vocab_tar_size = len(targ_lang.word_index)+1
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
example_input_batch, example_target_batch = next(iter(dataset))
example_input_batch.shape, example_target_batch.shape
```
## Write the encoder and decoder model
Implement an encoder-decoder model with attention, which you can read about in the TensorFlow [Neural Machine Translation (seq2seq) tutorial](https://github.com/tensorflow/nmt). This example uses a more recent set of APIs and implements the [attention equations](https://github.com/tensorflow/nmt#background-on-the-attention-mechanism) from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism, which is then used by the decoder to predict the next word in the sentence. The image and formulas below are an example of the attention mechanism from [Luong's paper](https://arxiv.org/abs/1508.04025v5).
<img src="https://tensorflow.google.cn/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
The input is put through an encoder model which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*.
Here are the equations that are implemented:
<img src="https://tensorflow.google.cn/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800">
<img src="https://tensorflow.google.cn/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800">
This tutorial uses [Bahdanau attention](https://arxiv.org/pdf/1409.0473.pdf) for the encoder. Let's decide on notation before writing the simplified form:
* FC = Fully connected (dense) layer
* EO = Encoder output
* H = hidden state
* X = input to the decoder
And the pseudo-code:
* `score = FC(tanh(FC(EO) + FC(H)))`
* `attention weights = softmax(score, axis = 1)`. Softmax is applied to the last axis by default, but here we want to apply it to the *1st axis*, since the shape of the score is *(batch_size, max_length, hidden_size)*. `max_length` is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis.
* `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis 1.
* `embedding output` = the input to the decoder X is passed through an embedding layer.
* `merged vector = concat(embedding output, context vector)`
* This merged vector is then given to the GRU.
The shapes of all the vectors at each step have been specified in the comments in the code:
```
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.enc_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
# sample input
sample_hidden = encoder.initialize_hidden_state()
sample_output, sample_hidden = encoder(example_input_batch, sample_hidden)
print ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))
print ('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape))
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, query, values):
# hidden shape == (batch_size, hidden size)
# hidden_with_time_axis shape == (batch_size, 1, hidden size)
# we do this to broadcast addition along the time axis to calculate the score
hidden_with_time_axis = tf.expand_dims(query, 1)
# score shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we are applying score to self.V
# the shape of the tensor before applying self.V is (batch_size, max_length, units)
score = self.V(tf.nn.tanh(
self.W1(values) + self.W2(hidden_with_time_axis)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
attention_layer = BahdanauAttention(10)
attention_result, attention_weights = attention_layer(sample_hidden, sample_output)
print("Attention result shape: (batch size, units) {}".format(attention_result.shape))
print("Attention weights shape: (batch_size, sequence_length, 1) {}".format(attention_weights.shape))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.attention = BahdanauAttention(self.dec_units)
def call(self, x, hidden, enc_output):
# enc_output shape == (batch_size, max_length, hidden_size)
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
sample_decoder_output, _, _ = decoder(tf.random.uniform((64, 1)),
sample_hidden, sample_output)
print ('Decoder output shape: (batch_size, vocab size) {}'.format(sample_decoder_output.shape))
```
## Define the optimizer and the loss function
```
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
```
## Checkpoints (object-based saving)
```
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
```
## Training
1. Pass the *input* through the *encoder*, which returns the *encoder output* and the *encoder hidden state*.
2. The encoder output, encoder hidden state and the decoder input (which is the *start token*) are passed to the decoder.
3. The decoder returns the *predictions* and the *decoder hidden state*.
4. The decoder hidden state is then passed back into the model, and the predictions are used to calculate the loss.
5. Use *teacher forcing* to decide the next input to the decoder.
6. *Teacher forcing* is the technique where the *target word* is passed as the *next input* to the decoder.
7. The final step is to calculate the gradients and apply them to the optimizer for backpropagation.
```
@tf.function
def train_step(inp, targ, enc_hidden):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return batch_loss
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
batch_loss = train_step(inp, targ, enc_hidden)
total_loss += batch_loss
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
batch_loss.numpy()))
# saving (checkpointing) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss / steps_per_epoch))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
```
## Translate
* The evaluate function is similar to the training loop, except we don't use *teacher forcing* here. The input to the decoder at each time step is its previous prediction along with the hidden state and the encoder output.
* Stop predicting when the model predicts the *end token*.
* And store the *attention weights for every time step*.
Note: the encoder output is calculated only once for one input.
```
def evaluate(sentence):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],
maxlen=max_length_inp,
padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input,
dec_hidden,
enc_out)
# storing the attention weights to plot later on
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.index_word[predicted_id] + ' '
if targ_lang.index_word[predicted_id] == '<end>':
return result, sentence, attention_plot
# the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence, attention_plot
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def translate(sentence):
result, sentence, attention_plot = evaluate(sentence)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]
plot_attention(attention_plot, sentence.split(' '), result.split(' '))
```
## Restore the latest checkpoint and test
```
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
translate(u'hace mucho frio aqui.')
translate(u'esta es mi vida.')
translate(u'¿todavia estan en casa?')
# wrong translation
translate(u'trata de averiguarlo.')
```
|
github_jupyter
|
<div align="center">
<h1><img width="30" src="https://madewithml.com/static/images/rounded_logo.png"> <a href="https://madewithml.com/">Made With ML</a></h1>
Applied ML · MLOps · Production
<br>
Join 30K+ developers in learning how to responsibly <a href="https://madewithml.com/about/">deliver value</a> with ML.
<br>
</div>
<br>
<div align="center">
<a target="_blank" href="https://newsletter.madewithml.com"><img src="https://img.shields.io/badge/Subscribe-30K-brightgreen"></a>
<a target="_blank" href="https://github.com/GokuMohandas/MadeWithML"><img src="https://img.shields.io/github/stars/GokuMohandas/MadeWithML.svg?style=social&label=Star"></a>
<a target="_blank" href="https://www.linkedin.com/in/goku"><img src="https://img.shields.io/badge/style--5eba00.svg?label=LinkedIn&logo=linkedin&style=social"></a>
<a target="_blank" href="https://twitter.com/GokuMohandas"><img src="https://img.shields.io/twitter/follow/GokuMohandas.svg?label=Follow&style=social"></a>
<br>
🔥 Among the <a href="https://github.com/topics/deep-learning" target="_blank">top ML</a> repositories on GitHub
</div>
<br>
<hr>
# Transformers
In this lesson we will learn how to implement the Transformer architecture to extract contextual embeddings for our text classification task.
<div align="left">
<a target="_blank" href="https://madewithml.com/courses/foundations/transformers/"><img src="https://img.shields.io/badge/📖 Read-blog post-9cf"></a>
<a href="https://github.com/GokuMohandas/MadeWithML/blob/main/notebooks/15_Transformers.ipynb" role="button"><img src="https://img.shields.io/static/v1?label=&message=View%20On%20GitHub&color=586069&logo=github&labelColor=2f363d"></a>
<a href="https://colab.research.google.com/github/GokuMohandas/MadeWithML/blob/main/notebooks/15_Transformers.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
</div>
# Overview
Transformers are a very popular architecture that leverage and extend the concept of self-attention to create very useful representations of our input data for a downstream task.
- **advantages**:
- better representation for our input tokens via contextual embeddings where the token representation is based on the specific neighboring tokens using self-attention.
- sub-word tokens, as opposed to character tokens, since they can hold more meaningful representation for many of our keywords, prefixes, suffixes, etc.
- attend (in parallel) to all the tokens in our input, as opposed to being limited by filter spans (CNNs) or memory issues from sequential processing (RNNs).
- **disadvantages**:
- computationally intensive
- require large amounts of data (mitigated by using pretrained models)
<div align="left">
<img src="https://madewithml.com/static/images/foundations/transformers/architecture.png" width="800">
</div>
<div align="left">
<small><a href="https://arxiv.org/abs/1706.03762" target="_blank">Attention Is All You Need</a></small>
</div>
# Set up
```
!pip install transformers==3.0.2 -q
import numpy as np
import pandas as pd
import random
import torch
import torch.nn as nn
SEED = 1234
def set_seeds(seed=1234):
"""Set seeds for reproducibility."""
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed) # multi-GPU
# Set seeds for reproducibility
set_seeds(seed=SEED)
# Set device
cuda = True
device = torch.device("cuda" if (
torch.cuda.is_available() and cuda) else "cpu")
torch.set_default_tensor_type("torch.FloatTensor")
if device.type == "cuda":
torch.set_default_tensor_type("torch.cuda.FloatTensor")
print (device)
```
## Load data
We will download the [AG News dataset](http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html), which consists of 120K text samples from 4 unique classes (`Business`, `Sci/Tech`, `Sports`, `World`)
```
import numpy as np
import pandas as pd
import re
import urllib
# Load data
url = "https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/datasets/news.csv"
df = pd.read_csv(url, header=0) # load
df = df.sample(frac=1).reset_index(drop=True) # shuffle
df.head()
# Reduce data size (too large to fit in Colab's limited memory)
df = df[:10000]
print (len(df))
```
## Preprocessing
We're going to clean up our input data first by doing operations such as lower-casing the text, removing stop (filler) words, applying filters using regular expressions, etc.
```
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
import re
nltk.download("stopwords")
STOPWORDS = stopwords.words("english")
print (STOPWORDS[:5])
porter = PorterStemmer()
def preprocess(text, stopwords=STOPWORDS):
"""Conditional preprocessing on our text unique to our task."""
# Lower
text = text.lower()
# Remove stopwords
pattern = re.compile(r'\b(' + r'|'.join(stopwords) + r')\b\s*')
text = pattern.sub('', text)
# Remove words in parentheses
text = re.sub(r'\([^)]*\)', '', text)
# Spacing and filters
text = re.sub(r"([-;;.,!?<=>])", r" \1 ", text)
text = re.sub('[^A-Za-z0-9]+', ' ', text) # remove non alphanumeric chars
text = re.sub(' +', ' ', text) # remove multiple spaces
text = text.strip()
return text
# Sample
text = "Great week for the NYSE!"
preprocess(text=text)
# Apply to dataframe
preprocessed_df = df.copy()
preprocessed_df.title = preprocessed_df.title.apply(preprocess)
print (f"{df.title.values[0]}\n\n{preprocessed_df.title.values[0]}")
```
## Split data
```
import collections
from sklearn.model_selection import train_test_split
TRAIN_SIZE = 0.7
VAL_SIZE = 0.15
TEST_SIZE = 0.15
def train_val_test_split(X, y, train_size):
"""Split dataset into data splits."""
X_train, X_, y_train, y_ = train_test_split(X, y, train_size=train_size, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(X_, y_, train_size=0.5, stratify=y_)
return X_train, X_val, X_test, y_train, y_val, y_test
# Data
X = preprocessed_df["title"].values
y = preprocessed_df["category"].values
# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
X=X, y=y, train_size=TRAIN_SIZE)
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_val: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
print (f"Sample point: {X_train[0]} → {y_train[0]}")
```
## Label encoder
```
import json
class LabelEncoder(object):
"""Label encoder for tag labels."""
def __init__(self, class_to_index={}):
self.class_to_index = class_to_index
self.index_to_class = {v: k for k, v in self.class_to_index.items()}
self.classes = list(self.class_to_index.keys())
def __len__(self):
return len(self.class_to_index)
def __str__(self):
return f"<LabelEncoder(num_classes={len(self)})>"
def fit(self, y):
classes = np.unique(y)
for i, class_ in enumerate(classes):
self.class_to_index[class_] = i
self.index_to_class = {v: k for k, v in self.class_to_index.items()}
self.classes = list(self.class_to_index.keys())
return self
def encode(self, y):
y_one_hot = np.zeros((len(y), len(self.class_to_index)), dtype=int)
for i, item in enumerate(y):
y_one_hot[i][self.class_to_index[item]] = 1
return y_one_hot
def decode(self, y):
classes = []
for i, item in enumerate(y):
index = np.where(item == 1)[0][0]
classes.append(self.index_to_class[index])
return classes
def save(self, fp):
with open(fp, "w") as fp:
contents = {'class_to_index': self.class_to_index}
json.dump(contents, fp, indent=4, sort_keys=False)
@classmethod
def load(cls, fp):
with open(fp, "r") as fp:
kwargs = json.load(fp=fp)
return cls(**kwargs)
# Encode
label_encoder = LabelEncoder()
label_encoder.fit(y_train)
num_classes = len(label_encoder)
label_encoder.class_to_index
# Class weights
counts = np.bincount([label_encoder.class_to_index[class_] for class_ in y_train])
class_weights = {i: 1.0/count for i, count in enumerate(counts)}
print (f"counts: {counts}\nweights: {class_weights}")
# Convert labels to tokens
print (f"y_train[0]: {y_train[0]}")
y_train = label_encoder.encode(y_train)
y_val = label_encoder.encode(y_val)
y_test = label_encoder.encode(y_test)
print (f"y_train[0]: {y_train[0]}")
print (f"decode([y_train[0]]): {label_encoder.decode([y_train[0]])}")
```
## Tokenizer
We'll be using the [BertTokenizer](https://huggingface.co/transformers/model_doc/bert.html#berttokenizer) to tokenize our input text into sub-word tokens.
```
from transformers import DistilBertTokenizer
from transformers import BertTokenizer
# Load tokenizer and model
# tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
vocab_size = len(tokenizer)
print (vocab_size)
# Tokenize inputs
encoded_input = tokenizer(X_train.tolist(), return_tensors="pt", padding=True)
X_train_ids = encoded_input["input_ids"]
X_train_masks = encoded_input["attention_mask"]
print (X_train_ids.shape, X_train_masks.shape)
encoded_input = tokenizer(X_val.tolist(), return_tensors="pt", padding=True)
X_val_ids = encoded_input["input_ids"]
X_val_masks = encoded_input["attention_mask"]
print (X_val_ids.shape, X_val_masks.shape)
encoded_input = tokenizer(X_test.tolist(), return_tensors="pt", padding=True)
X_test_ids = encoded_input["input_ids"]
X_test_masks = encoded_input["attention_mask"]
print (X_test_ids.shape, X_test_masks.shape)
# Decode
print (f"{X_train_ids[0]}\n{tokenizer.decode(X_train_ids[0])}")
# Sub-word tokens
print (tokenizer.convert_ids_to_tokens(ids=X_train_ids[0]))
```
## Datasets
We're going to create Datasets and DataLoaders to be able to efficiently create batches with our data splits.
```
class TransformerTextDataset(torch.utils.data.Dataset):
def __init__(self, ids, masks, targets):
self.ids = ids
self.masks = masks
self.targets = targets
def __len__(self):
return len(self.targets)
def __str__(self):
return f"<Dataset(N={len(self)})>"
def __getitem__(self, index):
ids = torch.tensor(self.ids[index], dtype=torch.long)
masks = torch.tensor(self.masks[index], dtype=torch.long)
targets = torch.FloatTensor(self.targets[index])
return ids, masks, targets
def create_dataloader(self, batch_size, shuffle=False, drop_last=False):
return torch.utils.data.DataLoader(
dataset=self,
batch_size=batch_size,
shuffle=shuffle,
drop_last=drop_last,
pin_memory=False)
# Create datasets
train_dataset = TransformerTextDataset(ids=X_train_ids, masks=X_train_masks, targets=y_train)
val_dataset = TransformerTextDataset(ids=X_val_ids, masks=X_val_masks, targets=y_val)
test_dataset = TransformerTextDataset(ids=X_test_ids, masks=X_test_masks, targets=y_test)
print ("Data splits:\n"
f" Train dataset:{train_dataset.__str__()}\n"
f" Val dataset: {val_dataset.__str__()}\n"
f" Test dataset: {test_dataset.__str__()}\n"
"Sample point:\n"
f" ids: {train_dataset[0][0]}\n"
f" masks: {train_dataset[0][1]}\n"
f" targets: {train_dataset[0][2]}")
# Create dataloaders
batch_size = 128
train_dataloader = train_dataset.create_dataloader(
batch_size=batch_size)
val_dataloader = val_dataset.create_dataloader(
batch_size=batch_size)
test_dataloader = test_dataset.create_dataloader(
batch_size=batch_size)
batch = next(iter(train_dataloader))
print ("Sample batch:\n"
f" ids: {batch[0].size()}\n"
f" masks: {batch[1].size()}\n"
f" targets: {batch[2].size()}")
```
## Trainer
Let's create the `Trainer` class that we'll use to facilitate training for our experiments.
```
import torch.nn.functional as F
class Trainer(object):
def __init__(self, model, device, loss_fn=None, optimizer=None, scheduler=None):
# Set params
self.model = model
self.device = device
self.loss_fn = loss_fn
self.optimizer = optimizer
self.scheduler = scheduler
def train_step(self, dataloader):
"""Train step."""
# Set model to train mode
self.model.train()
loss = 0.0
# Iterate over train batches
for i, batch in enumerate(dataloader):
# Step
batch = [item.to(self.device) for item in batch] # Set device
inputs, targets = batch[:-1], batch[-1]
self.optimizer.zero_grad() # Reset gradients
z = self.model(inputs) # Forward pass
J = self.loss_fn(z, targets) # Define loss
J.backward() # Backward pass
self.optimizer.step() # Update weights
# Cumulative Metrics
loss += (J.detach().item() - loss) / (i + 1)
return loss
def eval_step(self, dataloader):
"""Validation or test step."""
# Set model to eval mode
self.model.eval()
loss = 0.0
y_trues, y_probs = [], []
# Iterate over val batches
with torch.inference_mode():
for i, batch in enumerate(dataloader):
# Step
batch = [item.to(self.device) for item in batch] # Set device
inputs, y_true = batch[:-1], batch[-1]
z = self.model(inputs) # Forward pass
J = self.loss_fn(z, y_true).item()
# Cumulative Metrics
loss += (J - loss) / (i + 1)
# Store outputs
y_prob = F.softmax(z, dim=1).cpu().numpy()
y_probs.extend(y_prob)
y_trues.extend(y_true.cpu().numpy())
return loss, np.vstack(y_trues), np.vstack(y_probs)
def predict_step(self, dataloader):
"""Prediction step."""
# Set model to eval mode
self.model.eval()
y_probs = []
# Iterate over val batches
with torch.inference_mode():
for i, batch in enumerate(dataloader):
# Forward pass w/ inputs
inputs, targets = batch[:-1], batch[-1]
z = self.model(inputs)
# Store outputs
y_prob = F.softmax(z, dim=1).cpu().numpy()
y_probs.extend(y_prob)
return np.vstack(y_probs)
def train(self, num_epochs, patience, train_dataloader, val_dataloader):
best_val_loss = np.inf
for epoch in range(num_epochs):
# Steps
train_loss = self.train_step(dataloader=train_dataloader)
val_loss, _, _ = self.eval_step(dataloader=val_dataloader)
self.scheduler.step(val_loss)
# Early stopping
if val_loss < best_val_loss:
best_val_loss = val_loss
best_model = self.model
_patience = patience # reset _patience
else:
_patience -= 1
if not _patience: # 0
print("Stopping early!")
break
# Logging
print(
f"Epoch: {epoch+1} | "
f"train_loss: {train_loss:.5f}, "
f"val_loss: {val_loss:.5f}, "
f"lr: {self.optimizer.param_groups[0]['lr']:.2E}, "
f"_patience: {_patience}"
)
return best_model
```
# Transformer
## Scaled dot-product attention
The most popular type of self-attention is scaled dot-product attention from the widely-cited [Attention is all you need](https://arxiv.org/abs/1706.03762) paper. This type of attention involves projecting our encoded input sequences onto three matrices, queries (Q), keys (K) and values (V), whose weights we learn.
$ inputs \in \mathbb{R}^{N \times M \times H} $ ($N$ = batch size, $M$ = sequence length, $H$ = hidden dim)
$ Q = XW_q $ where $ W_q \in \mathbb{R}^{H \times d_q} $
$ K = XW_k $ where $ W_k \in \mathbb{R}^{H \times d_k} $
$ V = XW_v $ where $ W_v \in \mathbb{R}^{H \times d_v} $
$ attention (Q, K, V) = softmax( \frac{Q K^{T}}{\sqrt{d_k}} )V \in \mathbb{R}^{M \times d_v} $
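To make the formula concrete, here is a minimal PyTorch sketch of scaled dot-product attention (the batch size and dimensions below are arbitrary example values, not the ones used later in this notebook):
```
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = K.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k**0.5  # (N, M, M)
    weights = F.softmax(scores, dim=-1)          # attention weights over the sequence
    return weights @ V                           # (N, M, d_v)

N, M, H, d_k = 4, 10, 64, 32                     # arbitrary example sizes
x = torch.randn(N, M, H)                         # encoded inputs
W_q, W_k, W_v = (torch.randn(H, d_k) for _ in range(3))
out = scaled_dot_product_attention(x @ W_q, x @ W_k, x @ W_v)
print(out.shape)                                 # torch.Size([4, 10, 32])
```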
## Multi-head attention
Instead of applying self-attention only once across the entire encoded input, we can split the input into sections and apply self-attention to each section in parallel (heads), then concatenate the results. This allows the different heads to learn unique representations while keeping the overall computation comparable, since each head operates on a smaller subspace of the input (see the sketch after the list below).
$ MultiHead(Q, K, V) = concat({head}_1, ..., {head}_{h})W_O $
* ${head}_i = attention(Q_i, K_i, V_i) $
* $h$ = # of self-attention heads
* $W_O \in \mathbb{R}^{h d_v \times H} $
* $H$ = hidden dim. (or dimension of the model $d_{model}$)
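Continuing the sketch above (and reusing `x`, `H` and `scaled_dot_product_attention` from it), a minimal illustration of running several heads in parallel and concatenating their outputs before mixing with $W_O$:
```
h = 8                                            # number of heads
head_dim = H // h                                # d_k = d_v = H / h = 8
heads = []
for _ in range(h):
    Wq_i, Wk_i, Wv_i = (torch.randn(H, head_dim) for _ in range(3))
    heads.append(scaled_dot_product_attention(x @ Wq_i, x @ Wk_i, x @ Wv_i))
W_O = torch.randn(h * head_dim, H)
multihead_out = torch.cat(heads, dim=-1) @ W_O   # (N, M, H)
print(multihead_out.shape)                       # torch.Size([4, 10, 64])
```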
## Positional encoding
With self-attention, we aren't able to account for the sequential position of our input tokens. To address this, we can use positional encoding to create a representation of the location of each token with respect to the entire sequence. This can either be learned (with weights) or we can use a fixed function that can better extend to create positional encoding for lengths during inference that were not observed during training.
$ PE_{(pos,2i)} = sin({pos}/{10000^{2i/H}}) $
$ PE_{(pos,2i+1)} = cos({pos}/{10000^{2i/H}}) $
where:
* $pos$ = position of the token $(1...M)$
* $i$ = hidden dim index $(1...H)$
This effectively allows us to represent each token's relative position using a fixed function for very large sequences. And because we've constrained the positional encodings to have the same dimensions as our encoded inputs, we can simply add them to the encoded inputs before feeding them into the multi-head attention layers.
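A minimal sketch of the fixed sinusoidal encoding defined above (the shapes are illustrative; $H$ is assumed even):
```
import torch

def positional_encoding(M, H):
    """PE[pos, 2i] = sin(pos / 10000^(2i/H)), PE[pos, 2i+1] = cos(pos / 10000^(2i/H))"""
    pos = torch.arange(M, dtype=torch.float32).unsqueeze(1)   # (M, 1)
    two_i = torch.arange(0, H, 2, dtype=torch.float32)        # even indices 0, 2, ..., H-2
    angles = pos / (10000.0 ** (two_i / H))                   # (M, H/2)
    pe = torch.zeros(M, H)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe  # (M, H); added to the (N, M, H) token encodings by broadcasting

print(positional_encoding(M=10, H=64).shape)  # torch.Size([10, 64])
```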
## Architecture
And here's how it all fits together! It's an end-to-end architecture that creates these contextual representations and uses an encoder-decoder structure to predict the outcomes (one-to-one, many-to-one, many-to-many, etc.). Due to their complexity, these models require massive amounts of data to train without overfitting; however, they can be leveraged as pretrained models and fine-tuned with smaller datasets that are similar to the larger set they were initially trained on.
<div align="left">
<img src="https://madewithml.com/static/images/foundations/transformers/architecture.png" width="800">
</div>
<div align="left">
<small><a href="https://arxiv.org/abs/1706.03762" target="_blank">Attention Is All You Need</a></small>
</div>
> We're not going to implement the Transformer [from scratch](https://nlp.seas.harvard.edu/2018/04/03/attention.html), but we will use the [Hugging Face library](https://github.com/huggingface/transformers) to load a pretrained [BertModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel), which we'll use as a feature extractor and fine-tune on our own dataset.
## Model
We're going to use a pretrained [BertModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel) to act as a feature extractor. We'll only use the encoder to receive sequential and pooled outputs (`is_decoder=False` is default).
```
from transformers import BertModel
# transformer = BertModel.from_pretrained("distilbert-base-uncased")
# embedding_dim = transformer.config.dim
transformer = BertModel.from_pretrained("allenai/scibert_scivocab_uncased")
embedding_dim = transformer.config.hidden_size
class Transformer(nn.Module):
def __init__(self, transformer, dropout_p, embedding_dim, num_classes):
super(Transformer, self).__init__()
self.transformer = transformer
self.dropout = torch.nn.Dropout(dropout_p)
self.fc1 = torch.nn.Linear(embedding_dim, num_classes)
def forward(self, inputs):
ids, masks = inputs
seq, pool = self.transformer(input_ids=ids, attention_mask=masks, return_dict=False) # tuple of (sequence output, pooled output)
z = self.dropout(pool)
z = self.fc1(z)
return z
```
> We decided to work with the pooled output, but we could have just as easily worked with the sequential output (encoder representation for each sub-token) and applied a CNN (or other decoder options) on top of it.
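For reference, a minimal sketch of what such a CNN head over the sequence output could look like. This is hypothetical and not part of the model trained here; the sizes are arbitrary:
```
import torch
import torch.nn as nn

# Hypothetical alternative head: a 1D convolution over the sequence output
# (encoder representation for each sub-token) instead of the pooled output.
seq_output = torch.randn(8, 30, 768)          # (batch_size, seq_len, hidden_dim), arbitrary sizes
conv = nn.Conv1d(in_channels=768, out_channels=128, kernel_size=3, padding=1)
z = conv(seq_output.transpose(1, 2))          # Conv1d expects (batch, channels, seq_len)
z = torch.max(z, dim=2).values                # max-pool over the sequence dimension
print(z.shape)                                # torch.Size([8, 128])
```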
```
# Initialize model
dropout_p = 0.5
model = Transformer(
transformer=transformer, dropout_p=dropout_p,
embedding_dim=embedding_dim, num_classes=num_classes)
model = model.to(device)
print (model.named_parameters)
```
## Training
```
# Arguments
lr = 1e-4
num_epochs = 100
patience = 10
# Define loss
class_weights_tensor = torch.Tensor(np.array(list(class_weights.values())))
loss_fn = nn.BCEWithLogitsLoss(weight=class_weights_tensor)
# Define optimizer & scheduler
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
optimizer, mode="min", factor=0.1, patience=5)
# Trainer module
trainer = Trainer(
model=model, device=device, loss_fn=loss_fn,
optimizer=optimizer, scheduler=scheduler)
# Train
best_model = trainer.train(num_epochs, patience, train_dataloader, val_dataloader)
```
## Evaluation
```
import json
from sklearn.metrics import precision_recall_fscore_support
def get_performance(y_true, y_pred, classes):
"""Per-class performance metrics."""
# Performance
performance = {"overall": {}, "class": {}}
# Overall performance
metrics = precision_recall_fscore_support(y_true, y_pred, average="weighted")
performance["overall"]["precision"] = metrics[0]
performance["overall"]["recall"] = metrics[1]
performance["overall"]["f1"] = metrics[2]
performance["overall"]["num_samples"] = np.float64(len(y_true))
# Per-class performance
metrics = precision_recall_fscore_support(y_true, y_pred, average=None)
for i in range(len(classes)):
performance["class"][classes[i]] = {
"precision": metrics[0][i],
"recall": metrics[1][i],
"f1": metrics[2][i],
"num_samples": np.float64(metrics[3][i]),
}
return performance
# Get predictions
test_loss, y_true, y_prob = trainer.eval_step(dataloader=test_dataloader)
y_pred = np.argmax(y_prob, axis=1)
# Determine performance
performance = get_performance(
y_true=np.argmax(y_true, axis=1), y_pred=y_pred, classes=label_encoder.classes)
print (json.dumps(performance["overall"], indent=2))
# Save artifacts
from pathlib import Path
dir = Path("transformers")
dir.mkdir(parents=True, exist_ok=True)
label_encoder.save(fp=Path(dir, "label_encoder.json"))
torch.save(best_model.state_dict(), Path(dir, "model.pt"))
with open(Path(dir, "performance.json"), "w") as fp:
json.dump(performance, indent=2, sort_keys=False, fp=fp)
```
## Inference
```
def get_probability_distribution(y_prob, classes):
"""Create a dict of class probabilities from an array."""
results = {}
for i, class_ in enumerate(classes):
results[class_] = np.float64(y_prob[i])
sorted_results = {k: v for k, v in sorted(
results.items(), key=lambda item: item[1], reverse=True)}
return sorted_results
# Load artifacts
device = torch.device("cpu")
tokenizer = BertTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
label_encoder = LabelEncoder.load(fp=Path(dir, "label_encoder.json"))
transformer = BertModel.from_pretrained("allenai/scibert_scivocab_uncased")
embedding_dim = transformer.config.hidden_size
model = Transformer(
transformer=transformer, dropout_p=dropout_p,
embedding_dim=embedding_dim, num_classes=num_classes)
model.load_state_dict(torch.load(Path(dir, "model.pt"), map_location=device))
model.to(device);
# Initialize trainer
trainer = Trainer(model=model, device=device)
# Create datasets
train_dataset = TransformerTextDataset(ids=X_train_ids, masks=X_train_masks, targets=y_train)
val_dataset = TransformerTextDataset(ids=X_val_ids, masks=X_val_masks, targets=y_val)
test_dataset = TransformerTextDataset(ids=X_test_ids, masks=X_test_masks, targets=y_test)
print ("Data splits:\n"
f" Train dataset:{train_dataset.__str__()}\n"
f" Val dataset: {val_dataset.__str__()}\n"
f" Test dataset: {test_dataset.__str__()}\n"
"Sample point:\n"
f" ids: {train_dataset[0][0]}\n"
f" masks: {train_dataset[0][1]}\n"
f" targets: {train_dataset[0][2]}")
# Dataloader
text = "The final tennis tournament starts next week."
X = preprocess(text)
encoded_input = tokenizer(X, return_tensors="pt", padding=True).to(torch.device("cpu"))
ids = encoded_input["input_ids"]
masks = encoded_input["attention_mask"]
y_filler = label_encoder.encode([label_encoder.classes[0]]*len(ids))
dataset = TransformerTextDataset(ids=ids, masks=masks, targets=y_filler)
dataloader = dataset.create_dataloader(batch_size=int(batch_size))
# Inference
y_prob = trainer.predict_step(dataloader)
y_pred = np.argmax(y_prob, axis=1)
label_encoder.index_to_class[y_pred[0]]
# Class distributions
prob_dist = get_probability_distribution(y_prob=y_prob[0], classes=label_encoder.classes)
print (json.dumps(prob_dist, indent=2))
```
## Interpretability
Let's visualize the self-attention weights from each of the attention heads in the encoder.
```
import sys
!rm -r bertviz_repo
!test -d bertviz_repo || git clone https://github.com/jessevig/bertviz bertviz_repo
if not "bertviz_repo" in sys.path:
sys.path += ["bertviz_repo"]
from bertviz import head_view
# Print input ids
print (ids)
print (tokenizer.batch_decode(ids))
# Get encoder attentions
seq, pool, attn = model.transformer(input_ids=ids, attention_mask=masks, output_attentions=True, return_dict=False)
print (len(attn)) # one attention tensor per encoder layer (12 layers)
print (attn[0].shape) # (batch_size, num_heads, seq_len, seq_len)
# HTML set up
def call_html():
import IPython
display(IPython.core.display.HTML('''
<script src="/static/components/requirejs/require.js"></script>
<script>
requirejs.config({
paths: {
base: '/static/base',
"d3": "https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.8/d3.min",
jquery: '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min',
},
});
</script>
'''))
# Visualize self-attention weights
call_html()
tokens = tokenizer.convert_ids_to_tokens(ids[0])
head_view(attention=attn, tokens=tokens)
```
> Now you're ready to start the [MLOps lessons](https://madewithml.com/#mlops) to learn how to apply all this foundational modeling knowledge to responsibly deliver value.
|
github_jupyter
|
Here is an illustration of the IFS T42 issue.
```
import xarray as xr
import matplotlib.pyplot as plt
from src.score import *
# This is the regridded ERA data
DATADIR = '/data/weather-benchmark/5.625deg/'
z500_valid = load_test_data(f'{DATADIR}geopotential_500', 'z')
t850_valid = load_test_data(f'{DATADIR}temperature_850', 't')
era = xr.merge([z500_valid, t850_valid]).drop('level')
era
# This is the data that was regridded by Peter
t42_raw = xr.open_dataset(f'/media/rasp/Elements/weather-benchmark/IFS_T42/output_42_pl_5.625.nc')
# Make longitude dimensions match
t42_raw['lat'] = -era.lat
t42_raw = t42_raw.roll(lon=32)
t42_raw['lon'] = era.lon
t42_raw
```
Let's now plot the initial conditions of the first forecast.
```
# Plot for Z500 with difference
fig, axs = plt.subplots(1, 3, figsize=(15, 4))
t42_raw.z.isel(time=0).sel(lev=5e4).plot(ax=axs[0]);
era.z.isel(time=0).plot(ax=axs[1])
(t42_raw.z.isel(time=0).sel(lev=5e4)-era.z.isel(time=0)).plot(ax=axs[2]);
# Same for T850
fig, axs = plt.subplots(1, 3, figsize=(15, 4))
t42_raw.t.isel(time=0).sel(lev=8.5e4).plot(ax=axs[0]);
era.t.isel(time=0).plot(ax=axs[1])
(t42_raw.t.isel(time=0).sel(lev=8.5e4)-era.t.isel(time=0)).plot(ax=axs[2]);
```
We can see that the ERA field is a lot noisier than the smooth T42 field. This is clearly worse for T than for Z, which causes the RMSE for T to be much worse.
```
# Now for a 5 day forecast
fig, axs = plt.subplots(1, 3, figsize=(15, 4))
t42_raw.z.isel(time=5*24//6).sel(lev=5e4).plot(ax=axs[0]);
era.z.isel(time=5*24).plot(ax=axs[1])
(t42_raw.z.isel(time=5*24//6).sel(lev=5e4)-era.z.isel(time=5*24)).plot(ax=axs[2]);
# Same for T850
fig, axs = plt.subplots(1, 3, figsize=(15, 4))
t42_raw.t.isel(time=5*24//6).sel(lev=8.5e4).plot(ax=axs[0]);
era.t.isel(time=5*24).plot(ax=axs[1])
(t42_raw.t.isel(time=5*24//6).sel(lev=8.5e4)-era.t.isel(time=5*24)).plot(ax=axs[2]);
```
So one weird thing here is that we have a 30(!) degree temperature error in the forecast. That doesn't seem physical, right?
Since T42 is started from ERA ICs, the question is: why is it so much smoother? Does it have to do with the interpolation? To check that, let's do the same analysis for the 2.8125 degree data.
```
# This is the regridded ERA data
DATADIR = '/media/rasp/Elements/weather-benchmark/2.8125deg/'
z500_valid = load_test_data(f'{DATADIR}geopotential', 'z')
t850_valid = load_test_data(f'{DATADIR}temperature', 't')
era = xr.merge([z500_valid, t850_valid])
era
# This is the data that was regridded by Peter
t42_raw = xr.open_dataset(f'/media/rasp/Elements/weather-benchmark/IFS_T42/output_42_pl_2.8125.nc')
# Make longitude dimensions match
t42_raw['lat'] = -era.lat
t42_raw = t42_raw.roll(lon=64)
t42_raw['lon'] = era.lon
t42_raw
```
Let's now plot the initial conditions of the first forecast.
```
# Plot for Z500 with difference
fig, axs = plt.subplots(1, 3, figsize=(15, 4))
t42_raw.z.isel(time=0).sel(lev=5e4).plot(ax=axs[0]);
era.z.isel(time=0).plot(ax=axs[1])
(t42_raw.z.isel(time=0).sel(lev=5e4)-era.z.isel(time=0)).plot(ax=axs[2]);
# Same for T850
fig, axs = plt.subplots(1, 3, figsize=(15, 4))
t42_raw.t.isel(time=0).sel(lev=8.5e4).plot(ax=axs[0]);
era.t.isel(time=0).plot(ax=axs[1])
(t42_raw.t.isel(time=0).sel(lev=8.5e4)-era.t.isel(time=0)).plot(ax=axs[2]);
```
As you can see, the T42 fields are still much smoother. So why is that?
```
# Now for a 5 day forecast
fig, axs = plt.subplots(1, 3, figsize=(15, 4))
t42_raw.z.isel(time=5*24//6).sel(lev=5e4).plot(ax=axs[0]);
era.z.isel(time=5*24).plot(ax=axs[1])
(t42_raw.z.isel(time=5*24//6).sel(lev=5e4)-era.z.isel(time=5*24)).plot(ax=axs[2]);
# Same for T850
fig, axs = plt.subplots(1, 3, figsize=(15, 4))
t42_raw.t.isel(time=5*24//6).sel(lev=8.5e4).plot(ax=axs[0]);
era.t.isel(time=5*24).plot(ax=axs[1])
(t42_raw.t.isel(time=5*24//6).sel(lev=8.5e4)-era.t.isel(time=5*24)).plot(ax=axs[2]);
# Same for T850; now for 1 forecast lead time
t = 24
fig, axs = plt.subplots(1, 3, figsize=(15, 4))
t42_raw.t.isel(time=t//6).sel(lev=8.5e4).plot(ax=axs[0]);
era.t.isel(time=t).plot(ax=axs[1])
(t42_raw.t.isel(time=t//6).sel(lev=8.5e4)-era.t.isel(time=t)).plot(ax=axs[2]);
```
We still have that huge temperature error. Let's check where that is.
```
import cartopy.crs as ccrs
diff = t42_raw.t.isel(time=5*24//6).sel(lev=8.5e4)-era.t.isel(time=5*24).load()
ax = plt.axes(projection=ccrs.PlateCarree())
diff.plot(ax=ax, transform=ccrs.PlateCarree())
ax.set_global(); ax.coastlines()
```
So the huge error is over Eastern China? I almost suspect that this is the main reason for the bad RMSEs.
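To put a rough number on that suspicion, here is a sketch (my own, not part of `src.score`) that recomputes a latitude-weighted RMSE for the day-5 T850 error with and without a box roughly covering that region; the box bounds are guesses:
```
import numpy as np

# Day-5 T850 error (same field as plotted above)
diff = (t42_raw.t.isel(time=5*24//6).sel(lev=8.5e4) - era.t.isel(time=5*24)).load()

# Latitude weights, normalised to mean 1
weights = np.cos(np.deg2rad(diff.lat))
weights = weights / weights.mean()

rmse_global = float(np.sqrt((diff**2 * weights).mean()))

# Mask out a box roughly covering the suspicious region (guessed bounds: 20-45N, 90-125E)
outside_box = ~((diff.lat > 20) & (diff.lat < 45) & (diff.lon > 90) & (diff.lon < 125))
rmse_outside = float(np.sqrt((diff.where(outside_box)**2 * weights).mean()))

print(rmse_global, rmse_outside)
```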
|
github_jupyter
|
# Introduction to coordinate descent: theory and applications
## Problem statement and the key assumption
$$
\min_{x \in \mathbb{R}^n} f(x)
$$
- $f$ is a convex function
- If $f(x + \varepsilon e_i) \geq f(x)$ holds along every coordinate, does that mean $x$ is a minimizer?
- If $f$ is smooth, then yes, by the first-order optimality criterion $f'(x) = 0$
- If $f$ is nonsmooth, then no: the condition can hold at "corner" points that are not minimizers
- If $f$ is nonsmooth but composite with a separable nonsmooth part, i.e.
$$
f(x) = g(x) + \sum_{i=1}^n h_i(x_i),
$$
then yes. Why?
- For any $y$ and any $x$ at which the coordinate-wise optimality condition holds, we have
$$
f(y) - f(x) = g(y) - g(x) + \sum_{i=1}^n (h_i(y_i) - h_i(x_i)) \geq \langle g'(x), y - x \rangle+ \sum_{i=1}^n (h_i(y_i) - h_i(x_i)) = \sum_{i=1}^n [g'_i(x)(y_i - x_i) + h_i(y_i) - h_i(x_i)] \geq 0
$$
- Hence, for functions of this form the minimum can be searched for coordinate-by-coordinate and the result is still a minimizer
### Computational details
- When the coordinates are swept sequentially, the update of coordinate $i+1$ uses the already-updated values of coordinates $1, 2, \ldots, i$
- Recall the difference between the Jacobi and Gauss-Seidel methods for solving linear systems (a tiny contrast between the two is sketched below)!
- The order in which the coordinates are picked matters
- The cost of updating the full vector is $\sim$ the cost of updating its $n$ components one by one, i.e. the coordinate-wise update of the decision variable does not require the full gradient!
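A tiny NumPy contrast between the two update styles (an illustrative sketch with an arbitrary diagonally dominant 2x2 system):
```
import numpy as np

# Jacobi updates every coordinate from the *old* iterate; Gauss-Seidel immediately
# reuses coordinates updated in the current sweep (like sequential coordinate descent).
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x_jacobi = np.zeros(2)
x_gs = np.zeros(2)
for _ in range(25):
    x_jacobi = (b - A @ x_jacobi + np.diag(A) * x_jacobi) / np.diag(A)  # all coords from old x
    for i in range(2):                                                  # coords one by one
        x_gs[i] = (b[i] - A[i] @ x_gs + A[i, i] * x_gs[i]) / A[i, i]
print(x_jacobi, x_gs, np.linalg.solve(A, b))
```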
## A simple example
- $f(x) = \frac12 \|Ax - b\|_2^2$, where $A \in \mathbb{R}^{m \times n}$ and $m \gg n$
- Pick some coordinate $i$
- Then the coordinate-wise optimality condition reads $[f'(x)]_i = A^{\top}_i(Ax - b) = A^{\top}_i(A_{-i} x_{-i} + A_ix_i - b) = 0$
- Hence $x_i = \dfrac{A^{\top}_i (b - A_{-i} x_{-i})}{\|A_i\|_2^2}$, which costs $O(nm)$, comparable to computing the full gradient. Can we do it faster?
- Yes, we can! To see this, note the following
$$
x_i = \dfrac{A^{\top}_i (b - A_{-i} x_{-i})}{\|A_i\|_2^2} = \dfrac{A^{\top}_i (b - Ax + A_{i}x_i)}{\|A_i\|_2^2} = x_i - \dfrac{A^{\top}_i r}{\|A_i\|_2^2},
$$
where $r = Ax - b$
- Updating $r$ costs $\mathcal{O}(m)$; computing $A^{\top}_i r$ costs $\mathcal{O}(m)$
- Hence updating a single coordinate costs $\mathcal{O}(m)$, so the cost of updating all the coordinates is comparable to computing the full gradient, $\mathcal{O}(mn)$
## How to choose the coordinates?
- Cyclically, from 1 to $n$
- By a random permutation
- The Gauss-Southwell rule: $i = \arg\max_k |f'_k(x)|$, potentially more expensive than the other rules
```
import numpy as np
import matplotlib.pyplot as plt
plt.rc("text", usetex=True)
m = 1000
n = 100
A = np.random.randn(m, n)
u, s, v = np.linalg.svd(A, compute_uv=True, full_matrices=False)
print(s)
s[-1] = 2
A = u @ np.diag(s) @ v
print(np.linalg.cond(A))
print(np.linalg.cond(A.T @ A))
x_true = np.random.randn(n)
b = A @ x_true + 1e-7 * np.random.randn(m)
def coordinate_descent_lsq(x0, num_iter, sampler="sequential"):
conv = [x0]
x = x0.copy()
r = A @ x0 - b
grad = A.T @ r
if sampler == "sequential" or sampler == "GS":
perm = np.arange(x.shape[0])
elif sampler == "random":
perm = np.random.permutation(x.shape[0])
else:
raise ValueError("Unknown sampler!")
for i in range(num_iter):
for idx in perm:
if sampler == "GS":
idx = np.argmax(np.abs(grad))
new_x_idx = x[idx] - A[:, idx] @ r / (A[:, idx] @ A[:, idx])
r = r + A[:, idx] * (new_x_idx - x[idx])
if sampler == "GS":
grad = A.T @ r
x[idx] = new_x_idx
if sampler == "random":
perm = np.random.permutation(x.shape[0])
conv.append(x.copy())
# print(np.linalg.norm(A @ x - b))
return x, conv
x0 = np.random.randn(n)
num_iter = 500
x_cd_seq, conv_cd_seq = coordinate_descent_lsq(x0, num_iter)
x_cd_rand, conv_cd_rand = coordinate_descent_lsq(x0, num_iter, "random")
x_cd_gs, conv_cd_gs = coordinate_descent_lsq(x0, num_iter, "GS")
# !pip install git+https://github.com/amkatrutsa/liboptpy
import liboptpy.unconstr_solvers as methods
import liboptpy.step_size as ss
def f(x):
res = A @ x - b
return 0.5 * res @ res
def gradf(x):
res = A @ x - b
return A.T @ res
L = np.max(np.linalg.eigvalsh(A.T @ A))
gd = methods.fo.GradientDescent(f, gradf, ss.ConstantStepSize(1 / L))
x_gd = gd.solve(x0=x0, max_iter=num_iter)
acc_gd = methods.fo.AcceleratedGD(f, gradf, ss.ConstantStepSize(1 / L))
x_accgd = acc_gd.solve(x0=x0, max_iter=num_iter)
plt.figure(figsize=(15, 10))
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_cd_rand], label="Random")
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_cd_seq], label="Sequential")
plt.semilogy([np.linalg.norm(A @ x - b) for x in conv_cd_gs], label="GS")
plt.semilogy([np.linalg.norm(A @ x - b) for x in gd.get_convergence()], label="GD")
plt.semilogy([np.linalg.norm(A @ x - b) for x in acc_gd.get_convergence()], label="Nesterov")
plt.legend(fontsize=20)
plt.xlabel("Number of iterations", fontsize=24)
plt.ylabel("$\|Ax - b\|_2$", fontsize=24)
plt.grid(True)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.show()
plt.semilogy([np.linalg.norm(x - x_true) for x in conv_cd_rand], label="Random")
plt.semilogy([np.linalg.norm(x - x_true) for x in conv_cd_seq], label="Sequential")
plt.semilogy([np.linalg.norm(x - x_true) for x in conv_cd_gs], label="GS")
plt.semilogy([np.linalg.norm(x - x_true) for x in gd.get_convergence()], label="GD")
plt.semilogy([np.linalg.norm(x - x_true) for x in acc_gd.get_convergence()], label="Nesterov")
plt.legend(fontsize=20)
plt.xlabel("Number of iterations", fontsize=24)
plt.ylabel("$\|x - x^*\|_2$", fontsize=24)
plt.grid(True)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.show()
```
## Convergence
- Sublinear for convex smooth functions with a Lipschitz gradient
- Linear for strongly convex functions
- A direct analogy with gradient descent
- But with many practical subtleties
## Typical use cases
- Lasso (again); a coordinate update with soft-thresholding is sketched below
- The SMO method for training SVMs: block coordinate descent with block size 2
- Inference in graphical models
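As a small illustration of the Lasso case above, a minimal sketch (independent of `liboptpy`, with arbitrary synthetic data) of cyclic coordinate descent with the soft-thresholding update for $\frac12 \|Ax-b\|_2^2 + \lambda\|x\|_1$:
```
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * |.|"""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def coordinate_descent_lasso(A, b, lam, num_iter=100):
    """Cyclic coordinate descent for 0.5 * ||Ax - b||_2^2 + lam * ||x||_1."""
    n = A.shape[1]
    x = np.zeros(n)
    r = A @ x - b                      # residual, kept up to date incrementally
    col_sq = (A * A).sum(axis=0)       # ||A_i||_2^2 for every column
    for _ in range(num_iter):
        for i in range(n):
            # rho = A_i^T (b - A_{-i} x_{-i}) = -A_i^T r + ||A_i||^2 x_i
            rho = -A[:, i] @ r + col_sq[i] * x[i]
            new_xi = soft_threshold(rho, lam) / col_sq[i]
            r += A[:, i] * (new_xi - x[i])   # O(m) residual update
            x[i] = new_xi
    return x

A = np.random.randn(200, 50)
b = A @ (np.random.randn(50) * (np.random.rand(50) < 0.2)) + 0.01 * np.random.randn(200)
x_lasso = coordinate_descent_lasso(A, b, lam=1.0)
print(np.sum(np.abs(x_lasso) > 1e-8), "non-zero coordinates")
```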
|
github_jupyter
|
# Principal Component Analysis in Shogun
#### By Abhijeet Kislay (GitHub ID: <a href='https://github.com/kislayabhi'>kislayabhi</a>)
This notebook is about finding Principal Components (<a href="http://en.wikipedia.org/wiki/Principal_component_analysis">PCA</a>) of data (<a href="http://en.wikipedia.org/wiki/Unsupervised_learning">unsupervised</a>) in Shogun. Its <a href="http://en.wikipedia.org/wiki/Dimensionality_reduction">dimensional reduction</a> capabilities are further utilised to show its application in <a href="http://en.wikipedia.org/wiki/Data_compression">data compression</a>, image processing and <a href="http://en.wikipedia.org/wiki/Facial_recognition_system">face recognition</a>.
```
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
# import all shogun classes
from shogun import *
```
## Some Formal Background (Skip if you just want code examples)
PCA is a useful statistical technique that has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in data of high dimension.
In machine learning problems data is often high dimensional - images, bag-of-words descriptions etc. In such cases we cannot expect the training data to densely populate the space, meaning that there will be large parts in which little is known about the data. Hence it is expected that only a small number of directions are relevant for describing the data to a reasonable accuracy.
Although the data vectors may be very high dimensional, they will typically lie close to a much lower dimensional 'manifold'.
Here we concentrate on linear dimensional reduction techniques. In this approach a high dimensional datapoint $\mathbf{x}$ is 'projected down' to a lower dimensional vector $\mathbf{y}$ by:
$$\mathbf{y}=\mathbf{F}\mathbf{x}+\text{const}.$$
where the matrix $\mathbf{F}\in\mathbb{R}^{\text{M}\times \text{D}}$, with $\text{M}<\text{D}$. Here $\text{M}=\dim(\mathbf{y})$ and $\text{D}=\dim(\mathbf{x})$.
From the above scenario, we assume that
* The number of principal components to use is $\text{M}$.
* The dimension of each data point is $\text{D}$.
* The number of data points is $\text{N}$.
We express the approximation for datapoint $\mathbf{x}^n$ as:$$\mathbf{x}^n \approx \mathbf{c} + \sum\limits_{i=1}^{\text{M}}y_i^n \mathbf{b}^i \equiv \tilde{\mathbf{x}}^n.$$
* Here the vector $\mathbf{c}$ is a constant and defines a point in the lower dimensional space.
* The $\mathbf{b}^i$ define vectors in the lower dimensional space (also known as 'principal component coefficients' or 'loadings').
* The $y_i^n$ are the low dimensional co-ordinates of the data.
Our motive is to find the reconstruction $\tilde{\mathbf{x}}^n$ given the lower dimensional representation $\mathbf{y}^n$(which has components $y_i^n,i = 1,...,\text{M})$. For a data space of dimension $\dim(\mathbf{x})=\text{D}$, we hope to accurately describe the data using only a small number $(\text{M}\ll \text{D})$ of coordinates of $\mathbf{y}$.
To determine the best lower dimensional representation it is convenient to use the square distance error between $\mathbf{x}$ and its reconstruction $\tilde{\mathbf{x}}$:$$\text{E}(\mathbf{B},\mathbf{Y},\mathbf{c})=\sum\limits_{n=1}^{\text{N}}\sum\limits_{i=1}^{\text{D}}[x_i^n - \tilde{x}_i^n]^2.$$
* Here the basis vectors are defined as $\mathbf{B} = [\mathbf{b}^1,...,\mathbf{b}^\text{M}]$ (defining $[\text{B}]_{i,j} = b_i^j$).
* Corresponding low dimensional coordinates are defined as $\mathbf{Y} = [\mathbf{y}^1,...,\mathbf{y}^\text{N}].$
* Also, $x_i^n$ and $\tilde{x}_i^n$ represents the coordinates of the data points for the original and the reconstructed data respectively.
* The bias $\mathbf{c}$ is given by the mean of the data $\sum_n\mathbf{x}^n/\text{N}$.
Therefore, for simplification purposes we centre our data, so as to set $\mathbf{c}$ to zero. Now we concentrate on finding the optimal basis $\mathbf{B}$( which has the components $\mathbf{b}^i, i=1,...,\text{M} $).
#### Deriving the optimal linear reconstruction
To find the best basis vectors $\mathbf{B}$ and corresponding low dimensional coordinates $\mathbf{Y}$, we may minimize the sum of squared differences between each vector $\mathbf{x}$ and its reconstruction $\tilde{\mathbf{x}}$:
$\text{E}(\mathbf{B},\mathbf{Y}) = \sum\limits_{n=1}^{\text{N}}\sum\limits_{i=1}^{\text{D}}\left[x_i^n - \sum\limits_{j=1}^{\text{M}}y_j^nb_i^j\right]^2 = \text{trace} \left( (\mathbf{X}-\mathbf{B}\mathbf{Y})^T(\mathbf{X}-\mathbf{B}\mathbf{Y}) \right)$
where $\mathbf{X} = [\mathbf{x}^1,...,\mathbf{x}^\text{N}].$
Considering the above equation under the orthonormality constraint $\mathbf{B}^T\mathbf{B} = \mathbf{I}$ (i.e the basis vectors are mutually orthogonal and of unit length), we differentiate it w.r.t $y_k^n$. The squared error $\text{E}(\mathbf{B},\mathbf{Y})$ therefore has zero derivative when:
$y_k^n = \sum_i b_i^kx_i^n$
By substituting this solution in the above equation, the objective becomes
$\text{E}(\mathbf{B}) = (\text{N}-1)\left[\text{trace}(\mathbf{S}) - \text{trace}\left(\mathbf{S}\mathbf{B}\mathbf{B}^T\right)\right],$
where $\mathbf{S}$ is the sample covariance matrix of the data.
To minimise equation under the constraint $\mathbf{B}^T\mathbf{B} = \mathbf{I}$, we use a set of Lagrange Multipliers $\mathbf{L}$, so that the objective is to minimize:
$-\text{trace}\left(\mathbf{S}\mathbf{B}\mathbf{B}^T\right)+\text{trace}\left(\mathbf{L}\left(\mathbf{B}^T\mathbf{B} - \mathbf{I}\right)\right).$
Since the constraint is symmetric, we can assume that $\mathbf{L}$ is also symmetric. Differentiating with respect to $\mathbf{B}$ and equating to zero we obtain that at the optimum
$\mathbf{S}\mathbf{B} = \mathbf{B}\mathbf{L}$.
This is a form of eigen-equation so that a solution is given by taking $\mathbf{L}$ to be diagonal and $\mathbf{B}$ as the matrix whose columns are the corresponding eigenvectors of $\mathbf{S}$. In this case,
$\text{trace}\left(\mathbf{S}\mathbf{B}\mathbf{B}^T\right) =\text{trace}(\mathbf{L}),$
which is the sum of the eigenvalues corresponding to the eigenvectors forming $\mathbf{B}$. Since we wish to minimise $\text{E}(\mathbf{B})$, we take the eigenvectors with the largest corresponding eigenvalues.
Whilst the solution to this eigen-problem is unique, this only serves to define the solution subspace since one may rotate and scale $\mathbf{B}$ and $\mathbf{Y}$ such that the value of the squared loss is exactly the same. The justification for choosing the non-rotated eigen solution is given by the additional requirement that the principal components corresponds to directions of maximal variance.
#### Maximum variance criterion
We aim to find that single direction $\mathbf{b}$ such that, when the data is projected onto this direction, the variance of this projection is maximal amongst all possible such projections.
The projection of a datapoint onto a direction $\mathbf{b}$ is $\mathbf{b}^T\mathbf{x}^n$ for a unit length vector $\mathbf{b}$. Hence the sum of squared projections is: $$\sum\limits_{n}\left(\mathbf{b}^T\mathbf{x}^n\right)^2 = \mathbf{b}^T\left[\sum\limits_{n}\mathbf{x}^n(\mathbf{x}^n)^T\right]\mathbf{b} = (\text{N}-1)\mathbf{b}^T\mathbf{S}\mathbf{b} = \lambda(\text{N} - 1)$$
which ignoring constants, is simply the negative of the equation for a single retained eigenvector $\mathbf{b}$(with $\mathbf{S}\mathbf{b} = \lambda\mathbf{b}$). Hence the optimal single $\text{b}$ which maximises the projection variance is given by the eigenvector corresponding to the largest eigenvalues of $\mathbf{S}.$ The second largest eigenvector corresponds to the next orthogonal optimal direction and so on. This explains why, despite the squared loss equation being invariant with respect to arbitrary rotation of the basis vectors, the ones given by the eigen-decomposition have the additional property that they correspond to directions of maximal variance. These maximal variance directions found by PCA are called the $\text{principal} $ $\text{directions}.$
There are two eigenvalue methods through which shogun can perform PCA namely
* Eigenvalue Decomposition Method.
* Singular Value Decomposition.
#### EVD vs SVD
* The EVD viewpoint requires that one compute the eigenvalues and eigenvectors of the covariance matrix, which is the product of $\mathbf{X}\mathbf{X}^\text{T}$, where $\mathbf{X}$ is the data matrix. Since the covariance matrix is symmetric, the matrix is diagonalizable, and the eigenvectors can be normalized such that they are orthonormal:
$\mathbf{S}=\frac{1}{\text{N}-1}\mathbf{X}\mathbf{X}^\text{T},$
where the $\text{D}\times\text{N}$ matrix $\mathbf{X}$ contains all the data vectors: $\mathbf{X}=[\mathbf{x}^1,...,\mathbf{x}^\text{N}].$
Writing the $\text{D}\times\text{N}$ matrix of eigenvectors as $\mathbf{E}$ and the eigenvalues as an $\text{N}\times\text{N}$ diagonal matrix $\mathbf{\Lambda}$, the eigen-decomposition of the covariance $\mathbf{S}$ is
$\mathbf{X}\mathbf{X}^\text{T}\mathbf{E}=\mathbf{E}\mathbf{\Lambda}\Longrightarrow\mathbf{X}^\text{T}\mathbf{X}\mathbf{X}^\text{T}\mathbf{E}=\mathbf{X}^\text{T}\mathbf{E}\mathbf{\Lambda}\Longrightarrow\mathbf{X}^\text{T}\mathbf{X}\tilde{\mathbf{E}}=\tilde{\mathbf{E}}\mathbf{\Lambda},$
where we defined $\tilde{\mathbf{E}}=\mathbf{X}^\text{T}\mathbf{E}$. The final expression above represents the eigenvector equation for $\mathbf{X}^\text{T}\mathbf{X}.$ This is a matrix of dimensions $\text{N}\times\text{N}$ so that calculating the eigen-decomposition takes $\mathcal{O}(\text{N}^3)$ operations, compared with $\mathcal{O}(\text{D}^3)$ operations in the original high-dimensional space. We can therefore calculate the eigenvectors $\tilde{\mathbf{E}}$ and eigenvalues $\mathbf{\Lambda}$ of this matrix more easily. Once found, we use the fact that the eigenvalues of $\mathbf{S}$ are given by the diagonal entries of $\mathbf{\Lambda}$ and the eigenvectors by
$\mathbf{E}=\mathbf{X}\tilde{\mathbf{E}}\mathbf{\Lambda}^{-1}$
* On the other hand, applying SVD to the data matrix $\mathbf{X}$ follows like:
$\mathbf{X}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\text{T}$
where $\mathbf{U}^\text{T}\mathbf{U}=\mathbf{I}_\text{D}$ and $\mathbf{V}^\text{T}\mathbf{V}=\mathbf{I}_\text{N}$ and $\mathbf{\Sigma}$ is a diagonal matrix of the (positive) singular values. We assume that the decomposition has ordered the singular values so that the upper left diagonal element of $\mathbf{\Sigma}$ contains the largest singular value.
Attempting to construct the covariance matrix $(\mathbf{X}\mathbf{X}^\text{T})$from this decomposition gives:
$\mathbf{X}\mathbf{X}^\text{T} = \left(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\text{T}\right)\left(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\text{T}\right)^\text{T}$
$\mathbf{X}\mathbf{X}^\text{T} = \left(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\text{T}\right)\left(\mathbf{V}\mathbf{\Sigma}\mathbf{U}^\text{T}\right)$
and since $\mathbf{V}$ is an orthogonal matrix $\left(\mathbf{V}^\text{T}\mathbf{V}=\mathbf{I}\right),$
$\mathbf{X}\mathbf{X}^\text{T}=\left(\mathbf{U}\mathbf{\Sigma}^\mathbf{2}\mathbf{U}^\text{T}\right)$
Since this is in the form of an eigen-decomposition, the PCA solution is given by performing the SVD of $\mathbf{X}$: the eigenvectors are then given by $\mathbf{U}$, and the corresponding eigenvalues by the squares of the singular values.
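To make the EVD/SVD relationship above concrete, here is a small NumPy check. This is an illustrative sketch, independent of Shogun, with arbitrary random data and sizes:
```
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(5, 200)                     # D=5 dimensions, N=200 samples (arbitrary)
Xc = X - X.mean(axis=1, keepdims=True)    # centre the data

# EVD route: eigen-decomposition of the D x D covariance matrix
S = Xc @ Xc.T / (Xc.shape[1] - 1)
eigvals, eigvecs = np.linalg.eigh(S)      # returned in ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

# SVD route: singular value decomposition of the centred data matrix
U, sigma, Vt = np.linalg.svd(Xc, full_matrices=False)

print(np.allclose(eigvals, sigma**2 / (Xc.shape[1] - 1)))    # same spectrum
print(np.allclose(np.abs(U.T @ eigvecs), np.eye(5)))         # same directions, up to sign
```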
#### [CPCA](http://www.shogun-toolbox.org/doc/en/3.0.0/classshogun_1_1CPCA.html) Class Reference (Shogun)
CPCA class of Shogun inherits from the [CPreprocessor](http://www.shogun-toolbox.org/doc/en/3.0.0/classshogun_1_1CPreprocessor.html) class. Preprocessors are transformation functions that don't change the domain of the input features. Specifically, CPCA performs principal component analysis on the input vectors and keeps only the specified number of eigenvectors. On preprocessing, the stored covariance matrix is used to project vectors into eigenspace.
Performance of PCA depends on the algorithm used according to the situation in hand.
Our PCA preprocessor class provides 3 method options to compute the transformation matrix:
* $\text{PCA(EVD)}$ sets $\text{PCAmethod == EVD}$ : Eigen Value Decomposition of Covariance Matrix $(\mathbf{XX^T}).$
The covariance matrix $\mathbf{XX^T}$ is first formed internally and then
its eigenvectors and eigenvalues are computed using QR decomposition of the matrix.
The time complexity of this method is $\mathcal{O}(D^3)$ and should be used when $\text{N > D.}$
* $\text{PCA(SVD)}$ sets $\text{PCAmethod == SVD}$ : Singular Value Decomposition of feature matrix $\mathbf{X}$.
The transpose of feature matrix, $\mathbf{X^T}$, is decomposed using SVD. $\mathbf{X^T = UDV^T}.$
The matrix V in this decomposition contains the required eigenvectors and
the diagonal entries of the diagonal matrix D correspond to the non-negative
eigenvalues. The time complexity of this method is $\mathcal{O}(DN^2)$, and it should be used when $\text{N < D.}$
* $\text{PCA(AUTO)}$ sets $\text{PCAmethod == AUTO}$ : This mode automagically chooses one of the above modes for the user based on whether $\text{N>D}$ (chooses $\text{EVD}$) or $\text{N<D}$ (chooses $\text{SVD}$)
## PCA on 2D data
#### Step 1: Get some data
We will generate the toy data by adding orthogonal noise to a set of points lying on an arbitrary 2d line. We expect PCA to recover this line, which is a one-dimensional linear sub-space.
```
#number of data points.
n=100
#generate a random 2d line(y1 = mx1 + c)
m = random.randint(1,10)
c = random.randint(1,10)
x1 = random.random_integers(-20,20,n)
y1=m*x1+c
#generate the noise.
noise=random.random_sample([n]) * random.random_integers(-35,35,n)
#make the noise orthogonal to the line y=mx+c and add it.
x=x1 + noise*m/sqrt(1+square(m))
y=y1 + noise/sqrt(1+square(m))
twoD_obsmatrix=array([x,y])
#to visualise the data we must plot it.
rcParams['figure.figsize'] = 7, 7
figure,axis=subplots(1,1)
xlim(-50,50)
ylim(-50,50)
axis.plot(twoD_obsmatrix[0,:],twoD_obsmatrix[1,:],'o',color='green',markersize=6)
#the line from which we generated the data is plotted in red
axis.plot(x1[:],y1[:],linewidth=0.3,color='red')
title('One-Dimensional sub-space with noise')
xlabel("x axis")
_=ylabel("y axis")
```
#### Step 2: Subtract the mean.
For PCA to work properly, we must subtract the mean from each of the data dimensions. The mean subtracted is the average across each dimension. So, all the $x$ values have $\bar{x}$ subtracted, and all the $y$ values have $\bar{y}$ subtracted from them, where:$$\bar{\mathbf{x}} = \frac{\sum\limits_{i=1}^{n}x_i}{n}$$ $\bar{\mathbf{x}}$ denotes the mean of the $x_i^{'s}$
##### Shogun's way of doing things :
The PCA preprocessor performs principal component analysis on input feature vectors/matrices. It provides an interface to set the target dimension via the $\text{put('target_dim', target_dim)}$ method. When the $\text{init()}$ method of $\text{PCA}$ is called with a proper
feature matrix $\text{X}$ (with, say, $\text{N}$ vectors of feature dimension $\text{D}$), a transformation matrix is computed and stored internally. It also inherently centers the data by subtracting the mean from it.
```
#convert the observation matrix into dense feature matrix.
train_features = features(twoD_obsmatrix)
#PCA(EVD) is chosen since N=100 and D=2 (N>D).
#However we can also use PCA(AUTO) as it will automagically choose the appropriate method.
preprocessor = PCA(EVD)
#since we are projecting down the 2d data, the target dim is 1. But here the exhaustive method is detailed by
#setting the target dimension to 2 to visualize both the eigen vectors.
#However, in future examples we will get rid of this step by implementing it directly.
preprocessor.put('target_dim', 2)
#Centralise the data by subtracting its mean from it.
preprocessor.init(train_features)
#get the mean for the respective dimensions.
mean_datapoints=preprocessor.get_real_vector('mean_vector')
mean_x=mean_datapoints[0]
mean_y=mean_datapoints[1]
```
#### Step 3: Calculate the covariance matrix
To understand the relationship between 2 dimension we define $\text{covariance}$. It is a measure to find out how much the dimensions vary from the mean $with$ $respect$ $to$ $each$ $other.$$$cov(X,Y)=\frac{\sum\limits_{i=1}^{n}(X_i-\bar{X})(Y_i-\bar{Y})}{n-1}$$
A useful way to get all the possible covariance values between all the different dimensions is to calculate them all and put them in a matrix.
Example: For a 3d dataset with usual dimensions of $x,y$ and $z$, the covariance matrix has 3 rows and 3 columns, and the values are this:
$$\mathbf{S} = \quad\begin{pmatrix}cov(x,x)&cov(x,y)&cov(x,z)\\cov(y,x)&cov(y,y)&cov(y,z)\\cov(z,x)&cov(z,y)&cov(z,z)\end{pmatrix}$$
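As a quick illustration (plain NumPy, not part of the Shogun pipeline), `np.cov` builds exactly this matrix when the data is stored with one row per dimension:
```
import numpy as np

xyz = np.random.randn(3, 100)   # 3 dimensions (x, y, z), 100 samples, one row per dimension
S = np.cov(xyz)                 # 3 x 3 covariance matrix, S[i, j] = cov(dim_i, dim_j)
print(S.shape)                  # (3, 3)
```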
#### Step 4: Calculate the eigenvectors and eigenvalues of the covariance matrix
Find the eigenvectors $e^1,....e^M$ of the covariance matrix $\mathbf{S}$.
##### Shogun's way of doing things :
Step 3 and Step 4 are directly implemented by Shogun's PCA preprocessor. The transformation matrix is essentially a $\text{D}$$\times$$\text{M}$ matrix, the columns of which correspond to the eigenvectors of the covariance matrix $(\text{X}\text{X}^\text{T})$ having the top $\text{M}$ eigenvalues.
```
#Get the eigenvectors(We will get two of these since we set the target to 2).
E = preprocessor.get_real_matrix('transformation_matrix')
#Get all the eigenvalues returned by PCA.
eig_value=preprocessor.get_real_vector('eigenvalues_vector')
e1 = E[:,0]
e2 = E[:,1]
eig_value1 = eig_value[0]
eig_value2 = eig_value[1]
```
#### Step 5: Choosing components and forming a feature vector.
Let's visualize the eigenvectors and decide which one to choose as the $principal$ $component$ of the data set.
```
#find out the M eigenvectors corresponding to top M number of eigenvalues and store it in E
#Here M=1
#slope of e1 & e2
m1=e1[1]/e1[0]
m2=e2[1]/e2[0]
#generate the two lines
x1=range(-50,50)
x2=x1
y1=multiply(m1,x1)
y2=multiply(m2,x2)
#plot the data along with those two eigenvectors
figure, axis = subplots(1,1)
xlim(-50, 50)
ylim(-50, 50)
axis.plot(x[:], y[:],'o',color='green', markersize=5, label="green")
axis.plot(x1[:], y1[:], linewidth=0.7, color='black')
axis.plot(x2[:], y2[:], linewidth=0.7, color='blue')
p1 = Rectangle((0, 0), 1, 1, fc="black")
p2 = Rectangle((0, 0), 1, 1, fc="blue")
legend([p1,p2],["1st eigenvector","2nd eigenvector"],loc='center left', bbox_to_anchor=(1, 0.5))
title('Eigenvectors selection')
xlabel("x axis")
_=ylabel("y axis")
```
In the above figure, the blue line is a good fit of the data. It shows the most significant relationship between the data dimensions.
It turns out that the eigenvector with the $highest$ eigenvalue is the $principal$ $component$ of the data set.
Form the matrix $\mathbf{E}=[\mathbf{e}^1,...,\mathbf{e}^M].$
Here $\text{M}$ represents the target dimension of our final projection
```
#The eigenvector corresponding to the higher eigenvalue (i.e. eig_value2) is chosen (i.e. e2).
#E is the feature vector.
E=e2
```
#### Step 6: Projecting the data to its Principal Components.
This is the final step in PCA. Once we have chosen the components (eigenvectors) that we wish to keep in our data and formed a feature vector, we simply take the vector and multiply it on the left of the original dataset.
The lower dimensional representation of each data point $\mathbf{x}^n$ is given by
$\mathbf{y}^n=\mathbf{E}^T(\mathbf{x}^n-\mathbf{m})$
Here $\mathbf{E}^T$ is the matrix with the eigenvectors in its rows, the most significant eigenvector at the top. It is multiplied by the mean-adjusted data, with data items in each column and each row holding a separate dimension.
##### Shogun's way of doing things :
Step 6 can be performed by shogun's PCA preprocessor as follows:
The transformation matrix that we got after $\text{init()}$ is used to transform all $\text{D-dim}$ feature matrices (with $\text{D}$ feature dimensions) supplied, via the $\text{apply_to_feature_matrix}$ method. This transformation outputs the $\text{M-dim}$ approximation of all these input vectors and matrices (where $\text{M}$ $\leq$ $\text{min(D,N)}$).
```
#transform all 2-dimensional feature matrices to target-dimensional approximations.
yn=preprocessor.apply_to_feature_matrix(train_features)
#Since, here we are manually trying to find the eigenvector corresponding to the top eigenvalue.
#The 2nd row of yn is choosen as it corresponds to the required eigenvector e2.
yn1=yn[1,:]
```
Step 5 and Step 6 can be applied directly with Shogun's PCA preprocessor (from next example). It has been done manually here to show the exhaustive nature of Principal Component Analysis.
#### Step 7: Form the approximate reconstruction of the original data $\mathbf{x}^n$
The approximate reconstruction of the original datapoint $\mathbf{x}^n$ is given by : $\tilde{\mathbf{x}}^n\approx\text{m}+\mathbf{E}\mathbf{y}^n$
```
x_new=(yn1 * E[0]) + tile(mean_x,[n,1]).T[0]
y_new=(yn1 * E[1]) + tile(mean_y,[n,1]).T[0]
```
The new data is plotted below
```
figure, axis = subplots(1,1)
xlim(-50, 50)
ylim(-50, 50)
axis.plot(x[:], y[:],'o',color='green', markersize=5, label="green")
axis.plot(x_new, y_new, 'o', color='blue', markersize=5, label="red")
title('PCA Projection of 2D data into 1D subspace')
xlabel("x axis")
ylabel("y axis")
#add some legend for information
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="g")
p3 = Rectangle((0, 0), 1, 1, fc="b")
legend([p1,p2,p3],["normal projection","2d data","1d projection"],loc='center left', bbox_to_anchor=(1, 0.5))
#plot the projections in red:
for i in range(n):
axis.plot([x[i],x_new[i]],[y[i],y_new[i]] , color='red')
```
## PCA on a 3d data.
#### Step1: Get some data
We generate points from a plane and then add random noise orthogonal to it. The general equation of a plane is: $$\text{a}\mathbf{x}+\text{b}\mathbf{y}+\text{c}\mathbf{z}+\text{d}=0$$
```
rcParams['figure.figsize'] = 8,8
#number of points
n=100
#generate the data
a=random.randint(1,20)
b=random.randint(1,20)
c=random.randint(1,20)
d=random.randint(1,20)
x1=random.random_integers(-20,20,n)
y1=random.random_integers(-20,20,n)
z1=-(a*x1+b*y1+d)/c
#generate the noise
noise=random.random_sample([n])*random.random_integers(-30,30,n)
#the normal unit vector is [a,b,c]/magnitude
magnitude=sqrt(square(a)+square(b)+square(c))
normal_vec=array([a,b,c]/magnitude)
#add the noise orthogonally
x=x1+noise*normal_vec[0]
y=y1+noise*normal_vec[1]
z=z1+noise*normal_vec[2]
threeD_obsmatrix=array([x,y,z])
#to visualize the data, we must plot it.
from mpl_toolkits.mplot3d import Axes3D
fig = pyplot.figure()
ax=fig.add_subplot(111, projection='3d')
#plot the noisy data generated by distorting a plane
ax.scatter(x, y, z,marker='o', color='g')
ax.set_xlabel('x label')
ax.set_ylabel('y label')
ax.set_zlabel('z label')
legend([p2],["3d data"],loc='center left', bbox_to_anchor=(1, 0.5))
title('Two dimensional subspace with noise')
xx, yy = meshgrid(range(-30,30), range(-30,30))
zz=-(a * xx + b * yy + d) / c
```
#### Step 2: Subtract the mean.
```
#convert the observation matrix into dense feature matrix.
train_features = features(threeD_obsmatrix)
#PCA(EVD) is chosen since N=100 and D=3 (N>D).
#However we can also use PCA(AUTO) as it will automagically choose the appropriate method.
preprocessor = PCA(EVD)
#If we set the target dimension to 2, Shogun would automagically preserve the required 2 eigenvectors(out of 3) according to their
#eigenvalues.
preprocessor.put('target_dim', 2)
preprocessor.init(train_features)
#get the mean for the respective dimensions.
mean_datapoints=preprocessor.get_real_vector('mean_vector')
mean_x=mean_datapoints[0]
mean_y=mean_datapoints[1]
mean_z=mean_datapoints[2]
```
#### Step 3 & Step 4: Calculate the eigenvectors of the covariance matrix
```
#get the required eigenvectors corresponding to top 2 eigenvalues.
E = preprocessor.get_real_matrix('transformation_matrix')
```
#### Steps 5: Choosing components and forming a feature vector.
Since we performed PCA for a target $\dim = 2$ for the $3 \dim$ data, we are directly given
the two required eigenvectors in $\mathbf{E}$
E is automagically filled by setting target dimension = M. This is different from the 2d data example where we implemented this step manually.
#### Step 6: Projecting the data to its Principal Components.
```
#This can be performed by shogun's PCA preprocessor as follows:
yn=preprocessor.apply_to_feature_matrix(train_features)
```
#### Step 7: Form the approximate reconstruction of the original data $\mathbf{x}^n$
The approximate reconstruction of the original datapoint $\mathbf{x}^n$ is given by : $\tilde{\mathbf{x}}^n\approx\text{m}+\mathbf{E}\mathbf{y}^n$
```
new_data=dot(E,yn)
x_new=new_data[0,:]+tile(mean_x,[n,1]).T[0]
y_new=new_data[1,:]+tile(mean_y,[n,1]).T[0]
z_new=new_data[2,:]+tile(mean_z,[n,1]).T[0]
#all the above points lie on the same plane. To make it more clear we will plot the projection also.
fig=pyplot.figure()
ax=fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z,marker='o', color='g')
ax.set_xlabel('x label')
ax.set_ylabel('y label')
ax.set_zlabel('z label')
legend([p1,p2,p3],["normal projection","3d data","2d projection"],loc='center left', bbox_to_anchor=(1, 0.5))
title('PCA Projection of 3D data into 2D subspace')
for i in range(100):
ax.scatter(x_new[i], y_new[i], z_new[i],marker='o', color='b')
ax.plot([x[i],x_new[i]],[y[i],y_new[i]],[z[i],z_new[i]],color='r')
```
#### PCA Performance
Up till now, we were using the Eigenvalue Decomposition method to compute the transformation matrix $\text{(N>D)}$, but for the next example $\text{(N<D)}$ we will be using Singular Value Decomposition.
## Practical Example : Eigenfaces
The problem with the image representation we are given is its high dimensionality. Two-dimensional $\text{p} \times \text{q}$ grayscale images span a $\text{m=pq}$ dimensional vector space, so an image with $\text{100}\times\text{100}$ pixels lies in a $\text{10,000}$ dimensional image space already.
The question is, are all dimensions really useful for us?
$\text{Eigenfaces}$ are based on the dimensional reduction approach of $\text{Principal Component Analysis(PCA)}$. The basic idea is to treat each image as a vector in a high dimensional space. Then, $\text{PCA}$ is applied to the set of images to produce a new reduced subspace that captures most of the variability between the input images. The $\text{Principal Component Vectors}$ (eigenvectors of the sample covariance matrix) are called the $\text{Eigenfaces}$. Every input image can be represented as a linear combination of these eigenfaces by projecting the image onto the new eigenfaces space. Thus, we can perform the identification process by matching in this reduced space. An input image is transformed into the $\text{eigenspace,}$ and the nearest face is identified using a $\text{Nearest Neighbour approach.}$
#### Step 1: Get some data.
Here data means those Images which will be used for training purposes.
```
rcParams['figure.figsize'] = 10, 10
import os
def get_imlist(path):
""" Returns a list of filenames for all jpg images in a directory"""
return [os.path.join(path,f) for f in os.listdir(path) if f.endswith('.pgm')]
#set path of the training images
path_train=os.path.join(SHOGUN_DATA_DIR, 'att_dataset/training/')
#set no. of rows that the images will be resized.
k1=100
#set no. of columns that the images will be resized.
k2=100
filenames = get_imlist(path_train)
filenames = array(filenames)
#n is total number of images that has to be analysed.
n=len(filenames)
```
Lets have a look on the data:
```
# we will be using this often to visualize the images out there.
def showfig(image):
imgplot=imshow(image, cmap='gray')
imgplot.axes.get_xaxis().set_visible(False)
imgplot.axes.get_yaxis().set_visible(False)
from PIL import Image
from scipy import misc
# to get a hang of the data, lets see some part of the dataset images.
fig = pyplot.figure()
title('The Training Dataset')
for i in range(49):
fig.add_subplot(7,7,i+1)
train_img=array(Image.open(filenames[i]).convert('L'))
train_img=misc.imresize(train_img, [k1,k2])
showfig(train_img)
```
Represent every image $I_i$ as a vector $\Gamma_i$
```
#To form the observation matrix obs_matrix.
#read the 1st image.
train_img = array(Image.open(filenames[0]).convert('L'))
#resize it to k1 rows and k2 columns
train_img=misc.imresize(train_img, [k1,k2])
#since features accepts only data of float64 datatype, we do a type conversion
train_img=array(train_img, dtype='double')
#flatten it to make it a row vector.
train_img=train_img.flatten()
# repeat the above for all images and stack all those vectors together in a matrix
for i in range(1,n):
temp=array(Image.open(filenames[i]).convert('L'))
temp=misc.imresize(temp, [k1,k2])
temp=array(temp, dtype='double')
temp=temp.flatten()
train_img=vstack([train_img,temp])
#form the observation matrix
obs_matrix=train_img.T
```
#### Step 2: Subtract the mean
It is very important that the face images $I_1,I_2,...,I_M$ are $centered$ and of the $same$ size
We observe here that the no. of $\dim$ for each image is far greater than no. of training images. This calls for the use of $\text{SVD}$.
Setting the $\text{PCA}$ in the $\text{AUTO}$ mode does this automagically according to the situation.
```
train_features = features(obs_matrix)
preprocessor=PCA(AUTO)
preprocessor.put('target_dim', 100)
preprocessor.init(train_features)
mean=preprocessor.get_real_vector('mean_vector')
```
#### Step 3 & Step 4: Calculate the eigenvectors and eigenvalues of the covariance matrix.
```
#get the required eigenvectors corresponding to top 100 eigenvalues
E = preprocessor.get_real_matrix('transformation_matrix')
#lets see how these eigenfaces/eigenvectors look like:
fig1 = pyplot.figure()
title('Top 20 Eigenfaces')
for i in range(20):
a = fig1.add_subplot(5,4,i+1)
eigen_faces=E[:,i].reshape([k1,k2])
showfig(eigen_faces)
```
These 20 eigenfaces are not sufficient for a good image reconstruction. Having more eigenvectors gives us more flexibility in the number of faces we can reconstruct. Though we are adding vectors with low variance, they are in directions of change nonetheless, and an external image that is not in our database could in fact need these eigenvectors to get even relatively close to it. But at the same time we must keep in mind that adding excessive eigenvectors contributes little or no additional variance while slowing down the process.
Clearly a tradeoff is required.
We here set for M=100.
#### Step 5: Choosing components and forming a feature vector.
Since we set target $\dim = 100$ for this $n \dim$ data, we are directly given the $100$ required eigenvectors in $\mathbf{E}$
E is automagically filled. This is different from the 2d data example where we implemented this step manually.
#### Step 6: Projecting the data to its Principal Components.
The lower dimensional representation of each data point $\mathbf{x}^n$ is given by $$\mathbf{y}^n=\mathbf{E}^T(\mathbf{x}^n-\mathbf{m})$$
```
#we perform the required dot product.
yn=preprocessor.apply_to_feature_matrix(train_features)
```
#### Step 7: Form the approximate reconstruction of the original image $I_n$
The approximate reconstruction of the original datapoint $\mathbf{x}^n$ is given by: $\tilde{\mathbf{x}}^n\approx\text{m}+\mathbf{E}\mathbf{y}^n$
```
re=tile(mean,[n,1]).T[0] + dot(E,yn)
#lets plot the reconstructed images.
fig2 = pyplot.figure()
title('Reconstructed Images from 100 eigenfaces')
for i in range(1,50):
re1 = re[:,i].reshape([k1,k2])
fig2.add_subplot(7,7,i)
showfig(re1)
```
## Recognition part.
In our face recognition process using the Eigenfaces approach, in order to recognize an unseen image, we proceed with the same preprocessing steps as applied to the training images.
Test images are represented in terms of eigenface coefficients by projecting them into the face space $\text{(eigenspace)}$ calculated during training. A test sample is recognized by measuring the similarity distance between it and all samples in the training set. The similarity measure is a distance metric between two vectors; the traditional Eigenface approach uses the $\text{Euclidean distance}$.
```
#set path of the training images
path_train=os.path.join(SHOGUN_DATA_DIR, 'att_dataset/testing/')
test_files=get_imlist(path_train)
test_img=array(Image.open(test_files[0]).convert('L'))
rcParams.update({'figure.figsize': (3, 3)})
#we plot the test image , for which we have to identify a good match from the training images we already have
fig = pyplot.figure()
title('The Test Image')
showfig(test_img)
#We flatten out our test image just the way we have done for the other images
test_img=misc.imresize(test_img, [k1,k2])
test_img=array(test_img, dtype='double')
test_img=test_img.flatten()
#We centralise the test image by subtracting the mean from it.
test_f=test_img-mean
```
Here we have to project our training image as well as the test image on the PCA subspace.
The Eigenfaces method then performs face recognition by:
1. Projecting all training samples into the PCA subspace.
2. Projecting the query image into the PCA subspace.
3. Finding the nearest neighbour between the projected training images and the projected query image.
```
#We have already projected our training images into pca subspace as yn.
train_proj = yn
#Projecting our test image into pca subspace
test_proj = dot(E.T, test_f)
```
##### Shogun's way of doing things:
Shogun uses [CEuclideanDistance](http://www.shogun-toolbox.org/doc/en/3.0.0/classshogun_1_1CEuclideanDistance.html) class to compute the familiar Euclidean distance for real valued features. It computes the square root of the sum of squared disparity between the corresponding feature dimensions of two data points.
$\mathbf{d(x,x')=}$$\sqrt{\mathbf{\sum\limits_{i=0}^{n}}|\mathbf{x_i}-\mathbf{x'_i}|^2}$
```
#To get Eucledian Distance as the distance measure use EuclideanDistance.
workfeat = features(mat(train_proj))
testfeat = features(mat(test_proj).T)
RaRb=EuclideanDistance(testfeat, workfeat)
#The distance between one test image w.r.t all the training is stacked in matrix d.
d=empty([n,1])
for i in range(n):
d[i]= RaRb.distance(0,i)
#The one having the minimum distance is found out
min_distance_index = d.argmin()
iden=array(Image.open(filenames[min_distance_index]))
title('Identified Image')
showfig(iden)
```
## References:
[1] David Barber. Bayesian Reasoning and Machine Learning.
[2] Lindsay I Smith. A tutorial on Principal Component Analysis.
[3] Philipp Wagner. Face Recognition with GNU Octave/MATLAB.
|
github_jupyter
|
```
# Making our own objects
class Foo:
def hi(self): # self is the first parameter by convention
print(self) # self is a pointer to the object
f = Foo() # create Foo class object
f.hi()
f
Foo.hi
# Constructor
class Person:
def __init__(self, name, age):
self.name = name
self.age = age
def __str__(self):
return f'{self.name} is {self.age} years old.'
person = Person('Mahin', 22)
str(person) # note: call str(object) calls obj.__str__()
dir(person) # showing all of the methods in person object.
# little test
class Person:
def __init__(self, name, age):
self.name = name
self.age = age
"""__len__ return integer """
def __len__(self):
return len(self.name) + 10
person = Person('Mahin', 23)
print(len(person)) # Haha, 15. That works: len('Mahin') + 10.
# Fields
class Person:
"""name and age are fileds, that are accessed by dot"""
def __init__(self, name, age):
self.name = name
self.age = age
def grow_up(self):
self.age = self.age + 1
person = Person('Mahin', 22)
person.age # access by dot
person.grow_up()
person.age
# __init__ vs __new__
################### 1 #################################
class Test:
""" cls: class Test itself. Not object of class Test. It class itself """
def __new__(cls, x):
print(f'__new__, cls={cls}')
# return super().__new__(cls)
def __init__(self, x):
print(f'__init__, self={self}')
self.x = x
test = Test(2)
test.x  # AttributeError: __new__ returned None, so __init__ never ran and test is None
# see the difference
############################### 2 ######################
class Test:
""" cls: class Test itself. Not object of class Test. It class itself """
def __new__(cls, x):
print(f'__new__, cls={cls}')
return super().__new__(cls)
def __init__(self, x):
print(f'__init__, self={self}')
self.x = x
test = Test(3)
test.x
######################## 3 ####################
class Test:
""" cls: class Test itself. Not object of class Test. It class itself """
def __new__(cls, x):
print(f'__new__, cls={cls}')
return super().__new__(cls)
def __init__(self, x):
print(f'__init__, self={self}')
self.x = x
def __repr__(self):
return 'Are you kidding me!!!'
test = Test(4)
test.x
# eveything is an object
type(1)
type(1).mro() # Method Resolution Order. Show Inheritance Hierarchy
type('name').mro()
type(print).mro()
'hi' == 'hi'
id('hi')
'hi'.__eq__('hi')
1 == 2
(1).__eq__(2)
# Duck Typing
def reverse(string):
out = str()
for i in string:
out = i + out
return out
print(reverse('hello'))
print(reverse(343))  # raises TypeError: an int is not iterable, so duck typing fails here
print(reverse(['a', 'b', 'cd']))  # 'cdba' -- unexpected behavior: the list elements are reversed and joined, not each string's characters
type(dict()).__dict__
```
|
github_jupyter
|
# Forecasting in statsmodels
This notebook describes forecasting using time series models in statsmodels.
**Note**: this notebook applies only to the state space model classes, which are:
- `sm.tsa.SARIMAX`
- `sm.tsa.UnobservedComponents`
- `sm.tsa.VARMAX`
- `sm.tsa.DynamicFactor`
```
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
macrodata = sm.datasets.macrodata.load_pandas().data
macrodata.index = pd.period_range('1959Q1', '2009Q3', freq='Q')
```
## Basic example
A simple example is to use an AR(1) model to forecast inflation. Before forecasting, let's take a look at the series:
```
endog = macrodata['infl']
endog.plot(figsize=(15, 5))
```
### Constructing and estimating the model
The next step is to formulate the econometric model that we want to use for forecasting. In this case, we will use an AR(1) model via the `SARIMAX` class in statsmodels.
After constructing the model, we need to estimate its parameters. This is done using the `fit` method. The `summary` method produces several convenient tables showing the results.
```
# Construct the model
mod = sm.tsa.SARIMAX(endog, order=(1, 0, 0), trend='c')
# Estimate the parameters
res = mod.fit()
print(res.summary())
```
### Forecasting
Out-of-sample forecasts are produced using the `forecast` or `get_forecast` methods from the results object.
The `forecast` method gives only point forecasts.
```
# The default is to get a one-step-ahead forecast:
print(res.forecast())
```
The `get_forecast` method is more general, and also allows constructing confidence intervals.
```
# Here we construct a more complete results object.
fcast_res1 = res.get_forecast()
# Most results are collected in the `summary_frame` attribute.
# Here we specify that we want a confidence level of 90%
print(fcast_res1.summary_frame(alpha=0.10))
```
The default confidence level is 95%, but this can be controlled by setting the `alpha` parameter, where the confidence level is defined as $(1 - \alpha) \times 100\%$. In the example above, we specified a confidence level of 90%, using `alpha=0.10`.
### Specifying the number of forecasts
Both of the functions `forecast` and `get_forecast` accept a single argument indicating how many forecasting steps are desired. One option for this argument is always to provide an integer describing the number of steps ahead you want.
```
print(res.forecast(steps=2))
fcast_res2 = res.get_forecast(steps=2)
# Note: since we did not specify the alpha parameter, the
# confidence level is at the default, 95%
print(fcast_res2.summary_frame())
```
However, **if your data included a Pandas index with a defined frequency** (see the section at the end on Indexes for more information), then you can alternatively specify the date through which you want forecasts to be produced:
```
print(res.forecast('2010Q2'))
fcast_res3 = res.get_forecast('2010Q2')
print(fcast_res3.summary_frame())
```
### Plotting the data, forecasts, and confidence intervals
Often it is useful to plot the data, the forecasts, and the confidence intervals. There are many ways to do this, but here's one example.
```
fig, ax = plt.subplots(figsize=(15, 5))
# Plot the data (here we are subsetting it to get a better look at the forecasts)
endog.loc['1999':].plot(ax=ax)
# Construct the forecasts
fcast = res.get_forecast('2011Q4').summary_frame()
fcast['mean'].plot(ax=ax, style='k--')
ax.fill_between(fcast.index, fcast['mean_ci_lower'], fcast['mean_ci_upper'], color='k', alpha=0.1);
```
### Note on what to expect from forecasts
The forecast above may not look very impressive, as it is almost a straight line. This is because this is a very simple, univariate forecasting model. Nonetheless, keep in mind that these simple forecasting models can be extremely competitive.
## Prediction vs Forecasting
The results objects also contain two methods that allow for both in-sample fitted values and out-of-sample forecasting. They are `predict` and `get_prediction`. The `predict` method only returns point predictions (similar to `forecast`), while the `get_prediction` method also returns additional results (similar to `get_forecast`).
In general, if your interest is out-of-sample forecasting, it is easier to stick to the `forecast` and `get_forecast` methods.
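For a quick sense of the difference, here is a minimal sketch reusing the `res` results object fitted above (the date range is an arbitrary choice): `get_prediction` can span in-sample and out-of-sample periods in one call, while `predict` returns only the point predictions.
```
# In-sample predictions and out-of-sample forecasts in a single call
pred = res.get_prediction(start='2005Q1', end='2010Q2')
print(pred.summary_frame(alpha=0.10).tail())
# `predict` over the same span returns only the point predictions
print(res.predict(start='2005Q1', end='2010Q2').tail())
```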
## Cross validation
**Note**: some of the functions used in this section were first introduced in statsmodels v0.11.0.
A common use case is to cross-validate forecasting methods by performing h-step-ahead forecasts recursively using the following process:
1. Fit model parameters on a training sample
2. Produce h-step-ahead forecasts from the end of that sample
3. Compare forecasts against test dataset to compute error rate
4. Expand the sample to include the next observation, and repeat
Economists sometimes call this a pseudo-out-of-sample forecast evaluation exercise, or time-series cross-validation.
### Example
We will conduct a very simple exercise of this sort using the inflation dataset above. The full dataset contains 203 observations, and for expositional purposes we'll use the first 80% as our training sample and only consider one-step-ahead forecasts.
A single iteration of the above procedure looks like the following:
```
# Step 1: fit model parameters w/ training sample
training_obs = int(len(endog) * 0.8)
training_endog = endog[:training_obs]
training_mod = sm.tsa.SARIMAX(
training_endog, order=(1, 0, 0), trend='c')
training_res = training_mod.fit()
# Print the estimated parameters
print(training_res.params)
# Step 2: produce one-step-ahead forecasts
fcast = training_res.forecast()
# Step 3: compute root mean square forecasting error
true = endog.reindex(fcast.index)
error = true - fcast
# Print out the results
print(pd.concat([true.rename('true'),
fcast.rename('forecast'),
error.rename('error')], axis=1))
```
To add on another observation, we can use the `append` or `extend` results methods. Either method can produce the same forecasts, but they differ in the other results that are available:
- `append` is the more complete method. It always stores results for all training observations, and it optionally allows refitting the model parameters given the new observations (note that the default is *not* to refit the parameters).
- `extend` is a faster method that may be useful if the training sample is very large. It *only* stores results for the new observations, and it does not allow refitting the model parameters (i.e. you have to use the parameters estimated on the previous sample).
If your training sample is relatively small (less than a few thousand observations, for example) or if you want to compute the best possible forecasts, then you should use the `append` method. However, if that method is infeasible (for example, because you have a very large training sample) or if you are okay with slightly suboptimal forecasts (because the parameter estimates will be slightly stale), then you can consider the `extend` method.
A second iteration, using the `append` method and refitting the parameters, would go as follows (note again that the default for `append` does not refit the parameters, but we have overridden that with the `refit=True` argument):
```
# Step 1: append a new observation to the sample and refit the parameters
append_res = training_res.append(endog[training_obs:training_obs + 1], refit=True)
# Print the re-estimated parameters
print(append_res.params)
```
Notice that these estimated parameters are slightly different than those we originally estimated. With the new results object, `append_res`, we can compute forecasts starting from one observation further than the previous call:
```
# Step 2: produce one-step-ahead forecasts
fcast = append_res.forecast()
# Step 3: compute root mean square forecasting error
true = endog.reindex(fcast.index)
error = true - fcast
# Print out the results
print(pd.concat([true.rename('true'),
fcast.rename('forecast'),
error.rename('error')], axis=1))
```
Putting it all together, we can perform the recursive forecast evaluation exercise as follows:
```
# Setup forecasts
nforecasts = 3
forecasts = {}
# Get the number of initial training observations
nobs = len(endog)
n_init_training = int(nobs * 0.8)
# Create model for initial training sample, fit parameters
init_training_endog = endog.iloc[:n_init_training]
mod = sm.tsa.SARIMAX(init_training_endog, order=(1, 0, 0), trend='c')
res = mod.fit()
# Save initial forecast
forecasts[init_training_endog.index[-1]] = res.forecast(steps=nforecasts)
# Step through the rest of the sample
for t in range(n_init_training, nobs):
# Update the results by appending the next observation
updated_endog = endog.iloc[t:t+1]
res = res.append(updated_endog, refit=False)
# Save the new set of forecasts
forecasts[updated_endog.index[0]] = res.forecast(steps=nforecasts)
# Combine all forecasts into a dataframe
forecasts = pd.concat(forecasts, axis=1)
print(forecasts.iloc[:5, :5])
```
We now have a set of three forecasts made at each point in time from 1999Q2 through 2009Q3. We can construct the forecast errors by subtracting each forecast from the actual value of `endog` at that point.
```
# Construct the forecast errors
forecast_errors = forecasts.apply(lambda column: endog - column).reindex(forecasts.index)
print(forecast_errors.iloc[:5, :5])
```
To evaluate our forecasts, we often want to look at a summary value like the root mean square error. Here we can compute that for each horizon by first flattening the forecast errors so that they are indexed by horizon and then computing the root mean square error for each horizon.
```
# Reindex the forecasts by horizon rather than by date
def flatten(column):
return column.dropna().reset_index(drop=True)
flattened = forecast_errors.apply(flatten)
flattened.index = (flattened.index + 1).rename('horizon')
print(flattened.iloc[:3, :5])
# Compute the root mean square error
rmse = (flattened**2).mean(axis=1)**0.5
print(rmse)
```
#### Using `extend`
We can check that we get similar forecasts if we instead use the `extend` method, but that they are not exactly the same as when we use `append` with the `refit=True` argument. This is because `extend` does not re-estimate the parameters given the new observation.
```
# Setup forecasts
nforecasts = 3
forecasts = {}
# Get the number of initial training observations
nobs = len(endog)
n_init_training = int(nobs * 0.8)
# Create model for initial training sample, fit parameters
init_training_endog = endog.iloc[:n_init_training]
mod = sm.tsa.SARIMAX(init_training_endog, order=(1, 0, 0), trend='c')
res = mod.fit()
# Save initial forecast
forecasts[init_training_endog.index[-1]] = res.forecast(steps=nforecasts)
# Step through the rest of the sample
for t in range(n_init_training, nobs):
# Update the results by appending the next observation
updated_endog = endog.iloc[t:t+1]
res = res.extend(updated_endog)
# Save the new set of forecasts
forecasts[updated_endog.index[0]] = res.forecast(steps=nforecasts)
# Combine all forecasts into a dataframe
forecasts = pd.concat(forecasts, axis=1)
print(forecasts.iloc[:5, :5])
# Construct the forecast errors
forecast_errors = forecasts.apply(lambda column: endog - column).reindex(forecasts.index)
print(forecast_errors.iloc[:5, :5])
# Reindex the forecasts by horizon rather than by date
def flatten(column):
return column.dropna().reset_index(drop=True)
flattened = forecast_errors.apply(flatten)
flattened.index = (flattened.index + 1).rename('horizon')
print(flattened.iloc[:3, :5])
# Compute the root mean square error
rmse = (flattened**2).mean(axis=1)**0.5
print(rmse)
```
By not re-estimating the parameters, our forecasts are slightly worse (the root mean square error is higher at each horizon). However, the process is faster, even with only 200 datapoints. Using the `%%timeit` cell magic on the cells above, we found a runtime of 570ms using `extend` versus 1.7s using `append` with `refit=True`. (Note that using `extend` is also faster than using `append` with `refit=False`).
## Indexes
Throughout this notebook, we have been making use of Pandas date indexes with an associated frequency. As you can see, this index marks our data as being at a quarterly frequency, between 1959Q1 and 2009Q3.
```
print(endog.index)
```
In most cases, if your data has an associated date/time index with a defined frequency (like quarterly, monthly, etc.), then it is best to make sure your data is a Pandas series with the appropriate index. Here are three examples of this:
```
# Annual frequency, using a PeriodIndex
index = pd.period_range(start='2000', periods=4, freq='A')
endog1 = pd.Series([1, 2, 3, 4], index=index)
print(endog1.index)
# Quarterly frequency, using a DatetimeIndex
index = pd.date_range(start='2000', periods=4, freq='QS')
endog2 = pd.Series([1, 2, 3, 4], index=index)
print(endog2.index)
# Monthly frequency, using a DatetimeIndex
index = pd.date_range(start='2000', periods=4, freq='M')
endog3 = pd.Series([1, 2, 3, 4], index=index)
print(endog3.index)
```
In fact, if your data has an associated date/time index, it is best to use that even if it does not have a defined frequency. An example of that kind of index is as follows - notice that it has `freq=None`:
```
index = pd.DatetimeIndex([
'2000-01-01 10:08am', '2000-01-01 11:32am',
'2000-01-01 5:32pm', '2000-01-02 6:15am'])
endog4 = pd.Series([0.2, 0.5, -0.1, 0.1], index=index)
print(endog4.index)
```
You can still pass this data to statsmodels' model classes, but you will get the following warning, that no frequency data was found:
```
mod = sm.tsa.SARIMAX(endog4)
res = mod.fit()
```
What this means is that you cannot specify forecasting steps by dates, and the output of the `forecast` and `get_forecast` methods will not have associated dates. The reason is that without a given frequency, there is no way to determine what date each forecast should be assigned to. In the example above, there is no pattern to the date/time stamps of the index, so there is no way to determine what the next date/time should be (should it be in the morning of 2000-01-02? the afternoon? or maybe not until 2000-01-03?).
For example, if we forecast one-step-ahead:
```
res.forecast(1)
```
The index associated with the new forecast is `4`, because if the given data had an integer index, that would be the next value. A warning is given letting the user know that the index is not a date/time index.
If we try to specify the steps of the forecast using a date, we will get the following exception:
KeyError: 'The `end` argument could not be matched to a location related to the index of the data.'
```
# Here we'll catch the exception to prevent printing too much of
# the exception trace output in this notebook
try:
res.forecast('2000-01-03')
except KeyError as e:
print(e)
```
Ultimately there is nothing wrong with using data that does not have an associated date/time frequency, or even using data that has no index at all, like a Numpy array. However, if you can use a Pandas series with an associated frequency, you'll have more options for specifying your forecasts and get back results with a more useful index.
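As a minimal sketch of that last point (reusing the quarterly `endog` series from above, converted to a plain NumPy array purely for illustration), forecasting still works, just with an integer index and no dates:
```
# Fit the same AR(1) model on a plain NumPy array (no index at all)
mod_np = sm.tsa.SARIMAX(endog.values, order=(1, 0, 0), trend='c')
res_np = mod_np.fit()
# Forecasts come back as a plain array; steps must be given as an integer
print(res_np.forecast(steps=2))
```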
|
github_jupyter
|
### SketchGraphs demo
In this notebook, we'll first go through various ways of representing and inspecting sketches in SketchGraphs. We'll then take a look at using Onshape's API in order to solve sketch constraints.
```
%load_ext autoreload
%autoreload 2
import os
import json
from copy import deepcopy
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
# cd to top-level directory
if os.path.isdir('../sketchgraphs/'):
os.chdir('../')
import sketchgraphs.data as datalib
from sketchgraphs.data import flat_array
import sketchgraphs.onshape.call as onshape_call
```
Let's first load in some sketch construction sequences. In this example, we'll be using the [validation set](https://sketchgraphs.cs.princeton.edu/sequence/sg_t16_validation.npy) (see [documentation](https://princetonlips.github.io/SketchGraphs/data) for details). This notebook assumes the data file is already downloaded and located in a directory `sequence_data`.
```
seq_data = flat_array.load_dictionary_flat('sequence_data/sg_t16_validation.npy')
seq_data['sequences']
```
This file has 315,228 sequences. Let's take a look at some of the operations in one of the sequences.
```
seq = seq_data['sequences'][1327]
print(*seq[:20], sep='\n')
```
We see that a construction sequence is a list of `NodeOp` and `EdgeOp` instances denoting the addition of primitives (also referred to as entities) and constraints, respectively.
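As a quick check, we can count how many of each kind of operation the sequence contains (a small sketch that relies only on the class names shown in the printout above):
```
# Count primitive-creation (NodeOp) vs constraint-creation (EdgeOp) operations
num_node_ops = sum(type(op).__name__ == 'NodeOp' for op in seq)
num_edge_ops = len(seq) - num_node_ops
print(f'{num_node_ops} NodeOps, {num_edge_ops} EdgeOps')
```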
Now let's instantiate a `Sketch` object from this sequence and render it.
```
sketch = datalib.sketch_from_sequence(seq)
datalib.render_sketch(sketch);
```
We can also render the sketch with a hand-drawn appearance using matplotlib's xkcd drawing mode.
```
datalib.render_sketch(sketch, hand_drawn=True);
```
Next, we'll build a graph representation of the sketch and visualize it with pygraphviz.
```
G = datalib.pgvgraph_from_sequence(seq)
datalib.render_graph(G, '/tmp/my_graph.png')
img = plt.imread('/tmp/my_graph.png')
fig = plt.figure(dpi=500)
plt.imshow(img[:, 500:1700])
plt.axis('off');
```
The full graph image for this example is large so we only display a portion of it above. Node labels that begin with `SN` are _subnodes_, specifying a point on some primitive (e.g., an endpoint of a line segment).
### Solving
We'll now take a look at how we can interact with Onshape's API in order to pass sketches to a geometric constraint solver. Various command line utilities for the API are defined in `sketchgraphs/onshape/call.py`.
Onshape developer credentials are required for this. Visit https://princetonlips.github.io/SketchGraphs/onshape_setup for directions. The default path for credentials is `sketchgraphs/onshape/creds/creds.json`.
We need to specify the URL of the Onshape document/PartStudio we'll be using. You should set the following `url` for your own document accordingly.
```
url = R'https://cad.onshape.com/documents/6f6d14f8facf0bba02184e88/w/66a5db71489c81f4893101ed/e/120c56983451157d26a7102d'
```
Let's test out Onshape's solver. We'll first make a copy of our sketch, remove its constraints, and manually add noise to the entity positions within Onshape's GUI.
```
no_constraint_sketch = deepcopy(sketch)
no_constraint_sketch.constraints.clear()
onshape_call.add_feature(url, no_constraint_sketch.to_dict(), 'No_Constraints_Sketch')
```
Before running the next code block, manually "mess up" the entities a bit in the GUI, i.e., drag the entities in order to leave the original constraints unsatisfied. The more drastic the change, the more difficult it will be for the solver to find a solution.
Now we retrieve the noisy sketch.
```
unsolved_sketch_info = onshape_call.get_info(url, 'No_Constraints_Sketch')
unsolved_sketch = datalib.Sketch.from_info(unsolved_sketch_info['geomEntities'])
datalib.render_sketch(unsolved_sketch);
```
Next, let's add the constraints back in and (attempt to) solve them.
```
with_constraints_sketch = deepcopy(unsolved_sketch)
with_constraints_sketch.constraints = sketch.constraints
onshape_call.add_feature(url, with_constraints_sketch.to_dict(), 'With_Constraints_Sketch')
solved_sketch_info = onshape_call.get_info(url, 'With_Constraints_Sketch')
solved_sketch = datalib.Sketch.from_info(solved_sketch_info['geomEntities'])
datalib.render_sketch(solved_sketch);
```
|
github_jupyter
|
```
import sqlite3
import pandas as pd
def run_query(query):
with sqlite3.connect('AreaOvitrap.db') as conn:
return pd.read_sql(query,conn)
def run_command(command):
with sqlite3.connect('AreaOvitrap.db') as conn:
conn.execute('PRAGMA foreign_keys = ON;')
conn.isolation_level = None
conn.execute(command)
def show_tables():
query = '''
SELECT name,
type
        FROM sqlite_master
        WHERE type IN ("table","view");
'''
return run_query(query)
def district_code_generator(area):
num_range = len(district_only[district_only['area_id'] == area])
for i in range(1,num_range+1):
yield area + "D{:02d}".format(i)
def location_code_generator(district_id):
num_range = len(locations[locations['District_id'] == district_id])
for i in range(1,num_range+1):
yield district_id + "{:02d}".format(i)
def match_district(area_districts):
for index,value in enumerate(locations['Eng']):
for key, item in area_districts.items():
if value in item:
locations.loc[index,'District'] = key
return locations
data = pd.read_csv("Area_Ovitrap_Index_Jan2008-Jul2018.csv")
all_districts = {
'HK':{
'Central Western':{'Central and Admiralty','Sai Wan','Sheung Wan and Sai Ying Pun'},
'Eastern':{'Chai Wan West','Shau Kei Wan & Sai Wan Ho','North Point'},
'Southern':{'Aberdeen and Ap Lei Chau','Pokfulam','Deep Water Bay & Repulse Bay'},
'Wanchai':{'Tin Hau','Wan Chai North','Happy Valley'}
},
'KL':{
'Yau Tsim':{'Tsim Sha Tsui','Tsim Sha Tsui East'},
'Mong Kok':{'Mong Kok'},
'Sham Shui Po':{'Cheung Sha Wan','Lai Chi Kok','Sham Shui Po East'},
'Kowloon City':{'Ho Man Tin','Kowloon City North','Hung Hom','Lok Fu West','Kai Tak North'},
'Wong Tai Sin':{'Wong Tai Sin Central','Diamond Hill','Ngau Chi Wan'},
'Kwun Tong':{'Kwun Tong Central','Lam Tin','Yau Tong','Kowloon Bay'}
},
'NT':{
'Sai Kung':{'Tseung Kwan O South','Tseung Kwan O North','Sai Kung Town'},
'Sha Tin':{'Tai Wai','Yuen Chau Kok','Ma On Shan','Lek Yuen','Wo Che'},
'Tai Po':{'Tai Po'},
'North':{'Fanling','Sheung Shui'},
'Yuen Long':{'Tin Shui Wai','Yuen Kong','Yuen Long Town'},
'Tuen Mun':{'Tuen Mun North','Tuen Mun South','Tuen Mun West','So Kwun Wat'},
'Tsuen Wan':{'Tsuen Wan Town','Tsuen Wan West','Ma Wan','Sheung Kwai Chung'},
'Kwai Tsing':{'Kwai Chung','Lai King','Tsing Yi North','Tsing Yi South'}
},
'IL':{
'Islands':{'Cheung Chau','Tung Chung'}
}
}
# matching the Chinese and English names of the districts into variable "translations"
chi_district = ['中西區','東區','南區','灣仔區','油尖區','旺角區','深水埗區','九龍城區','黃大仙區','觀塘區','西貢區','沙田區','大埔區','北區','元朗區','屯門區','荃灣區','葵青區','離島區']
eng_district = []
for area, district in all_districts.items():
for key, _ in district.items():
eng_district.append(key)
translations = list(zip(eng_district,chi_district))
# group the districts into their corresponding area
area_district = []
for area, district in all_districts.items():
for key,value in district.items():
area_district.append([area,key])
for index, value in enumerate(translations):
area_district[index].append(value[1])
area_district
# create a pandas dataframe for the data of all districts
district_only = pd.DataFrame(area_district,columns=['area_id','eng_district','chi_district'])
hk_code = district_code_generator('HK') # generate ID for main area "Hong Kong Island"
kl_code = district_code_generator('KL') # generate ID for main area "Kowloon"
nt_code = district_code_generator('NT') # generate ID for main area "New Territories"
il_code = district_code_generator('IL') # generate ID for main area "Islands"
district_code = [hk_code,kl_code,nt_code,il_code]
for index,value in enumerate(district_only['area_id']):
for i, area in enumerate(['HK','KL','NT','IL']):
if value == area:
district_only.loc[index,'District_id'] = next(district_code[i])
cols = district_only.columns.tolist()
cols = cols[-1:]+cols[:-1]
district_only = district_only[cols]
area_dict = {'area_id':['HK','KL','IL','NT'],
'eng_area':['Hong Kong Island','Kowloon','Islands','New Territories'],
'chi_area':['香港島','九龍','離島','新界']}
t_area = '''
CREATE TABLE IF NOT EXISTS area(
area_id TEXT PRIMARY KEY,
eng_area TEXT,
chi_area TEXT
)
'''
run_command(t_area)
area = pd.DataFrame(area_dict)
with sqlite3.connect("AreaOviTrap.db") as conn:
area.to_sql('area',conn,if_exists='append',index=False)
run_query("SELECT * FROM area")
t_district = '''
CREATE TABLE IF NOT EXISTS district(
district_id TEXT PRIMARY KEY,
area_id TEXT,
eng_district TEXT,
chi_district TEXT,
FOREIGN KEY (area_id) REFERENCES area(area_id)
)
'''
run_command(t_district)
with sqlite3.connect("AreaOviTrap.db") as conn:
district_only.to_sql('district',conn,if_exists='append',index=False)
run_query("SELECT * FROM district")
# extracting unique location from the data
eng_loc = data['Eng'].unique()
chi_loc = data['Chi'].unique()
locations = pd.DataFrame({'District':'','Eng':eng_loc,'Chi':chi_loc})
hk_districts = all_districts['HK']
kl_districts = all_districts['KL']
nt_districts = all_districts['NT']
il_districts = all_districts['IL']
four_district = [hk_districts,kl_districts,nt_districts,il_districts]
# match each location with its corresponding district
for each in four_district:
locations = match_district(each)
# match each location with its corresponding district_id
for index, value in enumerate(locations['District']):
for i, district in enumerate(district_only['eng_district']):
if value == district:
locations.loc[index,'District_id'] = district_only.loc[i,'District_id']
# generate Location_id by using location_code_generator
unique_district_id = locations['District_id'].unique().tolist()
for each in unique_district_id:
code = location_code_generator(each)
for index,value in enumerate(locations['District_id']):
if value == each:
locations.loc[index,'Location_id'] = next(code)
locations.head()
for index,value in enumerate(data['Eng']):
for i, name in enumerate(locations['Eng']):
if value == name:
data.loc[index,'District_id'] = locations.loc[i,'District_id']
data.loc[index,'Location_id'] = locations.loc[i,'Location_id']
with sqlite3.connect('AreaOvitrap.db') as conn:
data.to_sql('origin',conn,index=False)
data.head(20)
t_location = '''
CREATE TABLE IF NOT EXISTS location(
location_id TEXT PRIMARY KEY,
eng_location TEXT,
chi_location TEXT,
    district_id TEXT,
FOREIGN KEY (district_id) REFERENCES district(district_id)
)
'''
location_data = '''
INSERT OR IGNORE INTO location
SELECT
DISTINCT Location_id,
Eng,
Chi,
District_id
FROM origin
'''
run_command(t_location)
run_command(location_data)
t_aoi = '''
CREATE TABLE IF NOT EXISTS area_ovitrap_index(
ID INTEGER PRIMARY KEY AUTOINCREMENT,
location_id TEXT,
date TEXT,
AOI FLOAT,
Classification INTEGER,
FOREIGN KEY (location_id) REFERENCES location(location_id)
)
'''
aoi_data = '''
INSERT OR IGNORE INTO area_ovitrap_index (location_id, date, AOI, Classification)
SELECT
Location_id,
DATE,
AOI,
Classification
FROM origin
'''
run_command(t_aoi)
run_command(aoi_data)
run_command("DROP TABLE IF EXISTS origin")
show_tables()
```
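With the normalized schema in place, the tables can be joined back together for analysis. Here is a minimal sketch using the `run_query` helper defined above (the particular aggregation is just an illustrative example):
```
# Average Area Ovitrap Index per district, highest first
query = '''
SELECT d.eng_district,
       ROUND(AVG(a.AOI), 2) AS avg_aoi
  FROM area_ovitrap_index a
  JOIN location l ON a.location_id = l.location_id
  JOIN district d ON l.district_id = d.district_id
 GROUP BY d.eng_district
 ORDER BY avg_aoi DESC;
'''
run_query(query).head()
```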
|
github_jupyter
|
```
%matplotlib inline
from pyvista import set_plot_theme
set_plot_theme('document')
```
Slicing {#slice_example}
=======
Extract thin planar slices from a volume.
```
# sphinx_gallery_thumbnail_number = 2
import pyvista as pv
from pyvista import examples
import matplotlib.pyplot as plt
import numpy as np
```
PyVista meshes have several slicing filters bound directly to all
datasets. These filters allow you to slice through a volumetric dataset
to extract and view sections through the volume of data.
One of the most common slicing filters used in PyVista is the
`pyvista.DataSetFilters.slice_orthogonal`{.interpreted-text role="func"}
filter which creates three orthogonal slices through the dataset
parallel to the three Cartesian planes. For example, let\'s slice
through the sample geostatistical training image volume. First, load up
the volume and preview it:
```
mesh = examples.load_channels()
# define a categorical colormap
cmap = plt.cm.get_cmap("viridis", 4)
mesh.plot(cmap=cmap)
```
Note that this dataset is a 3D volume and there might be regions within
this volume that we would like to inspect. We can create slices through
the mesh to gain further insight about the internals of the volume.
```
slices = mesh.slice_orthogonal()
slices.plot(cmap=cmap)
```
The orthogonal slices can be easily translated throughout the volume:
```
slices = mesh.slice_orthogonal(x=20, y=20, z=30)
slices.plot(cmap=cmap)
```
We can also add just a single slice of the volume by specifying the
origin and normal of the slicing plane with the
`pyvista.DataSetFilters.slice`{.interpreted-text role="func"} filter:
```
# Single slice - origin defaults to the center of the mesh
single_slice = mesh.slice(normal=[1, 1, 0])
p = pv.Plotter()
p.add_mesh(mesh.outline(), color="k")
p.add_mesh(single_slice, cmap=cmap)
p.show()
```
Adding slicing planes uniformly across an axial direction can also be
automated with the
`pyvista.DataSetFilters.slice_along_axis`{.interpreted-text role="func"}
filter:
```
slices = mesh.slice_along_axis(n=7, axis="y")
slices.plot(cmap=cmap)
```
Slice Along Line
================
We can also slice a dataset along a `pyvista.Spline`{.interpreted-text
role="func"} or `pyvista.Line`{.interpreted-text role="func"} using the
`DataSetFilters.slice_along_line`{.interpreted-text role="func"} filter.
First, define a line source through the dataset of interest. Please note
that this type of slicing is computationally expensive and might take a
while if there are a lot of points in the line - try to keep the
resolution of the line low.
```
model = examples.load_channels()
def path(y):
"""Equation: x = a(y-h)^2 + k"""
a = 110.0 / 160.0 ** 2
x = a * y ** 2 + 0.0
return x, y
x, y = path(np.arange(model.bounds[2], model.bounds[3], 15.0))
zo = np.linspace(9.0, 11.0, num=len(y))
points = np.c_[x, y, zo]
spline = pv.Spline(points, 15)
spline
```
Then run the filter
```
slc = model.slice_along_line(spline)
slc
p = pv.Plotter()
p.add_mesh(slc, cmap=cmap)
p.add_mesh(model.outline())
p.show(cpos=[1, -1, 1])
```
Multiple Slices in Vector Direction
===================================
Slice a mesh along a vector direction perpendicularly.
```
mesh = examples.download_brain()
# Create vector
vec = np.random.rand(3)
# Normalize the vector
normal = vec / np.linalg.norm(vec)
# Make points along that vector for the extent of your slices
a = mesh.center + normal * mesh.length / 3.0
b = mesh.center - normal * mesh.length / 3.0
# Define the line/points for the slices
n_slices = 5
line = pv.Line(a, b, n_slices)
# Generate all of the slices
slices = pv.MultiBlock()
for point in line.points:
slices.append(mesh.slice(normal=normal, origin=point))
p = pv.Plotter()
p.add_mesh(mesh.outline(), color="k")
p.add_mesh(slices, opacity=0.75)
p.add_mesh(line, color="red", line_width=5)
p.show()
```
Slice At Different Bearings
===========================
From
[pyvista-support\#23](https://github.com/pyvista/pyvista-support/issues/23)
An example of how to get many slices at different bearings all centered
around a user-chosen location.
Create a point to orient slices around
```
ranges = np.array(model.bounds).reshape(-1, 2).ptp(axis=1)
point = np.array(model.center) - ranges*0.25
```
Now generate a few normal vectors to rotate a slice around the z-axis.
Use the equation of a circle, since the rotation is about the Z-axis.
```
increment = np.pi/6.
# use a container to hold all the slices
slices = pv.MultiBlock() # treat like a dictionary/list
for theta in np.arange(0, np.pi, increment):
normal = np.array([np.cos(theta), np.sin(theta), 0.0]).dot(np.pi/2.)
name = f'Bearing: {np.rad2deg(theta):.2f}'
slices[name] = model.slice(origin=point, normal=normal)
slices
```
And now display it!
```
p = pv.Plotter()
p.add_mesh(slices, cmap=cmap)
p.add_mesh(model.outline())
p.show()
```
|
github_jupyter
|
# Exploring Graph Datasets in Jupyter
Jupyter notebooks are perfect environments for both carrying out and capturing exploratory work. Even on moderately sized datasets they provide an interactive environment that can drive both local and remote computational tasks.
In this example, we will load a dataset using pandas, visualise it using the Graphistry graph service, and import it into NetworkX so we can examine the data and run analytics on the graph.
### Python Package Network
Our raw python module requirements data comes in the form of a CSV, which we load with pandas to create a DataFrame. Each python module (Node) is related to another via a version number (Edge).
```
import pandas
rawgraph = pandas.read_csv('./requirements.csv')
```
We also print out the first 15 rows of the data so we can see what it contains.
```
print('Number of Entries', rawgraph.count())
rawgraph.head(15)
```
We notice straight away that our dataset has some NaN values for packages that have no requirements; this is a shortcoming of our dataset, and we want to prevent those NaNs from propagating.
There are a few ways to handle this, depending on whether we want to preserve the nodes in the graph or not. In this example we'll just drop that data using pandas.
```
rawgraph.dropna(inplace=True)
rawgraph.head(15)
```
## Visualizing the Graph
Efficient visualization of anything but small graphs can be challenging in a local python environment. There are multiple ways around this, but here we'll use a library and cloud-based service called Graphistry.
First we'll start up Graphistry using our API key in order to access the cloud-based rendering service.
```
from os import environ
from dotenv import load_dotenv, find_dotenv
import graphistry
load_dotenv(find_dotenv())
graphistry.register(key=environ.get("GRAPHISTRY_API_KEY"))
```
Next we'll plot the raw graph. Graphistry provides an awesome interactive plot widget in Jupyter that of course allows us to interact with the graph itself, and offers plenty of extra options. If you have time to play, check out in particular:
- Full screen mode
- Layout settings (via the cogs icon)
- Histograms and Data Table
- The Workbook, which launches a cloud-based instance of Graphistry outside of Jupyter
- Visual Clustering!
```
plotter = graphistry.bind(source="requirement", destination="package_name")
plotter.plot(rawgraph)
```
Next we'll load our raw graph data into a NetworkX graph and run some analytics on the network. This dataset is heavily weighted towards packages with few requirements. Note: We are loading this as a directed graph (DiGraph), which preserves the direction of dependencies.
```
import networkx as nx
G = nx.from_pandas_dataframe(rawgraph, 'package_name', 'requirement',
edge_attr='package_version', create_using=nx.DiGraph())
print('Nodes:', G.number_of_nodes())
print('Edges:', G.number_of_edges())
import numpy as np
import matplotlib.pyplot as plt
degrees = np.array(nx.degree_histogram(G))
plt.bar(range(1,20), degrees[1:20])
plt.xlabel('# requirements')
plt.ylabel('# packages')
plt.title('Degree - Dependencies per Package')
plt.grid(True)
plt.show()
```
We can see this network is dominated by packages with a single requirement, accounting for 37% of the nodes.
```
print('% of packages with only 1 requirement',
'{:.1f}%'.format(100 * degrees[1] / G.number_of_nodes()), ',', degrees[1], 'packages total')
highestDegree = len(degrees) - 1
nodesByDegree = G.degree()
mostConnectedNode = [n for n in nodesByDegree if nodesByDegree[n] == highestDegree][0]
print(mostConnectedNode)
print('The package with most requirements is >',mostConnectedNode,
'< having a total of', len(degrees), 'first order dependencies')
```
However, we are looking at all connections to `requests` in this directed graph, so this is a combination of its dependencies and the packages that depend on it. We can see how that is split by looking at the in- and out-degree.
```
print('Dependencies:', G.out_degree([mostConnectedNode])[mostConnectedNode])
print('Dependants:', G.in_degree([mostConnectedNode])[mostConnectedNode])
```
So rather than having a lot of requirements, we've discovered that `requests` actually has few requirements and is instead a heavily used module. We can take a closer look by extracting a sub-graph of `requests`' immediate neighbours and visualising it.
```
R = G.subgraph([mostConnectedNode]+G.neighbors(mostConnectedNode)+G.predecessors(mostConnectedNode))
graphistry.bind(source="requirement", destination="package_name").plot(R)
```
Note the visualization above was created by plotting the sub-graph and then using Graphistry's Visual Clustering.
|
github_jupyter
|
```
%matplotlib inline
```
# Early stopping of Gradient Boosting
Gradient boosting is an ensembling technique where several weak learners
(regression trees) are combined to yield a powerful single model, in an
iterative fashion.
Early stopping support in Gradient Boosting enables us to find the least number
of iterations which is sufficient to build a model that generalizes well to
unseen data.
The concept of early stopping is simple. We specify a ``validation_fraction``
which denotes the fraction of the whole dataset that will be kept aside from
training to assess the validation loss of the model. The gradient boosting
model is trained using the training set and evaluated using the validation set.
When each additional stage of regression tree is added, the validation set is
used to score the model. This is continued until the scores of the model in
the last ``n_iter_no_change`` stages do not improve by at least `tol`. After
that the model is considered to have converged and further addition of stages
is "stopped early".
The number of stages of the final model is available at the attribute
``n_estimators_``.
This example illustrates how early stopping can be used in the
:class:`sklearn.ensemble.GradientBoostingClassifier` model to achieve
almost the same accuracy as compared to a model built without early stopping
using many fewer estimators. This can significantly reduce training time,
memory usage and prediction latency.
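Before the full benchmark below, here is a minimal sketch of the mechanism in isolation (the dataset and parameter values are illustrative assumptions): fit with early stopping enabled and inspect how many stages were actually used.
```
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_digits(return_X_y=True)
clf = GradientBoostingClassifier(n_estimators=500, validation_fraction=0.2,
                                 n_iter_no_change=5, tol=0.01, random_state=0)
clf.fit(X, y)
# Number of stages actually fitted before early stopping kicked in
print(clf.n_estimators_)
```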
```
# Authors: Vighnesh Birodkar <[email protected]>
# Raghav RV <[email protected]>
# License: BSD 3 clause
import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn import ensemble
from sklearn import datasets
from sklearn.model_selection import train_test_split
print(__doc__)
data_list = [datasets.load_iris(), datasets.load_digits()]
data_list = [(d.data, d.target) for d in data_list]
data_list += [datasets.make_hastie_10_2()]
names = ['Iris Data', 'Digits Data', 'Hastie Data']
n_gb = []
score_gb = []
time_gb = []
n_gbes = []
score_gbes = []
time_gbes = []
n_estimators = 500
for X, y in data_list:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
random_state=0)
    # We specify that if the scores don't improve by at least 0.01 for the
    # last 5 stages, we stop fitting additional stages
gbes = ensemble.GradientBoostingClassifier(n_estimators=n_estimators,
validation_fraction=0.2,
n_iter_no_change=5, tol=0.01,
random_state=0)
gb = ensemble.GradientBoostingClassifier(n_estimators=n_estimators,
random_state=0)
start = time.time()
gb.fit(X_train, y_train)
time_gb.append(time.time() - start)
start = time.time()
gbes.fit(X_train, y_train)
time_gbes.append(time.time() - start)
score_gb.append(gb.score(X_test, y_test))
score_gbes.append(gbes.score(X_test, y_test))
n_gb.append(gb.n_estimators_)
n_gbes.append(gbes.n_estimators_)
bar_width = 0.2
n = len(data_list)
index = np.arange(0, n * bar_width, bar_width) * 2.5
index = index[0:n]
```
Compare scores with and without early stopping
----------------------------------------------
```
plt.figure(figsize=(9, 5))
bar1 = plt.bar(index, score_gb, bar_width, label='Without early stopping',
color='crimson')
bar2 = plt.bar(index + bar_width, score_gbes, bar_width,
label='With early stopping', color='coral')
plt.xticks(index + bar_width, names)
plt.yticks(np.arange(0, 1.3, 0.1))
def autolabel(rects, n_estimators):
"""
Attach a text label above each bar displaying n_estimators of each model
"""
for i, rect in enumerate(rects):
plt.text(rect.get_x() + rect.get_width() / 2.,
1.05 * rect.get_height(), 'n_est=%d' % n_estimators[i],
ha='center', va='bottom')
autolabel(bar1, n_gb)
autolabel(bar2, n_gbes)
plt.ylim([0, 1.3])
plt.legend(loc='best')
plt.grid(True)
plt.xlabel('Datasets')
plt.ylabel('Test score')
plt.show()
```
Compare fit times with and without early stopping
-------------------------------------------------
```
plt.figure(figsize=(9, 5))
bar1 = plt.bar(index, time_gb, bar_width, label='Without early stopping',
color='crimson')
bar2 = plt.bar(index + bar_width, time_gbes, bar_width,
label='With early stopping', color='coral')
max_y = np.amax(np.maximum(time_gb, time_gbes))
plt.xticks(index + bar_width, names)
plt.yticks(np.linspace(0, 1.3 * max_y, 13))
autolabel(bar1, n_gb)
autolabel(bar2, n_gbes)
plt.ylim([0, 1.3 * max_y])
plt.legend(loc='best')
plt.grid(True)
plt.xlabel('Datasets')
plt.ylabel('Fit Time')
plt.show()
```
|
github_jupyter
|
# Problems
```
import math
import pandas as pd
from sklearn import preprocessing
from sklearn.neighbors import NearestNeighbors, KNeighborsClassifier, KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.metrics import accuracy_score, mean_squared_error
from dmutils import classification_summary
from dmutils import regression_summary
```
**1. Calculating Distance with Categorical Predictors.**
This exercise with a tiny dataset illustrates the calculation of Euclidean distance, and the creation of binary
dummies. The online education company Statistics.com segments its customers and prospects into three main categories: IT professionals (IT), statisticians (Stat), and other (Other). It also tracks, for each customer, the number of years since first contact (years). Consider the following customers; information about whether they have taken a course or not (the outcome to be predicted) is included:
Customer 1: Stat, 1 year, did not take course
Customer 2: Other, 1.1 year, took course
**a.** Consider now the following new prospect:
Prospect 1: IT, 1 year
Using the above information on the two customers and one prospect, create one dataset for all three with the categorical predictor variable transformed into 2 binaries, and a similar dataset with the categorical predictor variable transformed into 3 binaries.
```
# dataset for all three customers with the categorical predictor (category)
# transformed into 2 binaries
tiny_two_cat_dummies_df = pd.DataFrame({"IT": [0, 0, 1], "Stat": [1, 0, 0],
"years_since_first_contact": [1, 1.1, 1],
"course": [0, 1, None]})
tiny_two_cat_dummies_df
# dataset for all three customers with the categorical predictor (category)
# transformed into 3 binaries
tiny_all_cat_dummies_df = pd.DataFrame({"IT": [0, 0, 1], "Stat": [1, 0, 0],
"Other": [0, 1, 0], "years_since_first_contact": [1, 1.1, 1],
"course": [0, 1, None]})
tiny_all_cat_dummies_df
```
**b.** For each derived dataset, calculate the Euclidean distance between the prospect and each of the other two customers. (Note: While it is typical to normalize data for k-NN, this is not an iron-clad rule and you may proceed here without normalization.)
- Two categorical dummies (IT/Stat):
```
predictors = ["IT", "Stat", "years_since_first_contact"]
pd.DataFrame(euclidean_distances(tiny_two_cat_dummies_df[predictors],
tiny_two_cat_dummies_df[predictors]),
columns=["customer_1", "customer_2", "customer_3"],
index=["customer_1", "customer_2", "customer_3"])
```
- Three categorical dummies (IT/Stat/Other):
```
predictors = ["IT", "Stat", "Other", "years_since_first_contact"]
pd.DataFrame(euclidean_distances(tiny_all_cat_dummies_df[predictors],
tiny_all_cat_dummies_df[predictors]),
columns=["customer_1", "customer_2", "customer_3"],
index=["customer_1", "customer_2", "customer_3"])
```
We can already see the effect of using two versus three dummy variables. For the two-dummy dataset, `customer_3` (the prospect) is nearer to `customer_2` than to `customer_1`: the prospect (IT) and `customer_2` (Other) differ in only one dummy, while the prospect and `customer_1` (Stat) differ in two, and the years since first contact are nearly identical for all three. For the three-dummy dataset, `customer_3` is instead slightly nearer to `customer_1` than to `customer_2`, though all the distances are very close, because the extra `Other` dummy now also separates the prospect from `customer_2`.
In contrast to the situation with statistical models such as regression, all *m* binaries should be created and
used with *k*-NN. While mathematically this is redundant, since *m* - 1 dummies contain the same information as *m* dummies, this redundant information does not create the multicollinearity problems that it does for linear models. Moreover, in *k*-NN the use of *m* - 1 dummies can yield different classifications than the use of *m* dummies, and lead to an imbalance in the contribution of the different categories to the model.
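As a side note, `pandas.get_dummies` (used later in this notebook) can generate either coding; the tiny sketch below, with an assumed toy `category` series, shows all *m* dummies versus *m* - 1 dummies:
```
# All m dummies (default) vs. m - 1 dummies (drop_first=True)
category = pd.Series(["Stat", "Other", "IT"], name="category")
print(pd.get_dummies(category))
print(pd.get_dummies(category, drop_first=True))
```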
**c.** Using k-NN with k = 1, classify the prospect as taking or not taking a course using each of the two derived datasets. Does it make a difference whether you use two or three dummies?
- Two dummies variables (IT/Stat)
```
predictors = ["IT", "Stat", "years_since_first_contact"]
# user NearestNeighbors from scikit-learn to compute knn
knn = NearestNeighbors(n_neighbors=1)
knn.fit(tiny_two_cat_dummies_df.loc[:1, predictors])
new_customer = pd.DataFrame({"IT": [1], "Stat": [0],
"years_since_first_contact": [1]})
distances, indices = knn.kneighbors(new_customer)
# indices is a list of lists, we are only interested in the first element
tiny_two_cat_dummies_df.iloc[indices[0], :]
```
- Three dummies variable(IT/Stat/Other)
```
predictors = ["IT", "Stat", "Other", "years_since_first_contact"]
# user NearestNeighbors from scikit-learn to compute knn
knn = NearestNeighbors(n_neighbors=1)
knn.fit(tiny_all_cat_dummies_df.loc[:1, predictors])
new_customer = pd.DataFrame({"IT": [1], "Stat": [0], "Other": [1],
"years_since_first_contact": [1]})
distances, indices = knn.kneighbors(new_customer)
# indices is a list of lists, we are only interested in the first element
tiny_all_cat_dummies_df.iloc[indices[0], :]
```
With *k* = 1, the two derived datasets give different answers: with two dummies the nearest neighbour is customer 2, so the prospect is classified as taking the course, while with three dummies the nearest neighbour is (by a very small margin) customer 1, so the prospect is classified as not taking the course. So yes, it can make a difference whether we use two or three dummies. This illustrates the point made in item (**b**): although the *m*-th dummy is mathematically redundant and causes no multicollinearity problems in *k*-NN, using *m* - 1 dummies can yield different classifications than using *m* dummies.
**2. Personal Loan Acceptance.** Universal Bank is a relatively young bank growing rapidly in terms of overall customer acquisition. The majority of these customers are liability customers (depositors) with varying sizes of relationship with the bank. The customer base of asset customers (borrowers) is quite small, and the bank is interested in expanding this base rapidly to bring in more loan business. In particular, it wants to explore ways of converting its liability customers to personal loan customers (while retaining them as depositors).
A campaign that the bank ran last year for liability customers showed a healthy conversion rate of over 9% success. This has encouraged the retail marketing department to devise smarter campaigns with better target marketing. The goal is to use *k*-NN to predict whether a new customer will accept a loan offer. This will serve as the basis for the design of a new campaign.
The file `UniversalBank.csv` contains data on 5000 customers. The data include customer demographic information (age, income, etc.), the customer's relationship with the bank (mortgage, securities account, etc.), and the customer response to the last personal loan campaign (Personal Loan). Among these 5000 customers, only 480 (=9.6%) accepted the personal loan that was offered to them in the earlier campaign.
Partition the data into training (60%) and validation (40%) sets.
**a.** Consider the following customer:
Age = 40, Experience = 10, Income = 84, Family = 2, CCAvg = 2, Education_1 = 0,
Education_2 = 1, Education_3 = 0, Mortgage = 0, Securities Account = 0, CDAccount = 0,
Online = 1, and Credit Card = 1.
Perform a *k*-NN classification with all predictors except ID and ZIP code using k = 1. Remember to transform categorical predictors with more than two categories into dummy variables first. Specify the success class as 1 (loan acceptance), and use the default cutoff value of 0.5. How would this customer be classified?
```
customer_df = pd.read_csv("../datasets/UniversalBank.csv")
customer_df.head()
# define predictors and the outcome for this problem
predictors = ["Age", "Experience", "Income", "Family", "CCAvg", "Education", "Mortgage",
"Securities Account", "CD Account", "Online", "CreditCard"]
outcome = "Personal Loan"
# before k-NN, we will convert 'Education' to binary dummies.
# 'Family' remains unchanged
customer_df = pd.get_dummies(customer_df, columns=["Education"], prefix_sep="_")
# update predictors to include the new dummy variables
predictors = ["Age", "Experience", "Income", "Family", "CCAvg", "Education_1",
"Education_2", "Education_3", "Mortgage",
"Securities Account", "CD Account", "Online", "CreditCard"]
# partition the data into training 60% and validation 40% sets
train_data, valid_data = train_test_split(customer_df, test_size=0.4,
random_state=26)
# equalize the scales that the various predictors have(standardization)
scaler = preprocessing.StandardScaler()
scaler.fit(train_data[predictors])
# transform the full dataset
customer_norm = pd.concat([pd.DataFrame(scaler.transform(customer_df[predictors]),
columns=["z"+col for col in predictors]),
customer_df[outcome]], axis=1)
train_norm = customer_norm.iloc[train_data.index]
valid_norm = customer_norm.iloc[valid_data.index]
# new customer
new_customer = pd.DataFrame({"Age": [40], "Experience": [10], "Income": [84], "Family": [2],
"CCAvg": [2], "Education_1": [0], "Education_2": [1],
"Education_3": [0], "Mortgage": [0], "Securities Account": [0],
"CDAccount": [0], "Online": [1], "Credit Card": [1]})
new_customer_norm = pd.DataFrame(scaler.transform(new_customer),
columns=["z"+col for col in predictors])
# use NearestNeighbors from scikit-learn to compute knn
# using all the dataset (training + validation sets) here!
knn = NearestNeighbors(n_neighbors=1)
knn.fit(customer_norm.iloc[:, 0:-1])
distances, indices = knn.kneighbors(new_customer_norm)
# indices is a list of lists, we are only interested in the first element
customer_norm.iloc[indices[0], :]
```
Since the closest customer did not accept the loan (=0), we estimate for the new customer a probability of 1 of being a non-borrower (and 0 of being a borrower). Using a simple majority rule is equivalent to setting the cutoff value to 0.5. In the above results, we see that the classifier assigns the class non-borrower to this record.
**b.** What is a choice of *k* that balances between overfitting and ignoring the predictor information?
First, we need to remember that a balanced choice greatly depends on the nature of the data. The more complex and irregular the structure of the data, the lower the optimum value of *k*. Typically, values of *k* fall in the range of 1-20; odd values are often preferred to avoid ties, though here we simply try every *k* from 1 to 14.
If we choose *k* = 1, we will classify in a way that is very sensitive to the local characteristics of the training data. On the other hand, if we choose a large value of *k*, such as *k* = 14, we oversmooth and move toward simply predicting the most frequent class, ignoring the information in the predictors.
To find a balance, we examine the accuracy (of predictions in the validation set) that results from different choices of *k* between 1 and 14.
```
train_X = train_norm[["z"+col for col in predictors]]
train_y = train_norm[outcome]
valid_X = valid_norm[["z"+col for col in predictors]]
valid_y = valid_norm[outcome]
# Train a classifier for different values of k
results = []
for k in range(1, 15):
knn = KNeighborsClassifier(n_neighbors=k).fit(train_X, train_y)
results.append({"k": k,
"accuracy": accuracy_score(valid_y, knn.predict(valid_X))})
# Convert results to a pandas data frame
results = pd.DataFrame(results)
results
```
Based on the above table, we would choose **k = 3** (though **k = 5** appears to be another option too), which maximizes our accuracy in the validation set. Note, however, that now the validation set is used as part of the training process (to set *k*) and does not reflect a
true holdout set as before. Ideally, we would want a third test set to evaluate the performance of the method on data that it did not see.
**c.** Show the confusion matrix for the validation data that results from using the best *k*.
- k = 3
```
knn = KNeighborsClassifier(n_neighbors=3).fit(train_X, train_y)
classification_summary(y_true=valid_y, y_pred=knn.predict(valid_X))
```
- k = 5
```
knn = KNeighborsClassifier(n_neighbors=5).fit(train_X, train_y)
classification_summary(y_true=valid_y, y_pred=knn.predict(valid_X))
```
**d.** Consider the following customer:
Age = 40, Experience = 10, Income = 84, Family = 2, CCAvg = 2, Education_1 = 0,
Education_2 = 1, Education_3 = 0, Mortgage = 0, Securities Account = 0, CD Account = 0,
Online = 1 and Credit Card = 1.
Classify the customer using the best *k*.
Note: once *k* is chosen, we rerun the algorithm on the combined training and testing sets in order to generate classifications of new records.
```
# using the same user created before :)
knn = KNeighborsClassifier(n_neighbors=3).fit(customer_norm.iloc[:, 0:-1],
customer_norm.loc[:, "Personal Loan"])
knn.predict(new_customer_norm), knn.predict_proba(new_customer_norm)
knn = KNeighborsClassifier(n_neighbors=5).fit(customer_norm.iloc[:, 0:-1],
customer_norm.loc[:, "Personal Loan"])
knn.predict(new_customer_norm), knn.predict_proba(new_customer_norm)
```
Using the best *k* (= 3), the customer is classified as a **non-borrower**; the same holds for *k* = 5.
**e**. Repartition the data, this time into training, validation, and test sets (50%:30%:20%). Apply the *k*-NN method with the *k* chosen above. Compare the confusion matrix of the test set with that of the training and validation sets. Comment on the differences and their reason.
```
# using the customer_norm computed earlier
# training: 50%
# validation: 30% (0.5 * 0.6)
# test: 20% (0.5 * 0.4)
train_data, temp = train_test_split(customer_df, test_size=0.50, random_state=1)
valid_data, test_data = train_test_split(temp, test_size=0.40, random_state=1)
train_norm = customer_norm.iloc[train_data.index]
valid_norm = customer_norm.iloc[valid_data.index]
test_norm = customer_norm.iloc[test_data.index]
train_X = train_norm[["z"+col for col in predictors]]
train_y = train_norm[outcome]
valid_X = valid_norm[["z"+col for col in predictors]]
valid_y = valid_norm[outcome]
test_X = test_norm[["z"+col for col in predictors]]
test_y = test_norm[outcome]
knn = KNeighborsClassifier(n_neighbors=3).fit(train_X, train_y)
print("Training set\n" + "*" * 12)
classification_summary(y_true=train_y, y_pred=knn.predict(train_X))
print("\nValidation set\n" + "*" * 14)
classification_summary(y_true=valid_y, y_pred=knn.predict(valid_X))
print("\nTest set\n" + "*" * 8)
classification_summary(y_true=test_y, y_pred=knn.predict(test_X))
```
Based on the training, validation, and test confusion matrices, we can see an increase in the error rate going from the training set to the validation/test sets. Since the model is fit on the training data, it makes intuitive sense that the classifications are most accurate on it rather than on the validation/test datasets.
We can also see that there does not appear to be overfitting, given the minimal error discrepancies among all three matrices, and especially between the validation and test sets.
**3. Predicting Housing Median Prices.** The file `BostonHousing.csv` contains information on over 500 census tracts in Boston, where for each tract multiple variables are recorded. The last column (`CAT.MEDV`) was derived from `MEDV`, such that it obtains the value 1 if `MEDV` > 30 and 0 otherwise. Consider the goal of predicting the median value (`MEDV`) of a tract, given the information in the first 12 columns.
Partition the data into training (60%) and validation (40%) sets.
**a.** Perform a *k*-NN prediction with all 12 predictors (ignore the `CAT.MEDV` column), trying values of *k* from 1 to 5. Make sure to normalize the data. What is the best *k*? What does it mean?
The idea of *k*-NN can readily be extended to predicting a continuous value (as is our aim with multiple linear regression models). The first step of determining neighbors by computing distances remains unchanged. The second step, where a majority vote of the neighbors is used to determine class, is modified such that we take the average outcome value of the *k*-nearest neighbors to determine the prediction. Often, this average is a weighted average, with the weight decreasing with increasing distance from the point at which the prediction is required. In `scikit-learn`, we can use `KNeighborsRegressor` to compute *k*-NN numerical predictions for the validation set.
Another modification is in the error metric used for determining the "best k". Rather than the overall error rate used in classification, RMSE (root-mean-squared error) or another prediction error metric should be used in prediction.
```
housing_df = pd.read_csv("../datasets/BostonHousing.csv")
housing_df.head()
# define predictors and the outcome for this problem
predictors = ["CRIM", "ZN", "INDUS", "CHAS", "NOX", "RM", "AGE",
"DIS", "RAD", "TAX", "PTRATIO", "LSTAT"]
outcome = "MEDV"
# partition the data into training 60% and validation 40% sets
train_data, valid_data = train_test_split(housing_df, test_size=0.4,
random_state=26)
# equalize the scales of the various predictors (standardization)
scaler = preprocessing.StandardScaler()
scaler.fit(train_data[predictors])
# transform the full dataset
housing_norm = pd.concat([pd.DataFrame(scaler.transform(housing_df[predictors]),
columns=["z"+col for col in predictors]),
housing_df[outcome]], axis=1)
train_norm = housing_norm.iloc[train_data.index]
valid_norm = housing_norm.iloc[valid_data.index]
# Perform a k-NN prediction with all 12 predictors
# trying values of k from 1 to 5
train_X = train_norm[["z"+col for col in predictors]]
train_y = train_norm[outcome]
valid_X = valid_norm[["z"+col for col in predictors]]
valid_y = valid_norm[outcome]
# Train a classifier for different values of k
# Using weighted average
results = []
for k in range(1, 6):
knn = KNeighborsRegressor(n_neighbors=k, weights="distance").fit(train_X, train_y)
y_pred = knn.predict(valid_X)
y_res = valid_y - y_pred
results.append({"k": k,
"mean_error": sum(y_res) / len(y_res),
"rmse": math.sqrt(mean_squared_error(valid_y, y_pred)),
"mae": sum(abs(y_res)) / len(y_res)})
# Convert results to a pandas data frame
results = pd.DataFrame(results)
results
```
Using the RMSE (root mean squared errors) as the *k* decision driver, the best *k* is 4. We choose 4 as a way to minimize the errors found in the validation set. Note, however, that now the validation set is used as part of the training process (to set *k*) and does not reflect a true holdout set as before.
Note also that performance on validation data may be overly optimistic when it comes to predicting performance on data that have not been exposed to the model at all. This is because when the validation data are used to select a final model among a set of models, we are selecting based on how well the models perform with those data and therefore may be incorporating some of the random idiosyncrasies (bias) of the validation data into the judgment about the best model.
The model still may be the best for the validation data among those considered, but it will probably not do as well with the unseen data. Therefore, it is useful to evaluate the chosen model on a new test set to get a sense of how well it will perform on new data. In addition, one must consider practical issues such as costs of collecting variables, error-proneness, and model complexity in the selection of the final model.
**b.** Predict the `MEDV` for a tract with the following information, using the best *k*:
CRIM: 0.2
ZN: 0
INDUS: 7
CHAS: 0
NOX: 0.538
RM: 6
AGE: 62
DIS: 4.7
RAD: 4
TAX: 307
PTRATIO: 21
LSTAT: 10
Once *k* is chosen, we rerun the algorithm on the combined training and validation sets in order to generate predictions for new records.
```
# new house to be predicted. Before predicting the MEDV we normalize it
new_house = pd.DataFrame({"CRIM": [0.2], "ZN": [0], "INDUS": [7], "CHAS": [0],
"NOX": [0.538], "RM": [6], "AGE": [62], "DIS": [4.7],
"RAD": [4], "TAX": [307], "PTRATIO": [21], "LSTAT": [10]})
new_house_norm = pd.DataFrame(scaler.transform(new_house),
columns=["z"+col for col in predictors])
# retrain the knn using the best k and all data
knn = KNeighborsRegressor(n_neighbors=4, weights="distance").fit(housing_norm[["z"+col for col in predictors]],
housing_norm[outcome])
knn.predict(new_house_norm)
```
The new house has a predicted `MEDV` of 19.6 (in \\$1000s).
**c.** If we used the above *k*-NN algorithm to score the training data, what would be the error of the training set?
It would be zero or near zero. This happens because the model being scored was fit on that same data, so the same records are used both for fitting the prediction function and for estimating its error. In particular, with distance weighting each training record is its own nearest neighbor at distance zero, so its prediction simply reproduces its observed value.
```
# Using the previously trained model (fit on all data, k=4)
y_pred = knn.predict(train_X)
y_res = train_y - y_pred
results = {"k": 4,
"mean_error": sum(y_res) / len(y_res),
"rmse": math.sqrt(mean_squared_error(train_y, y_pred)),
"mae": sum(abs(y_res)) / len(y_res)}
# Convert results to a pandas data frame
results = pd.DataFrame(results, index=[0])
results
```
**d.** Why is the validation data error overly optimistic compared to the error rate when applying this *k*-NN predictor to new data?
When we use the validation data to assess multiple models and then choose the model that performs best with the validation data, we again encounter another (lesser) facet of the overfitting problem - chance aspects of the validation data that happen to match the chosen model better than they match other models. In other words, by using the validation data to choose one of several models, the performance of the chosen model on the validation data will be overly optimistic.
Put differently, the particular training/validation split may be biased by chance, so evaluating on a true holdout test set or using cross-validation would give a better approximation of performance on new data.
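As a rough illustration of that last point (a sketch only, not part of the original solution; it reuses the `train_X`/`train_y` arrays built above), 5-fold cross-validation averages the error over several held-out folds instead of relying on a single validation split:
```
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
import numpy as np

# 5-fold cross-validated RMSE for the k = 4 model; each fold is held out once,
# so the averaged error is less tied to one particular validation split
knn_cv = KNeighborsRegressor(n_neighbors=4, weights="distance")
mse = -cross_val_score(knn_cv, train_X, train_y, cv=5, scoring="neg_mean_squared_error")
print("Cross-validated RMSE: %.2f" % np.sqrt(mse).mean())
```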
**e.** If the purpose is to predict `MEDV` for several thousands of new tracts, what would be the disadvantage of using *k*-NN prediction? List the operations that the algorithm goes through in order to produce each prediction.
The disadvantage of *k*-NN in this case is that it is a lazy learner: all the computational work is deferred to prediction time, so scoring several thousand new tracts would be slow.
Basically, the algorithm needs to perform the following operations for each new case in order to predict its `MEDV` value:
- Normalize each new record using the means and standard deviations of the training data;
- Calculate the distance of the new record from all the training records;
- Sort the training records by the calculated distances;
- Average (possibly with a distance-weighted average) the outcomes of the *k* nearest neighbors to produce the prediction.
As mentioned, this process is repeated for each of the thousands of new cases, which is computationally expensive and time consuming; a minimal code sketch of these per-record steps is shown below.
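The sketch below is illustrative only; it reuses the `scaler`, `train_X`, and `train_y` objects created above, and the fitted `KNeighborsRegressor` performs the equivalent steps internally.
```
import numpy as np

def knn_predict_single(record, train_X, train_y, scaler, k=4):
    # 1. normalize the new record using the training-data scaler (training means/stds)
    z = scaler.transform(record)[0]
    # 2. compute the distance from the record to every training case
    dists = np.sqrt(((train_X.values - z) ** 2).sum(axis=1))
    # 3. sort the training cases by distance
    order = np.argsort(dists)
    # 4. average the outcomes of the k nearest neighbors (a weighted average could also be used)
    return train_y.values[order[:k]].mean()

# e.g. knn_predict_single(new_house, train_X, train_y, scaler, k=4)
```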
|
github_jupyter
|
```
%matplotlib inline
import control
from control.matlab import *
import numpy as np
import matplotlib.pyplot as plt
def pole_plot(poles, title='Pole Map'):
plt.title(title)
plt.scatter(np.real(poles), np.imag(poles), s=50, marker='x')
plt.axhline(y=0, color='black');
plt.axvline(x=0, color='black');
plt.xlabel('Re');
plt.ylabel('Im');
```
# Tune Those Gains!
State space control requires that you fill in up to three gain matrices (K, L & I), each potentially containing a number of elements. Given the heuristics that go into selecting PID gains (of which there are only three), tuning a state space controller can seem a bit daunting. Fortunately, the state space control framework includes a formal way to calculate gains to arrive at what is called a Linear Quadratic Regulator as well as a Linear Quadratic Estimator.
The goal of this notebook will be to walk you through the steps necessary to formulate the gains for an LQR and LQE. Once we've arrived at some acceptable gains, you can cut and paste them directly into your Arduino sketch and start controlling some cool, complex dynamic systems with hopefully less guesswork and gain tweaking than if you were using a PID controller.
We'll be working from the model for the cart pole system from the examples folder. This system has multiple outputs and is inherently unstable so it's a nice way of showing the power of state space control. Having said that, **this analysis can apply to any state space model**, so to adapt it to your system, just modify the system matrices ($A,B,C,D$) and the cost function weighting matrices ($ Q_{ctrl}, R_{ctrl}, Q_{est}, R_{est} $) and rerun the notebook.
## Cart Pole
You can find the details on the system modelling for the inverted pendulum [here](http://ctms.engin.umich.edu/CTMS/index.php?example=InvertedPendulum§ion=SystemModeling) but by way of a quick introduction, the physical system consists of a pendulum mounted on top of a moving cart shown below. The cart is fitted with two sensors that measure the angle of the stick and the displacement of the cart. The cart also has an actuator to apply a horizontal force on the cart to drive it forwards and backwards. The aim of the controller is to manipulate the force on the cart in order to balance the stick upright.

The state for this system is defined as:
\begin{equation}
\mathbf{x} = [ cart \;displacement, \; cart \;velocity, \; stick \;angle, \; stick \;angular\; velocity ]^T
\end{equation}
and the state space model that describes this system is as follows:
```
A = [[0.0, 1.0, 0.0, 0.0 ],
[0.0, -0.18, 2.68, 0.0 ],
[0.00, 0.00, 0.00, 1.00],
[0.00, -0.45, 31.21, 0.00]]
B = [[0.00],
[1.82],
[0.00],
[4.55]]
C = [[1, 0, 0, 0],[0,0,1,0]]
D = [[0],[0]]
```
## Take a look at the Open Loop System Poles
To get an idea of how this system behaves before we add any feedback control we can look at the poles of the open loop (uncontrolled) system. Poles are the roots of a characteristic equation derived from the system model. They're often complex numbers (having a real and an imaginary component) and are useful in understanding how the output of a system will respond to changes in its input.
There's quite a bit of interesting information that can be gleaned from looking at system poles but for now we'll focus on stability, which is determined by the pole with the largest real component (the x-axis in the plot below). If a system has any poles with positive real components then that system will be inherently unstable (i.e. some or all of the state will shoot off to infinity if the system is left to its own devices).
The inverted pendulum has a pole at Re(+5.56) which makes sense when you consider that if the stick is stood up on its end and let go it'd fall over (obviously it'd stop short of infinity when it hits the side of the cart, but this isn't taken into account by the model). Using a feedback controller we'll move this pole over to the left of the imaginary axis in the plot below and in so doing, stabilise the system.
```
plant = ss(A, B, C, D)
open_loop_poles = pole(plant)
print('\nThe poles of the open loop system are:\n')
print(open_loop_poles)
pole_plot(open_loop_poles, 'Open Loop Poles')
```
# Design a Control Law
With a model defined, we can get started on calculating the gain matrix K (the control law) which determines the control input necessary to regulate the system state to $ \boldsymbol{0} $ (all zeros; you might want to control it to other set points, but doing so just requires offsetting the control law, which can be calculated on the Arduino).
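To make that set-point offsetting concrete, here is a minimal sketch (an illustration, not part of the state space control library): with a gain matrix $ K $ and a desired set point $ \boldsymbol{x}_{ref} $, the control input becomes $ u = -K(\boldsymbol{x} - \boldsymbol{x}_{ref}) $, which reduces to the usual regulator when $ \boldsymbol{x}_{ref} $ is all zeros.
```
import numpy as np

# Illustrative helper: offset the control law to regulate to a set point x_ref.
# K is whatever gain matrix you compute below; with x_ref = 0 this is just u = -K x.
def control_input(K, x, x_ref=None):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    if x_ref is None:
        x_ref = np.zeros_like(x)
    return -np.asarray(K) @ (x - np.atleast_1d(x_ref))
```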
## Check for Controllability
For it to be possible to control a system defined by a given state space model, that model needs to be controllable. Being controllable simply means that the available set of control inputs are capable of driving the entire state to a desired set point. If there's a part of the system that is totally decoupled from the actuators manipulated by the controller, then the system won't be controllable.
A system is controllable if the rank of the controllability matrix is the same as the number of states in the system model.
```
controllability = ctrb(A, B)
print('The controllability matrix is:\n')
print(controllability)
if np.linalg.matrix_rank(controllability) == np.array(B).shape[0]:
    print('\nThe system is controllable!')
else:
    print('\nThe system is not controllable, double-check your modelling and that you entered the system matrices correctly')
```
## Fill out the Quadratic Cost Function
Assuming the system is controllable, we can get started on calculating the control gains. The approach we take here is to calculate a Linear Quadratic Regulator. An LQR is basically a control law (K matrix) that minimises the quadratic cost function:
\begin{equation}
J = \int_0^\infty (\boldsymbol{x}' Q \boldsymbol{x} + \boldsymbol{u}' R \boldsymbol{u})\; dt
\end{equation}
The best way to think about this cost function is to realise that whenever we switch on the state space controller, it'll expend some amount of control effort to bring the state to $ \boldsymbol{0} $ (all zeros). Ideally it'll do that as quickly and with as little overshoot as possible. We can represent that with the expression $ \int_0^\infty \boldsymbol{x}' \;\boldsymbol{x} \; dt $. Similarly it's probably a good idea to keep the control effort to a minimum such that the controller is energy efficient and doesn't damage the system with overly aggressive control inputs. This total control effort can be represented with $ \int_0^\infty \boldsymbol{u}' \;\boldsymbol{u} \; dt $.
Inevitably, there'll be some parts of the state and some control inputs that we care about minimising more than others. To reflect that in the cost function we specify two matrices; $ Q $ and $ R $
$ Q_{ctrl} \in \mathbf{R}^{X \;\times\; X} $ is the state weight matrix; the elements on its diagonal represent how important it is to tightly control the corresponding state element (as in Q[0,0] corresponds to x[0]).
$ R_{ctrl} \in \mathbf{R}^{U \;\times\; U} $ is the input weight matrix; the elements on its diagonal represent how important it is to minimise the use of the corresponding control input (as in R[0,0] corresponds to u[0]).
```
Q_ctrl = [[5000, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 100, 0],
[0, 0, 0, 0]]
R_ctrl = [1]
```
## Calculate the Gain Matrix for a Linear Quadratic Regulator
With a cost function defined, the cell below will calculate the gain matrix K for an LQR. Bear in mind it usually takes a bit of tweaking of the cost function to arrive at a good K matrix. Also note that it's the relative value of each weight that's important, not their absolute values. You can multiply both $ Q_{ctrl} $ and $ R_{ctrl} $ by 1e20 and you'll still wind up with the same gains.
To guide your tweaking it's helpful to see the effect that different weightings have on the closed loop system poles. Remember that the further the dominant pole (largest real component) is to the left of the Im axis, the more stable your system will be. That said, don't get carried away and set ridiculously high gains; your actual system might not behave in exactly the same way as the model and an overly aggressive controller might just end up destabilising the system under realistic conditions.
```
K, _, _ = lqr(A, B, Q_ctrl, R_ctrl)
print('The control gain is:\n')
print('K = ', K)
plant_w_full_state_feedback = ss(A,
B,
np.identity(plant.states),
np.zeros([plant.states, plant.inputs]))
controller = ss(np.zeros([plant.states, plant.states]),
np.zeros([plant.states, plant.states]),
np.zeros([plant.inputs, plant.states]),
K)
closed_loop_system = feedback(plant_w_full_state_feedback, controller)
closed_loop_poles = pole(closed_loop_system)
print('\nThe poles of the closed loop system are:\n')
print(closed_loop_poles)
pole_plot(closed_loop_poles, 'Closed Loop Poles')
```
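As a quick check of the point above that only the relative weights matter (a sketch added for illustration; it reuses `A`, `B`, `Q_ctrl`, `R_ctrl`, and the `K` just computed):
```
import numpy as np

# Scale both cost matrices by the same factor and recompute the gain; since only the
# ratio of Q_ctrl to R_ctrl matters, the result should match K up to numerical tolerance.
K_scaled, _, _ = lqr(A, B, 1e6 * np.array(Q_ctrl), 1e6 * np.array(R_ctrl))
print(np.allclose(K, K_scaled))
```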
# Design an Estimator
If you're lucky enough to be able to directly observe the system's entire state (in which case the $ C $ matrix will be an identity matrix) then you're done!
This obviously isn't the case for the cart pole since, given our sensors, we aren't able to directly observe the cart velocity or the stick angular velocity (we could differentiate the sensor readings of course, but doing so is a bad idea if the sensors are even a little bit noisy). Because of this, we'll need to introduce an estimator into the feedback controller to reconstruct those parts of the state based on our sensor readings $ \mathbf{y} $.
There's a nice duality between the estimator and the controller, so the basic approach we take to calculate the estimator gain (the $ L $ matrix) is very similar to that for the control law above.
## Check for Observability
Observability tells us whether the sensors we've attached to our system (as defined by the C matrix) are sufficient to derive an estimate of the state to feed into our control law. If for example, a part of the state was completely decoupled from all of the sensor measurements we take, then the system won't be observable and it'll be impossible to estimate and ultimately, to control.
Similar to controllability, a system is observable if the rank of the observability matrix is the same as the number of states in the model.
```
observability = obsv(A, C)
print('The observability matrix is:\n')
print(observability)
if np.linalg.matrix_rank(observability) == plant.states:
    print('\nThe system is observable!')
else:
    print('\nThe system is not observable, double-check your modelling and that you entered the matrices correctly')
```
## Fill out the Noise Covariance Matrices
To calculate the estimator gain L, we can use the same algorithm as that used to calculate the control law. Again we define two matrices, $ Q $ and $ R $ however in this case their interpretations are slightly different.
$ Q_{est} \in \mathbf{R}^{X \;\times\; X} $ is referred to as the process noise covariance; it represents the accuracy of the state space model in being able to predict the next state based on the last state and the control input. It's assumed that the actual system is subject to some unknown noise which throws off the estimate of the state, and in cases where that noise is very high, it's best to rely more heavily on the sensor readings.
$ R_{est} \in \mathbf{R}^{Y \;\times\; Y} $ is referred to as the sensor noise covariance; it represents the accuracy of the sensor readings in being able to observe the state. Here again, it's assumed that the actual sensors are subject to some unknown noise which throws off their measurements. In cases where this noise is very high, it's best to be less reliant on sensor readings.
```
Q_est = [[100, 0, 0, 0 ],
[0, 1000, 0, 0 ],
[0, 0, 100, 0 ],
[0, 0, 0, 10000]]
R_est = [[1,0],[0,1]]
```
## Calculate the Gain Matrix for a Linear Quadratic Estimator (aka Kalman Filter)
Ideally, the estimator's covariance matrices can be calculated empirically using data collected from the system's actual sensors and its model. Doing so is a bit outside of the scope of this notebook, so instead we can just tweak the noise values to come up with an estimator that converges on the actual state with a reasonable settling time based on the closed loop poles.
```
L, _, _ = lqr(np.array(A).T, np.array(C).T, Q_est, R_est)
L = L.T
print('The estimator gain is:\n')
print('L = ', L)
controller_w_estimator = ss(A - np.matmul(B , K) - np.matmul(L , C),
L,
K,
np.zeros([plant.inputs, plant.outputs]))
closed_loop_system_w_estimator = feedback(plant, controller_w_estimator)
closed_loop_estimator_poles = pole(closed_loop_system_w_estimator)
print('\nThe poles of the closed loop system are:\n')
print(closed_loop_estimator_poles)
pole_plot(closed_loop_estimator_poles, 'Closed Loop Poles with Estimation')
```
# And You're Done!
Congratulations! You've tuned an LQR and an LQE to suit your system model and you can now cut and paste the gains into your arduino sketch.
## Coming soon, Integral Gain selection!
|
github_jupyter
|
```
import pandas as pd
import numpy as np
import TrialPathfinder as tp
```
# Trial PathFinder
## Load Data Tables
TrialPathfinder reads tables as Pandas dataframes (pd.DataFrame) by default. Date information should be read as datetime (use pd.to_datetime to convert it if it is not).
**1. Features**:
- <font color=darkblue>*Patient ID*</font>
- Treatment Information
- <font color=darkblue>*Drug name*</font>.
- <font color=darkblue>*Start date*</font>.
- <font color=darkblue>*Date of outcome*</font>. For example, if overall survival (OS) is used as metric, the date of outcome is the date of death. If progression-free survival (PFS) is used as metric, the date of outcome is the date of progression.
- <font color=darkblue>*Date of last visit*</font>. The patient's last record date of visit, used for censoring.
- <font color=darkblue>*Covariates (optional)*</font>: adjusted to emulate the blind assignment, used by Inverse probability of treatment weighting (IPTW) or propensity score matching (PSM). Some examples: age, gender, composite race/ethnicity, histology, smoking status, staging, ECOG, and biomarkers status.
**2. Tables used by eligibility criteria.**
- Use the same Patient ID as the features table.
We provide synthetic example data in the directory [tutorial/data](https://github.com/RuishanLiu/TrialPathfinder/tree/master/tutorial/data):
- Eligibility criteria (*'criteria.csv'*) have five rules: Age, Histology_Squamous, ECOG, Platelets, Bilirubin.
- The features (*'features.csv'*) contain the required treatment information and three covariates (gender, race, ECOG).
- Two tables (*'demographics.csv'* and *'lab.csv'*) are used by the eligibility criteria.
[tutorial.ipynb](https://github.com/RuishanLiu/TrialPathfinder/blob/master/tutorial/tutorial.ipynb) provides a detailed tutorial and example of using the TrialPathfinder library.
```
features = pd.read_csv('data/features.csv')
demographics = pd.read_csv('data/demographics.csv')
lab = pd.read_csv('data/lab.csv')
indicator_miss = 'Missing'
# Process date information to be datetime format and explicitly define annotation for missing values
for table in [lab, features, demographics]:
for col in table.columns:
if 'Date' in col:
table[col] = pd.to_datetime(table[col])
table.loc[table[col].isna(), col] = indicator_miss
features.head()
demographics.head()
lab.head()
```
## Standards of encoding eligibility criteria
We built a computational workflow to encode the description of eligibility criteria in trial protocols into standardized instructions which can be parsed by TrialPathfinder for cohort selection.
**1. Basic logic.**
- Name of the criteria is written in the first row.
- A new statement starts with “#inclusion” or “#exclusion” to indicate the criterion’s type. Whether to include patients who have missing entries for the criterion is indicated by “(missing include)” or “(missing exclude)”; the default is to include patients with missing entries.
- Data name format: “Table[‘featurename’]”. For example, “demographics[‘birthdate’]” denotes column date of birth in table demographics.
- Equation: ==, !=, <, <=, >, >=.
- Logic: AND, OR.
- Other operations: MIN, MAX, ABS.
- Time is encoded as “DAYS(80)”: 80 days; “MONTHS(4)”: 4 months; “YEARS(3)”: 3 years.
---
*Example: criteria "Age" - include patients more than 18 years old when they received the treatment.*
> Age \
\#Inclusion \
features['StartDate'] >= demographics['BirthDate'] + @YEARS(18)
---
**2. Complex rule with hierarchy.**
- Each row is executed in sequential order.
- The tables are prepared in the rows before the last one.
- The patients are selected in the last row.
---
*Example: criteria "Platelets" - include patients whose platelet count ≥ 100 x 10^3/μL*. \
To encode this criterion, we follow the procedure:
1. Prepare the lab table:
1. Pick the lab tests for platelet count
2. The lab test date should be within a -28 to +0 window around the treatment start date
3. Use the record closest to the treatment start date to do selection.
2. Select patients: lab value larger than 100 x 10^3/μL.
> Platelets \
\#Inclusion \
(lab['LabName'] == 'Platelet count') \
(lab['TestDate'] >= features['StartDate'] - @DAYS(28) ) AND (lab['TestDate'] <= features['StartDate']) \
MIN(ABS(lab['TestDate'] - features['StartDate'])) \
(lab['LabValue'] >= 100)
---
Here we load the example criteria 'criteria.csv' under directory [tutorial/data](https://github.com/RuishanLiu/TrialPathfinder/tree/master/tutorial/data).
```
criteria = pd.read_csv('data/criteria.csv', header=None).values.reshape(-1)
print(*criteria, sep='\n\n')
```
## Preparation
Before simulating real trials, we first encode all the eligibility criteria: load the pseudo-code, feed it to the algorithm, and determine which patients are excluded by each rule.
1. Create an empty cohort object
- tp.cohort_selection() requires all the patient IDs used in the study. Here we analyze all the patients in the dataset; a subset of patients can also be used depending on your needs.
```
patientids = features['PatientID']
cohort = tp.cohort_selection(patientids, name_PatientID='PatientID')
```
2. Add the tables needed in the eligibility criterion.
```
cohort.add_table('demographics', demographics)
cohort.add_table('lab', lab)
cohort.add_table('features', features)
```
3. Add individual eligibility criterion
```
# Option 1: add rules individually
for rule in criteria[:]:
name_rule, select, missing = cohort.add_rule(rule)
print('Rule %s: exclude patients %d/%d' % (name_rule, select.shape[0]-np.sum(select), select.shape[0]))
# # Option 2: add the list of criteria
# cohort.add_rules(criteria)
```
# Analysis
- Treatment drug: B
- Control drug: A
- Criteria used: Age, ECOG, Histology_Squamous, Platelets, Bilirubin
```
drug_treatment = ['drug B']
drug_control = ['drug A']
name_rules = ['Age', 'Histology_Squamous', 'ECOG', 'Platelets', 'Bilirubin']
covariates_cat = ['Gender', 'Race', 'ECOG'] # categorical covariates
covariates_cont = [] # continuous covariates
```
1. Original trial criteria
- The criteria include Age, ECOG, Histology_Squamous, Platelets, Bilirubin.
```
HR, CI, data_cox = tp.emulate_trials(cohort, features, drug_treatment, drug_control, name_rules,
covariates_cat=covariates_cat, covariates_cont=covariates_cont,
name_DrugName='DrugName', name_StartDate='StartDate',
name_OutcomeDate='OutcomeDate', name_LastVisitDate='LastVisitDate',
indicator_miss=indicator_miss)
print('Hazard Ratio: %.2f (%.2f-%.2f)' % (HR, CI[0], CI[1]))
print('Number of Patients: %d' % (data_cox.shape[0]))
```
2. Fully-relaxed criteria
- No rule applied (name_rules=[]).
```
HR, CI, data_cox = tp.emulate_trials(cohort, features, drug_treatment, drug_control, [],
covariates_cat=covariates_cat, covariates_cont=covariates_cont,
name_DrugName='DrugName', name_StartDate='StartDate',
name_OutcomeDate='OutcomeDate', name_LastVisitDate='LastVisitDate',
indicator_miss=indicator_miss)
print('Hazard Ratio: %.2f (%.2f-%.2f)' % (HR, CI[0], CI[1]))
print('Number of Patients: %d' % (data_cox.shape[0]))
```
3. Compute Shapley values
```
shapley_values = tp.shapley_computation(cohort, features, drug_treatment, drug_control, name_rules,
tolerance=0.01, iter_max=1000,
covariates_cat=covariates_cat, covariates_cont=covariates_cont,
name_DrugName='DrugName', name_StartDate='StartDate',
name_OutcomeDate='OutcomeDate', name_LastVisitDate='LastVisitDate',
indicator_miss=indicator_miss,
random_seed=1001, verbose=1)
pd.DataFrame([shapley_values], columns=name_rules, index=['Shapley Value'])
```
4. Data-driven criteria
```
name_rules_relax = np.array(name_rules)[shapley_values < 0]
HR, CI, data_cox = tp.emulate_trials(cohort, features, drug_treatment, drug_control, name_rules_relax,
covariates_cat=covariates_cat, covariates_cont=covariates_cont,
name_DrugName='DrugName', name_StartDate='StartDate',
name_OutcomeDate='OutcomeDate', name_LastVisitDate='LastVisitDate',
indicator_miss=indicator_miss)
print('Hazard Ratio: %.2f (%.2f-%.2f)' % (HR, CI[0], CI[1]))
print('Number of Patients: %d' % (data_cox.shape[0]))
```
|
github_jupyter
|
# Team Surface Velocity
### **Members**: Grace Barcheck, Canyon Breyer, Rodrigo Gómez-Fell, Trevor Hillebrand, Ben Hills, Lynn Kaluzienski, Joseph Martin, David Polashenski
### **Science Advisor**: Daniel Shapero
### **Special Thanks**: Ben Smith, David Shean
### Motivation
**Speaker: Canyon Breyer**
Previous work by Marsh and Rack (2012) and Lee and others (2012) has demonstrated the value of using satellite altimetry as a method of calculating ice surface velocity, utilizing the Geoscience Laser Altimeter System (GLAS) on board ICESat. This altimetry method has several advantages over more traditional techniques due to its high pointing accuracy for geo-location and its ability to measure velocity in regions that lack visible surface features (Marsh and Rack, 2012). The method also has the added benefit of dealing with tidal fluctuations without the need for a tidal correction model. The motivation for this project was to expand the methodology outlined in Marsh and Rack (2012) to the ICESat-2 dataset. The smaller footprint of the ICESat-2 mission will likely improve the overall accuracy of velocity measurements, and the nature of its precise repeat passes would provide an avenue for studying temporal variations of glacier velocities.
### Project Objective:
**Speaker: Rodrigo Gómez-Fell**
Extract surface ice velocity in polar regions from ICESat-2 along-track measurements
##### Goals:
- Compare the capabilities of ICESat-2 to extract surface ice velocity from ice shelves and ice streams
- Compare ICESat GLAS methodology (along track) to ICESat-2 across track
- Use crossovers for calculating velocities and determine how the measurements compare with simple along-track and across-track estimates.
  - Does this resolve different directions of ice flow?
- Can a surface velocity product be extracted from ATL06, or is ATL03 the more suitable product?
### Study Area:
When looking for a study region to test our ICESat-2 velocity derivation method, we prioritized regions that **1)** included both grounded and floating ice and **2)** had a good alignment between satellite track position and overall flow direction. We found Foundation Ice Stream, a large ice stream draining into the Filchner-Ronne Ice Shelf, to meet both of these criteria.

### Data Selection
We used the ICESat-2 Land Ice Height (ATL06) product, and the MEaSUREs Antarctic Velocity Map V2 (Rignot, 2017) for validation of our derived velocities.
### Method
**Speaker: Ben Hills**
Following Marsh and Rack (2012), we used the slope of the elevation profile for analysis; this helps amplify differences in the ice profile between repeat measurements and also removes the influence of tidal effects. This is portrayed in the figure below.


Fig.2: From Marsh and Rack 2012. Schematic of the method used to reduce the effect of oblique surface features and ice flow which is non-parallel to ICESat tracks. Black lines indicate satellite tracks, grey ticks indicate the orientation of surface features, and ⍺ is the feature-track angle. Bottom right profile illustrates that after adjustment there is no relative feature displacement due to cross-track separation, therefore all displacement is due to ice movement in the track direction.
### Our Methods:
**Cross-correlation background**
**Speaker: Grace Barcheck**
##### Test scipy.signal.correlate on some ATL06 data from Foundation Ice Stream (FIS)
```
import numpy as np
import scipy, sys, os, pyproj, glob, re, h5py
import matplotlib.pyplot as plt
from scipy.signal import correlate
from astropy.time import Time
%matplotlib widget
%load_ext autoreload
%autoreload 2
```
##### Test scipy.signal.correlate
Generate test data
```
dx = 0.1
x = np.arange(0,10,dx)
y = np.zeros(np.shape(x))
ix0 = 30
ix1 = 30 + 15
y[ix0:ix1] = 1
fig,axs = plt.subplots(1,2)
axs[0].plot(x,y,'k')
axs[0].set_xlabel('distance (m)')
axs[0].set_ylabel('value')
axs[1].plot(np.arange(len(x)), y,'k')
axs[1].set_xlabel('index')
```
Next, we generate a signal to correlate the test data with
```
imposed_offset = int(14/dx) # 14 meters, in units of samples
x_noise = np.arange(0,50,dx) # make the vector we're comparing with much longer
y_noise = np.zeros(np.shape(x_noise))
y_noise[ix0 + imposed_offset : ix1 + imposed_offset] = 1
# uncomment the line below to add noise
# y_noise = y_noise * np.random.random(np.shape(y_noise))
fig,axs = plt.subplots(1,2)
axs[0].plot(x,y,'k')
axs[0].set_xlabel('distance (m)')
axs[0].set_ylabel('value')
axs[1].plot(np.arange(len(x)), y, 'k')
axs[1].set_xlabel('index')
axs[0].plot(x_noise,y_noise, 'b')
axs[0].set_xlabel('distance (m)')
axs[0].set_ylabel('value')
axs[1].plot(np.arange(len(x_noise)), y_noise,'b')
axs[1].set_xlabel('index')
fig.suptitle('black = original, blue = shifted')
```
##### Try scipy.signal.correlate:
- `mode='full'` returns the entire cross-correlation; `mode='valid'` would return only the non-zero-padded part.
- `method='direct'` uses direct computation (not FFT).
```
corr = correlate(y_noise,y, mode = 'full', method = 'direct')
norm_val = np.sqrt(np.sum(y_noise**2)*np.sum(y**2))
corr = corr / norm_val
```
Let's look at the dimensions of corr
```
print('corr: ', np.shape(corr))
print('x: ', np.shape(x))
print('x: ', np.shape(x_noise))
```
##### Look at the correlation visualized in the plots below
```
# lagvec = np.arange(0,len(x_noise) - len(x) + 1)
lagvec = np.arange( -(len(x) - 1), len(x_noise), 1)
shift_vec = lagvec * dx
ix_peak = np.arange(len(corr))[corr == np.nanmax(corr)][0]
best_lag = lagvec[ix_peak]
best_shift = shift_vec[ix_peak]
fig,axs = plt.subplots(3,1)
axs[0].plot(lagvec,corr)
axs[0].plot(lagvec[ix_peak],corr[ix_peak], 'r*')
axs[0].set_xlabel('lag (samples)')
axs[0].set_ylabel('correlation coefficient')
axs[1].plot(shift_vec,corr)
axs[1].plot(shift_vec[ix_peak],corr[ix_peak], 'r*')
axs[1].set_xlabel('shift (m)')
axs[1].set_ylabel('correlation coefficient')
axs[2].plot(x + best_shift, y,'k')
axs[2].plot(x_noise, y_noise, 'b--')
axs[2].set_xlabel('shift (m)')
fig.suptitle(' '.join(['Shift ', str(best_lag), ' samples, or ', str(best_shift), ' m to line up signals']))
```
### A little Background on cross-correlation...

### Applying our method to ATL06 data
**Speaker: Ben Hills**
Load repeat data:
Import readers, etc.
```
# ! cd ..; [ -d pointCollection ] || git clone https://www.github.com/smithB/pointCollection.git
# sys.path.append(os.path.join(os.getcwd(), '..'))
#!python3 -m pip install --user git+https://github.com/tsutterley/pointCollection.git@pip
import pointCollection as pc
moa_datapath = '/srv/tutorial-data/land_ice_applications/'
datapath = '/home/jovyan/shared/surface_velocity/FIS_ATL06/'
```
#### **Geographic setting: Foundation Ice Stream**
```
print(pc.__file__)
spatial_extent = np.array([-65, -86, -55, -81])
lat=spatial_extent[[1, 3, 3, 1, 1]]
lon=spatial_extent[[2, 2, 0, 0, 2]]
print(lat)
print(lon)
# project the coordinates to Antarctic polar stereographic
xy=np.array(pyproj.Proj(3031)(lon, lat))
# get the bounds of the projected coordinates
XR=[np.nanmin(xy[0,:]), np.nanmax(xy[0,:])]
YR=[np.nanmin(xy[1,:]), np.nanmax(xy[1,:])]
MOA=pc.grid.data().from_geotif(os.path.join(moa_datapath, 'MOA','moa_2009_1km.tif'), bounds=[XR, YR])
# show the mosaic:
plt.figure()
MOA.show(cmap='gray', clim=[14000, 17000])
plt.plot(xy[0,:], xy[1,:])
plt.title('Mosaic of Antarctica for Foundation Ice Stream')
```
##### Load the repeat track data
ATL06 reader
```
def atl06_to_dict(filename, beam, field_dict=None, index=None, epsg=None):
"""
Read selected datasets from an ATL06 file
Input arguments:
        filename: ATL06 file to read
beam: a string specifying which beam is to be read (ex: gt1l, gt1r, gt2l, etc)
        field_dict: A dictionary describing the fields to be read
keys give the group names to be read,
entries are lists of datasets within the groups
index: which entries in each field to read
epsg: an EPSG code specifying a projection (see www.epsg.org). Good choices are:
for Greenland, 3413 (polar stereographic projection, with Greenland along the Y axis)
            for Antarctica, 3031 (polar stereographic projection, centered on the South Pole)
Output argument:
D6: dictionary containing ATL06 data. Each dataset in
dataset_dict has its own entry in D6. Each dataset
in D6 contains a numpy array containing the
data
"""
if field_dict is None:
field_dict={None:['latitude','longitude','h_li', 'atl06_quality_summary'],\
'ground_track':['x_atc','y_atc'],\
'fit_statistics':['dh_fit_dx', 'dh_fit_dy']}
D={}
# below: file_re = regular expression, it will pull apart the regular expression to get the information from the filename
    file_re=re.compile(r'ATL06_(?P<date>\d+)_(?P<rgt>\d\d\d\d)(?P<cycle>\d\d)(?P<region>\d\d)_(?P<release>\d\d\d)_(?P<version>\d\d).h5')
with h5py.File(filename,'r') as h5f:
for key in field_dict:
for ds in field_dict[key]:
if key is not None:
ds_name=beam+'/land_ice_segments/'+key+'/'+ds
else:
ds_name=beam+'/land_ice_segments/'+ds
if index is not None:
D[ds]=np.array(h5f[ds_name][index])
else:
D[ds]=np.array(h5f[ds_name])
if '_FillValue' in h5f[ds_name].attrs:
bad_vals=D[ds]==h5f[ds_name].attrs['_FillValue']
D[ds]=D[ds].astype(float)
D[ds][bad_vals]=np.NaN
D['data_start_utc'] = h5f['/ancillary_data/data_start_utc'][:]
D['delta_time'] = h5f['/' + beam + '/land_ice_segments/delta_time'][:]
D['segment_id'] = h5f['/' + beam + '/land_ice_segments/segment_id'][:]
if epsg is not None:
xy=np.array(pyproj.proj.Proj(epsg)(D['longitude'], D['latitude']))
D['x']=xy[0,:].reshape(D['latitude'].shape)
D['y']=xy[1,:].reshape(D['latitude'].shape)
temp=file_re.search(filename)
D['rgt']=int(temp['rgt'])
D['cycle']=int(temp['cycle'])
D['beam']=beam
return D
```
##### Next we will read in the files
```
# find all the files in the directory:
# ATL06_files=glob.glob(os.path.join(datapath, 'PIG_ATL06', '*.h5'))
rgt = '0848'
ATL06_files=glob.glob(os.path.join(datapath, '*' + rgt + '*.h5'))
D_dict={}
error_count=0
for file in ATL06_files[:10]:
try:
D_dict[file]=atl06_to_dict(file, '/gt2l', index=slice(0, -1, 25), epsg=3031)
except KeyError as e:
print(f'file {file} encountered error {e}')
error_count += 1
print(f"read {len(D_dict)} data files of which {error_count} gave errors")
# find all the files in the directory:
# ATL06_files=glob.glob(os.path.join(datapath, 'PIG_ATL06', '*.h5'))
rgt = '0537'
ATL06_files=glob.glob(os.path.join(datapath, '*' + rgt + '*.h5'))
#D_dict={}
error_count=0
for file in ATL06_files[:10]:
try:
D_dict[file]=atl06_to_dict(file, '/gt2l', index=slice(0, -1, 25), epsg=3031)
except KeyError as e:
print(f'file {file} encountered error {e}')
error_count += 1
print(f"read {len(D_dict)} data files of which {error_count} gave errors")
```
##### Then, we will plot the ground tracks
```
plt.figure(figsize=[8,8])
hax0=plt.gcf().add_subplot(211, aspect='equal')
MOA.show(ax=hax0, cmap='gray', clim=[14000, 17000]);
hax1=plt.gcf().add_subplot(212, aspect='equal', sharex=hax0, sharey=hax0)
MOA.show(ax=hax1, cmap='gray', clim=[14000, 17000]);
for fname, Di in D_dict.items():
cycle=Di['cycle']
if cycle <= 2:
ax=hax0
else:
ax=hax1
#print(fname)
#print(f'\t{rgt}, {cycle}, {region}')
ax.plot(Di['x'], Di['y'])
if True:
try:
if cycle < 3:
ax.text(Di['x'][0], Di['y'][0], f"rgt={Di['rgt']}, cyc={cycle}", clip_on=True)
elif cycle==3:
ax.text(Di['x'][0], Di['y'][0], f"rgt={Di['rgt']}, cyc={cycle}+", clip_on=True)
except IndexError:
pass
hax0.set_title('cycles 1 and 2');
hax1.set_title('cycle 3+');
# find all the files in the directory:
# ATL06_files=glob.glob(os.path.join(datapath, 'PIG_ATL06', '*.h5'))
rgt = '0848'
ATL06_files=glob.glob(os.path.join(datapath, '*' + rgt + '*.h5'))
D_dict={}
error_count=0
for file in ATL06_files[:10]:
try:
D_dict[file]=atl06_to_dict(file, '/gt2l', index=slice(0, -1, 25), epsg=3031)
except KeyError as e:
print(f'file {file} encountered error {e}')
error_count += 1
print(f"read {len(D_dict)} data files of which {error_count} gave errors")
```
##### Repeat track elevation profile
```
# A revised code to plot the elevations of segment midpoints (h_li):
def plot_elevation(D6, ind=None, **kwargs):
"""
Plot midpoint elevation for each ATL06 segment
"""
if ind is None:
ind=np.ones_like(D6['h_li'], dtype=bool)
# pull out heights of segment midpoints
h_li = D6['h_li'][ind]
# pull out along track x coordinates of segment midpoints
x_atc = D6['x_atc'][ind]
plt.plot(x_atc, h_li, **kwargs)
```
**Data Visualization**
```
D_2l={}
D_2r={}
# specify the rgt here:
rgt="0027"
rgt="0848" #Ben's suggestion
# iterate over the repeat cycles
for cycle in ['03','04','05','06','07']:
for filename in glob.glob(os.path.join(datapath, f'*ATL06_*_{rgt}{cycle}*_003*.h5')):
try:
# read the left-beam data
D_2l[filename]=atl06_to_dict(filename,'/gt2l', index=None, epsg=3031)
# read the right-beam data
D_2r[filename]=atl06_to_dict(filename,'/gt2r', index=None, epsg=3031)
# plot the locations in the previous plot
map_ax.plot(D_2r[filename]['x'], D_2r[filename]['y'],'k');
map_ax.plot(D_2l[filename]['x'], D_2l[filename]['y'],'k');
except Exception as e:
print(f'filename={filename}, exception={e}')
plt.figure();
for filename, Di in D_2l.items():
#Plot only points that have ATL06_quality_summary==0 (good points)
hl=plot_elevation(Di, ind=Di['atl06_quality_summary']==0, label=f"cycle={Di['cycle']}")
#hl=plt.plot(Di['x_atc'][Di['atl06_quality_summary']==0], Di['h_li'][Di['atl06_quality_summary']==0], '.', label=f"cycle={Di['cycle']}")
plt.legend()
plt.xlabel('x_atc')
plt.ylabel('elevation');
```
##### Now, we need to pull out a segment and cross correlate:
Let's try 2.93e7 through x_atc=2.935e7
```
cycles = [] # names of cycles with data
for filename, Di in D_2l.items():
cycles += [str(Di['cycle']).zfill(2)]
cycles.sort()
# x1 = 2.93e7
# x2 = 2.935e7
beams = ['gt1l','gt1r','gt2l','gt2r','gt3l','gt3r']
# try and smooth without filling nans
smoothing_window_size = int(np.round(60 / dx)) # meters / dx; odd multiples of 20 only! it will break
filt = np.ones(smoothing_window_size)
smoothed = True
### extract and plot data from all available cycles
fig, axs = plt.subplots(4,1)
x_atc = {}
h_li_raw = {}
h_li = {}
h_li_diff = {}
times = {}
for cycle in cycles:
# find Di that matches cycle:
Di = {}
x_atc[cycle] = {}
h_li_raw[cycle] = {}
h_li[cycle] = {}
h_li_diff[cycle] = {}
times[cycle] = {}
filenames = glob.glob(os.path.join(datapath, f'*ATL06_*_{rgt}{cycle}*_003*.h5'))
for filename in filenames:
try:
for beam in beams:
Di[filename]=atl06_to_dict(filename,'/'+ beam, index=None, epsg=3031)
times[cycle][beam] = Di[filename]['data_start_utc']
# extract h_li and x_atc for that section
x_atc_tmp = Di[filename]['x_atc']
h_li_tmp = Di[filename]['h_li']#[ixs]
# segment ids:
seg_ids = Di[filename]['segment_id']
# print(len(seg_ids), len(x_atc_tmp))
# make a monotonically increasing x vector
                # assumes dx = 20 exactly, so be careful referencing back
ind = seg_ids - np.nanmin(seg_ids) # indices starting at zero, using the segment_id field, so any skipped segment will be kept in correct location
x_full = np.arange(np.max(ind)+1) * 20 + x_atc_tmp[0]
h_full = np.zeros(np.max(ind)+1) + np.NaN
h_full[ind] = h_li_tmp
x_atc[cycle][beam] = x_full
h_li_raw[cycle][beam] = h_full
# running average smoother /filter
if smoothed == True:
h_smoothed = (1/smoothing_window_size) * np.convolve(filt, h_full, mode="same")
#h_smoothed = h_smoothed[int(np.floor(smoothing_window_size/2)):int(-np.floor(smoothing_window_size/2))] # cut off ends
h_li[cycle][beam] = h_smoothed
# # differentiate that section of data
h_diff = (h_smoothed[1:] - h_smoothed[0:-1]) / (x_full[1:] - x_full[0:-1])
else:
h_li[cycle][beam] = h_full
h_diff = (h_full[1:] - h_full[0:-1]) / (x_full[1:] - x_full[0:-1])
h_li_diff[cycle][beam] = h_diff
# plot
axs[0].plot(x_full, h_full)
axs[1].plot(x_full[1:], h_diff)
# axs[2].plot(x_atc_tmp[1:] - x_atc_tmp[:-1])
axs[2].plot(np.isnan(h_full))
axs[3].plot(seg_ids[1:]- seg_ids[:-1])
        except Exception as e:
            print(f'filename={filename}, exception={e}')
```
**Speaker: Grace Barcheck**
```
n_veloc = len(cycles) - 1
segment_length = 3000 # m
x1 = 2.935e7# 2.925e7#x_atc[cycles[0]][beams[0]][1000] <-- the very first x value in a file; doesn't work, I think b/c nans # 2.93e7
#x1=2.917e7
search_width = 800 # m
dx = 20 # meters between x_atc points
for veloc_number in range(n_veloc):
cycle1 = cycles[veloc_number]
cycle2 = cycles[veloc_number+1]
    t1_string = times[cycle1]['gt1l'][0].astype(str)  # figure out later if just picking the first one is ok
    t1 = Time(t1_string)
    t2_string = times[cycle2]['gt1l'][0].astype(str)  # figure out later if just picking the first one is ok
    t2 = Time(t2_string)
dt = (t2 - t1).jd # difference in julian days
velocities = {}
for beam in beams:
fig1, axs = plt.subplots(4,1)
# cut out small chunk of data at time t1 (first cycle)
x_full_t1 = x_atc[cycle1][beam]
ix_x1 = np.arange(len(x_full_t1))[x_full_t1 >= x1][0]
ix_x2 = ix_x1 + int(np.round(segment_length/dx))
x_t1 = x_full_t1[ix_x1:ix_x2]
h_li1 = h_li_diff[cycle1][beam][ix_x1-1:ix_x2-1] # start 1 index earlier because the data are differentiated
# cut out a wider chunk of data at time t2 (second cycle)
x_full_t2 = x_atc[cycle2][beam]
ix_x3 = ix_x1 - int(np.round(search_width/dx)) # offset on earlier end by # indices in search_width
ix_x4 = ix_x2 + int(np.round(search_width/dx)) # offset on later end by # indices in search_width
x_t2 = x_full_t2[ix_x3:ix_x4]
h_li2 = h_li_diff[cycle2][beam][ix_x3:ix_x4]
# plot data
axs[0].plot(x_t2, h_li2, 'r')
axs[0].plot(x_t1, h_li1, 'k')
axs[0].set_xlabel('x_atc (m)')
# correlate old with newer data
corr = correlate(h_li1, h_li2, mode = 'valid', method = 'direct')
norm_val = np.sqrt(np.sum(h_li1**2)*np.sum(h_li2**2)) # normalize so values range between 0 and 1
corr = corr / norm_val
# lagvec = np.arange( -(len(h_li1) - 1), len(h_li2), 1)# for mode = 'full'
# lagvec = np.arange( -int(search_width/dx) - 1, int(search_width/dx) +1, 1) # for mode = 'valid'
lagvec = np.arange(- int(np.round(search_width/dx)), int(search_width/dx) +1,1)# for mode = 'valid'
shift_vec = lagvec * dx
ix_peak = np.arange(len(corr))[corr == np.nanmax(corr)][0]
best_lag = lagvec[ix_peak]
best_shift = shift_vec[ix_peak]
velocities[beam] = best_shift/(dt/365)
axs[1].plot(lagvec,corr)
axs[1].plot(lagvec[ix_peak],corr[ix_peak], 'r*')
axs[1].set_xlabel('lag (samples)')
axs[2].plot(shift_vec,corr)
axs[2].plot(shift_vec[ix_peak],corr[ix_peak], 'r*')
axs[2].set_xlabel('shift (m)')
# plot shifted data
axs[3].plot(x_t2, h_li2, 'r')
axs[3].plot(x_t1 - best_shift, h_li1, 'k')
axs[3].set_xlabel('x_atc (m)')
axs[0].text(x_t2[100], 0.6*np.nanmax(h_li2), beam)
axs[1].text(lagvec[5], 0.6*np.nanmax(corr), 'best lag: ' + str(best_lag) + '; corr val: ' + str(np.round(corr[ix_peak],3)))
axs[2].text(shift_vec[5], 0.6*np.nanmax(corr), 'best shift: ' + str(best_shift) + ' m'+ '; corr val: ' + str(np.round(corr[ix_peak],3)))
axs[2].text(shift_vec[5], 0.3*np.nanmax(corr), 'veloc of ' + str(np.round(best_shift/(dt/365),1)) + ' m/yr')
plt.tight_layout()
fig1.suptitle('black = older cycle data, red = newer cycle data to search across')
n_veloc = len(cycles) - 1
segment_length = 2000 # m
search_width = 800 # m
dx = 20 # meters between x_atc points
correlation_threshold = 0.65
x1 = 2.915e7#x_atc[cycles[0]][beams[0]][1000] <-- the very first x value in a file; doesn't work, I think b/c nans # 2.93e7
x1s = x_atc[cycles[veloc_number]][beams[0]][search_width:-segment_length-2*search_width:10]
velocities = {}
correlations = {}
for beam in beams:
velocities[beam] = np.empty_like(x1s)
correlations[beam] = np.empty_like(x1s)
for xi,x1 in enumerate(x1s):
for veloc_number in range(n_veloc):
cycle1 = cycles[veloc_number]
cycle2 = cycles[veloc_number+1]
        t1_string = times[cycle1]['gt1l'][0].astype(str)  # figure out later if just picking the first one is ok
        t1 = Time(t1_string)
        t2_string = times[cycle2]['gt1l'][0].astype(str)  # figure out later if just picking the first one is ok
        t2 = Time(t2_string)
dt = (t2 - t1).jd # difference in julian days
for beam in beams:
# cut out small chunk of data at time t1 (first cycle)
x_full_t1 = x_atc[cycle1][beam]
ix_x1 = np.arange(len(x_full_t1))[x_full_t1 >= x1][0]
ix_x2 = ix_x1 + int(np.round(segment_length/dx))
x_t1 = x_full_t1[ix_x1:ix_x2]
h_li1 = h_li_diff[cycle1][beam][ix_x1-1:ix_x2-1] # start 1 index earlier because the data are differentiated
# cut out a wider chunk of data at time t2 (second cycle)
x_full_t2 = x_atc[cycle2][beam]
ix_x3 = ix_x1 - int(np.round(search_width/dx)) # offset on earlier end by # indices in search_width
ix_x4 = ix_x2 + int(np.round(search_width/dx)) # offset on later end by # indices in search_width
x_t2 = x_full_t2[ix_x3:ix_x4]
h_li2 = h_li_diff[cycle2][beam][ix_x3:ix_x4]
# correlate old with newer data
corr = correlate(h_li1, h_li2, mode = 'valid', method = 'direct')
norm_val = np.sqrt(np.sum(h_li1**2)*np.sum(h_li2**2)) # normalize so values range between 0 and 1
corr = corr / norm_val
# lagvec = np.arange( -(len(h_li1) - 1), len(h_li2), 1)# for mode = 'full'
# lagvec = np.arange( -int(search_width/dx) - 1, int(search_width/dx) +1, 1) # for mode = 'valid'
lagvec = np.arange(- int(np.round(search_width/dx)), int(search_width/dx) +1,1)# for mode = 'valid'
shift_vec = lagvec * dx
if all(np.isnan(corr)):
velocities[beam][xi] = np.nan
correlations[beam][xi] = np.nan
else:
correlation_value = np.nanmax(corr)
if correlation_value >= correlation_threshold:
ix_peak = np.arange(len(corr))[corr == correlation_value][0]
best_lag = lagvec[ix_peak]
best_shift = shift_vec[ix_peak]
velocities[beam][xi] = best_shift/(dt/365)
correlations[beam][xi] = correlation_value
else:
velocities[beam][xi] = np.nan
correlations[beam][xi] = correlation_value
plt.figure()
ax1 = plt.subplot(211)
for filename, Di in D_2l.items():
#Plot only points that have ATL06_quality_summary==0 (good points)
hl=plot_elevation(Di, ind=Di['atl06_quality_summary']==0, label=f"cycle={Di['cycle']}")
#hl=plt.plot(Di['x_atc'][Di['atl06_quality_summary']==0], Di['h_li'][Di['atl06_quality_summary']==0], '.', label=f"cycle={Di['cycle']}")
plt.legend()
plt.ylabel('elevation');
ax2 = plt.subplot(212,sharex=ax1)
for beam in beams:
plt.plot(x1s+dx*(segment_length/2),velocities[beam],'.',alpha=0.2,ms=3,label=beam)
plt.ylabel('velocity (m/yr)')
plt.xlabel('x_atc')
plt.ylim(0,1500)
plt.legend()
plt.suptitle('Along track velocity: all beams')
```
#### **Median velocity for all 6 beams:**
**Above a cross-correlation threshold of 0.65**
```
plt.figure()
ax1 = plt.subplot(211)
for filename, Di in D_2l.items():
#Plot only points that have ATL06_quality_summary==0 (good points)
hl=plot_elevation(Di, ind=Di['atl06_quality_summary']==0, label=f"cycle={Di['cycle']}")
#hl=plt.plot(Di['x_atc'][Di['atl06_quality_summary']==0], Di['h_li'][Di['atl06_quality_summary']==0], '.', label=f"cycle={Di['cycle']}")
plt.legend()
plt.ylabel('elevation');
ax2 = plt.subplot(212,sharex=ax1)
medians = np.empty(len(x1s))
stds = np.empty(len(x1s))
for xi, x1 in enumerate(x1s):
corr_vals = []
velocs = []
for beam in beams:
corr_vals += [correlations[beam][xi]]
velocs += [velocities[beam][xi]]
n_obs = len(velocs)
if n_obs >0:
corr_mask = np.array(corr_vals) >= correlation_threshold
veloc_mask = np.abs(np.array(velocs)) < 0.67*segment_length # get rid of segments that are nailed against one edge for some reason
mask = corr_mask * veloc_mask
median_veloc = np.nanmedian(np.array(velocs)[mask])
std_veloc = np.nanstd(np.array(velocs)[mask])
medians[xi] = median_veloc
stds[xi] = std_veloc
ax2.plot([x1,x1], [median_veloc - std_veloc, median_veloc +std_veloc], '-', color= [0.7, 0.7, 0.7])
ax2.plot(x1s, medians, 'k.', markersize=2)
# for beam in beams:
# plt.plot(x1s+dx*(segment_length/2),velocities[beam],'.',alpha=0.2,ms=3,label=beam)
plt.ylabel('velocity (m/yr)')
plt.xlabel('x_atc')
plt.ylim(0,1500)
plt.legend()
plt.suptitle('Median along track velocity')
plt.figure()
ax1 = plt.subplot(211)
for beam in beams:
xvals = x1s+dx*(segment_length/2)
corrs = correlations[beam]
ixs = corrs >= correlation_threshold
ax1.plot(xvals[ixs], corrs[ixs],'.',alpha=0.2,ms=3,label=beam)
plt.ylabel('correlation values, 0->1')
plt.xlabel('x_atc')
plt.legend()
plt.suptitle('Correlation values > threshold, all beams')
ax1 = plt.subplot(212)
for beam in beams:
ax1.plot(x1s+dx*(segment_length/2),correlations[beam],'.',alpha=0.2,ms=3,label=beam)
plt.ylabel('correlation values, 0->1')
plt.xlabel('x_atc')
plt.legend()
plt.suptitle('Correlation values, all beams')
```
Comparison with the MEaSUREs velocity product
### Results:
**848**
**Speaker: Lynn Kaluzienski**


**537**
**Speaker: Joseph Martin**



### Future Work for the Surface Velocity Team:
**Speaker: David Polashenski**
- Calculating correlation uncertainty
- Considering larger, more complex areas
- Pending objectives
  - Develop methodology to extract across-track velocities and test efficacy
  - Compare ICESat GLAS methodology (along-track) to ICESat-2 methodology (across-track)
  - Compare the capabilities of ICESat-2 to extract surface ice velocity from ice shelves and ice streams
|
github_jupyter
|
# aitextgen Training Hello World
_Last Updated: Feb 21, 2021 (v.0.4.0)_
by Max Woolf
A "Hello World" Tutorial to show how training works with aitextgen, even on a CPU!
```
from aitextgen.TokenDataset import TokenDataset
from aitextgen.tokenizers import train_tokenizer
from aitextgen.utils import GPT2ConfigCPU
from aitextgen import aitextgen
```
First, download this [text file of Shakespeare's plays](https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt) to the folder containing this notebook, then put the name of the downloaded file into the cell below.
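If you'd rather fetch the file from within the notebook, here is a minimal sketch that downloads it from the URL linked above:
```
import urllib.request

# Download the tiny Shakespeare corpus into the notebook folder as input.txt
url = "https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt"
urllib.request.urlretrieve(url, "input.txt")
```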
```
file_name = "input.txt"
```
You can now train a custom Byte Pair Encoding Tokenizer on the downloaded text!
This will save one file: `aitextgen.tokenizer.json`, which contains the information needed to rebuild the tokenizer.
```
train_tokenizer(file_name)
tokenizer_file = "aitextgen.tokenizer.json"
```
`GPT2ConfigCPU()` is a mini variant of GPT-2 optimized for CPU training.
For example, the number of input tokens here is 64 vs. 1024 for base GPT-2, which dramatically speeds up training.
```
config = GPT2ConfigCPU()
```
Instantiate aitextgen using the created tokenizer and config
```
ai = aitextgen(tokenizer_file=tokenizer_file, config=config)
```
You can build datasets for training by creating TokenDatasets, which automatically processes the dataset with the appropriate size.
```
data = TokenDataset(file_name, tokenizer_file=tokenizer_file, block_size=64)
data
```
Train the model! It will save pytorch_model.bin periodically and after completion to the `trained_model` folder. On a 2020 8-core iMac, this took ~25 minutes to run.
The configuration below processes 400,000 subsets of tokens (8 * 50000), which is about just one pass through all the data (1 epoch). Ideally you'll want multiple passes through the data and a training loss less than `2.0` for coherent output; when training a model from scratch, that's more difficult, but with long enough training you can get there!
```
ai.train(data, batch_size=8, num_steps=50000, generate_every=5000, save_every=5000)
```
Generate text from your trained model!
```
ai.generate(10, prompt="ROMEO:")
```
With your trained model, you can reload the model at any time by providing the `pytorch_model.bin` model weights, the `config`, and the `tokenizer`.
```
ai2 = aitextgen(model_folder="trained_model",
tokenizer_file="aitextgen.tokenizer.json")
ai2.generate(10, prompt="ROMEO:")
```
# MIT License
Copyright (c) 2021 Max Woolf
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
|
github_jupyter
|
# Train-Eval
---
## Import Libraries
```
import os
import sys
from pathlib import Path
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext.data import BucketIterator
sys.path.append("../")
from meta_infomax.datasets.fudan_reviews import prepare_data, get_data
```
## Global Constants
```
BSIZE = 16
ENCODER_DIM = 100
CLASSIFIER_DIM = 100
NUM_TASKS = 14
EPOCHS = 1
DATASETS = ['apparel', 'baby', 'books', 'camera_photo', 'electronics',
'health_personal_care', 'imdb', 'kitchen_housewares', 'magazines',
'music', 'software', 'sports_outdoors', 'toys_games', 'video']
```
# Load Data
```
from torchtext.vocab import GloVe
# prepare_data()
train_set, dev_set, test_set, vocab = get_data()
train_iter, dev_iter, test_iter = BucketIterator.splits((train_set, dev_set, test_set),
batch_sizes=(BSIZE, BSIZE*2, BSIZE*2),
sort_within_batch=False,
sort_key=lambda x: len(x.text))
batch = next(iter(train_iter))
batch
batch.text[0].shape, batch.label.shape, batch.task.shape
vocab.stoi["<pad>"]
```
# Baseline Model
```
class Encoder(nn.Module):
    def __init__(self, emb_dim, hidden_dim, num_layers):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers, batch_first=True, bidirectional=True)
    def forward(self, x):
        # zero initial hidden/cell states: (num_layers * 2 directions, batch, hidden_dim)
        h0 = torch.zeros(self.num_layers * 2, x.size(0), self.hidden_dim, device=x.device)
        c0 = torch.zeros(self.num_layers * 2, x.size(0), self.hidden_dim, device=x.device)
        out, _ = self.lstm(x, (h0, c0))
        return out
class Classifier(nn.Module):
def __init__(self, in_dim, hidden_dim, out_dim):
super().__init__()
self.layers = nn.Sequential(
nn.Linear(in_dim, hidden_dim),
nn.ReLU(),
nn.Linear(hidden_dim, out_dim)
)
def forward(self, x):
return self.layers(x)
class MultiTaskInfoMax(nn.Module):
def __init__(self, shared_encoder, embeddings, vocab, encoder_dim, encoder_layers, classifier_dim, out_dim):
super().__init__()
self.emb = nn.Embedding.from_pretrained(embeddings, freeze=True, padding_idx=vocab.stoi["<pad>"])
self.shared_encoder = shared_encoder
self.private_encoder = Encoder(embeddings.shape[-1], encoder_dim, encoder_layers)
self.classifier = Classifier(encoder_dim*4, classifier_dim, out_dim)
def forward(self, sentences, lengths):
sent_embed = self.emb(sentences)
shared_out = self.shared_encoder(sent_embed)
private_out = self.private_encoder(sent_embed)
        h = torch.cat((shared_out, private_out), dim=-1)  # concatenate along the feature dimension: (batch, seq, 4 * encoder_dim)
out = self.classifier(h)
return out, shared_out, private_out
```
# Train
## Overfit Batch
```
vocab.vectors.shape
shared_encoder = Encoder(vocab.vectors.shape[1], ENCODER_DIM, 1)
shared_encoder
multitask_models = [MultiTaskInfoMax(shared_encoder=shared_encoder, embeddings=vocab.vectors, vocab=vocab,
encoder_dim=ENCODER_DIM,encoder_layers=1, classifier_dim=CLASSIFIER_DIM, out_dim=2)
for i in range(len(DATASETS))]
multitask_models[1]
multitask_models[int(batch.task[0])]  # index the per-task models by the batch's (first) task id
```
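The cells above only instantiate the models; no training loop is shown. A common sanity check before full training is to overfit a single batch. Below is a minimal added sketch (not part of the original notebook): it assumes the batch's first `task` id selects the per-task model, and that the per-token logits are mean-pooled into sentence-level predictions before computing the cross-entropy loss.
```
# Overfit-one-batch sanity check (sketch; see assumptions above).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

task_id = int(batch.task[0])                      # pick the model for this batch's task
model = multitask_models[task_id].to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

text, lengths = batch.text                        # batch.text is (token ids, lengths)
text, labels = text.to(device), batch.label.to(device)

model.train()
for step in range(200):
    optimizer.zero_grad()
    logits, _, _ = model(text, lengths)           # (batch, seq, 2) per-token logits
    loss = criterion(logits.mean(dim=1), labels)  # mean-pool over the sequence dimension
    loss.backward()
    optimizer.step()
    if step % 50 == 0:
        print(f"step {step:3d}  loss {loss.item():.4f}")
```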
# Introduction to Jupyter Notebooks
Today we are going to learn about [Jupyter Notebooks](https://jupyter.org/)! The advantage of notebooks is that they can include explanatory text, code, and plots in the same document. **This makes notebooks an ideal playground for explaining and learning new things without having to jump between several documents.** You will use notebooks a LOT during your studies, and this is why I decided to teach you how they work very early in your programming journey.
This document itself is a notebook. It is a simple text file with a `.ipynb` file ending. Notebooks are best opened in "JupyterLab", the development environment that you learned about last week.
**Note: learning about notebooks is less urgent than learning how to write correct code. If you are feeling overwhelmed by coding, you should focus on the coding exercises for now and learn about notebooks a bit later. They will only become mandatory in about 2 to 3 weeks.**
Let's start with a short video introduction, which will invite you to download this notebook to try it yourself:
```
from IPython.display import VimeoVideo
VimeoVideo(691294249, width=900)
```
## First steps
Have you downloaded and opened the notebook as explained in the video? If not, do that first and continue this lesson on your own laptop.
At first sight the notebook looks like a text editor. Below this line, you can see a **cell**. The default purpose of a cell is to write code:
```
# Click on this cell, so that its frame gets highlighted
m = 'Hello'
print(m)
```
You can write one or more lines of code in a cell. You can **run** this code by clicking on the "Run" button from the toolbar above when the cell's frame is highlighted. Try it now!
Clicking "play" through a notebook is possible, but it is much faster to use the keybord shortcut instead: `[Shift+Enter]`. Once you have executed a cell, the next cell is selected. You can **insert** cells in a notebook with the `+` button in the toolbar. Again, it is much faster to learn the keybord shortcut for this: `[Ctrl+m]` or `[ESC]` to enter in command mode (blue frame) then press `[a]` to insert a cell "above" the active cell or `[b]` for "below".
Create a few empty cells above and below the current one and try to create some variables. Instead of clicking on a cell to enter in edit mode press `[Enter]`.
You can **delete** a cell by clicking "Delete" in the "Edit" menu, or you can use the shortcut: `[Ctrl+m]` to enter in command mode then press `[d]` two times!
## More cell editing
When you have a look into the "Edit" menu, you will see that there are more possibilities to edit cells, like:
- **copy** / **cut** and **paste**
- **splitting** and **merging** cells
and more.
```
a = 'This cell needs to be split.'
b = 'Put the cursor in the row between the variables a and b, then choose [split cell] in the "Edit" menu!'
```
Another helpful command is "Undo delete cell", which is sometimes needed when the key `[d]` was pressed too fast.
## Writing and executing code
The variables created in one cell can be used (or overwritten) in subsequent cells:
```
s = 'Hello'
print(s)
s = s + ' Python!'
# Lines which start with # are not executed. These are for comments.
s
```
Note that we omitted the `print` command above (this is OK if you want to print something at the end of the cell only).
In jupyter notebooks, **code autocompletion** is supported. This is very useful when writing code. Variable names, functions and methods can be completed by pressing `[TAB]`.
```
# Let's define a random sentence as string.
sentence = 'How Are You?'
# Now try autocompletion! Type 'se' in the next row and press [TAB].
```
An advantage of notebooks is that each single cell can be executed separately. That provides an easy way to execute code step by step, one cell after another. **It is important to notice that the order in which you execute the cells is the order with which the jupyter notebook calculates and saves variables - the execution order therefore depends on you, NOT on the order of the cells in the document**. That means that it makes a difference, whether you execute the cells top down one after another, or you mix them (cell 1, then cell 5, then cell 2 etc.).
The numbers on the left of each cell show you your order of execution. When a calculation is running longer, you will see an asterisk in the place of the number. That leads us to the next topic:
## Restart or interrupt the kernel
Sometimes calculations last too long and you want to **interrupt** them. You can do this by clicking the "Stop button" in the toolbar.
The "**kernel**" of a notebook is the actual python interpreter which runs your code. There is one kernel per opened notebook (i.e. the notebooks cannot share data or variables between each other). In certain situations (for example, if you got confused about the order of your cells and variables and want a fresh state), you might want to **retart the kernel**. You can do so (as well as other options such as **clearing the output** of a notebook) in the "Kernel" menu in the top jupyterlab bar.
## Errors in a cell
Sometimes, a piece of code in a cell won't run properly. This happens to everyone! Here is an example:
```
# This will produce a "NameError"
test = 1 + 3
print(tesT)
```
When a cell ends with an error, don't panic! Nothing we cannot recover from. First of all, the other cells will still run (unless they depend on the output of the failing cell): i.e., your kernel is still active. If you want to recover from the error, address the problem (here a capitalization typo: `tesT` instead of `test`) and re-run the cell.
## Formatting your notebook with text, titles and formulas
The default role of a cell is to run code, but you can tell the notebook to format a cell as "text" by clicking in the menu bar on "Cell" and choosing "Cell Type" $\rightarrow$ "Markdown". The current cell will then be rendered as normal text.
Again, there is a keyboard shortcut for this: press `[Ctrl+m]` to enter command mode and then `[m]` to convert the active cell to text. The opposite (converting a text cell to code) can be done with `[Ctrl+m]` to enter command mode and then `[y]`.
As we have seen, the notebook editor has two simple modes: the "command mode" to navigate between cells and activate them, and the "edit mode" to edit their content. To edit a cell you have two choices:
- press `[enter]` on a selected (highlighted) cell
- double click on a cell (any cell)
Now, try to edit the cell below!
A text cell is formatted with the [Markdown](https://en.wikipedia.org/wiki/Markdown) format, e.g. it is possible to write lists:
- item 1
- item 2
Numbered lists:
1. part a
2. part b
Titles with the `#` syntax (the number of `#` indicating the level of the title):
### This is a level 3 title (with 3 `#` symbols)
Mathematical formulas can be written down with the familiar Latex notation:
$$ E = m c^2$$
You can also write text in **bold** or *cursive*.
## Download a notebook
Jupyter notebooks can be downloaded in various formats:
- Standard notebook (`*.ipynb`): a text file only useful within the Jupyter framework
- Python (`*.py`): a python script that can be executed separately.
- HTML (`*.html`): an html text file that can be opened in any web browser (doesn't require python or jupyter!)
- ... and a number of other formats that may or may not work depending on your installation
**To download a jupyter notebook in the notebook format** (`.ipynb`), select the file on the left-hand side bar, right-click and select "Download". Try it now!
**For all other formats**, go to the "File" menu, then "Export notebook as..."
## Take home points
- jupyter notebooks consist of cells, which can be either code or text (not both)
- one can navigate between cells in "command mode" (`[ctrl+m]`) and edit them in "edit mode" (`[enter]` or double click)
- to execute a cell, do: `[shift+enter]`
- the order of execution of cells *does* matter
- a text cell is written in markdown format, which allows lots of fancy formatting
These were the most important features of jupyter-notebook. In the notebook's menu bar the tab "Help" provides links to the documentation. Keyboard shortcuts are listed in the "Palette" icon on the left-hand side toolbar. Furthermore, there are more tutorials [on the Jupyter website](https://jupyter.org/try).
But with the few commands you learned today, you are already well prepared for the rest of the class!
# Navigation
---
In this notebook, you will learn how to use the Unity ML-Agents environment for the first project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893).
### 1. Start the Environment
We begin by importing some necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
```
from unityagents import UnityEnvironment
import numpy as np
```
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/Banana.app"`
- **Windows** (x86): `"path/to/Banana_Windows_x86/Banana.exe"`
- **Windows** (x86_64): `"path/to/Banana_Windows_x86_64/Banana.exe"`
- **Linux** (x86): `"path/to/Banana_Linux/Banana.x86"`
- **Linux** (x86_64): `"path/to/Banana_Linux/Banana.x86_64"`
- **Linux** (x86, headless): `"path/to/Banana_Linux_NoVis/Banana.x86"`
- **Linux** (x86_64, headless): `"path/to/Banana_Linux_NoVis/Banana.x86_64"`
For instance, if you are using a Mac, then you downloaded `Banana.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="Banana.app")
```
```
env = UnityEnvironment(file_name="Banana.app", no_graphics=True)
```
### 1. Define imports
python 3, numpy, matplotlib, torch
```
# General imports
import numpy as np
import random
from collections import namedtuple, deque
import matplotlib.pyplot as plt
%matplotlib inline
# torch imports
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# Constants Definitions
BUFFER_SIZE = int(1e5) # replay buffer size
BATCH_SIZE = 64 # minibatch size
GAMMA = 0.99 # discount factor
TAU = 1e-3 # for soft update of target parameters
LR = 5e-4 # learning rate
UPDATE_EVERY = 4 # how often to update the network
# Number of neurons in the layers of the q Network
FC1_UNITS = 32 # 16 # 32 # 64
FC2_UNITS = 16 # 8 # 16 # 64
FC3_UNITS = 8
# Work area to quickly test utility functions
import time
from datetime import datetime, timedelta
start_time = time.time()
time.sleep(10)
print('Elapsed : {}'.format(timedelta(seconds=time.time() - start_time)))
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
from unityagents import UnityEnvironment
env = UnityEnvironment(file_name="Banana.app", no_graphics=True)
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
The simulation contains a single agent that navigates a large environment. At each time step, it has four actions at its disposal:
- `0` - walk forward
- `1` - walk backward
- `2` - turn left
- `3` - turn right
The state space has `37` dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. A reward of `+1` is provided for collecting a yellow banana, and a reward of `-1` is provided for collecting a blue banana.
The cell below tests to make sure the environment is up and running by printing some information about the environment.
```
# reset the environment for training agents via external python API
env_info = env.reset(train_mode=True)[brain_name]
# number of agents in the environment
print('Number of agents:', len(env_info.agents))
# number of actions
action_size = brain.vector_action_space_size
print('Number of actions:', action_size)
# examine the state space
state = env_info.vector_observations[0]
print('States look like:', state)
state_size = len(state)
print('States have length:', state_size)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
class QNetwork(nn.Module):
"""Actor (Policy) Model."""
def __init__(self, state_size, action_size, seed, fc1_units = FC1_UNITS, fc2_units = FC2_UNITS, fc3_units = FC3_UNITS):
"""Initialize parameters and build model.
Params
======
state_size (int): Dimension of each state
action_size (int): Dimension of each action
seed (int): Random seed
"""
super(QNetwork, self).__init__()
self.seed = torch.manual_seed(seed)
self.fc1 = nn.Linear(state_size,fc1_units)
self.fc2 = nn.Linear(fc1_units,fc2_units)
self.fc3 = nn.Linear(fc2_units,fc3_units)
self.fc4 = nn.Linear(fc3_units,action_size)
def forward(self, state):
"""Build a network that maps state -> action values."""
x = F.relu(self.fc1(state))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = self.fc4(x)
return x
class ReplayBuffer:
"""Fixed-size buffer to store experience tuples."""
def __init__(self, action_size, buffer_size, batch_size, seed):
"""Initialize a ReplayBuffer object.
Params
======
action_size (int): dimension of each action
buffer_size (int): maximum size of buffer
batch_size (int): size of each training batch
seed (int): random seed
"""
self.action_size = action_size
self.memory = deque(maxlen=buffer_size)
self.batch_size = batch_size
self.experience = namedtuple("Experience", field_names=["state", "action", "reward", "next_state", "done"])
self.seed = random.seed(seed)
def add(self, state, action, reward, next_state, done):
"""Add a new experience to memory."""
e = self.experience(state, action, reward, next_state, done)
self.memory.append(e)
def sample(self):
"""Randomly sample a batch of experiences from memory."""
experiences = random.sample(self.memory, k=self.batch_size)
states = torch.from_numpy(np.vstack([e.state for e in experiences if e is not None])).float().to(device)
actions = torch.from_numpy(np.vstack([e.action for e in experiences if e is not None])).long().to(device)
rewards = torch.from_numpy(np.vstack([e.reward for e in experiences if e is not None])).float().to(device)
next_states = torch.from_numpy(np.vstack([e.next_state for e in experiences if e is not None])).float().to(device)
dones = torch.from_numpy(np.vstack([e.done for e in experiences if e is not None]).astype(np.uint8)).float().to(device)
return (states, actions, rewards, next_states, dones)
def __len__(self):
"""Return the current size of internal memory."""
return len(self.memory)
class Agent():
"""Interacts with and learns from the environment."""
def __init__(self, state_size, action_size, seed):
"""Initialize an Agent object.
Params
======
state_size (int): dimension of each state
action_size (int): dimension of each action
seed (int): random seed
"""
self.state_size = state_size
self.action_size = action_size
self.seed = random.seed(seed)
# Q-Network
self.qnetwork_local = QNetwork(state_size, action_size, seed).to(device)
self.qnetwork_target = QNetwork(state_size, action_size, seed).to(device)
self.optimizer = optim.Adam(self.qnetwork_local.parameters(), lr=LR)
# Replay memory
self.memory = ReplayBuffer(action_size, BUFFER_SIZE, BATCH_SIZE, seed)
# Initialize time step (for updating every UPDATE_EVERY steps)
self.t_step = 0
def step(self, state, action, reward, next_state, done):
# Save experience in replay memory
self.memory.add(state, action, reward, next_state, done)
# Learn every UPDATE_EVERY time steps.
self.t_step = (self.t_step + 1) % UPDATE_EVERY
if self.t_step == 0:
# If enough samples are available in memory, get random subset and learn
if len(self.memory) > BATCH_SIZE:
experiences = self.memory.sample()
self.learn(experiences, GAMMA)
def act(self, state, eps=0.):
"""Returns actions for given state as per current policy.
Params
======
state (array_like): current state
eps (float): epsilon, for epsilon-greedy action selection
"""
state = torch.from_numpy(state).float().unsqueeze(0).to(device)
self.qnetwork_local.eval()
with torch.no_grad():
action_values = self.qnetwork_local(state)
self.qnetwork_local.train()
# Epsilon-greedy action selection
if random.random() > eps:
return np.argmax(action_values.cpu().data.numpy())
else:
return random.choice(np.arange(self.action_size))
def learn(self, experiences, gamma):
"""Update value parameters using given batch of experience tuples.
Params
======
experiences (Tuple[torch.Variable]): tuple of (s, a, r, s', done) tuples
gamma (float): discount factor
"""
states, actions, rewards, next_states, dones = experiences
# compute and minimize the loss
# Get max predicted Q values (for next states) from target model
Q_targets_next = self.qnetwork_target(next_states).detach().max(1)[0].unsqueeze(1)
# Compute Q targets for current states
Q_targets = rewards + gamma * Q_targets_next * (1 - dones)
# Get expected Q values from local model
Q_expected = self.qnetwork_local(states).gather(1,actions)
# Compute Loss
loss = F.mse_loss(Q_expected,Q_targets)
#Minimize Loss
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
# ------------------- update target network ------------------- #
self.soft_update(self.qnetwork_local, self.qnetwork_target, TAU)
def soft_update(self, local_model, target_model, tau):
"""Soft update model parameters.
θ_target = τ*θ_local + (1 - τ)*θ_target
Params
======
local_model (PyTorch model): weights will be copied from
target_model (PyTorch model): weights will be copied to
tau (float): interpolation parameter
"""
for target_param, local_param in zip(target_model.parameters(), local_model.parameters()):
target_param.data.copy_(tau*local_param.data + (1.0-tau)*target_param.data)
agent = Agent(state_size=state_size, action_size=action_size, seed=42)
print(agent.qnetwork_local)
def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
"""Deep Q-Learning.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
eps_start (float): starting value of epsilon, for epsilon-greedy action selection
eps_end (float): minimum value of epsilon
eps_decay (float): multiplicative factor (per episode) for decreasing epsilon
"""
scores = [] # list containing scores from each episode
scores_window = deque(maxlen=100) # last 100 scores
eps = eps_start # initialize epsilon
has_seen_13 = False
max_score = 0
for i_episode in range(1, n_episodes+1):
env_info = env.reset(train_mode=True)[brain_name] # reset the environment
state = env_info.vector_observations[0] # get the current state
score = 0
max_steps = 0
for t in range(max_t):
action = agent.act(state, eps)
# next_state, reward, done, _ = env.step(action)
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0]
agent.step(state, action, reward, next_state, done)
state = next_state
score += reward
max_steps += 1
if done:
break
scores_window.append(score) # save most recent score
scores.append(score) # save most recent score
eps = max(eps_end, eps_decay*eps) # decrease epsilon
print('\rEpisode : {}\tAverage Score : {:.2f}\tMax_steps : {}'.format(i_episode, np.mean(scores_window),max_steps), end="")
if i_episode % 100 == 0:
print('\rEpisode : {}\tAverage Score : {:.2f}\tMax_steps : {}'.format(i_episode, np.mean(scores_window),max_steps))
if (np.mean(scores_window)>=13.0) and (not has_seen_13):
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))
torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
has_seen_13 = True
# break
# To see how far it can go
# Store the best model
if np.mean(scores_window) > max_score:
max_score = np.mean(scores_window)
torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
return scores
start_time = time.time()
scores = dqn() # The env ends at 300 steps. Tried max_t > 1K. Didn't see any complex adaptive temporal behavior
env.close() # Close the environment
print('Elapsed : {}'.format(timedelta(seconds=time.time() - start_time)))
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
agent.qnetwork_local
print('Max Score {:2f} at {}'.format(np.max(scores), np.argmax(scores)))
print('Percentile [25,50,75] : {}'.format(np.percentile(scores,[25,50,75])))
print('Variance : {:.3f}'.format(np.var(scores)))
# Run Logs
'''
Max = 2000 episodes,
GAMMA = 0.99 # discount factor
TAU = 1e-3 # for soft update of target parameters
LR = 5e-4 # learning rate
UPDATE_EVERY = 4 # how often to update the network
fc1:64-fc2:64-fc3:4 -> 510 episodes/13.04, Max 26 @ 1183 episodes, Running 100 mean 16.25 @1200, @1800
Elapsed : 1:19:28.997291
fc1:32-fc2:16-fc3:4 -> 449 episodes/13.01, Max 28 @ 1991 episodes, Running 100 mean 16.41 @1300, 16.66 @1600, 17.65 @2000
Elapsed : 1:20:27.390989
Less Variance ? Overall learns better & steady; keeps high scores once it learned them - match impedance
percentile[25,50,75] = [11. 15. 18.]; var = 30.469993749999997
fc1:16-fc2:8-fc3:4 -> 502 episodes/13.01, Max 28 @ 1568 episodes, Running 100 mean 16.41 @1400, 16.23 @1500, 16.32 @1600
Elapsed : 1:18:33.396898
percentile[25,50,75] = [10. 14. 17.]; var = 30.15840975
Very calm CPU ! Embed in TX2 or raspberry Pi environment - definitely this network
Doesn't reach the highs of a larger network
fc1:32-fc2:16-fc3:8-fc4:4 -> 405 episodes/13.03, Max 28 @ 1281 episodes, Running 100 mean 17.05 @1500, 16.69 @1700
Elapsed : 1:24:07.507518
percentile[25,50,75] = [11. 15. 18.]; var = 34.83351975
Back to heavy CPU usage. Reaches solution faster, so definitely more fidelity. Depth gives early advantage
fc1:64-fc2:32-fc3:16-fc4:8-fc5:4 -> 510 episodes, max score : 15.50 @ 700 episodes
Monstrous atrocity !
fc1:32-fc2:4-> 510 episodes, max score : 15.50 @ 700 episodes
Minimalist
'''
print(np.percentile(scores,[25,50,75]))
print(np.var(scores))
print(agent.qnetwork_local)
print('Max Score {:2f} at {}'.format(np.max(scores), np.argmax(scores)))
len(scores)
np.median(scores)
# Run best model
```
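The final cell above contains only the placeholder comment `# Run best model`. A minimal added sketch of what such a cell could look like, assuming the `Banana.app` environment file and the `checkpoint.pth` saved during training are available (the environment was closed above, so it must be re-opened first):
```
# Reload the saved weights and watch the trained agent act greedily.
env = UnityEnvironment(file_name="Banana.app")       # re-open the environment
brain_name = env.brain_names[0]

agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth'))

env_info = env.reset(train_mode=False)[brain_name]   # train_mode=False -> real-time rendering
state = env_info.vector_observations[0]
score = 0
while True:
    action = agent.act(state, eps=0.0)               # purely greedy policy
    env_info = env.step(action)[brain_name]
    state = env_info.vector_observations[0]
    score += env_info.rewards[0]
    if env_info.local_done[0]:
        break
print('Score:', score)
env.close()
```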
```
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
df = pd.read_csv('train_FD001.txt', sep=' ', header=None)
# dropping NAN values
df = df.dropna(axis=1, how='all')
# Naming the columns
df.columns = ["unit", "cycles", "Op1",
"Op2", "Op3", "S1", "S2",
"S3", "S4", "S5", "S6", "S7", "S8", "S9", "S10", "S11",
"S12", "S13", "S14", "S15", "S16", "S17", "S18", "S19", "S20", "S21"]
# # show dataframe
# df.head()
# data preprocessing; removing unnecessary data
df.drop(['Op3','S1', 'S5', 'S6', 'S16', 'S10', 'S18', 'S19'], axis=1, inplace=True)
df.head()
# MinMaxScaler
scaler = MinMaxScaler()
df.iloc[:,2:18] = scaler.fit_transform(df.iloc[:,2:18])
# finding the max cycles of a unit which is used to find the Time to Failure (TTF)
df = pd.merge(df, df.groupby('unit', as_index=False)['cycles'].max(), how='left', on='unit')
df.rename(columns={"cycles_x": "cycles", "cycles_y": "maxcycles"}, inplace=True)
df['TTF'] = df['maxcycles'] - df['cycles']
# defining Fraction of Time to Failure (fTTF), where value of 1 denotes healthy engine and 0 denotes failure
def fractionTTF(dat,q):
return(dat.TTF[q]-dat.TTF.min()) / (dat.TTF.max()-dat.TTF.min())
fTTFz = []
fTTF = []
for i in range(df['unit'].min(),df['unit'].max()+1):
dat=df[df.unit==i]
dat = dat.reset_index(drop=True)
for q in range(len(dat)):
fTTFz = fractionTTF(dat, q)
fTTF.append(fTTFz)
df['fTTF'] = fTTF
df
cycles = df.groupby('unit', as_index=False)['cycles'].max()
mx = cycles.iloc[0:4,1].sum()
plt.plot(df.fTTF[0:mx])
plt.legend(['Time to failure (fraction)'], bbox_to_anchor=(0., 1.02, 1., .102), loc=3, mode="expand", borderaxespad=0)
plt.ylabel('Scaled unit')
plt.show()
# splitting train and test data, test size 20%
# train set
df_train = df[(df.unit <= 80)]
X_train = df_train[['cycles', 'Op1', 'Op2', 'S2', 'S3', 'S4', 'S7', 'S8', 'S9', 'S11', 'S12',
'S13', 'S14', 'S15', 'S17', 'S20', 'S21']].values
y_train = df_train[['fTTF']].values.ravel()
# test set
df_test = df[(df.unit > 80)]
X_test = df_test[['cycles', 'Op1', 'Op2', 'S2', 'S3', 'S4', 'S7', 'S8', 'S9', 'S11', 'S12',
'S13', 'S14', 'S15', 'S17', 'S20', 'S21']].values
y_test = df_test[['fTTF']].values.ravel()
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.wrappers.scikit_learn import KerasRegressor
model = Sequential()
model.add(Dense(50, input_dim=17, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X_train, y_train, epochs = 50)
score = model.predict(X_test)
df_test['predicted'] = score
plt.figure(figsize = (16, 8))
plt.plot(df_test.fTTF)
plt.plot(df_test.predicted)
# RMSE
from sklearn.metrics import mean_squared_error, r2_score
from math import sqrt
nn_rmse = sqrt(mean_squared_error(y_test, score))
print("RMSE: ", nn_rmse)
# r2 score
nn_r2 = r2_score(y_test, score)
print("r2 score: ", nn_r2)
df_test
def totcycles(data):
return(data['cycles'] / (1-data['predicted']))
df_test['maxpredcycles'] = totcycles(df_test)
df_test
from google.colab import drive
drive.mount('/drive')
df_test.to_csv('/drive/My Drive/test_dataframe.csv') # export dataframe to google drive as a csv file to analyse the findings
# upon observation it is noticed that the prediction gets more accurate the further the cycle is in the time series
# df_test.groupby('unit', as_index=False)['maxpredcycles'].quantile(.10) // pd.groupby().quantile() to get the quantile result
# df_test.groupby('unit', as_index=False)['cycles'].max()
dff = pd.merge(df_test.groupby('unit', as_index=False)['maxpredcycles'].quantile(.72), df.groupby('unit', as_index=False)['maxcycles'].max(), how='left', on='unit')
dff # display the preliminary results for observation
MAXPRED = dff.maxpredcycles
MAXI = dff.maxcycles
dff_rmse = sqrt(mean_squared_error(MAXPRED, MAXI))
print("RMSE: ", dff_rmse)
```
# Multi-dimensional Particle-in-a-Box

## 🥅 Learning Objectives
- Hamiltonian for a two-dimensional particle-in-a-box
- Hamiltonian for a three-dimensional particle-in-a-box
- Hamiltonian for a 3-dimensional spherical box
- Separation of variables
- Solutions for the 2- and 3- dimensional particle-in-a-box (rectangular)
- Solutions for the 3-dimensional particle in a box (spherical)
- Expectation values
## The 2-Dimensional Particle-in-a-Box

We have treated the particle in a one-dimensional box, where the (time-independent) Schrödinger equation was:
$$
\left(-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x) \right)\psi_n(x) = E_n \psi_n(x)
$$
where
$$
V(x) =
\begin{cases}
+\infty & x\leq 0\\
0 & 0\lt x \lt a\\
+\infty & a \leq x
\end{cases}
$$
However, electrons really move in three dimensions. Just as electrons are sometimes (essentially) confined to one dimension, they are sometimes effectively confined to two dimensions. If the confinement is to a rectangle with side-lengths $a_x$ and $a_y$, then the Schrödinger equation is:
$$
\left(-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} -\frac{\hbar^2}{2m} \frac{d^2}{dy^2} + V(x,y) \right)\psi_{n_x,n_y}(x,y) = E_{n_x,n_y} \psi_{n_x,n_y}(x,y)
$$
where
$$
V(x,y) =
\begin{cases}
+\infty & x\leq 0 \text{ or }y\leq 0 \\
0 & 0\lt x \lt a_x \text{ and } 0 \lt y \lt a_y \\
+\infty & a_x \leq x \text{ or } a_y \leq y
\end{cases}
$$
The first thing to notice is that there are now two quantum numbers, $n_x$ and $n_y$.
> The number of quantum numbers that are needed to label the state of a system is equal to its dimensionality.
The second thing to notice is that the Hamiltonian in this Schrödinger equation can be written as the sum of two Hamiltonians,
$$
\left[\left(-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V_x(x) \right)
+\left(-\frac{\hbar^2}{2m} \frac{d^2}{dy^2} + V_y(y) \right) \right]\psi_{n_x,n_y}(x,y) = E_{n_x,n_y} \psi_{n_x,n_y}(x,y)
$$
where
$$
V_x(x) =
\begin{cases}
+\infty & x\leq 0\\
0 & 0\lt x \lt a_x\\
+\infty & a_x \leq x
\end{cases} \\
V_y(y) =
\begin{cases}
+\infty & y\leq 0\\
0 & 0\lt y \lt a_y\\
+\infty & a_y \leq y
\end{cases}
$$
By the same logic as we used for the 1-dimensional particle in a box, we can deduce that the eigenfunctions for an electron in a rectangular box are:
$$
\psi_{n_x n_y}(x,y) = \frac{2}{\sqrt{a_x a_y}} \sin\left(\tfrac{n_x \pi x}{a_x}\right) \sin\left(\tfrac{n_y \pi y}{a_y}\right) \qquad \qquad n_x=1,2,3,\ldots;n_y=1,2,3,\ldots
$$
The corresponding energy is thus:
$$
E_{n_x n_y} = \frac{h^2}{8m}\left(\frac{n_x^2}{a_x^2}+\frac{n_y^2}{a_y^2}\right)\qquad \qquad n_x,n_y=1,2,3,\ldots
$$
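The following short code cell is an added illustration (not part of the derivation): it evaluates this energy formula numerically for an electron, using `scipy.constants`.
```
from scipy import constants

def energy_2d_box(nx, ny, ax, ay, m=constants.m_e):
    """E_{nx,ny} = (h^2/8m)(nx^2/ax^2 + ny^2/ay^2) in joules; ax, ay in metres."""
    return constants.h**2 / (8 * m) * (nx**2 / ax**2 + ny**2 / ay**2)

# Ground state of an electron confined to a 1 nm x 2 nm rectangle, converted to eV:
print(energy_2d_box(1, 1, 1e-9, 2e-9) / constants.e, "eV")
```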
## 📝 Exercise: Verify the above equation for the energy eigenvalues of a particle confined to a rectangular box.
## Separation of Variables
The preceding solution is a very special case of a general approach called separation of variables.
> Given a $D$-dimensional Hamiltonian that is a sum of independent terms,
$$
\hat{H}(x_1,x_2,\ldots,x_D) = \sum_{d=1}^D\hat{H}_d(x_d)
$$
where the solutions to the individual Schrödinger equations are known:
$$
\hat{H}_d \psi_{d;n_d}(x_d) = E_{d;n_d} \psi_{d;n_d}(x_d)
$$
the solution to the $D$-dimensional Schrödinger equation is
$$
\hat{H}(x_1,x_2,\ldots,x_D) \Psi_{n_1,n_2,\ldots,n_D}(x_1,x_2,\ldots,x_D) = E_{n_1,n_2,\ldots,n_D}\Psi_{n_1,n_2,\ldots,n_D}(x_1,x_2,\ldots,x_D)
$$
where the $D$-dimensional eigenfunctions are products of the Schrödinger equations of the individual terms
$$
\Psi_{n_1,n_2,\ldots,n_D}(x_1,x_2,\ldots,x_D) = \prod_{d=1}^D\psi_{d;n_d}(x_d)
$$
and the $D$-dimensional eigenvalues are sums of the eigenvalues of the individual terms,
$$
E_{n_1,n_2,\ldots,n_D} = \sum_{d=1}^D E_{d;n_d}
$$
This expression can be verified by direct substitution. Interpretatively, in a Hamiltonian with the form
$$
\hat{H}(x_1,x_2,\ldots,x_D) = \sum_{d=1}^D\hat{H}_d(x_d)
$$
the coordinates $x_1,x_2,\ldots,x_D$ are all independent, because there are no terms that couple them together in the Hamiltonian. This means that the probabilities of observing particular values of $x_1,x_2,\ldots,x_D$ are all independent. Recall that when probabilities are independent, they are multiplied together. E.g., if the probability that your impoverished professor will be paid today is independent of the probability that it will rain today, then
$$
p_{\text{paycheck + rain}} = p_{\text{paycheck}} p_{\text{rain}}
$$
Similarly, because $x_1,x_2,\ldots,x_D$ are all independent,
$$
p(x_1,x_2,\ldots,x_D) = p_1(x_1) p_2(x_2) \ldots p_D(x_D)
$$
Owing to the Born postulate, the probability distribution function for observing a particle at $x_1,x_2,\ldots,x_D$ is the wavefunction squared, so
$$
\left| \Psi_{n_1,n_2,\ldots,n_D}(x_1,x_2,\ldots,x_D)\right|^2 = \prod_{d=1}^D \left| \psi_{d;n_d}(x_d) \right|^2
$$
It is reassuring that the separation-of-independent-variables solution to the $D$-dimensional Schrödinger equation reproduces this intuitive conclusion.
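As a concrete special case of this direct-substitution argument, the short `sympy` sketch below (added for illustration) checks that the product wavefunction of the 2-dimensional box is an eigenfunction of the kinetic-energy operator with the eigenvalue quoted earlier.
```
import sympy as sp

x, y, ax, ay, hbar, m = sp.symbols('x y a_x a_y hbar m', positive=True)
nx, ny = sp.symbols('n_x n_y', positive=True, integer=True)

# Product wavefunction for the 2-D rectangular box (V = 0 inside the box)
psi = sp.sin(nx * sp.pi * x / ax) * sp.sin(ny * sp.pi * y / ay)

# Apply the Hamiltonian (kinetic energy only, inside the box)
H_psi = -hbar**2 / (2 * m) * (sp.diff(psi, x, 2) + sp.diff(psi, y, 2))

# The ratio H_psi / psi is constant: the energy eigenvalue
E = sp.simplify(H_psi / psi)
print(E)  # (pi*hbar)**2/(2*m) * (n_x**2/a_x**2 + n_y**2/a_y**2), i.e. h^2/8m (...) since h = 2*pi*hbar
```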
## 📝 Exercise: By direct substitution, verify that the above expressions for the eigenvalues and eigenvectors of a Hamiltonian-sum are correct.
## Degenerate States
When two quantum states have the same energy, they are said to be degenerate. For example, for a square box, where $a_x = a_y = a$, the states with $n_x=1; n_y=2$ and $n_x=2;n_y=1$ are degenerate because:
$$
E_{1,2} = \frac{h^2}{8ma^2}\left(1+4\right) = \frac{h^2}{8ma^2}\left(4+1\right) = E_{2,1}
$$
This symmetry reflects physical symmetry, whereby the $x$ and $y$ coordinates are equivalent. The state with energy $E=\frac{5h^2}{8ma^2}$ is said to be two-fold degenerate, or to have degeneracy of two.
For the particle in a square box, degeneracies of higher order also exist. An example of a three-fold degeneracy is:
$$
E_{1,7} = E_{7,1} = E_{5,5} = \frac{50h^2}{8ma^2}
$$
and an example of a four-fold degeneracy is:
$$
E_{1,8} = E_{8,1} = E_{7,4} = E_{4,7} = \frac{65h^2}{8ma^2}
$$
It isn't trivial to show that degeneracies of all orders are possible (it's tied up in the theory of [Diophantine equations](https://en.wikipedia.org/wiki/Diophantine_equation)), but perhaps it becomes plausible by giving an example with an eight-fold degeneracy:
$$
E_{8,49} = E_{49,8} = E_{16,47} = E_{47,16} = E_{23,44} = E_{44,23} = E_{28,41} = E_{41,28} = \frac{2465 h^2}{8ma^2}
$$
Notice that all of these degeneracies are removed if the symmetry of the box is broken. For example, if a slight change of the box changes it from square to rectangular, $a_x \rightarrow a_x + \delta x$, then the aforementioned states have different energies. This doesn't mean, however, that rectangular boxes do not have degeneracies. If $\tfrac{a_x}{a_y}$ is a rational number, $\tfrac{p}{q}$, then there will be *accidental* degeneracies when $n_x$ is divisible by $p$ and $n_y$ is divisible by $q$. For example, if $a_x = 2a$ and $a_y = 3a$ (so $p=2$ and $q=3$), there is a degeneracy associated with
$$
E_{4,3} = \frac{h^2}{8m}\left(\frac{4^2}{(2a)^2}+\frac{3^2}{(3a)^2}\right)
= \frac{h^2}{8m}\left(\frac{2^2}{(2a)^2}+\frac{6^2}{(3a)^2}\right) = \frac{5h^2}{8ma^2}= E_{2,6}
$$
This is called an *accidental* degeneracy because it is not related to a symmetry of the system, like the $x \sim y$ symmetry that induces the degeneracy in the case of particles in a square box.
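These degeneracy counts are easy to confirm numerically. The short cell below is an added sketch: it groups the states of a *square* box by $n_x^2 + n_y^2$, which is the energy in units of $\tfrac{h^2}{8ma^2}$.
```
from collections import defaultdict

levels = defaultdict(list)
nmax = 50
for nx in range(1, nmax + 1):
    for ny in range(1, nmax + 1):
        levels[nx**2 + ny**2].append((nx, ny))

print(levels[50])    # three-fold degeneracy: (1, 7), (5, 5), (7, 1)
print(levels[2465])  # the eight-fold degeneracy quoted above
```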
## Electrons in a 3-dimensional box (cuboid)

Suppose that particles are confined to a [cuboid](https://en.wikipedia.org/wiki/Cuboid) (or rectangular prism) with side-lengths $a_x$, $a_y$, and $a_z$. Then the Schrödinger equation is:
$$
\left[-\frac{\hbar^2}{2m}\left( \frac{d^2}{dx^2} + \frac{d^2}{dy^2} + \frac{d^2}{dz^2} \right) + V(x,y,z) \right]\psi_{n_x,n_y,n_z}(x,y,z) = E_{n_x,n_y,n_z} \psi_{n_x,n_y,n_z}(x,y,z)
$$
where
$$
V(x,y,z) =
\begin{cases}
+\infty & x\leq 0 \text{ or }y\leq 0 \text{ or }z\leq 0 \\
0 & 0\lt x \lt a_x \text{ and } 0 \lt y \lt a_y \text{ and } 0 \lt z \lt a_z \\
+\infty & a_x \leq x \text{ or } a_y \leq y \text{ or } a_z \leq z
\end{cases}
$$
There are three quantum numbers because the system is three-dimensional.
The three-dimensional second derivative is called the Laplacian, and is denoted
$$
\nabla^2 = \frac{d^2}{dx^2} + \frac{d^2}{dy^2} + \frac{d^2}{dz^2} = \nabla \cdot \nabla
$$
where
$$
\nabla = \left[ \frac{d}{dx}, \frac{d}{dy}, \frac{d}{dz} \right]^T
$$
is the operator that defines the gradient vector. The 3-dimensional momentum operator is
$$
\hat{\mathbf{p}} = -i \hbar \nabla
$$
which explains why the kinetic energy is given by
$$
\hat{T} = \frac{\hat{\mathbf{p}} \cdot \hat{\mathbf{p}}}{2m} = -\frac{\hbar^2}{2m} \nabla^2
$$
As with the 2-dimensional particle-in-a-rectangle, the 3-dimensional particle-in-a-cuboid can be solved by separation of variables. Rewriting the Schrödinger equation as:
$$
\left[\left(-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V_x(x) \right)
+\left(-\frac{\hbar^2}{2m} \frac{d^2}{dy^2} + V_y(y) \right)
+\left(-\frac{\hbar^2}{2m} \frac{d^2}{dz^2} + V_z(z) \right) \right]\psi_{n_x,n_y,n_z}(x,y,z) = E_{n_x,n_y,n_z} \psi_{n_x,n_y,n_z}(x,y,z)
$$
where
$$
V_x(x) =
\begin{cases}
+\infty & x\leq 0\\
0 & 0\lt x \lt a_x\\
+\infty & a_x \leq x
\end{cases} \\
V_y(y) =
\begin{cases}
+\infty & y\leq 0\\
0 & 0\lt y \lt a_y\\
+\infty & a_y \leq y
\end{cases} \\
V_z(z) =
\begin{cases}
+\infty & z\leq 0\\
0 & 0\lt z \lt a_z\\
+\infty & a_z \leq z
\end{cases}
$$
By the same logic as we used for the 1-dimensional particle in a box, we can deduce that the eigenfunctions for an electron in a cuboid are:
$$
\psi_{n_x n_y n_z}(x,y,z) = \frac{2\sqrt{2}}{\sqrt{a_x a_y a_z}} \
\sin\left(\tfrac{n_x \pi x}{a_x}\right) \sin\left(\tfrac{n_y \pi y}{a_y}\right)\sin\left(\tfrac{n_z \pi z}{a_z}\right) \qquad \qquad n_x=1,2,3,\ldots;n_y=1,2,3,\ldots;n_z=1,2,3,\ldots
$$
The corresponding energy is thus:
$$
E_{n_x n_y n_z} = \frac{h^2}{8m}\left(\frac{n_x^2}{a_x^2}+\frac{n_y^2}{a_y^2}+\frac{n_z^2}{a_z^2}\right)\qquad \qquad n_x,n_y,n_z=1,2,3,\ldots
$$
As before, especially when there is symmetry, there are many degenerate states. For example, for a particle-in-a-cube, where $a_x = a_y = a_z = a$, the first excited state is three-fold degenerate since:
$$
E_{2,1,1} = E_{1,2,1} = E_{1,1,2} = \frac{6h^2}{8ma^2}
$$
There are other states of even higher degeneracy. For example, there is a twelve-fold degenerate state:
$$
E_{5,8,15} = E_{5,15,8} = E_{8,5,15} = E_{8,15,5} = E_{15,5,8} = E_{15,8,5} = E_{3,4,17} = E_{3,17,4} = E_{4,3,17} = E_{4,17,3} = E_{17,3,4} = E_{17,4,3} = \frac{314 h^2}{8ma^2}
$$
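A quick numerical check of this twelve-fold degeneracy (an added sketch):
```
from itertools import permutations

triples = set(permutations((5, 8, 15))) | set(permutations((3, 4, 17)))
print(len(triples))                                         # 12 distinct states
print({nx**2 + ny**2 + nz**2 for (nx, ny, nz) in triples})  # {314}
```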
## 📝 Exercise: Verify the expressions for the eigenvalues and eigenvectors of a particle in a cuboid.
## 📝 Exercise: Construct an accidentally degenerate state for the particle-in-a-cuboid.
(Hint: this is a lot easier than you may think.)
## Particle-in-a-circle
### The Schrödinger equation for a particle confined to a circular disk.
*(Figure: the states of a particle confined by a ring of atoms (a quantum corral) are similar to those of a particle-in-a-circle. Image licensed CC-SA by Julian Voss-Andreae.)*
What happens if instead of being confined to a rectangle or a cuboid, the electrons were confined to some other shape? For example, it is not uncommon to have electrons confined in a circular disk ([quantum corral](https://en.wikipedia.org/wiki/Quantum_mirage)) or a sphere ([quantum dot](https://en.wikipedia.org/wiki/Quantum_dot)). It is relatively easy to write the Hamiltonian in these cases, but less easy to solve it because Cartesian (i.e., $x,y,z$) coordinates are less natural for these geometries.
The Schrödinger equation for a particle confined to a circular disk of radius $a$ is:
$$
\left(-\frac{\hbar^2}{2m} \nabla^2 + V(x,y) \right)\psi(x,y) = E \psi(x,y)
$$
where
$$
V(x,y) =
\begin{cases}
0 & \sqrt{x^2 + y^2} \lt a\\
+\infty & a \leq \sqrt{x^2 + y^2}
\end{cases}
$$
However, it is more useful to write this in polar coordinates (i.e., $r,\theta$):
\begin{align}
x &= r \cos \theta \\
y &= r \sin \theta \\
\\
r &= \sqrt{x^2 + y^2} \\
\theta &= \arctan{\tfrac{y}{x}}
\end{align}
because the potential depends only on the distance from the center of the system,
$$
\left(-\frac{\hbar^2}{2m} \nabla^2 + V(r) \right)\psi(r,\theta) = E \psi(r,\theta)
$$
where
$$
V(r) =
\begin{cases}
0 & r \lt a\\
+\infty & a \leq r
\end{cases}
$$
### The Schrödinger Equation in Polar Coordinates
In order to solve this Schrödinger equation, we need to rewrite the Laplacian, $\nabla^2 = \frac{d^2}{dx^2} + \frac{d^2}{dy^2}$ in polar coordinates. Deriving the Laplacian in alternative coordinate systems is a standard (and tedious) exercise that you hopefully saw in your calculus class. (If not, cf. [wikipedia](https://en.wikipedia.org/wiki/Laplace_operator) or this [meticulous derivation](https://www.math.ucdavis.edu/~saito/courses/21C.w11/polar-lap.pdf).) The result is that:
$$
\nabla^2 = \frac{d^2}{dr^2} + \frac{1}{r} \frac{d}{dr} + \frac{1}{r^2}\frac{d^2}{d\theta^2}
$$
The resulting Schrödinger equation is,
$$
\left[-\frac{\hbar^2}{2m} \left(\frac{d^2}{dr^2} + \frac{1}{r} \frac{d}{dr} + \frac{1}{r^2}\frac{d^2}{d\theta^2} \right)+ V(r) \right] \psi(r,\theta) = E \psi(r,\theta)
$$
This looks like it might be amenable to solution by separation of variables insofar as the potential doesn't couple the radial and angular positions of the particle, and the kinetic energy doesn't couple the particle's angular and radial momenta (i.e., the derivatives). So we propose the solution $\psi(r,\theta) = R(r) \Theta(\theta)$. Substituting this into the Schrödinger equation, we obtain:
\begin{align}
\left[-\frac{\hbar^2}{2m} \left(\frac{d^2}{dr^2} + \frac{1}{r} \frac{d}{dr} + \frac{1}{r^2}\frac{d^2}{d\theta^2} \right)+ V(r) \right] R(r) \Theta(\theta) &= E R(r) \Theta(\theta) \\
\left[-\Theta(\theta)\frac{\hbar^2}{2m} \left(\frac{d^2 R(r)}{dr^2} + \frac{1}{r} \frac{d R(r)}{dr} \right) -\frac{\hbar^2}{2m} \frac{R(r)}{r^2}\left(\frac{d^2 \Theta(\theta)}{d \theta^2} \right) + V(r) R(r) \Theta(\theta) \right] &= E R(r)\Theta(\theta)
\end{align}
Dividing both sides by $R(r) \Theta(\theta)$ and multiplying both sides by $r^2$ we obtain:
$$
E r^2 +\frac{\hbar^2}{2m}\frac{r^2}{R(r)} \left(\frac{d^2 R(r)}{dr^2} + \frac{1}{r} \frac{d R(r)}{dr} \right) - r^2 V(r)=-\frac{\hbar^2}{2m} \frac{1}{\Theta(\theta)}\left(\frac{d^2 \Theta(\theta)}{d \theta^2} \right)
$$
The left-hand-side depends only on $r$ and the right-hand-side depends only on $\theta$; this can only be true for all $r$ and all $\theta$ if both sides are equal to the same constant. This problem can therefore be solved by separation of variables, though it is a slightly different form from the one we considered previously.
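As an aside, here is a small `sympy` sanity check (added for illustration; not part of the derivation) that the polar-coordinate Laplacian used above reproduces the Cartesian Laplacian for a simple test function.
```
import sympy as sp

x, y = sp.symbols('x y', real=True)
r, theta = sp.symbols('r theta', positive=True)

f_xy = x**2 * y                                       # test function in Cartesian coordinates
f_rt = (r * sp.cos(theta))**2 * (r * sp.sin(theta))   # the same function in polar coordinates

lap_cart = (sp.diff(f_xy, x, 2) + sp.diff(f_xy, y, 2)).subs({x: r * sp.cos(theta), y: r * sp.sin(theta)})
lap_polar = sp.diff(f_rt, r, 2) + sp.diff(f_rt, r) / r + sp.diff(f_rt, theta, 2) / r**2

print(sp.simplify(lap_cart - lap_polar))              # 0
```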
### The Angular Schrödinger equation in Polar Coordinates
To find the solution, we first set the right-hand side (the angular part) equal to a constant, which gives a 1-dimensional Schrödinger equation for the angular motion of the particle around the circle,
$$
-\frac{\hbar^2}{2m} \frac{d^2 \Theta_l(\theta)}{d \theta^2} = E_{\theta;l} \Theta_l(\theta)
$$
This equation has the same form as the particle-in-a-box equation, but with periodic rather than box boundary conditions; it has (unnormalized) solutions
$$
\Theta_l(\theta) = e^{i l \theta} \qquad \qquad l = 0, \pm 1, \pm 2, \ldots
$$
where $l$ must be an integer; otherwise the periodicity of the wavefunction (i.e., that $\Theta_l(\theta) = \Theta_l(\theta + 2 k \pi)$ for any integer $k$) would not be satisfied. Using the expression for $\Theta_l(\theta)$, the angular kinetic energy of the particle in a circle is seen to be:
$$
E_{\theta,l} = \tfrac{\hbar^2 l^2}{2m}
$$
### The Radial Schrödinger equation in Polar Coordinates
Inserting the results for the angular wavefunction into the Schrödinger equation, we obtain
$$
E r^2 +\frac{\hbar^2}{2m}\frac{r^2}{R(r)} \left(\frac{d^2 R(r)}{dr^2} + \frac{1}{r} \frac{d R(r)}{dr} \right) - r^2 V(r)=-\frac{\hbar^2}{2m} \frac{1}{\Theta_l(\theta)}\left(\frac{d^2 \Theta_l(\theta)}{d \theta^2} \right) = \frac{\hbar^2 l^2}{2m}
$$
which can be rearranged into the radial Schrödinger equation,
$$
-\frac{\hbar^2}{2m} \left(\frac{d^2}{dr^2} + \frac{1}{r} \frac{d}{dr} - \frac{l^2}{r^2} \right)R_{n,l}(r) + V(r) R_{n,l}(r) = E_{n,l} R_{n,l}(r)
$$
Notice that the radial eigenfunctions, $R_{n,l}(r)$, and the energy eigenvalues, $E_{n,l}$, depend on the angular motion of the particle, as quantized by $l$. The term $\frac{\hbar^2 l^2}{2mr^2}$ is exactly the centrifugal potential, indicating that it takes energy to hold a rotating particle in an orbit with radius $r$, and that the required potential energy grows as $r^{-2}$ (i.e., as the orbit shrinks). Notice also that no assumptions have been made about the nature of the circular potential, $V(r)$. The preceding analysis holds for *any* circularly symmetric potential.
### The Radial Schrödinger Equation for a Particle Confined to a Circular Disk
For the circular disk, where
$$
V(r) =
\begin{cases}
0 & r \lt a\\
+\infty & a \leq r
\end{cases}
$$
it is somewhat more convenient to rewrite the radial Schrödinger equation as a [homogeneous linear differential equation](https://en.wikipedia.org/wiki/Homogeneous_differential_equation)
$$
-\frac{\hbar^2}{2m} \left(r^2 \frac{d^2 R_{n,l}(r)}{dr^2} + r \frac{d R_{n,l}(r)}{dr} + \left[\left(\frac{2m E_{n,l}}{\hbar^2} \right) r^2 - l^2\right]R_{n,l}(r) \right) + r^2 V(r) R_{n,l}(r) = 0
$$
Using the specific form of the equation, we have that, for $0 \lt r \lt a$,
$$
-\frac{\hbar^2}{2m} \left(r^2 \frac{d^2 R_{n,l}(r)}{dr^2} + r \frac{d R_{n,l}(r)}{dr} + \left[\left(\frac{2m E_{n,l}}{\hbar^2} \right) r^2 - l^2\right]R_{n,l}(r) \right) = 0
$$
While this equation can be solved by the usual methods, that is [beyond the scope of this course](https://opencommons.uconn.edu/cgi/viewcontent.cgi?article=1013&context=chem_educ). We recognize, however, that this equation strongly resembles [Bessel's differential equation](https://en.wikipedia.org/wiki/Bessel_function),
$$
x^2 \frac{d^2 f}{dx^2} + x \frac{df}{dx} + \left(x^2 - \alpha^2 \right) f(x) = 0
$$
The solutions to Bessel's equation are called the *Bessel functions of the first kind* and denoted $J_{\alpha}(x)$. However, the radial wavefunction must vanish at the edge of the disk, $R_{n,l}(a) = 0$, and the Bessel functions generally do not satisfy this requirement. Recall that the boundary condition in the 1-dimensional particle in a box was satisfied by moving from the generic solution, $\psi(x) \propto \sin(x)$, to the scaled solution, $\psi(x) \propto \sin(k x)$, where $k=\tfrac{n \pi}{a}$ was chosen to satisfy the boundary condition. Similarly, we write the solutions as $R_{n,l}(r) \propto J_l(kr)$. Substituting this form into the Schrödinger equation and using the fact that:
$$
(kr)^n \frac{d^n}{d(kr)^n} = r^n \frac{d^n}{dr^n} \qquad \qquad n=1,2,\ldots
$$
we have
$$
-\frac{\hbar^2}{2m} \left((kr)^2 \frac{d^2 J_{l}(kr)}{d(kr)^2} + (kr) \frac{d J_{l}(kr)}{d(kr)} + \left[\left(\frac{2m E_{n,l}}{k^2 \hbar^2} \right) (kr)^2 - l^2\right]J_{l}(kr) \right) = 0
$$
Referring back to the Bessel equation, it is clear that this equation is satisfied when
$$
\frac{2m E_{n,l}}{k^2 \hbar^2} = 1
$$
or, equivalently,
$$
E_{n,l} = \frac{\hbar^2 k^2}{2m}
$$
where $k$ is chosen so that $J_l(ka) = 0$. If we label the zeros of the Bessel functions,
$$
J_l(x_{n,l}) = 0 \qquad \qquad n=1,2,3,\ldots \\
x_{1,l} \lt x_{2,l} \lt x_{3,l} \lt \cdots
$$
then
$$
k_{n,l} = \frac{x_{n,l}}{a}
$$
and the energies of the particle-in-a-disk are
$$
E_{n,l} = \frac{\hbar^2 x_{n,l}^2}{2ma^2} = \frac{h^2 x_{n,l}^2}{8 m \pi^2 a^2}
$$
and the eigenfunctions of the particle-in-a-disk are:
$$
\psi_{n,l}(r,\theta) \propto J_{l}\left(\frac{x_{n,l}r}{a} \right) e^{i l \theta}
$$
### Eigenvalues and Eigenfunctions for a Particle Confined to a Circular Disk

The energies of a particle confined to a circular disk of radius $a$ are:
$$
E_{n,l} = \frac{\hbar^2 x_{n,l}^2}{2ma^2} = \frac{h^2 x_{n,l}^2}{8 m \pi^2 a^2} \qquad \qquad n=1,2,\ldots; \quad l=0,\pm 1,\pm 2, \ldots
$$
and its eigenfunctions are:
$$
\psi_{n,l}(r,\theta) \propto J_{l}\left(\frac{x_{n,l}r}{a} \right) e^{i l \theta}
$$
where $x_{n,l}$ are the zeros of the Bessel function, $J_l(x_{n,l}) = 0$. The first zero is $x_{1,0} = 2.4048$. These solutions are exactly the resonant energies and modes that are associated with the vibration of a circular drum whose membrane has uniform thickness/density.
You can find elegant video animations of the eigenvectors of the particle-in-a-circle at the following links:
- [Interactive Demonstration of the States of a Particle-in-a-Circular-Disk](https://demonstrations.wolfram.com/ParticleInAnInfiniteCircularWell/)
- [Movie animation of the quantum states of a particle-in-a-circular-disk](https://www.reddit.com/r/dataisbeautiful/comments/mfx5og/first_70_states_of_a_particle_trapped_in_a/?utm_source=share&utm_medium=web2x&context=3)
A subtle result, [originally proposed by Bourget](https://en.wikipedia.org/wiki/Bessel_function#Bourget's_hypothesis), is that no two Bessel functions ever have the same zeros, which means that the values of $\{x_{n,l} \}$ are all distinct. A corollary of this is that the eigenvalues of the particle confined to a circular disk are either nondegenerate ($l=0$) or doubly degenerate ($|l| \ge 1$). There are no accidental degeneracies for a particle in a circular disk.
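As a quick numerical illustration (a minimal sketch using `scipy.special.jn_zeros`, separate from the interactive code cell below), we can list the lowest zeros $x_{n,l}$; since $E_{n,l} \propto x_{n,l}^2$, sorting the zeros sorts the energy levels, and the degeneracy pattern follows directly:
```
from scipy import special

# Zeros x_{n,l} of J_l. Since E_{n,l} is proportional to x_{n,l}^2, sorting the zeros
# sorts the energy levels. Levels with l = 0 are nondegenerate; levels with |l| >= 1
# are doubly degenerate because +l and -l give the same radial equation.
levels = []
for l in range(4):                                      # |l| = 0, 1, 2, 3
    for n, x in enumerate(special.jn_zeros(l, 3), start=1):
        levels.append((x, n, l, 1 if l == 0 else 2))

for x, n, l, g in sorted(levels):
    print(f"x_{{{n},{l}}} = {x:7.4f}   degeneracy = {g}")
```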
The energies of an electron confined to a circular disk with radius $a$ Bohr are:
$$
E_{n,l} = \tfrac{x_{n,l}^2}{2a^2}
$$
The following code block computes these energies.
```
from scipy import constants
from scipy import special
import ipywidgets as widgets
import mpmath
#The next few lines just set up the sliders for setting parameters.
#Principal quantum number slider:
n = widgets.IntSlider(
value=1,
min=1,
max=10,
step=1,
description='n (princ. #):',
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True,
readout_format='d')
#Angular quantum number slider:
l = widgets.IntSlider(
value=0,
min=-10,
max=10,
step=1,
description='l (ang. #):',
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True,
readout_format='d')
#Disk radius slider:
a = widgets.FloatSlider(
value=1,
min=.01,
max=10.0,
step=0.01,
description='a (length):',
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True,
readout_format='.2f',
)
# Define a function for the energy (in a.u.) of a particle in a circular disk
# with radius a, principal quantum number n, and angular quantum number l.
# The radius is input in Bohr (atomic units).
def compute_energy_disk(n, l, a):
    "Compute the energy of a particle confined to a circular disk."
    # Compute the first n zeros of the Bessel function J_|l| (the energy depends only on |l|).
    zeros = special.jn_zeros(abs(l), n)
    # Compute the energy from the n-th zero.
    return (zeros[-1])**2 / (2 * a**2)
# This next bit of code just prints out the energy in atomic units
def print_energy_disk(a, n, l):
    print(f'The energy of an electron confined to a disk with radius {a:.2f} a.u.,'
          f' principal quantum number {n}, and angular quantum number {l}'
          f' is {compute_energy_disk(n, l, a):.3f} a.u..')
out = widgets.interactive_output(print_energy_disk, {'a': a, 'n': n, 'l': l})
widgets.VBox([widgets.VBox([a, n, l]),out])
```
## Particle-in-a-Spherical Ball
### The Schrödinger equation for a particle confined to a spherical ball

The final model we will consider for now is a particle confined to a spherical ball with radius $a$,
$$
\left(-\frac{\hbar^2}{2m} \nabla^2 + V(x,y,z) \right)\psi(x,y,z) = E \psi(x,y,z)
$$
where
$$
V(x,y,z) =
\begin{cases}
0 & \sqrt{x^2 + y^2 + z^2} \lt a\\
+\infty & a \leq \sqrt{x^2 + y^2 + z^2}
\end{cases}
$$
However, it is more useful to write this in spherical polar coordinates (i.e., $r,\theta, \phi$):
\begin{align}
x &= r \sin \theta \cos \phi\\
y &= r \sin \theta \sin \phi\\
z &= r \cos \theta \\
\\
r &= \sqrt{x^2 + y^2 + z^2} \\
\theta &= \arccos{\tfrac{z}{r}} \\
\phi &= \arctan{\tfrac{y}{x}}
\end{align}
because the potential depends only on the distance from the center of the system,
$$
\left(-\frac{\hbar^2}{2m} \nabla^2 + V(r) \right)\psi(r,\theta, \phi) = E \psi(r,\theta, \phi)
$$
where
$$
V(r) =
\begin{cases}
0 & r \lt a\\
+\infty & a \leq r
\end{cases}
$$
### The Schrödinger Equation in Spherical Coordinates
In order to solve the Schrödinger equation for a particle in a spherical ball, we need to rewrite the Laplacian, $\nabla^2 = \frac{d^2}{dx^2} + \frac{d^2}{dy^2} + \frac{d^2}{dz^2}$, in spherical coordinates. A meticulous derivation of the [Laplacian](https://en.wikipedia.org/wiki/Laplace_operator) and of its [eigenfunctions](http://galileo.phys.virginia.edu/classes/252/Classical_Waves/Classical_Waves.html) can be found at the links. The result is that:
$$
\nabla^2 = \frac{1}{r^2}\frac{d}{dr}r^2\frac{d}{dr} + \frac{1}{r^2 \sin \theta}\frac{d}{d\theta}\sin\theta\frac{d}{d\theta} + \frac{1}{r^2 \sin^2 \theta} \frac{d^2}{d\phi^2}
$$
which can be rewritten in a more familiar form as:
$$
\nabla^2 = \frac{d^2}{dr^2}+ \frac{2}{r}\frac{d}{dr} + \frac{1}{r^2}\left[\frac{1}{\sin \theta}\frac{d}{d\theta}\sin\theta\frac{d}{d\theta} + \frac{1}{\sin^2 \theta} \frac{d^2}{d\phi^2}\right]
$$
Inserting this into the Schrödinger equation for a particle confined to a spherical potential, $V(r)$, one has:
$$
\left({} -\frac{\hbar^2}{2m} \left( \frac{d^2}{dr^2}
+ \frac{2}{r} \frac{d}{dr}\right)
- \frac{\hbar^2}{2mr^2}\left[\frac{1}{\sin \theta}\frac{d}{d\theta}\sin\theta\frac{d}{d\theta}
+ \frac{1}{\sin^2 \theta} \frac{d^2}{d\phi^2}\right] \\
+ V(r) \right)\psi_{n,l,m_l}(r,\theta,\phi)
= E_{n,l,m_l}\psi_{n,l,m_l}(r,\theta,\phi)
$$
The solutions of the Schrödinger equation are characterized by three quantum numbers because this equation is 3-dimensional.
Recall that the classical equation for the kinetic energy of a set of points rotating around the origin, in spherical coordinates, is:
$$
T = \sum_{i=1}^{N_{\text{particles}}} \frac{p_{r,i}^2}{2m_i} + \frac{\mathbf{L}_i \cdot \mathbf{L}_i}{2m_i r_i^2}
$$
where $p_{r,i}$ and $\mathbf{L}_i$ are the radial and [angular momenta](https://en.wikipedia.org/wiki/Angular_momentum#In_Hamiltonian_formalism) of the $i^{\text{th}}$ particle, respectively. It's apparent, then, that the quantum-mechanical operator for the square of the angular momentum is:
$$
\hat{L}^2 = - \hbar^2 \left[\frac{1}{\sin \theta}\frac{d}{d\theta}\sin\theta\frac{d}{d\theta}
+ \frac{1}{\sin^2 \theta} \frac{d^2}{d\phi^2}\right]
$$
The Schrödinger equation for a spherically-symmetric system is thus,
$$
\left(-\frac{\hbar^2}{2m} \left( \frac{d^2}{dr^2}
+ \frac{2}{r} \frac{d}{dr}\right)
+ \frac{\hat{L}^2}{2mr^2} + V(r) \right)
\psi_{n,l,m_l}(r,\theta,\phi)
= E_{n,l,m_l}\psi_{n,l,m_l}(r,\theta,\phi)
$$
### The Angular Wavefunction in Spherical Coordinates

The Schrödinger equation for a spherically-symmetric system can be solved by separation of variables. If one compares to the result in polar coordinates, it is already clear that the angular wavefunction has the form $\Theta(\theta,\phi) = P(\theta)e^{im_l\phi}$. From this starting point we could deduce the eigenfunctions of $\hat{L}^2$, but instead we will just present the eigenfunctions and eigenvalues,
$$
\hat{L}^2 Y_l^{m_l} (\theta, \phi) = \hbar^2 l(l+1)Y_l^{m_l} (\theta, \phi) \qquad \qquad l=0,1,2,\ldots \qquad m_l=0, \pm 1, \ldots, \pm l
$$
The functions [$Y_l^{m_l} (\theta, \phi)$](https://en.wikipedia.org/wiki/Spherical_harmonics) are called [spherical harmonics](https://mathworld.wolfram.com/SphericalHarmonic.html), and they are the fundamental vibrational modes of the surface of a spherical membrane. Note that these functions resemble s-orbitals ($l=0$), p-orbitals ($l=1$), d-orbitals ($l=2$), etc., and that the number of choices for $m_l$, $2l+1$, is equal to the number of $l$-type orbitals.
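As a small numerical check (a sketch that assumes SciPy's `sph_harm`, whose argument order is: order $m_l$, degree $l$, azimuthal angle, polar angle), the spherical harmonics are normalized on the surface of the sphere, and there are $2l+1$ of them for each $l$:
```
import numpy as np
from scipy.special import sph_harm

l, m = 2, 1
theta = np.linspace(0, 2 * np.pi, 400)   # azimuthal angle
phi = np.linspace(0, np.pi, 400)         # polar angle
T, P = np.meshgrid(theta, phi)

# Integrate |Y_l^{m_l}|^2 sin(phi) dphi dtheta over the sphere; the result should be ~1.
Y = sph_harm(m, l, T, P)
integrand = np.abs(Y)**2 * np.sin(P)
norm = np.trapz(np.trapz(integrand, theta, axis=1), phi)

print(f"Norm of Y_{l}^{m}: {norm:.4f}")              # ~1.0
print(f"Number of m_l values for l = {l}: {2 * l + 1}")
```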
### The Radial Schrödinger equation in Spherical Coordinates
Using separation of variables, we write the wavefunction as:
$$
\psi_{n,l,m_l}(r,\theta,\phi) = R_{n,l}(r) Y_l^{m_l}(\theta,\phi)
$$
Inserting this expression into the Schrödinger equation, and exploiting the fact that the spherical harmonics are eigenfunctions of $\hat{L}^2$, one obtains the radial Schrödinger equation:
$$
\left(-\frac{\hbar^2}{2m} \left( \frac{d^2}{dr^2}
+ \frac{2}{r} \frac{d}{dr}\right)
+ \frac{\hbar^2 l(l+1)}{2mr^2} + V(r) \right)
R_{n,l}(r)
= E_{n,l}R_{n,l}(r)
$$
Inside the sphere, $V(r) = 0$. Rearranging the radial Schrödinger equation for the interior of the sphere as a homogeneous linear differential equation,
$$
r^2 \frac{d^2R_{n,l}}{dr^2}
+ 2r \frac{dR_{n,l}}{dr}
+ \left( \frac{2mr^2E_{n,l}}{\hbar^2} - l(l+1) \right)
R_{n,l}(r) = 0
$$
This equation strongly resembles the differential equations satisfied by the [spherical Bessel functions](https://en.wikipedia.org/wiki/Bessel_function#Spherical_Bessel_functions:_jn,_yn), $j_l(x)$
$$
x^2 \frac{d^2 j_l}{dx^2} + 2x \frac{dj_l}{dx} + \left(x^2 - l(l+1) \right)j_l(x) = 0
$$
The spherical Bessel functions are eigenfunctions for the particle-in-a-spherical-ball, but do not satisfy the boundary condition that the wavefunction be zero on the sphere, $R_{n,l}(a) = 0$. To satisfy this constraint, we propose
$$
R_{n,l}(r) = j_l(k r)
$$
Using this form in the radial Schrödinger equation, we have
$$
(kr)^2 \frac{d^2 j_l(kr)}{d(kr)^2}
+ 2kr \frac{d j_l(kr)}{d(kr)}
+ \left( \frac{2m(kr)^2E_{n,l}}{\hbar^2k^2} - l(l+1) \right)
j_l(kr) = 0
$$
Referring back to the differential equation satisfied by the spherical Bessel functions, it is clear that this equation is satisfied when
$$
\frac{2m E_{n,l}}{k^2 \hbar^2} = 1
$$
or, equivalently,
$$
E_{n,l} = \frac{\hbar^2 k^2}{2m}
$$
where $k$ is chosen so that $j_l(ka) = 0$. If we label the zeros of the spherical Bessel functions,
$$
j_l(y_{n,l}) = 0 \qquad \qquad n=1,2,3,\ldots \\
y_{1,l} \lt y_{2,l} \lt y_{3,l} \lt \cdots
$$
then
$$
k_{n,l} = \frac{y_{n,l}}{a}
$$
and the energies of the particle-in-a-ball are
$$
E_{n,l} = \frac{\hbar^2 y_{n,l}^2}{2ma^2} = \frac{h^2 y_{n,l}^2}{8 m \pi^2 a^2}
$$
and the eigenfunctions of the particle-in-a-ball are:
$$
\psi_{n,l,m}(r,\theta,\phi) \propto j_l\left(\frac{y_{n,l}r}{a} \right) Y_l^{m_l}(\theta, \phi)
$$
Notice that the eigenenergies have the same form as those in the particle-in-a-disk and even the same as those for a particle-in-a-one-dimensional-box; the only difference is the identity of the function whose zeros we are considering.
## 📝 Exercise: Use the equation for the zeros of $\sin x$ to write the wavefunction and energy for the one-dimensional particle-in-a-box in a form similar to the expressions for the particle-in-a-disk and the particle-in-a-sphere.
### Solutions to the Schrödinger Equation for a Particle Confined to a Spherical Ball

The energies of a particle confined to a spherical ball of radius $a$ are:
$$
E_{n,l} = \frac{\hbar^2 y_{n,l}^2}{2ma^2} \qquad n=1,2,\ldots; \quad l=0,1,\ldots; \quad m_l=0, \pm 1, \ldots, \pm l
$$
and its eigenfunctions are:
$$
\psi_{n,l,m}(r,\theta,\phi) \propto j_l\left(\frac{y_{n,l}r}{a} \right) Y_l^{m_l}(\theta, \phi)
$$
where $y_{n,l}$ are the zeros of the spherical Bessel function, $j_l(y_{n,l}) = 0$. The first spherical Bessel function, which corresponds to s-like solutions ($l=0$), is
$$
j_0(y) = \frac{\sin y}{y}
$$
and so
$$
y_{n,0} = n \pi
$$
This gives explicit and useful expressions for the $l=0$ wavefunctions and energies,
$$
E_{n,0} = \frac{h^2 n^2}{8 m a^2}
$$
and its eigenfunctions are:
$$
\psi_{n,0,0}(r,\theta,\phi) \propto \frac{a}{n \pi r} \sin \left( \frac{n \pi r}{a} \right)
$$
Notice that the $l=0$ energies are the same as in the one-dimensional particle-in-a-box and the eigenfunctions are very similar.
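This can be verified numerically in a couple of lines (a sketch using `mpmath`, which is already imported in the disk code cell above): the zeros of $j_0$, obtained as the zeros of $J_{1/2}$, are integer multiples of $\pi$.
```
import mpmath

# The zeros of j_0(y) = sin(y)/y coincide with the zeros of J_{1/2}(y) and equal n*pi.
for n in range(1, 5):
    y_n0 = float(mpmath.besseljzero(0.5, n))
    print(f"y_({n},0) = {y_n0:.6f}   y/pi = {y_n0 / float(mpmath.pi):.6f}")
```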
In atomic units, the energies of an electron confined to a spherical ball with radius $a$ Bohr are:
$$
E_{n,l} = \tfrac{y_{n,l}^2}{2a^2}
$$
The following code block computes these energies. It uses the relationship between the spherical Bessel functions and the ordinary Bessel functions,
$$
j_l(x) = \sqrt{\frac{\pi}{2x}} J_{l+\tfrac{1}{2}}(x)
$$
which indicates that the zeros of $j_l(x)$ and $J_{l+\tfrac{1}{2}}(x)$ occur at the same places.
```
#The next few lines just set up the sliders for setting parameters.
#Principal quantum number slider:
n = widgets.IntSlider(
value=1,
min=1,
max=10,
step=1,
description='n (princ. #):',
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True,
readout_format='d')
#Angular quantum number slider:
l = widgets.IntSlider(
value=0,
    min=0,  # l = 0, 1, 2, ... for the spherical ball
max=10,
step=1,
description='l (ang. #):',
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True,
readout_format='d')
#Ball radius slider:
a = widgets.FloatSlider(
value=1,
min=.01,
max=10.0,
step=0.01,
description='a (length):',
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True,
readout_format='.2f',
)
# Define a function for the energy (in a.u.) of a particle in a spherical ball
# with radius a, principal quantum number n, and angular quantum number l.
# The radius is input in Bohr (atomic units).
def compute_energy_ball(n, l, a):
    "Compute the energy of a particle confined to a spherical ball."
    # Compute the energy from the n-th zero of j_l, i.e. the n-th zero of J_{l+1/2}.
    return float((mpmath.besseljzero(l + 0.5, n))**2 / (2 * a**2))
# This next bit of code just prints out the energy in atomic units
def print_energy_ball(a, n, l):
    print(f'The energy of an electron confined to a ball with radius {a:.2f} a.u.,'
          f' principal quantum number {n}, and angular quantum number {l}'
          f' is {compute_energy_ball(n, l, a):5.2f} a.u..')
out = widgets.interactive_output(print_energy_ball, {'a': a, 'n': n, 'l': l})
widgets.VBox([widgets.VBox([a, n, l]),out])
```
## 📝 Exercise: For the hydrogen atom, the 2s orbital ($n=2$, $l=0$) and 2p orbitals ($n=2$,$l=1$,$m_l=-1,0,1$) have the same energy. Is this true for the particle-in-a-ball?
## 🪞 Self-Reflection
- Can you think of other physical or chemical systems where a multi-dimensional particle-in-a-box Hamiltonian would be appropriate? What shape would the box be?
- Can you think of another chemical system where separation of variables would be useful?
- Explain why separation of variables is consistent with the Born Postulate that the square-magnitude of a particle's wavefunction is the probability distribution function for the particle.
- What is the expression for the zero-point energy and ground-state wavefunction of an electron in a 4-dimensional box? Can you identify some states of the 4-dimensional box with especially high degeneracy?
- What is the degeneracy of the k-th excited state of a particle confined to a circular disk? What is the degeneracy of the k-th excited state of a particle confined in a spherical ball?
- Write a Python function that computes the normalization constant for the particle-in-a-disk and the particle-in-a-sphere.
## 🤔 Thought-Provoking Questions
- How would you write the Hamiltonian for two electrons confined to a box? Could you solve this system with separation of variables? Why or why not?
- Write the time-independent Schrödinger equation for an electron in an cylindrical box. What are its eigenfunctions and eigenvalues?
- What would the time-independent Schrödinger equation for electrons in an elliptical box look like? (You may find it useful to reference [elliptical](https://en.wikipedia.org/wiki/Elliptic_coordinate_system) and [ellipsoidal](https://en.wikipedia.org/wiki/Ellipsoidal_coordinates) coordinates.
- Construct an example of a four-fold *accidental* degeneracy for a particle in a 2-dimensional rectangular (not square!) box.
- If there is any degenerate state (either accidental or due to symmetry) for the multi-dimensional particle-in-a-box, then there are always an infinite number of other degenerate states. Why?
- Consider an electron confined to a circular harmonic well, $V(r) = k r^2$. What is the angular kinetic energy and angular wavefunction for this system?
## 🔁 Recapitulation
- Write the Hamiltonian, time-independent Schrödinger equation, eigenfunctions, and eigenvalues for two-dimensional and three-dimensional particles in a box.
- What is the definition of degeneracy? How does an "accidental" degeneracy differ from an ordinary degeneracy?
- What is the zero-point energy for an electron in a circle and an electron in a sphere?
- What are the spherical harmonics?
- Describe the technique of separation of variables? When can it be applied?
- What is the Laplacian operator in Cartesian coordinates? In spherical coordinates?
- What are *boundary conditions* and how are they important in quantum mechanics?
## 🔮 Next Up...
- 1-electron atoms
- Approximate methods for quantum mechanics
- Multielectron particle-in-a-box
- Postulates of Quantum Mechanics
## 📚 References
My favorite sources for this material are:
- [Randy's book](https://github.com/PaulWAyers/IntroQChem/blob/main/documents/DumontBook.pdf?raw=true) (See Chapter 3)
- Also see my (pdf) class [notes](https://github.com/PaulWAyers/IntroQChem/blob/main/documents/PinBox.pdf?raw=true).
- [McQuarrie and Simon summary](https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Map%3A_Physical_Chemistry_(McQuarrie_and_Simon)/03%3A_The_Schrodinger_Equation_and_a_Particle_in_a_Box)
- [Wolfram solutions for particle in a circle](https://demonstrations.wolfram.com/ParticleInAnInfiniteCircularWell/)
- [Meticulous solution for a particle confined to a circular disk](https://opencommons.uconn.edu/cgi/viewcontent.cgi?article=1013&context=chem_educ)
- [General discussion of the particle-in-a-region, which is the same as the classical wave equation](http://galileo.phys.virginia.edu/classes/252/Classical_Waves/Classical_Waves.html)
- [More on the radial Schrödinger equation](https://quantummechanics.ucsd.edu/ph130a/130_notes/node222.html)
- Python tutorial ([part 1](https://physicspython.wordpress.com/2020/05/28/the-problem-of-the-hydrogen-atom-part-1/) and [part 2](https://physicspython.wordpress.com/2020/06/04/the-problem-of-the-hydrogen-atom-part-2/)) on the Hydrogen atom.
There are also some excellent wikipedia articles:
- [Particle in a Sphere](https://en.wikipedia.org/wiki/Particle_in_a_spherically_symmetric_potential)
- [Other exactly solvable models](https://en.wikipedia.org/wiki/List_of_quantum-mechanical_systems_with_analytical_solutions)
# Challenge
In this challenge, we will practice dimensionality reduction with PCA and variable selection with RFE. We will use the [FIFA 2019](https://www.kaggle.com/karangadiya/fifa19) _data set_, which originally contains 89 variables for over 18 thousand players of the _game_ FIFA 2019.
## _Setup_
```
from math import sqrt
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
import statsmodels.api as sm
import statsmodels.stats as st
from sklearn.decomposition import PCA
from loguru import logger
from IPython import get_ipython
%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(12, 8)
sns.set()
fifa = pd.read_csv("fifa.csv")
columns_to_drop = ["Unnamed: 0", "ID", "Name", "Photo", "Nationality", "Flag",
"Club", "Club Logo", "Value", "Wage", "Special", "Preferred Foot",
"International Reputation", "Weak Foot", "Skill Moves", "Work Rate",
"Body Type", "Real Face", "Position", "Jersey Number", "Joined",
"Loaned From", "Contract Valid Until", "Height", "Weight", "LS",
"ST", "RS", "LW", "LF", "CF", "RF", "RW", "LAM", "CAM", "RAM", "LM",
"LCM", "CM", "RCM", "RM", "LWB", "LDM", "CDM", "RDM", "RWB", "LB", "LCB",
"CB", "RCB", "RB", "Release Clause"
]
try:
    fifa.drop(columns_to_drop, axis=1, inplace=True)
except KeyError:
    logger.warning("Columns already dropped")
```
## Initial analysis
```
fifa.head()
fifa.shape
fifa.info()
fifa.isna().sum()
fifa = fifa.dropna()
fifa.isna().sum()
```
## Question 1
Which fraction of the variance can be explained by the first principal component of `fifa`? Answer as a single float (between 0 and 1), rounded to three decimal places.
```
def q1():
    pca = PCA(n_components=1).fit(fifa)
    return round(float(pca.explained_variance_ratio_[0]), 3)
q1()
```
## Question 2
How many principal components do we need to explain 95% of the total variance? Answer as a single integer scalar.
```
def q2():
    pca_095 = PCA(n_components=0.95)
    X_reduced = pca_095.fit_transform(fifa)
    return X_reduced.shape[1]
q2()
```
## Question 3
What are the coordinates (first and second principal components) of the point `x` below? The vector below is already centered. Be careful __not__ to center the vector again (for example, by invoking `PCA.transform()` on it). Answer as a tuple of floats rounded to three decimal places.
```
x = [0.87747123, -1.24990363, -1.3191255, -36.7341814,
-35.55091139, -37.29814417, -28.68671182, -30.90902583,
-42.37100061, -32.17082438, -28.86315326, -22.71193348,
-38.36945867, -20.61407566, -22.72696734, -25.50360703,
2.16339005, -27.96657305, -33.46004736, -5.08943224,
-30.21994603, 3.68803348, -36.10997302, -30.86899058,
-22.69827634, -37.95847789, -22.40090313, -30.54859849,
-26.64827358, -19.28162344, -34.69783578, -34.6614351,
48.38377664, 47.60840355, 45.76793876, 44.61110193,
49.28911284
]
def q3():
    pca_q3 = PCA(n_components=2)
    pca_q3.fit(fifa)
    return tuple(np.round(pca_q3.components_.dot(x), 3))
q3()
```
## Question 4
Perform RFE with a linear-regression estimator to select five variables, eliminating them one by one. What are the selected variables? Answer as a list of variable names.
```
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import RFE
def q4():
    x = fifa.drop('Overall', axis=1)
    y = fifa['Overall']
    reg = LinearRegression().fit(x, y)
    rfe = RFE(reg, n_features_to_select=5).fit(x, y)
    nom_var = x.loc[:, rfe.get_support()].columns
    return list(nom_var)
q4()
```
```
# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
```
# Loading a pre-trained model in inference mode
In this tutorial, we will show how to instantiate a model pre-trained with VISSL to use it in inference mode to extract features from its trunk.
We will concentrate on loading a model pre-trained via SimCLR to use it in inference mode and extract features from an image, but this whole workflow is portable to any other pre-training method (MoCo, SimSiam, SwAV, etc.).
Through it, we will show:
1. How to instantiate a model associated with a pre-training configuration
2. How to load the weights of the pre-trained model (taking the weights from our Model Zoo)
3. How to use it to extract features associated with the VISSL logo
**NOTE:** For a tutorial focused on how to use VISSL to schedule a feature extraction job, please refer to [the dedicated tutorial](https://colab.research.google.com/github/facebookresearch/vissl/blob/stable/tutorials/Feature_Extraction.ipynb)
**NOTE:** Please ensure your Colab notebook has a GPU available: `Edit -> Notebook Settings -> select GPU`.
**NOTE:** You can make a copy of this tutorial by `File -> Open in playground mode` and make changes there. DO NOT request access to this tutorial.
## Install VISSL
We will start this tutorial by installing VISSL, following the instructions [here](https://github.com/facebookresearch/vissl/blob/master/INSTALL.md#install-vissl-pip-package).
```
# Install: PyTorch (we assume 1.5.1 but VISSL works with all PyTorch versions >=1.4)
!pip install torch==1.5.1+cu101 torchvision==0.6.1+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# install opencv
!pip install opencv-python
# install apex by checking system settings: cuda version, pytorch version, python version
import sys
import torch
version_str="".join([
f"py3{sys.version_info.minor}_cu",
torch.version.cuda.replace(".",""),
f"_pyt{torch.__version__[0:5:2]}"
])
print(version_str)
# install apex (pre-compiled with optimizer C++ extensions and CUDA kernels)
!pip install -f https://dl.fbaipublicfiles.com/vissl/packaging/apexwheels/{version_str}/download.html apex
# install VISSL
!pip install vissl
```
VISSL should be successfully installed by now and all the dependencies should be available.
```
import vissl
import tensorboard
import apex
import torch
```
## Loading a VISSL SimCLR pre-trained model
## Download the configuration
VISSL provides yaml configuration files for training a SimCLR model [here](https://github.com/facebookresearch/vissl/tree/master/configs/config/pretrain/simclr). We will start by fetching the configuration files we need.
```
!mkdir -p configs/config/simclr
!mkdir -p vissl/config
!wget -q -O configs/config/simclr/simclr_8node_resnet.yaml https://raw.githubusercontent.com/facebookresearch/vissl/master/configs/config/pretrain/simclr/simclr_8node_resnet.yaml
!wget -q -O vissl/config/defaults.yaml https://raw.githubusercontent.com/facebookresearch/vissl/master/vissl/config/defaults.yaml
```
## Download the ResNet-101 weights from the Model Zoo
```
!wget -q -O resnet_simclr.torch https://dl.fbaipublicfiles.com/vissl/model_zoo/simclr_rn101_1000ep_simclr_8node_resnet_16_07_20.35063cea/model_final_checkpoint_phase999.torch
```
## Create the model associated to the configuration
Load the configuration and merge it with the default configuration.
```
from omegaconf import OmegaConf
from vissl.utils.hydra_config import AttrDict
config = OmegaConf.load("configs/config/simclr/simclr_8node_resnet.yaml")
default_config = OmegaConf.load("vissl/config/defaults.yaml")
cfg = OmegaConf.merge(default_config, config)
```
Edit the configuration to freeze the trunk (inference mode) and ask for the extraction of the last layer feature.
```
cfg = AttrDict(cfg)
cfg.config.MODEL.WEIGHTS_INIT.PARAMS_FILE = "resnet_simclr.torch"
cfg.config.MODEL.FEATURE_EVAL_SETTINGS.EVAL_MODE_ON = True
cfg.config.MODEL.FEATURE_EVAL_SETTINGS.FREEZE_TRUNK_ONLY = True
cfg.config.MODEL.FEATURE_EVAL_SETTINGS.EXTRACT_TRUNK_FEATURES_ONLY = True
cfg.config.MODEL.FEATURE_EVAL_SETTINGS.SHOULD_FLATTEN_FEATS = False
cfg.config.MODEL.FEATURE_EVAL_SETTINGS.LINEAR_EVAL_FEAT_POOL_OPS_MAP = [["res5avg", ["Identity", []]]]
```
And then build the model:
```
from vissl.models import build_model
model = build_model(cfg.config.MODEL, cfg.config.OPTIMIZER)
```
## Loading the pre-trained weights
```
from classy_vision.generic.util import load_checkpoint
from vissl.utils.checkpoint import init_model_from_weights
weights = load_checkpoint(checkpoint_path=cfg.config.MODEL.WEIGHTS_INIT.PARAMS_FILE)
init_model_from_weights(
config=cfg.config,
model=model,
state_dict=weights,
state_dict_key_name="classy_state_dict",
skip_layers=[], # Use this if you do not want to load all layers
)
print("Loaded...")
```
## Trying the model on the VISSL Logo
```
!wget -q -O test_image.jpg https://raw.githubusercontent.com/facebookresearch/vissl/master/.github/logo/Logo_Color_Light_BG.png
from PIL import Image
import torchvision.transforms as transforms
image = Image.open("test_image.jpg")
image = image.convert("RGB")
pipeline = transforms.Compose([
transforms.CenterCrop(224),
transforms.ToTensor(),
])
x = pipeline(image)
features = model(x.unsqueeze(0))
```
The output is a list with as many representation layers as required in the configuration (in our case, `cfg.config.MODEL.FEATURE_EVAL_SETTINGS.LINEAR_EVAL_FEAT_POOL_OPS_MAP` asks for one representation layer, so we have just one output).
```
features[0].shape
```
# Enterprise Time Series Forecasting and Decomposition Using LSTM
This notebook is a tutorial on time series forecasting and decomposition using LSTM.
* First, we generate a signal (time series) that includes several components that are commonly found in enterprise applications: trend, seasonality, covariates, and covariates with memory effects.
* Second, we fit a basic LSTM model, produce the forecast, and introspect the evolution of the hidden state of the model.
* Third, we fit the LSTM with attention model and visualize attention weights that provide some insights into the memory effects.
## Detailed Description
Please see blog post [D006](https://github.com/ikatsov/tensor-house/blob/master/resources/descriptions.md) for more details.
## Data
This notebook generates synthetic data internally; no external datasets are used.
---
# Step 1: Generate the Data
We generate a time series that includes a trend, seasonality, covariates, and covariates with memory effects. This signal mimics some of the effects usually found in sales data (cannibalization, halo, pull-forward, and other effects). The covariates are just independent variables, but they can enter the signal in two modes:
* Linear. The covariate series is directly added to the main signal with some coefficient, i.e. the link function is the identity.
* Memory. The covariate is transformed using a link function that includes some delay and can be nonlinear. We use a simple smoothing filter as the link function. We observe the original covariate, but the link function is unknown.
```
import numpy as np
import pandas as pd
import datetime
import collections
from matplotlib import pylab as plt
plt.style.use('ggplot')
import seaborn as sns
import matplotlib.dates as mdates
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
pd.options.mode.chained_assignment = None
import tensorflow as tf
from sklearn import preprocessing
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import LSTM, Dense, Input
from tensorflow.keras.layers import Lambda, RepeatVector, Permute, Flatten, Activation, Multiply
from tensorflow.keras.constraints import NonNeg
from tensorflow.keras import backend as K
from tensorflow.keras.regularizers import l1
from tensorflow.keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
def step_series(n, mean, scale, n_steps):
s = np.zeros(n)
step_idx = np.random.randint(0, n, n_steps)
value = mean
for t in range(n):
s[t] = value
if t in step_idx:
value = mean + scale * np.random.randn()
return s
def linear_link(x):
return x
def mem_link(x, length = 50):
mfilter = np.exp(np.linspace(-10, 0, length))
return np.convolve(x, mfilter/np.sum(mfilter), mode='same')
def create_signal(links = [linear_link, linear_link]):
days_year = 365
quaters_year = 4
days_week = 7
# three years of data, daily resolution
idx = pd.date_range(start='2017-01-01', end='2020-01-01', freq='D')
df = pd.DataFrame(index=idx, dtype=float)
df = df.fillna(0.0)
n = len(df.index)
trend = np.zeros(n)
seasonality = np.zeros(n)
for t in range(n):
trend[t] = 2.0 * t/n
seasonality[t] = 4.0 * np.sin(np.pi * t/days_year*quaters_year)
covariates = [step_series(n, 0, 1.0, 80), step_series(n, 0, 1.0, 80)]
covariate_links = [ links[i](covariates[i]) for i in range(2) ]
noise = 0.5 * np.random.randn(n)
signal = trend + seasonality + np.sum(covariate_links, axis=0) + noise
df['signal'], df['trend'], df['seasonality'], df['noise'] = signal, trend, seasonality, noise
for i in range(2):
df[f'covariate_0{i+1}'] = covariates[i]
df[f'covariate_0{i+1}_link'] = covariate_links[i]
return df
df = create_signal()
fig, ax = plt.subplots(len(df.columns), figsize=(20, 15))
for i, c in enumerate(df.columns):
ax[i].plot(df.index, df[c])
ax[i].set_title(c)
plt.tight_layout()
plt.show()
```
# Step 2: Define and Fit the Basic LSTM Model
We fit an LSTM model that consumes patches of the observed signal and covariates, i.e. each input sample is a matrix whose rows are time steps and whose columns are observed metrics (signal, covariates, and calendar features).
```
#
# engineer features and create input tensors
#
def prepare_features_rnn(df):
df_rnn = df[['signal', 'covariate_01', 'covariate_02']]
df_rnn['year'] = df_rnn.index.year
df_rnn['month'] = df_rnn.index.month
df_rnn['day_of_year'] = df_rnn.index.dayofyear
def normalize(df):
x = df.values
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
return pd.DataFrame(x_scaled, index=df.index, columns=df.columns)
return normalize(df_rnn)
#
# train-test split and adjustments
#
def train_test_split(df, train_ratio, forecast_days_ahead, n_time_steps, time_step_interval):
    # length of the input time window for each sample (the offset of the oldest sample in the input)
input_window_size = n_time_steps*time_step_interval
split_t = int(len(df)*train_ratio)
x_train, y_train = [], []
x_test, y_test = [], []
y_col_idx = list(df.columns).index('signal')
for i in range(input_window_size, len(df)):
t_start = df.index[i - input_window_size]
t_end = df.index[i]
# we zero out last forecast_days_ahead signal observations, but covariates are assumed to be known
x_t = df[t_start:t_end:time_step_interval].values.copy()
if time_step_interval <= forecast_days_ahead:
x_t[-int((forecast_days_ahead) / time_step_interval):, y_col_idx] = 0
y_t = df.iloc[i]['signal']
if i < split_t:
x_train.append(x_t)
y_train.append(y_t)
else:
x_test.append(x_t)
y_test.append(y_t)
return np.stack(x_train), np.hstack(y_train), np.stack(x_test), np.hstack(y_test)
#
# parameters
#
n_time_steps = 40 # length of LSTM input in samples
time_step_interval = 2 # sampling interval, days
hidden_units = 8 # LSTM state dimensionality
forecast_days_ahead = 7
train_ratio = 0.8
#
# generate data and fit the model
#
df = create_signal()
df_rnn = prepare_features_rnn(df)
x_train, y_train, x_test, y_test = train_test_split(df_rnn, train_ratio, forecast_days_ahead, n_time_steps, time_step_interval)
print(f'Input tensor shape {x_train.shape}')
n_samples = x_train.shape[0]
n_features = x_train.shape[2]
input_model = Input(shape=(n_time_steps, n_features))
lstm_state_seq, state_h, state_c = LSTM(hidden_units, return_sequences=True, return_state=True)(input_model)
output_dense = Dense(1)(state_c)
model_lstm = Model(inputs=input_model, outputs=output_dense)
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
model_lstm.compile(loss='mean_squared_error', metrics=['mean_absolute_percentage_error'], optimizer='RMSprop')
model_lstm.summary()
model_lstm.fit(x_train, y_train, epochs=20, batch_size=4, validation_data=(x_test, y_test), use_multiprocessing=True, verbose=1)
score = model_lstm.evaluate(x_test, y_test, verbose=0)
print('Test MSE:', score[0])
print('Test MAPE:', score[1])
```
# Step 3: Visualize the Forecast and Evolution of the Hidden State
We first plot the forecast to show that the model fits well. Next, we visualize how individual components of the hidden state evolve over time:
* We can see that some states actually extract seasonal and trend components, but this is not guaranteed.
* We also overlay the plots with the covariates to check whether there is any correlation between states and covariates. We see that the states do not correlate much with the covariate patterns.
```
input_window_size = n_time_steps*time_step_interval
x = np.vstack([x_train, x_test])
y_hat = model_lstm.predict(x)
forecast = np.append(np.zeros(input_window_size), y_hat)
#
# plot the forecast
#
fig, ax = plt.subplots(1, figsize=(20, 5))
ax.plot(df_rnn.index, forecast, label=f'Forecast ({forecast_days_ahead} days ahead)')
ax.plot(df_rnn.index, df_rnn['signal'], label='Signal')
ax.axvline(x=df.index[int(len(df) * train_ratio)], linestyle='--')
ax.legend()
plt.show()
#
# plot the evolution of the LSTM state
#
lstm_state_tap = Model(model_lstm.input, lstm_state_seq)
lstm_state_trace = lstm_state_tap.predict(x)
state_series = lstm_state_trace[:, -1, :].T
fig, ax = plt.subplots(len(state_series), figsize=(20, 15))
for i, state in enumerate(state_series):
ax[i].plot(df_rnn.index[:len(state)], state, label=f'State dimension {i}')
for j in [1, 2]:
ax[i].plot(df_rnn.index[:len(state)], df_rnn[f'covariate_0{j}'][:len(state)], color='#bbbbbb', label=f'Covariate 0{j}')
ax[i].legend(loc='upper right')
plt.show()
```
# Step 4: Define and Fit LSTM with Attention (LSTM-A) Model
We fit the LSTM with attention model to analyze the contribution of individual time steps from input patches. It can help to reconstruct the (unknown) memory link function.
```
#
# parameters
#
n_time_steps = 5 # length of LSTM input in samples
time_step_interval = 10 # sampling interval, days
hidden_units = 256 # LSTM state dimensionality
forecast_days_ahead = 14
train_ratio = 0.8
def fit_lstm_a(df, train_verbose = 0, score_verbose = 0):
df_rnn = prepare_features_rnn(df)
x_train, y_train, x_test, y_test = train_test_split(df_rnn, train_ratio, forecast_days_ahead, n_time_steps, time_step_interval)
n_steps = x_train.shape[0]
n_features = x_train.shape[2]
#
# define the model: LSTM with attention
#
main_input = Input(shape=(n_steps, n_features))
activations = LSTM(hidden_units, recurrent_dropout=0.1, return_sequences=True)(main_input)
attention = Dense(1, activation='tanh')(activations)
attention = Flatten()(attention)
attention = Activation('softmax', name = 'attention_weigths')(attention)
attention = RepeatVector(hidden_units * 1)(attention)
attention = Permute([2, 1])(attention)
weighted_activations = Multiply()([activations, attention])
weighted_activations = Lambda(lambda xin: K.sum(xin, axis=-2), output_shape=(hidden_units,))(weighted_activations)
main_output = Dense(1, activation='sigmoid')(weighted_activations)
model_attn = Model(inputs=main_input, outputs=main_output)
#
# fit the model
#
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
model_attn.compile(optimizer='rmsprop', loss='mean_squared_error', metrics=['mean_absolute_percentage_error'])
history = model_attn.fit(x_train, y_train, batch_size=4, epochs=30, verbose=train_verbose, validation_data=(x_test, y_test))
score = model_attn.evaluate(x_test, y_test, verbose=0)
if score_verbose > 0:
print(f'Test MSE [{score[0]}], MAPE [{score[1]}]')
return model_attn, df_rnn, x_train, x_test
df = create_signal(links = [linear_link, linear_link])
model_attn, df_rnn, x_train, x_test = fit_lstm_a(df, train_verbose = 1, score_verbose = 1)
input_window_size = n_time_steps*time_step_interval
x = np.vstack([x_train, x_test])
y_hat = model_attn.predict(x)
forecast = np.append(np.zeros(input_window_size), y_hat)
#
# plot the forecast
#
fig, ax = plt.subplots(1, figsize=(20, 5))
ax.plot(df_rnn.index, forecast, label=f'Forecast ({forecast_days_ahead} days ahead)')
ax.plot(df_rnn.index, df_rnn['signal'], label='Signal')
ax.axvline(x=df.index[int(len(df) * train_ratio)], linestyle='--')
ax.legend()
plt.show()
```
# Step 5: Analyze LSTM-A Model
The LSTM with attention model allows us to extract the matrix of attention weights. For each time step, we have a vector of weights where each weight corresponds to one time step (lag) in the input patch.
* For the linear link, only the contemporaneous covariates/features have high contribution weights.
* For the memory link, the "LSTMogram" is more blurred because lagged samples have high contributions as well.
```
#
# evaluate atention weights for each time step
#
attention_model = Model(inputs=model_attn.input, outputs=model_attn.get_layer('attention_weigths').output)
a = attention_model.predict(x_train)
print(f'Weight matrix shape {a.shape}')
fig, ax = plt.subplots(1, figsize=(10, 2))
ax.imshow(a.T, cmap='viridis', interpolation='nearest', aspect='auto')
ax.grid(None)
#
# generate multiple datasets and perform LSTM-A analysis for each of them
#
n_evaluations = 4
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
fig, ax = plt.subplots(n_evaluations, 2, figsize=(16, n_evaluations * 2))
for j, link in enumerate([linear_link, mem_link]):
for i in range(n_evaluations):
print(f'Evaluating LSTMogram for link [{link.__name__}], trial [{i}]...')
df = create_signal(links = [link, link])
model_attn, df_rnn, x_train, _ = fit_lstm_a(df, score_verbose = 0)
attention_model = Model(inputs=model_attn.input, outputs=model_attn.get_layer('attention_weigths').output)
a = attention_model.predict(x_train)
ax[i, j].imshow(a.T, cmap='viridis', interpolation='nearest', aspect='auto')
ax[i, j].grid(None)
```
# OSM Data Exploration
## Extraction of districts from shape files
For our experiments we consider two underdeveloped districts, Araria (Bihar) and Namsai (Arunachal Pradesh); the motivation for this choice comes from this [dna](https://www.dnaindia.com/india/report-out-of-niti-aayog-s-20-most-underdeveloped-districts-19-are-ruled-by-bjp-or-its-allies-2598984) news article, quoting a NITI Aayog report. We also consider a developed city, Bangalore, in the south of India.
```
import os
from dotenv import load_dotenv
load_dotenv()
# Read India shape file with level 2 (contains district level administrative boundaries)
india_shape = os.environ.get("DATA_DIR") + "/gadm36_shp/gadm36_IND_2.shp"
import geopandas as gpd
india_gpd = gpd.read_file(india_shape)
#inspect
import matplotlib.pyplot as plt
%matplotlib inline
india_gpd.plot();
# Extract Araria district in Bihar state
araria_gdf = india_gpd[india_gpd['NAME_2'] == 'Araria']
araria_gdf
# Extract two main features of interest
araria_gdf = araria_gdf[['NAME_2', 'geometry']]
araria_gdf.plot()
# Extract Namsai district in Arunachal Pradesh state.
namsai_gdf = india_gpd[india_gpd['NAME_2'] == 'Namsai']
namsai_gdf
# Extract the two main features
namsai_gdf = namsai_gdf[['NAME_2', 'geometry']]
namsai_gdf.plot()
# Extract Bangalore district
bangalore_gdf = india_gpd[india_gpd['NAME_2'] == 'Bangalore']
bangalore_gdf = bangalore_gdf[['NAME_2', 'geometry']]
bangalore_gdf.plot()
```
## Creating geographic extracts from OpenStreetMap Data
Given a geopandas data frame representing a district boundary, we find its bounding box.
```
# Get the coordinate system for araria data frame
araria_gdf.crs
araria_bbox = araria_gdf.bounds
print(araria_bbox)
type(araria_gdf)
```
## Fetch Open Street Map Data within Boundaries as Data Frame
We use the `add_basemap` function of contextily to add a background map to our plot, making sure the added basemap has the same coordinate reference system (CRS) as the boundary extracted from the shape file.
```
import contextily as ctx
araria_ax = araria_gdf.plot(figsize=(20, 20), alpha=0.5, edgecolor='k')
ctx.add_basemap(araria_ax, crs=araria_gdf.crs, zoom=12)
# Use contextily to download basemaps and store them as standard raster (tif) files
w, s, e, n = (araria_bbox.minx.values[0], araria_bbox.miny.values[0], araria_bbox.maxx.values[0], araria_bbox.maxy.values[0])
_ = ctx.bounds2raster(w, s, e, n, ll=True, path = os.environ.get("OSM_DIR") + "araria.tif",
source=ctx.providers.CartoDB.Positron)
import rasterio
from rasterio.plot import show
r = rasterio.open(os.environ.get("OSM_DIR") + "araria.tif")
plt.imshow(r.read(1))
#show(r, 2)
plt.rcParams["figure.figsize"] = (20, 20)
plt.rcParams["grid.color"] = 'k'
plt.rcParams["grid.linestyle"] = ":"
plt.rcParams["grid.linewidth"] = 0.5
plt.rcParams["grid.alpha"] = 0.5
plt.show()
```
Other than the raster image tiles of the map, there is also a nodes-and-edges (graph) model associated with the map; this vector data is visualized below.
```
import osmnx as ox
araria_graph = ox.graph_from_bbox(n, s, e, w)
type(araria_graph)
araria_fig, araria_ax = ox.plot_graph(araria_graph)
plt.tight_layout()
```
The following section deals with the creation of a GeoDataFrame of OSM entities within an N, S, E, W bounding box, given `tags`, a dictionary of tags used for finding objects in the selected area. Results returned are the union, not the intersection, of the individual tags. All OpenStreetMap tags can be found [here](https://wiki.openstreetmap.org/wiki/Map_features)
```
tags = {'amenity':True, 'building':True, 'emergency':True, 'highway':True, 'footway':True, 'landuse': True, 'water': True}
araria_osmdf = ox.geometries.geometries_from_bbox(n, s, e, w, tags=tags)
araria_osmdf.head()
# Copy the dataframe as a csv
araria_osmdf.to_csv(os.environ.get("OSM_DIR") + "araria_osmdf.csv")
```
```
'''
Comparing single layer MLP with deep MLP (using TensorFlow)
'''
import numpy as np
from scipy.optimize import minimize
from scipy.io import loadmat
from scipy.stats import logistic
from math import sqrt
import time
import pickle
# Do not change this
def initializeWeights(n_in,n_out):
"""
# initializeWeights return the random weights for Neural Network given the
# number of node in the input layer and output layer
# Input:
# n_in: number of nodes of the input layer
# n_out: number of nodes of the output layer
# Output:
# W: matrix of random initial weights with size (n_out x (n_in + 1))"""
epsilon = sqrt(6) / sqrt(n_in + n_out + 1);
W = (np.random.rand(n_out, n_in + 1)*2* epsilon) - epsilon;
return W
def sigmoid(z):
return (1.0 / (1.0 + np.exp(-z)))
def nnObjFunction(params, *args):
n_input, n_hidden, n_class, training_data, training_label, lambdaval = args
w1 = params[0:n_hidden * (n_input + 1)].reshape((n_hidden, (n_input + 1)))
w2 = params[(n_hidden * (n_input + 1)):].reshape((n_class, (n_hidden + 1)))
obj_val = 0
n = training_data.shape[0]
'''
Step 01: Feedforward Propagation
'''
'''Input Layer --> Hidden Layer
'''
# Adding bias node to every training data. Here, the bias value is 1 for every training data
# A training data is a feature vector X.
# We have 717 features for every training data
biases1 = np.full((n,1), 1)
training_data_bias = np.concatenate((biases1, training_data),axis=1)
# aj is the linear combination of input data and weight (w1) at jth hidden node.
# Here, 1 <= j <= no_of_hidden_units
aj = np.dot( training_data_bias, np.transpose(w1))
# zj is the output from the hidden unit j after applying sigmoid as an activation function
zj = sigmoid(aj)
'''Hidden Layer --> Output Layer
'''
# Adding bias node to every zj.
m = zj.shape[0]
biases2 = np.full((m,1), 1)
zj_bias = np.concatenate((biases2, zj), axis=1)
# bl is the linear combination of hidden units output and weight(w2) at lth output node.
    # Here, 1 <= l <= n_class (the number of output classes; n_class = 2 in this script)
bl = np.dot(zj_bias, np.transpose(w2))
ol = sigmoid(bl)
'''
Step 2: Error Calculation by error function
'''
# yl --> Ground truth for every training dataset
yl = np.full((n, n_class), 0)
for i in range(n):
trueLabel = training_label[i]
yl[i][trueLabel] = 1
yl_prime = (1.0-yl)
ol_prime = (1.0-ol)
lol = np.log(ol)
lol_prime = np.log(ol_prime)
# Our Error function is "negative log-likelihood"
# We need elementwise multiplication between the matrices
error = np.sum( np.multiply(yl,lol) + np.multiply(yl_prime,lol_prime) )/((-1)*n)
# error = -np.sum( np.sum(yl*lol + yl_prime*lol_prime, 1))/ n
'''
Step 03: Gradient Calculation for Backpropagation of error
'''
delta = ol- yl
gradient_w2 = np.dot(delta.T, zj_bias)
temp = np.dot(delta,w2) * ( zj_bias * (1.0-zj_bias))
gradient_w1 = np.dot( np.transpose(temp), training_data_bias)
gradient_w1 = gradient_w1[1:, :]
'''
Step 04: Regularization
'''
regularization = lambdaval * (np.sum(w1**2) + np.sum(w2**2)) / (2*n)
obj_val = error + regularization
gradient_w1_reg = (gradient_w1 + lambdaval * w1)/n
gradient_w2_reg = (gradient_w2 + lambdaval * w2)/n
obj_grad = np.concatenate((gradient_w1_reg.flatten(), gradient_w2_reg.flatten()), 0)
return (obj_val, obj_grad)
def nnPredict(w1, w2, training_data):
n = training_data.shape[0]
biases1 = np.full((n,1),1)
training_data = np.concatenate((biases1, training_data), axis=1)
aj = np.dot(training_data, w1.T)
zj = sigmoid(aj)
m = zj.shape[0]
biases2 = np.full((m,1), 1)
zj = np.concatenate((biases2, zj), axis=1)
bl = np.dot(zj, w2.T)
ol = sigmoid(bl)
labels = np.argmax(ol, axis=1)
return labels
# Do not change this
def preprocess():
pickle_obj = pickle.load(file=open('face_all.pickle', 'rb'))
features = pickle_obj['Features']
labels = pickle_obj['Labels']
train_x = features[0:21100] / 255
valid_x = features[21100:23765] / 255
test_x = features[23765:] / 255
labels = labels[0]
train_y = labels[0:21100]
valid_y = labels[21100:23765]
test_y = labels[23765:]
return train_x, train_y, valid_x, valid_y, test_x, test_y
"""**************Neural Network Script Starts here********************************"""
train_data, train_label, validation_data, validation_label, test_data, test_label = preprocess()
# Train Neural Network
trainingStart = time.time()
# set the number of nodes in input unit (not including bias unit)
n_input = train_data.shape[1]
# set the number of nodes in hidden unit (not including bias unit)
n_hidden = 256
# set the number of nodes in output unit
n_class = 2
# initialize the weights into some random matrices
initial_w1 = initializeWeights(n_input, n_hidden);
initial_w2 = initializeWeights(n_hidden, n_class);
# unroll 2 weight matrices into single column vector
initialWeights = np.concatenate((initial_w1.flatten(), initial_w2.flatten()),0)
# set the regularization hyper-parameter
lambdaval = 10;
args = (n_input, n_hidden, n_class, train_data, train_label, lambdaval)
#Train Neural Network using fmin_cg or minimize from scipy.optimize module. Check documentation for a working example
opts = {'maxiter' :50} # Preferred value.
nn_params = minimize(nnObjFunction, initialWeights, jac=True, args=args,method='CG', options=opts)
params = nn_params.get('x')
#Reshape nnParams from 1D vector into w1 and w2 matrices
w1 = params[0:n_hidden * (n_input + 1)].reshape( (n_hidden, (n_input + 1)))
w2 = params[(n_hidden * (n_input + 1)):].reshape((n_class, (n_hidden + 1)))
#Test the computed parameters
predicted_label = nnPredict(w1,w2,train_data)
#find the accuracy on Training Dataset
print('\n Training set Accuracy:' + str(100*np.mean((predicted_label == train_label).astype(float))) + '%')
predicted_label = nnPredict(w1,w2,validation_data)
#find the accuracy on Validation Dataset
print('\n Validation set Accuracy:' + str(100*np.mean((predicted_label == validation_label).astype(float))) + '%')
predicted_label = nnPredict(w1,w2,test_data)
#find the accuracy on Test Dataset
print('\n Test set Accuracy:' + str(100*np.mean((predicted_label == test_label).astype(float))) + '%')
trainingEnd = time.time()
print('Training Time:',(trainingEnd-trainingStart))
```
##### Copyright 2020 The Cirq Developers
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Cirq basics
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/tutorials/basics"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/basics.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/basics.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/basics.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
This tutorial will teach the basics of how to use Cirq. This tutorial will walk through how to use qubits, gates, and operations to create and simulate your first quantum circuit using Cirq. It will briefly introduce devices, unitary matrices, decompositions, and optimizers.
This tutorial isn’t a quantum computing 101 tutorial: we assume familiarity with quantum computing at about the level of the textbook “Quantum Computation and Quantum Information” by Nielsen and Chuang.
For more in-depth examples closer to those found in current work, check out our tutorials page.
To begin, please follow the instructions for [installing Cirq](../install.md).
```
try:
    import cirq
except ImportError:
    print("installing cirq...")
    !pip install --quiet cirq
    print("installed cirq.")
```
## Qubits
The first part of creating a quantum circuit is to define a set of qubits (also known as a quantum register) to act on.
Cirq has three main ways of defining qubits:
* `cirq.NamedQubit`: used to label qubits by an abstract name
* `cirq.LineQubit`: qubits labelled by number in a linear array
* `cirq.GridQubit`: qubits labelled by two numbers in a rectangular lattice.
Here are some examples of defining each type of qubit.
```
import cirq
# Using named qubits can be useful for abstract algorithms
# as well as algorithms not yet mapped onto hardware.
q0 = cirq.NamedQubit('source')
q1 = cirq.NamedQubit('target')
# Line qubits can be created individually
q3 = cirq.LineQubit(3)
# Or created in a range
# This will create LineQubit(0), LineQubit(1), LineQubit(2)
q0, q1, q2 = cirq.LineQubit.range(3)
# Grid Qubits can also be referenced individually
q4_5 = cirq.GridQubit(4,5)
# Or created in bulk in a square
# This will create 16 qubits from (0,0) to (3,3)
qubits = cirq.GridQubit.square(4)
```
There are also pre-packaged sets of qubits called [Devices](../devices.md). These are qubits along with a set of rules of how they can be used. A `cirq.Device` can be used to apply adjacency rules and other hardware constraints to a quantum circuit. For our example, we will use the `cirq.google.Foxtail` device that comes with cirq. It is a 2x11 grid that mimics early hardware released by Google.
```
print(cirq.google.Foxtail)
```
## Gates and operations
The next step is to use the qubits to create operations that can be used in our circuit. Cirq has two concepts that are important to understand here:
* A `Gate` is an effect that can be applied to a set of qubits.
* An `Operation` is a gate applied to a set of qubits.
For instance, `cirq.H` is the quantum [Hadamard gate](https://en.wikipedia.org/wiki/Quantum_logic_gate#Hadamard_(H)_gate) and is a `Gate` object. `cirq.H(cirq.LineQubit(1))` is an `Operation` object and is the Hadamard gate applied to a specific qubit (line qubit number 1).
Many textbook gates are included within cirq. `cirq.X`, `cirq.Y`, and `cirq.Z` refer to the single-qubit Pauli gates. `cirq.CZ`, `cirq.CNOT`, `cirq.SWAP` are a few of the common two-qubit gates. `cirq.measure` is a macro to apply a `MeasurementGate` to a set of qubits. You can find more, as well as instructions on how to create your own custom gates, on the [Gates documentation](../gates.ipynb) page.
Many arithmetic operations can also be applied to gates. Here are some examples:
```
# Example gates
not_gate = cirq.CNOT
pauli_z = cirq.Z
# Using exponentiation to get square root gates
sqrt_x_gate = cirq.X**0.5
sqrt_iswap = cirq.ISWAP**0.5
# Some gates can also take parameters
sqrt_sqrt_y = cirq.YPowGate(exponent=0.25)
# Example operations
q0, q1 = cirq.LineQubit.range(2)
z_op = cirq.Z(q0)
not_op = cirq.CNOT(q0, q1)
sqrt_iswap_op = sqrt_iswap(q0, q1)
```
## Circuits and moments
We are now ready to construct a quantum circuit. A `Circuit` is a collection of `Moment`s. A `Moment` is a collection of `Operation`s that all act during the same abstract time slice. Each `Operation` must have a disjoint set of qubits from the other `Operation`s in the `Moment`. A `Moment` can be thought of as a vertical slice of a quantum circuit diagram.
Circuits can be constructed in several different ways. By default, cirq will attempt to slide your operation into the earliest possible `Moment` when you insert it.
```
circuit = cirq.Circuit()
# You can create a circuit by appending to it
circuit.append(cirq.H(q) for q in cirq.LineQubit.range(3))
# All of the gates are put into the same Moment since none overlap
print(circuit)
# We can also create a circuit directly as well:
print(cirq.Circuit(cirq.SWAP(q, q+1) for q in cirq.LineQubit.range(3)))
```
Sometimes, you may not want cirq to automatically shift operations all the way to the left. To construct a circuit without doing this, you can create the circuit moment-by-moment or use a different `InsertStrategy`, explained more in the [Circuit documentation](../circuits.ipynb).
```
# Creates each gate in a separate moment.
print(cirq.Circuit(cirq.Moment([cirq.H(q)]) for q in cirq.LineQubit.range(3)))
```
### Circuits and devices
One important consideration when using real quantum devices is that there are often hardware constraints on the circuit. Creating a circuit with a `Device` will allow you to capture some of these requirements. These `Device` objects will validate the operations you add to the circuit to make sure that no illegal operations are added.
Let's look at an example using the Foxtail device.
```
q0 = cirq.GridQubit(0, 0)
q1 = cirq.GridQubit(0, 1)
q2 = cirq.GridQubit(0, 2)
adjacent_op = cirq.CZ(q0, q1)
nonadjacent_op = cirq.CZ(q0, q2)
# This is an unconstrained circuit with no device
free_circuit = cirq.Circuit()
# Both operations are allowed:
free_circuit.append(adjacent_op)
free_circuit.append(nonadjacent_op)
print('Unconstrained device:')
print(free_circuit)
print()
# This is a circuit on the Foxtail device
# only adjacent operations are allowed.
print('Foxtail device:')
foxtail_circuit = cirq.Circuit(device=cirq.google.Foxtail)
foxtail_circuit.append(adjacent_op)
try:
# Not allowed, will throw exception
foxtail_circuit.append(nonadjacent_op)
except ValueError as e:
print('Not allowed. %s' % e)
```
## Simulation
The results of applying a quantum circuit can be calculated by a `Simulator`. Cirq comes bundled with a simulator that can handle circuits of up to about 20 qubits. It can be initialized with `cirq.Simulator()`.
There are two different approaches to using a simulator:
* `simulate()`: Since we are classically simulating a circuit, a simulator can directly access and view the resulting wave function. This is useful for debugging, learning, and understanding how circuits will function.
* `run()`: When using actual quantum devices, we can only access the end result of a computation and must sample the results to get a distribution of results. Running the simulator as a sampler mimics this behavior and only returns bit strings as output.
Let's try to simulate a 2-qubit "Bell State":
```
# Create a circuit to generate a Bell State:
# 1/sqrt(2) * ( |00> + |11> )
bell_circuit = cirq.Circuit()
q0, q1 = cirq.LineQubit.range(2)
bell_circuit.append(cirq.H(q0))
bell_circuit.append(cirq.CNOT(q0,q1))
# Initialize Simulator
s=cirq.Simulator()
print('Simulate the circuit:')
results=s.simulate(bell_circuit)
print(results)
print()
# For sampling, we need to add a measurement at the end
bell_circuit.append(cirq.measure(q0, q1, key='result'))
print('Sample the circuit:')
samples=s.run(bell_circuit, repetitions=1000)
# Print a histogram of results
print(samples.histogram(key='result'))
```
### Using parameter sweeps
Cirq circuits allow for gates to have symbols as free parameters within the circuit. This is especially useful for variational algorithms, which vary parameters within the circuit in order to optimize a cost function, but it can be useful in a variety of circumstances.
For parameters, cirq uses the library `sympy` to add `sympy.Symbol` as parameters to gates and operations.
Once the circuit is complete, you can fill in the possible values of each of these parameters with a `Sweep`. There are several possibilities that can be used as a sweep:
* `cirq.Points`: A list of manually specified values for one specific symbol as a sequence of floats
* `cirq.Linspace`: A linear sweep from a starting value to an ending value.
* `cirq.ListSweep`: A list of manually specified values for several different symbols, specified as a list of dictionaries.
* `cirq.Zip` and `cirq.Product`: Sweeps can be combined list-wise by zipping them together or through their Cartesian product.
A parameterized circuit and sweep together can be run using the simulator or other sampler by changing `run()` to `run_sweep()` and adding the sweep as a parameter.
Here is an example of sweeping the exponent of an X gate:
```
import matplotlib.pyplot as plt
import sympy
# Perform an X gate with variable exponent
q = cirq.GridQubit(1,1)
circuit = cirq.Circuit(cirq.X(q) ** sympy.Symbol('t'),
cirq.measure(q, key='m'))
# Sweep exponent from zero (off) to one (on) and back to two (off)
param_sweep = cirq.Linspace('t', start=0, stop=2, length=200)
# Simulate the sweep
s = cirq.Simulator()
trials = s.run_sweep(circuit, param_sweep, repetitions=1000)
# Plot all the results
x_data = [trial.params['t'] for trial in trials]
y_data = [trial.histogram(key='m')[1] / 1000.0 for trial in trials]
plt.scatter('t','p', data={'t': x_data, 'p': y_data})
```
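The other sweep types listed above compose in the same way. Here is a small sketch (not part of the original tutorial; the symbols `'a'` and `'b'` and the values are illustrative) that zips a `cirq.Points` sweep with a `cirq.Linspace` sweep and also builds their Cartesian product:
```
# Sketch: combining sweeps over two symbolic parameters
import cirq
import sympy
q = cirq.LineQubit(0)
two_param_circuit = cirq.Circuit(cirq.X(q) ** sympy.Symbol('a'),
                                 cirq.Z(q) ** sympy.Symbol('b'),
                                 cirq.measure(q, key='m'))
# Manually chosen values for 'a', a linear sweep for 'b'
sweep_a = cirq.Points('a', [0.1, 0.25, 0.5])
sweep_b = cirq.Linspace('b', start=0, stop=1, length=3)
# Zip pairs values index-wise (3 parameter settings);
# Product takes the Cartesian product (3 x 3 = 9 settings)
zipped = cirq.Zip(sweep_a, sweep_b)
product = cirq.Product(sweep_a, sweep_b)
s = cirq.Simulator()
for trial in s.run_sweep(two_param_circuit, zipped, repetitions=100):
    print(trial.params, trial.histogram(key='m'))
```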
## Unitary matrices and decompositions
Most quantum operations have a unitary matrix representation. This matrix can be accessed by applying `cirq.unitary()`. This can be applied to gates, operations, and circuits that support this protocol and will return the unitary matrix that represents the object.
```
print('Unitary of the X gate')
print(cirq.unitary(cirq.X))
print('Unitary of SWAP operator on two qubits.')
q0, q1 = cirq.LineQubit.range(2)
print(cirq.unitary(cirq.SWAP(q0, q1)))
print('Unitary of a sample circuit')
print(cirq.unitary(cirq.Circuit(cirq.X(q0), cirq.SWAP(q0, q1))))
```
### Decompositions
Many gates can be decomposed into an equivalent circuit with simpler operations and gates. This is called decomposition and can be accomplished with the `cirq.decompose` protocol.
For instance, a Hadamard H gate can be decomposed into X and Y gates:
```
print(cirq.decompose(cirq.H(cirq.LineQubit(0))))
```
Another example is the 3-qubit Toffoli gate, which is equivalent to a controlled-controlled-X gate. Many devices do not support three-qubit gates, so it is important to be able to decompose them into one- and two-qubit gates:
```
q0, q1, q2 = cirq.LineQubit.range(3)
print(cirq.Circuit(cirq.decompose(cirq.TOFFOLI(q0, q1, q2))))
```
The above decomposes the Toffoli into a simpler set of one-qubit gates and CZ gates at the cost of lengthening the circuit considerably.
Some devices will automatically decompose gates that they do not support. For instance, if we use the `Foxtail` device from above, we can see this in action by adding an unsupported SWAP gate:
```
swap = cirq.SWAP(cirq.GridQubit(0, 0), cirq.GridQubit(0, 1))
print(cirq.Circuit(swap, device=cirq.google.Foxtail))
```
### Optimizers
The last concept in this tutorial is the optimizer. An optimizer can take a circuit and modify it. Usually, this will entail combining or modifying operations to make it more efficient and shorter, though an optimizer can, in theory, do any sort of circuit manipulation.
For example, the `MergeSingleQubitGates` optimizer will take consecutive single-qubit operations and merge them into a single `PhasedXZ` operation.
```
q=cirq.GridQubit(1, 1)
optimizer=cirq.MergeSingleQubitGates()
c=cirq.Circuit(cirq.X(q) ** 0.25, cirq.Y(q) ** 0.25, cirq.Z(q) ** 0.25)
print(c)
optimizer.optimize_circuit(c)
print(c)
```
Other optimizers can assist in transforming a circuit into operations that are native operations on specific hardware devices. You can find more about optimizers and how to create your own elsewhere in the documentation.
## Next steps
After completing this tutorial, you should be able to use gates and operations to construct your own quantum circuits, simulate them, and use sweeps. It should also have given you a brief idea of the concepts most commonly used in cirq.
There is much more to learn and try out for those who are interested:
* Learn about the variety of [Gates](../gates.ipynb) available in cirq and more about the different ways to construct [Circuits](../circuits.ipynb)
* Learn more about [Simulations](../simulation.ipynb) and how it works.
* Learn about [Noise](../noise.ipynb) and how to utilize multi-level systems using [Qudits](../qudits.ipynb)
* Dive into some [Examples](_index.yaml) and some in-depth tutorials of how to use cirq.
Also, join our [cirq-announce mailing list](https://groups.google.com/forum/#!forum/cirq-announce) to hear about changes and releases or go to the [cirq github](https://github.com/quantumlib/Cirq/) to file issues.
```
from fake_useragent import UserAgent
import requests
# re, threading, dateparser and BeautifulSoup are used in the cells below,
# so import them up front
import re
import threading
import dateparser
from bs4 import BeautifulSoup
ua = UserAgent()
from newspaper import Article
from queue import Queue
from urllib.parse import quote
from unidecode import unidecode
def get_date(load):
try:
date = re.findall(
'[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?', load
)
return '%s-%s-%s' % (date[2], date[0], date[1])
    except Exception:
return False
def run_parallel_in_threads(target, args_list):
globalparas = []
result = Queue()
def task_wrapper(*args):
result.put(target(*args))
threads = [
threading.Thread(target = task_wrapper, args = args)
for args in args_list
]
for t in threads:
t.start()
for t in threads:
t.join()
while not result.empty():
globalparas.append(result.get())
globalparas = list(filter(None, globalparas))
return globalparas
def get_article(link, news, date):
article = Article(link)
article.download()
article.parse()
article.nlp()
lang = 'ENGLISH'
if len(article.title) < 5 or len(article.text) < 5:
lang = 'INDONESIA'
print('found BM/ID article')
article = Article(link, language = 'id')
article.download()
article.parse()
article.nlp()
return {
'title': article.title,
'url': link,
'authors': article.authors,
'top-image': article.top_image,
'text': article.text,
'keyword': article.keywords,
'summary': article.summary,
'news': news,
'date': date,
'language': lang,
}
NUMBER_OF_CALLS_TO_GOOGLE_NEWS_ENDPOINT = 0
GOOGLE_NEWS_URL = 'https://www.google.com.my/search?q={}&source=lnt&tbs=cdr%3A1%2Ccd_min%3A{}%2Ccd_max%3A{}&tbm=nws&start={}'
def forge_url(q, start, year_start, year_end):
global NUMBER_OF_CALLS_TO_GOOGLE_NEWS_ENDPOINT
NUMBER_OF_CALLS_TO_GOOGLE_NEWS_ENDPOINT += 1
return GOOGLE_NEWS_URL.format(
q.replace(' ', '+'), str(year_start), str(year_end), start
)
num_articles_index = 0
url = forge_url('america', num_articles_index, '2005', '2021')
url
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'}
headers
response = requests.get(url, headers = headers, timeout = 60)
soup = BeautifulSoup(response.content, 'html.parser')
style="text-decoration:none;display:block"
soup.find_all('div', {'class': 'XTjFC WF4CUc'})[0]
[dateparser.parse(v.text) for v in soup.find_all('span', {'class': 'WG9SHc'})]
import dateparser
str(dateparser.parse('2 weeks ago'))
from bs4 import BeautifulSoup
from datetime import datetime, timedelta
from dateutil import parser
import dateparser
def extract_links(content):
soup = BeautifulSoup(content, 'html.parser')
# return soup
today = datetime.now().strftime('%m/%d/%Y')
links_list = [v.attrs['href'] for v in soup.find_all('a', {'style': 'text-decoration:none;display:block'})]
dates_list = [v.text for v in soup.find_all('span', {'class': 'WG9SHc'})]
sources_list = [v.text for v in soup.find_all('div', {'class': 'XTjFC WF4CUc'})]
output = []
for (link, date, source) in zip(links_list, dates_list, sources_list):
try:
date = str(dateparser.parse(date))
except:
pass
output.append((link, date, source))
return output
r = extract_links(response.content)
r
```
```
import os
import numpy as np
import torch
from torch.utils.data import DataLoader
from torchvision import transforms as T
import cv2
import pandas as pd
import matplotlib.pyplot as plt
from self_sup_data.chest_xray import SelfSupChestXRay
from model.resnet import resnet18_enc_dec
from train_chest_xray import SETTINGS
from experiments.chest_xray_tasks import test_real_anomalies
def test(test_dat, setting, device, preact=False, pool=True, final=True, show=True, plots=False):
if final:
fname = os.path.join(model_dir, setting.get('out_dir'), 'final_' + setting.get('fname'))
else:
fname = os.path.join(model_dir, setting.get('out_dir'), setting.get('fname'))
print(fname)
if not os.path.exists(fname):
return np.nan, np.nan
model = resnet18_enc_dec(num_classes=1, preact=preact, pool=pool, in_channels=1,
final_activation=setting.get('final_activation')).to(device)
if final:
model.load_state_dict(torch.load(fname))
else:
model.load_state_dict(torch.load(fname).get('model_state_dict'))
if plots:
sample_ap, sample_auroc, fig = test_real_anomalies(model, test_dat,
device=device, batch_size=16, show=show, full_size=True)
fig.savefig(os.path.join(out_dir, setting.get('fname')[:-3] + '.png'))
plt.close(fig)
else:
sample_ap, sample_auroc, _ = test_real_anomalies(model, test_dat,
device=device, batch_size=16, show=show, plots=plots, full_size=True)
return sample_ap, sample_auroc
model_dir = 'put/your/path/here'
if not os.path.exists(model_dir):
os.makedirs(model_dir)
out_dir = 'put/your/path/here'
if not os.path.exists(out_dir):
os.makedirs(out_dir)
data_dir = 'put/your/path/here'
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f'Using {device}')
```
# Evaluation for male PA
```
with open('self_sup_data/chest_xray_lists/norm_MaleAdultPA_test_curated_list.txt', "r") as f:
normal_test_files = f.read().splitlines()
with open('self_sup_data/chest_xray_lists/anomaly_MaleAdultPA_test_curated_list.txt', "r") as f:
anom_test_files = f.read().splitlines()
modes = list(SETTINGS.keys())
sample_ap_df = pd.DataFrame(columns=modes)
sample_auroc_df = pd.DataFrame(columns=modes)
test_dat = SelfSupChestXRay(data_dir=data_dir, normal_files=normal_test_files, anom_files=anom_test_files,
is_train=False, res=256, transform=T.CenterCrop(224))
sample_aps, sample_aurocs = {}, {}
for mode in modes:
sample_aps[mode], sample_aurocs[mode] = test(
test_dat, SETTINGS.get(mode), device, preact=False, pool=True, final=True)
sample_ap_df = sample_ap_df.append(sample_aps, ignore_index=True).transpose()
sample_auroc_df = sample_auroc_df.append(sample_aurocs, ignore_index=True).transpose()
pd.options.display.float_format = '{:3.1f}'.format
sample_table = 100 * sample_auroc_df.transpose()
cols_shift = sample_table.loc[: , 'Shift-M':'Shift-M-123425']
cols_shift_int = sample_table.loc[: , 'Shift-Intensity-M':'Shift-Intensity-M-123425']
cols_shift_raw_int = sample_table.loc[: , 'Shift-Raw-Intensity-M':'Shift-Raw-Intensity-M-123425']
cols_pii = sample_table.loc[: , 'FPI-Poisson':'FPI-Poisson-123425']
merge_func = lambda x: r'{:3.1f} {{\tiny $\pm$ {:3.1f}}}'.format(*x)
sample_table['Shift'] = list(map(merge_func, zip(cols_shift.mean(axis=1), cols_shift.std(axis=1))))
sample_table['Shift-Intensity'] = list(map(merge_func, zip(cols_shift_int.mean(axis=1), cols_shift_int.std(axis=1))))
sample_table['Shift-Raw-Intensity'] = list(map(merge_func, zip(cols_shift_raw_int.mean(axis=1), cols_shift_raw_int.std(axis=1))))
sample_table['FPI-Poisson'] = list(map(merge_func, zip(cols_pii.mean(axis=1), cols_pii.std(axis=1))))
sample_table = sample_table[['CutPaste', 'FPI', 'FPI-Poisson',
'Shift', 'Shift-Raw-Intensity', 'Shift-Intensity']].rename(
    columns={'Shift':'Ours (binary)', 'Shift-Intensity':'Ours (logistic)', 'Shift-Raw-Intensity':'Ours (continuous)'})
print(sample_table.to_latex(escape=False))
sample_table # male
```
# Evaluation for female PA
```
with open('self_sup_data/chest_xray_lists/norm_FemaleAdultPA_test_curated_list.txt', "r") as f:
normal_test_files = f.read().splitlines()
with open('self_sup_data/chest_xray_lists/anomaly_FemaleAdultPA_test_curated_list.txt', "r") as f:
anom_test_files = f.read().splitlines()
modes = list(SETTINGS.keys())
sample_ap_df = pd.DataFrame(columns=modes)
sample_auroc_df = pd.DataFrame(columns=modes)
test_dat = SelfSupChestXRay(data_dir=data_dir, normal_files=normal_test_files, anom_files=anom_test_files,
is_train=False, res=256, transform=T.CenterCrop(224))
sample_aps, sample_aurocs = {}, {}
for mode in modes:
sample_aps[mode], sample_aurocs[mode] = test(
test_dat, SETTINGS.get(mode), device, preact=False, pool=True, final=True)
sample_ap_df = sample_ap_df.append(sample_aps, ignore_index=True).transpose()
sample_auroc_df = sample_auroc_df.append(sample_aurocs, ignore_index=True).transpose()
pd.options.display.float_format = '{:3.1f}'.format
sample_table = 100 * sample_auroc_df.transpose()
cols_shift = sample_table.loc[: , 'Shift-M':'Shift-M-123425']
cols_shift_int = sample_table.loc[: , 'Shift-Intensity-M':'Shift-Intensity-M-123425']
cols_shift_raw_int = sample_table.loc[: , 'Shift-Raw-Intensity-M':'Shift-Raw-Intensity-M-123425']
cols_pii = sample_table.loc[: , 'FPI-Poisson':'FPI-Poisson-123425']
merge_func = lambda x: r'{:3.1f} {{\tiny $\pm$ {:3.1f}}}'.format(*x)
sample_table['Shift'] = list(map(merge_func, zip(cols_shift.mean(axis=1), cols_shift.std(axis=1))))
sample_table['Shift-Intensity'] = list(map(merge_func, zip(cols_shift_int.mean(axis=1), cols_shift_int.std(axis=1))))
sample_table['Shift-Raw-Intensity'] = list(map(merge_func, zip(cols_shift_raw_int.mean(axis=1), cols_shift_raw_int.std(axis=1))))
sample_table['FPI-Poisson'] = list(map(merge_func, zip(cols_pii.mean(axis=1), cols_pii.std(axis=1))))
sample_table = sample_table[['CutPaste', 'FPI', 'FPI-Poisson',
'Shift', 'Shift-Raw-Intensity', 'Shift-Intensity']].rename(
    columns={'Shift':'Ours (binary)', 'Shift-Intensity':'Ours (logistic)', 'Shift-Raw-Intensity':'Ours (continuous)'})
print(sample_table.to_latex(escape=False))
sample_table # female
```
# Trumpler 1930 Dust Extinction
Figure 6.2 from Chapter 6 of *Interstellar and Intergalactic Medium* by Ryden & Pogge, 2021,
Cambridge University Press.
Data are from [Trumpler, R. 1930, Lick Observatory Bulletin #420, 14, 154](https://ui.adsabs.harvard.edu/abs/1930LicOB..14..154T), Table 3. The extinction curve derived in the bulletin paper
uses a different normalization than the oft-reproduced figure from the Trumpler
1930 PASP paper ([Trumpler, R. 1930, PASP, 42, 267](https://ui.adsabs.harvard.edu/abs/1930PASP...42..267T),
Figure 1).
Table 3 gives distances and linear diameters to open star clusters. We've created two data files:
* Trumpler_GoodData.txt - Unflagged
* Trumpler_BadData.txt - Trumpler's "somewhat uncertain or less reliable" data, designated by the entry being printed in italics in Table 3.
The distances we use are from Table 3 column 8 ("Obs." distance from spectral types) and column 10
("from diam."), both converted to kiloparsecs.
```
%matplotlib inline
import os
import sys
import math
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator, LogLocator, NullFormatter
import warnings
warnings.filterwarnings('ignore',category=UserWarning, append=True)
```
## Standard Plot Format
Set up the standard plotting format and make the plot. Fonts and resolution adopted follow CUP style.
```
figName = 'Fig6_2'
# graphic aspect ratio = width/height
aspect = 4.0/3.0 # 4:3
# Text width in inches - don't change, this is defined by the print layout
textWidth = 6.0 # inches
# output format and resolution
figFmt = 'png'
dpi = 600
# Graphic dimensions
plotWidth = dpi*textWidth
plotHeight = plotWidth/aspect
axisFontSize = 10
labelFontSize = 6
lwidth = 0.5
axisPad = 5
wInches = textWidth
hInches = wInches/aspect
# Plot filename
plotFile = f'{figName}.{figFmt}'
# LaTeX is used throughout for markup of symbols, Times-Roman serif font
plt.rc('text', usetex=True)
plt.rc('font', **{'family':'serif','serif':['Times-Roman'],'weight':'bold','size':'16'})
# Font and line weight defaults for axes
matplotlib.rc('axes',linewidth=lwidth)
matplotlib.rcParams.update({'font.size':axisFontSize})
# axis and label padding
plt.rcParams['xtick.major.pad'] = f'{axisPad}'
plt.rcParams['ytick.major.pad'] = f'{axisPad}'
plt.rcParams['axes.labelpad'] = f'{axisPad}'
```
## Trumpler (1930) Data and Extinction Curve
The data are derived from Table 3 in Trumpler 1930, converted to modern units of kiloparsecs. We've divided
the data into two files based on Trumpler's two-fold division of the data into reliable and "somewhat uncertain
or less reliable" entries, which we abbreviate as "good" and "bad", respectively. This is the division used for
Trumpler's original diagram.
The Trumpler extinction curve is of the form:
$$ d_{L} = d_{A} e^{\kappa d_{A}/2}$$
where the extinction coefficient is $\kappa=0.6$ kpc$^{-1}$; the curve is plotted as a dashed line.
```
# Good data
data = pd.read_csv('Trumpler_GoodData.txt',sep=r'\s+',comment='#')
dLgood = np.array(data['dL']) # luminosity distance
dAgood = np.array(data['dA']) # angular diameter distance
# Bad data
data = pd.read_csv('Trumpler_BadData.txt',sep=r'\s+',comment='#')
dLbad = np.array(data['dL']) # luminosity distance
dAbad = np.array(data['dA']) # angular diameter distance
# Trumpler extinction curve
k = 0.6 # kpc^-1 [modern units]
dAext = np.linspace(0.0,4.0,401)
dLext = dAext*np.exp(k*dAext/2)
```
## Cluster angular diameter distance vs. luminosity distance
Plot open cluster angular distance against luminosity distance (what Trumpler called "photometric distance").
Good data are plotted as filled circles, the bad (less-reliable) data are plotted as open circles.
The unextincted 1:1 relation is plotted as a dotted line.
```
fig,ax = plt.subplots()
fig.set_dpi(dpi)
fig.set_size_inches(wInches,hInches,forward=True)
ax.tick_params('both',length=6,width=lwidth,which='major',direction='in',top='on',right='on')
ax.tick_params('both',length=3,width=lwidth,which='minor',direction='in',top='on',right='on')
# Limits
ax.xaxis.set_major_locator(MultipleLocator(1))
ax.xaxis.set_minor_locator(MultipleLocator(0.2))
ax.set_xlabel(r'Luminosity distance [kpc]')
ax.set_xlim(0,5)
ax.yaxis.set_major_locator(MultipleLocator(1))
ax.yaxis.set_minor_locator(MultipleLocator(0.2))
ax.set_ylabel(r'Angular diameter distance [kpc]')
ax.set_ylim(0,4)
plt.plot(dLgood,dAgood,'o',mfc='black',mec='black',ms=3,zorder=10,mew=0.5)
plt.plot(dLbad,dAbad,'o',mfc='white',mec='black',ms=3,zorder=9,mew=0.5)
plt.plot([0,4],[0,4],':',color='black',lw=1,zorder=5)
plt.plot(dLext,dAext,'--',color='black',lw=1,zorder=7)
plt.plot()
plt.savefig(plotFile,bbox_inches='tight',facecolor='white')
```
```
from os.path import join, dirname
from os import listdir
import numpy as np
import pandas as pd
# GUI library
import panel as pn
import panel.widgets as pnw
# Chart libraries
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource, Legend
from bokeh.palettes import Spectral5, Set2
from bokeh.events import SelectionGeometry
# Dimensionality reduction
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.preprocessing import StandardScaler, LabelEncoder
#
from shapely.geometry import MultiPoint, MultiLineString, Polygon, MultiPolygon, LineString
from shapely.ops import unary_union
from shapely.ops import triangulate
# local scripts
from Embedding.Rangeset import Rangeset
pn.extension()
```
# Parameters
```
l = str(len(pd.read_csv('data/BLI_30102020171001105.csv')))
dataset_name = l
bins = 5
show_labels = True
labels_column = 'index'
overview_height = 700
small_multiples_ncols = 3
histogram_width = 250
show_numpy_histogram = True
rangeset_threshold = 1
```
# Load data
```
from bokeh.sampledata.iris import flowers
df = flowers
label_encoder = LabelEncoder().fit(df.species)
df['class'] = label_encoder.transform(df.species)
```
# Preprocessing
```
# attributes to be included in the overlays
selected_var = list(df.columns[:-2]) + ['class']
#selected_var = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'projection quality'] # custom selection
# maximal slider range and step size
# {'variable_name': (min,max,stepsize)}
custom_range = {'projection quality': (0,1,0.01)}
# custom min/max settings for sliders
# {'variable_name': (min,max)}
default_range = {'projection quality': (0.4,0.9)}
# which variables to use for the embedding
selected_var_embd = selected_var[:-1]
# set up embedding
#embedding = PCA(n_components=2)
embedding = MDS(n_components=2, random_state=42)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(df[selected_var_embd])
# some projections change the original data, so we make a copy
# this can cost a lot of memory for large data
X = X_scaled.copy()
pp = embedding.fit_transform(X)
x_range = pp[:,0].max() - pp[:,0].min()
y_range = pp[:,1].max() - pp[:,1].min()
# keep the aspect ratio of the projected data
overview_width = int(overview_height * x_range / y_range)
histogram_height = int(histogram_width * y_range / x_range)
from Embedding.ProjectionQuality import projection_quality
df['projection quality'] = projection_quality(X_scaled, pp)
selected_var += ['projection quality']
rangeset = Rangeset(pp, df)
rangeset.threshold = rangeset_threshold
rangeset.size_inside = 8
rangeset.size_outside = 12
```
# GUI
## Vis elements
**overview chart** shows a large version of the embedding
```
TOOLS = "pan,wheel_zoom,box_zoom,box_select,lasso_select,help,reset,save"
overview = figure(tools=TOOLS, width=overview_width, height=overview_height, active_drag="lasso_select")
overview.scatter(x=pp[:,0], y=pp[:,1], color="#333333", muted_alpha=0,
size=7, level='underlay', name='points',
line_color=None, legend_label='data')
if show_labels:
labels = df.index.astype(str) if labels_column == 'index' else df[labels_column].astype(str)
overview.text(x=pp[:,0], y=pp[:,1], text=labels, legend_label='labels',
font_size="10pt", x_offset=5, y_offset=5, muted_alpha=0,
text_baseline="middle", text_align="left", color='#666666', level='glyph')
source_selection = ColumnDataSource({'x': [], 'y': []})
overview.patch(source=source_selection, x='x', y='y', fill_color=None, line_width=4, line_color='#aaaaaa',
level='annotation')
overview.legend.location = 'bottom_right'
overview.legend.label_height=1
overview.legend.click_policy='mute'
overview.legend.visible = True
overview.outline_line_color = None
overview.xgrid.visible = False
overview.ygrid.visible = False
overview.xaxis.visible = False
overview.yaxis.visible = False
overview.toolbar.logo = None
# Check the embedding with the code below
# pn.Row(overview)
```
**small multiples** charts are created upon request
```
def _make_chart( var, df_polys, df_scatter, bounds, cnt_in, cnt_out ):
global df
xvals = df[var].unique()
is_categorical = False
if len(xvals) < 10:
is_categorical = True
xvals = sorted(xvals.astype(str))
global histogram_width
p = figure(width=histogram_width, height=histogram_height, title=var)
df_scatter['size'] = df_scatter['size'] * histogram_height / overview_height
p.multi_polygons(source=df_polys, xs='xs', ys='ys', color='color', fill_alpha=0.5, level='image', line_color=None)
p.scatter(source=df_scatter, x='x', y='y', color='color', size='size', level='overlay')
global source_selection
p.patch(source=source_selection, x='x', y='y', fill_color=None, level='annotation', line_width=2, line_color='#333333')
p.xgrid.visible = False
p.ygrid.visible = False
p.xaxis.visible = False
p.yaxis.visible = False
p.toolbar.logo = None
p.toolbar_location = None
p.border_fill_color = '#f0f0f0'
p_histo = figure(height=100, width=histogram_width, name='histo')
if is_categorical:
p_histo = figure(height=100, width=histogram_width, name='histo', x_range=xvals)
p_histo.vbar(x=xvals, top=cnt_in, bottom=0, width=0.9, line_color='white', color=rangeset.colormap)
p_histo.vbar(x=xvals, top=0, bottom=np.array(cnt_out)*-1, width=0.9, line_color='white', color=rangeset.colormap)
else:
p_histo.quad(bottom=[0]*len(cnt_in), top=cnt_in, left=bounds[:-1], right=bounds[1:], line_color='white', color=rangeset.colormap)
p_histo.quad(bottom=np.array(cnt_out)*(-1), top=[0]*len(cnt_out), left=bounds[:-1], right=bounds[1:], line_color='white', color=rangeset.colormap)
df_select = df[df[var] < bounds[0]]
p_histo.square(df_select[var], -.5, color=rangeset.colormap[0])
df_select = df[df[var] > bounds[-1]]
p_histo.square(df_select[var], -.5, color=rangeset.colormap[-1])
p_histo.toolbar.logo = None
p_histo.toolbar_location = None
p_histo.xgrid.visible = False
p_histo.xaxis.minor_tick_line_color = None
p_histo.yaxis.minor_tick_line_color = None
p_histo.outline_line_color = None
p_histo.border_fill_color = '#f0f0f0'
global show_numpy_histogram
if show_numpy_histogram:
if is_categorical:
frequencies, edges = np.histogram(df[var], bins=len(xvals))
p_histo.vbar(x=xvals, bottom=0, width=.5, top=frequencies*-1,
line_color='white', color='gray', line_alpha=.5, fill_alpha=0.5)
else:
frequencies, edges = np.histogram(df[var])
p_histo.quad(bottom=[0]*len(frequencies), top=frequencies*-1, left=edges[:-1], right=edges[1:],
line_color='white', color='gray', line_alpha=.5, fill_alpha=0.5)
return (p, p_histo)
```
## Create input widget (buttons, sliders, etc) and layout
```
class MyCheckbox(pnw.Checkbox):
variable = ""
def __init__(self, variable="", slider=None, **kwds):
super().__init__(**kwds)
self.variable = variable
self.slider = slider
def init_slider_values(var):
vmin = df[var].min()
vmax = df[var].max()
step = 0
if var in custom_range:
vmin,vmax,step = custom_range[var]
value = (vmin,vmax)
if var in default_range:
value = default_range[var]
return (vmin, vmax, step, value)
ranges_embd = pn.Column()
ranges_aux = pn.Column()
sliders = {}
def create_slider(var):
vmin, vmax, step, value = init_slider_values(var)
slider = pnw.RangeSlider(name=var, start=vmin, end=vmax, step=step, value=value)
checkbox = MyCheckbox(name='', variable=var, value=False, width=20, slider=slider)
return pn.Row(checkbox,slider)
for var in selected_var:
s = create_slider(var)
sliders[var] = s
if var in selected_var_embd:
ranges_embd.append(s)
else:
ranges_aux.append(s)
selected_var = []
for r in ranges_embd:
selected_var.append(r[1].name)
for r in ranges_aux:
selected_var.append(r[1].name)
gui_colormap = pn.Row(pn.pane.Str(background=rangeset.colormap[0], height=30, width=20), "very low",
pn.pane.Str(background=rangeset.colormap[1], height=30, width=20), "low",
pn.pane.Str(background=rangeset.colormap[2], height=30, width=20), "medium",
pn.pane.Str(background=rangeset.colormap[3], height=30, width=20), "high",
pn.pane.Str(background=rangeset.colormap[4], height=30, width=20), "very high", sizing_mode='stretch_width')
selectColoring = pn.widgets.Select(name='', options=['None']+selected_var)
# set up the GUI
layout = pn.Row(pn.Column(
pn.Row(pn.pane.Markdown('''# NoLiES: The non-linear embedding surveyor\n
NoLiES augments the projected data with additional information. The following interactions are supported:\n
* **Attribute-based coloring**: Choose an attribute from the drop-down menu below the embedding to display contours for multiple value ranges.
* **Selective muting**: Click on the legend to mute/hide parts of the chart. Press _labels_ to hide the labels.
* **Contour control**: Change the slider range to change the contours.
* **Histograms**: Select the check-box next to the slider to view the attribute's histogram.
* **Selection**: Use the selection tool to outline a set of points and share this outline across plots.''', sizing_mode='stretch_width'),
margin=(0, 25,0,25)),
pn.Row(
pn.Column(pn.pane.Markdown('''# Attributes\nEnable histograms with the checkboxes.'''),
'## Embedding',
ranges_embd,
#pn.layout.Divider(),
'## Auxiliary',
ranges_aux, margin=(0, 25, 0, 0)),
pn.Column(pn.pane.Markdown('''# Embedding - '''+type(embedding).__name__+''' Dataset - '''+dataset_name, sizing_mode='stretch_width'),
overview,
pn.Row(selectColoring, gui_colormap)
),
margin=(0,25,25,25)
),
#pn.Row(sizing_mode='stretch_height'),
pn.Row(pn.pane.Markdown('''Data source: Fisher,R.A. "The use of multiple measurements in taxonomic problems" Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to Mathematical Statistics" (John Wiley, NY, 1950). ''',
width=800), sizing_mode='stretch_width', margin=(0,25,0,25))),
pn.GridBox(ncols=small_multiples_ncols, sizing_mode='stretch_both', margin=(220,25,0,0)),
background='#efefef'
)
# Check the GUI with the following code - this version is not interactive yet
layout
```
## Callbacks
Callbacks for **slider interactions**
```
visible = [False]*len(selected_var)
mapping = {v: k for k, v in dict(enumerate(selected_var)).items()}
def onSliderChanged(event):
'''Actions upon attribute slider change.
Attributes
----------
event: bokeh.Events.Event
information about the event that triggered the callback
'''
var = event.obj.name
v_range = event.obj.value
# if changed variable is currently displayed
if var == layout[0][1][1][2][0].value:
setColoring(var, v_range)
# find the matching chart and update it
for col in layout[1]:
if col.name == var:
df_polys, df_scatter, bounds, cnt_in, cnt_out = rangeset.compute_contours(var, v_range, bins=20 if col.name == 'groups' else 5)
p,histo = _make_chart(var, df_polys, df_scatter, bounds, cnt_in, cnt_out)
col[0].object = p
col[1].object = histo
def onSliderChanged_released(event):
'''Actions upon attribute slider change.
Attributes
----------
event: bokeh.Events.Event
information about the event that triggered the callback
'''
var = event.obj.name
v_range = event.obj.value
print('\''+var+'\': ('+str(v_range[0])+','+str(v_range[1])+')')
def onAttributeSelected(event):
'''Actions upon attribute checkbox change.
Attributes
----------
event: bokeh.Events.Event
information about the event that triggered the callback
'''
var = event.obj.variable
i = mapping[var]
if event.obj.value == True:
v_range = event.obj.slider.value
df_polys, df_scatter, bounds, cnt_in, cnt_out = rangeset.compute_contours(var, v_range)
p,p_histo = _make_chart(var, df_polys, df_scatter, bounds, cnt_in, cnt_out)
pos_insert = sum(visible[:i])
layout[1].insert(pos_insert, pn.Column(p,pn.panel(p_histo), name=var, margin=5))
else:
pos_remove = sum(visible[:i])
layout[1].pop(pos_remove)
visible[i] = event.obj.value
# link widgets to their callbacks
for var in sliders.keys():
sliders[var][0].param.watch(onAttributeSelected, 'value')
sliders[var][1].param.watch(onSliderChanged, 'value')
sliders[var][1].param.watch(onSliderChanged_released, 'value_throttled')
```
Callbacks for **rangeset selection** in the overview plot
```
def clearColoring():
'''Remove rangeset augmentation from the embedding.'''
global overview
overview.legend.visible = False
for r in overview.renderers:
if r.name is not None and ('poly' in r.name or 'scatter' in r.name):
r.visible = False
r.muted = True
def setColoring(var, v_range=None):
'''Compute and render the rangeset for a selected variable.
Attributes
----------
var: str
the selected variable
v_range: tuple (min,max)
        the user-defined value range for the rangeset
'''
global overview
overview.legend.visible = True
    df_polys, df_scatter, bounds, cnt_in, cnt_out = rangeset.compute_contours(var, val_range=v_range, bins=bins)
for r in overview.renderers:
if r.name is not None and ('poly' in r.name or 'scatter' in r.name):
r.visible = False
r.muted = True
if len(df_polys) > 0:
for k in list(rangeset.labels.keys())[::-1]:
g = df_polys[df_polys.color == k]
r = overview.select('poly '+k)
if len(r) > 0:
r[0].visible = True
r[0].muted = False
r[0].data_source.data = dict(ColumnDataSource(g).data)
else:
overview.multi_polygons(source = g, xs='xs', ys='ys', name='poly '+k,
color='color', alpha=.5, legend_label=rangeset.color2label(k),
line_color=None, muted_color='gray', muted_alpha=.1)
g = df_scatter[df_scatter.color == k]
r = overview.select('scatter '+k)
if len(r) > 0:
r[0].visible = True
r[0].muted = False
r[0].data_source.data = dict(ColumnDataSource(g).data)
else:
overview.circle(source = g, x='x', y='y', size='size', name='scatter '+k,
color='color', alpha=1, legend_label=rangeset.color2label(k),
muted_color='gray', muted_alpha=0)
def onChangeColoring(event):
'''Actions upon change of the rangeset attribute.
Attributes
----------
event: bokeh.Events.Event
information about the event that triggered the callback
'''
var = event.obj.value
if var == 'None':
clearColoring()
else:
v_range = sliders[var][1].value
setColoring(var, v_range)
selectColoring.param.watch( onChangeColoring, 'value' )
```
User **selection of data points** in the overview chart.
```
def onSelectionChanged(event):
if event.final:
sel_pp = pp[list(overview.select('points').data_source.selected.indices)]
if len(sel_pp) == 0:
source_selection.data = dict({'x': [], 'y': []})
else:
points = MultiPoint(sel_pp)
poly = unary_union([polygon for polygon in triangulate(points) if rangeset._max_edge(polygon) < 5]).boundary.parallel_offset(-0.05).coords.xy
source_selection.data = dict({'x': poly[0].tolist(), 'y': poly[1].tolist()})
overview.on_event(SelectionGeometry, onSelectionChanged)
layout.servable('NoLies')
```
# Triplet Loss for Implicit Feedback Neural Recommender Systems
The goal of this notebook is first to demonstrate how it is possible to build a bi-linear recommender system only using positive feedback data.
In a later section we show that it is possible to train deeper architectures following the same design principles.
This notebook is inspired by Maciej Kula's [Recommendations in Keras using triplet loss](
https://github.com/maciejkula/triplet_recommendations_keras). Unlike Maciej, we won't use the BPR loss but will instead introduce the more common margin-based comparator loss.
## Loading the movielens-100k dataset
For the sake of computation time, we will only use the smallest variant of the movielens reviews dataset. Beware that the architectural choices and hyperparameters that work well on such a toy dataset will not necessarily be representative of the behavior when run on a more realistic dataset such as [Movielens 10M](https://grouplens.org/datasets/movielens/10m/) or the [Yahoo Songs dataset with 700M ratings](https://webscope.sandbox.yahoo.com/catalog.php?datatype=r).
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os.path as op
from zipfile import ZipFile
try:
from urllib.request import urlretrieve
except ImportError: # Python 2 compat
from urllib import urlretrieve
ML_100K_URL = "http://files.grouplens.org/datasets/movielens/ml-100k.zip"
ML_100K_FILENAME = ML_100K_URL.rsplit('/', 1)[1]
ML_100K_FOLDER = 'ml-100k'
if not op.exists(ML_100K_FILENAME):
print('Downloading %s to %s...' % (ML_100K_URL, ML_100K_FILENAME))
urlretrieve(ML_100K_URL, ML_100K_FILENAME)
if not op.exists(ML_100K_FOLDER):
print('Extracting %s to %s...' % (ML_100K_FILENAME, ML_100K_FOLDER))
ZipFile(ML_100K_FILENAME).extractall('.')
data_train = pd.read_csv(op.join(ML_100K_FOLDER, 'ua.base'), sep='\t',
names=["user_id", "item_id", "rating", "timestamp"])
data_test = pd.read_csv(op.join(ML_100K_FOLDER, 'ua.test'), sep='\t',
names=["user_id", "item_id", "rating", "timestamp"])
data_train.describe()
data_train.head()
# data_test.describe()
max_user_id = max(data_train['user_id'].max(), data_test['user_id'].max())
max_item_id = max(data_train['item_id'].max(), data_test['item_id'].max())
n_users = max_user_id + 1
n_items = max_item_id + 1
print('n_users=%d, n_items=%d' % (n_users, n_items))
```
## Implicit feedback data
Consider ratings >= 4 as positive feedback and ignore the rest:
```
pos_data_train = data_train.query("rating >= 4")
pos_data_test = data_test.query("rating >= 4")
```
Because the median rating is around 3.5, this cut will remove approximately half of the ratings from the datasets:
```
pos_data_train['rating'].count()
pos_data_test['rating'].count()
```
## The Triplet Loss
The following section demonstrates how to build a low-rank quadratic interaction model between users and items. The similarity score between a user and an item is defined by the unnormalized dot product of their respective embeddings.
The matching scores can be used to rank items to recommend to a specific user.
Training of the model parameters is achieved by randomly sampling negative items not seen by a pre-selected anchor user. We want the model embedding matrices to be such that the similarity between the user vector and the negative item vector is smaller than the similarity between the user vector and the positive item vector. Furthermore, we use a margin to push the negative item further apart from the anchor user.
Here is the architecture of such a triplet model. The triplet name comes from the fact that the loss to optimize is defined on triples `(anchor_user, positive_item, negative_item)`:
<img src="images/rec_archi_implicit_2.svg" style="width: 600px;" />
We call this model a triplet model with bi-linear interactions because the similarity between a user and an item is captured by a dot product of the first level embedding vectors. This is therefore not a deep architecture.
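In equation form, the margin comparator loss that the code below implements for a triplet $(u, i^+, i^-)$ is:

$$\ell(u, i^+, i^-) = \max\big(0,\; \mathrm{margin} + \mathrm{sim}(u, i^-) - \mathrm{sim}(u, i^+)\big)$$

where $\mathrm{sim}$ is the similarity score (here a cosine similarity) computed from the embeddings.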
```
import tensorflow as tf
def identity_loss(y_true, y_pred):
"""Ignore y_true and return the mean of y_pred
This is a hack to work-around the design of the Keras API that is
not really suited to train networks with a triplet loss by default.
"""
return tf.reduce_mean(y_pred + 0 * y_true)
def margin_comparator_loss(inputs, margin=1.):
"""Comparator loss for a pair of precomputed similarities
If the inputs are cosine similarities, they each have range in
    (-1, 1), therefore their difference has range in (-2, 2). Using
a margin of 1. can therefore make sense.
If the input similarities are not normalized, it can be beneficial
to use larger values for the margin of the comparator loss.
"""
positive_pair_sim, negative_pair_sim = inputs
return tf.maximum(negative_pair_sim - positive_pair_sim + margin, 0)
```
Here is the actual code that builds the model(s) with shared weights. Note that here we use the cosine similarity instead of unnormalized dot products (both seem to yield comparable results).
```
from keras.models import Model
from keras.layers import Embedding, Flatten, Input, Dense, merge
from keras.regularizers import l2
from keras_fixes import dot_mode, cos_mode
def build_models(n_users, n_items, latent_dim=64, l2_reg=0):
"""Build a triplet model and its companion similarity model
The triplet model is used to train the weights of the companion
similarity model. The triplet model takes 1 user, 1 positive item
(relative to the selected user) and one negative item and is
trained with comparator loss.
    The similarity model takes one user and one item as input and returns the
    compatibility score (aka the match score).
"""
# Common architectural components for the two models:
# - symbolic input placeholders
user_input = Input((1,), name='user_input')
positive_item_input = Input((1,), name='positive_item_input')
negative_item_input = Input((1,), name='negative_item_input')
# - embeddings
l2_reg = None if l2_reg == 0 else l2(l2_reg)
user_layer = Embedding(n_users, latent_dim, input_length=1,
name='user_embedding', W_regularizer=l2_reg)
# The following embedding parameters will be shared to encode both
# the positive and negative items.
item_layer = Embedding(n_items, latent_dim, input_length=1,
name="item_embedding", W_regularizer=l2_reg)
user_embedding = Flatten()(user_layer(user_input))
positive_item_embedding = Flatten()(item_layer(positive_item_input))
negative_item_embedding = Flatten()(item_layer(negative_item_input))
# - similarity computation between embeddings
positive_similarity = merge([user_embedding, positive_item_embedding],
mode=cos_mode, output_shape=(1,),
name="positive_similarity")
negative_similarity = merge([user_embedding, negative_item_embedding],
mode=cos_mode, output_shape=(1,),
name="negative_similarity")
# The triplet network model, only used for training
triplet_loss = merge([positive_similarity, negative_similarity],
mode=margin_comparator_loss, output_shape=(1,),
name='comparator_loss')
triplet_model = Model(input=[user_input,
positive_item_input,
negative_item_input],
output=triplet_loss)
    # The match-score model, only used at inference to rank items for a given
    # user: the model weights are shared with the triplet_model, therefore
    # we do not need to train it and we do not need to plug in a loss
    # and an optimizer.
match_model = Model(input=[user_input, positive_item_input],
output=positive_similarity)
return triplet_model, match_model
triplet_model, match_model = build_models(n_users, n_items, latent_dim=64,
l2_reg=1e-6)
```
### Exercise:
How many trainable parameters does each model have? Count the shared parameters only once per model.
```
print(match_model.summary())
print(triplet_model.summary())
# %load solutions/triplet_parameter_count.py
# Analysis:
#
# Both models have exactly the same number of parameters,
# namely the parameters of the 2 embeddings:
#
# - user embedding: n_users x embedding_dim
# - item embedding: n_items x embedding_dim
#
# The triplet model uses the same item embedding twice,
# once to compute the positive similarity and the other
# time to compute the negative similarity. However because
# those two nodes in the computation graph share the same
# instance of the item embedding layer, the item embedding
# weight matrix is shared by the two branches of the
# graph and therefore the total number of parameters for
# each model is in both cases:
#
# (n_users x embedding_dim) + (n_items x embedding_dim)
```
## Quality of Ranked Recommendations
Now that we have a randomly initialized model we can start computing random recommendations. To assess their quality we do the following for each user:
- compute matching scores for items (except the movies that the user has already seen in the training set),
- compare to the positive feedback actually collected on the test set using the ROC AUC ranking metric,
- average ROC AUC scores across users to get the average performance of the recommender model on the test set.
```
from sklearn.metrics import roc_auc_score
def average_roc_auc(match_model, data_train, data_test):
"""Compute the ROC AUC for each user and average over users"""
max_user_id = max(data_train['user_id'].max(), data_test['user_id'].max())
max_item_id = max(data_train['item_id'].max(), data_test['item_id'].max())
user_auc_scores = []
for user_id in range(1, max_user_id + 1):
pos_item_train = data_train[data_train['user_id'] == user_id]
pos_item_test = data_test[data_test['user_id'] == user_id]
        # Rank all items except those already seen in the training set
all_item_ids = np.arange(1, max_item_id + 1)
items_to_rank = np.setdiff1d(all_item_ids, pos_item_train['item_id'].values)
# Ground truth: return 1 for each item positively present in the test set
# and 0 otherwise.
expected = np.in1d(items_to_rank, pos_item_test['item_id'].values)
if np.sum(expected) >= 1:
# At least one positive test value to rank
repeated_user_id = np.empty_like(items_to_rank)
repeated_user_id.fill(user_id)
predicted = match_model.predict([repeated_user_id, items_to_rank],
batch_size=4096)
user_auc_scores.append(roc_auc_score(expected, predicted))
return sum(user_auc_scores) / len(user_auc_scores)
```
By default the model should make predictions that rank the items in random order. The **ROC AUC score** is a ranking score that represents the **expected value of correctly ordering uniformly sampled pairs of recommendations**.
A random (untrained) model should yield 0.50 ROC AUC on average.
```
average_roc_auc(match_model, pos_data_train, pos_data_test)
```
## Training the Triplet Model
Let's now fit the parameters of the model by sampling triplets: for each user, select a movie in the positive feedback set of that user and randomly sample another movie to serve as negative item.
Note that this sampling scheme could be improved by removing items that are marked as positive in the data to remove some label noise. In practice this does not seem to be a problem though.
```
def sample_triplets(pos_data, max_item_id, random_seed=0):
"""Sample negatives at random"""
rng = np.random.RandomState(random_seed)
user_ids = pos_data['user_id'].values
pos_item_ids = pos_data['item_id'].values
neg_item_ids = rng.randint(low=1, high=max_item_id + 1,
size=len(user_ids))
return [user_ids, pos_item_ids, neg_item_ids]
```
Let's train the triplet model:
```
# we plug the identity loss and a fake target variable ignored by
# the model to be able to use the Keras API to train the triplet model
triplet_model.compile(loss=identity_loss, optimizer="adam")
fake_y = np.ones_like(pos_data_train['user_id'])
n_epochs = 15
for i in range(n_epochs):
# Sample new negatives to build different triplets at each epoch
triplet_inputs = sample_triplets(pos_data_train, max_item_id,
random_seed=i)
# Fit the model incrementally by doing a single pass over the
# sampled triplets.
triplet_model.fit(triplet_inputs, fake_y, shuffle=True,
batch_size=64, nb_epoch=1, verbose=2)
# Monitor the convergence of the model
test_auc = average_roc_auc(match_model, pos_data_train, pos_data_test)
print("Epoch %d/%d: test ROC AUC: %0.4f"
% (i + 1, n_epochs, test_auc))
```
## Training a Deep Matching Model on Implicit Feedback
Instead of using hard-coded cosine similarities to predict the match of a `(user_id, item_id)` pair, we can instead specify a deep neural network based parametrisation of the similarity. The parameters of that matching model are also trained with the margin comparator loss:
<img src="images/rec_archi_implicit_1.svg" style="width: 600px;" />
### Exercise to complete as a home assignment:
- Implement a `deep_match_model`, `deep_triplet_model` pair of models
for the architecture described in the schema. The last layer of
the embedded Multi Layer Perceptron outputs a single scalar that
encodes the similarity between a user and a candidate item.
- Evaluate the resulting model by computing the per-user average
ROC AUC score on the test feedback data.
- Check that the AUC ROC score is close to 0.50 for a randomly
initialized model.
- Check that you can reach at least 0.91 ROC AUC with this deep
model (you might need to adjust the hyperparameters).
Hints:
- it is possible to reuse the code to create embeddings from the previous model
definition;
- the concatenation between user and the positive item embedding can be
obtained with:
```py
positive_embeddings_pair = merge([user_embedding, positive_item_embedding],
mode='concat',
name="positive_embeddings_pair")
negative_embeddings_pair = merge([user_embedding, negative_item_embedding],
mode='concat',
name="negative_embeddings_pair")
```
- those embedding pairs should be fed to a shared MLP instance to compute the similarity scores.
```
from keras.models import Sequential
def make_interaction_mlp(input_dim, n_hidden=1, hidden_size=64,
dropout=0, l2_reg=None):
mlp = Sequential()
# TODO:
return mlp
def build_models(n_users, n_items, user_dim=32, item_dim=64,
n_hidden=1, hidden_size=64, dropout=0, l2_reg=0):
# TODO:
# Inputs and the shared embeddings can be defined as previously.
# Use a single instance of the MLP created by make_interaction_mlp()
# and use it twice: once on the positive pair, once on the negative
# pair
interaction_layers = make_interaction_mlp(
user_dim + item_dim, n_hidden=n_hidden, hidden_size=hidden_size,
dropout=dropout, l2_reg=l2_reg)
# Build the models: one for inference, one for triplet training
deep_match_model = None
deep_triplet_model = None
return deep_match_model, deep_triplet_model
# %load solutions/deep_implicit_feedback_recsys.py
from keras.models import Model, Sequential
from keras.layers import Embedding, Flatten, Input, Dense, Dropout, merge
from keras.regularizers import l2
def make_interaction_mlp(input_dim, n_hidden=1, hidden_size=64,
dropout=0, l2_reg=None):
"""Build the shared multi layer perceptron"""
mlp = Sequential()
if n_hidden == 0:
# Plug the output unit directly: this is a simple
        # linear regression model. No dropout required.
mlp.add(Dense(1, input_dim=input_dim,
activation='relu', W_regularizer=l2_reg))
else:
mlp.add(Dense(hidden_size, input_dim=input_dim,
activation='relu', W_regularizer=l2_reg))
mlp.add(Dropout(dropout))
for i in range(n_hidden - 1):
mlp.add(Dense(hidden_size, activation='relu',
W_regularizer=l2_reg))
mlp.add(Dropout(dropout))
mlp.add(Dense(1, activation='relu', W_regularizer=l2_reg))
return mlp
def build_models(n_users, n_items, user_dim=32, item_dim=64,
n_hidden=1, hidden_size=64, dropout=0, l2_reg=0):
"""Build models to train a deep triplet network"""
user_input = Input((1,), name='user_input')
positive_item_input = Input((1,), name='positive_item_input')
negative_item_input = Input((1,), name='negative_item_input')
l2_reg = None if l2_reg == 0 else l2(l2_reg)
user_layer = Embedding(n_users, user_dim, input_length=1,
name='user_embedding', W_regularizer=l2_reg)
# The following embedding parameters will be shared to encode both
# the positive and negative items.
item_layer = Embedding(n_items, item_dim, input_length=1,
name="item_embedding", W_regularizer=l2_reg)
user_embedding = Flatten()(user_layer(user_input))
positive_item_embedding = Flatten()(item_layer(positive_item_input))
negative_item_embedding = Flatten()(item_layer(negative_item_input))
# Similarity computation between embeddings using a MLP similarity
positive_embeddings_pair = merge([user_embedding, positive_item_embedding],
mode='concat',
name="positive_embeddings_pair")
positive_embeddings_pair = Dropout(dropout)(positive_embeddings_pair)
negative_embeddings_pair = merge([user_embedding, negative_item_embedding],
mode='concat',
name="negative_embeddings_pair")
negative_embeddings_pair = Dropout(dropout)(negative_embeddings_pair)
    # Instantiate the shared similarity architecture
interaction_layers = make_interaction_mlp(
user_dim + item_dim, n_hidden=n_hidden, hidden_size=hidden_size,
dropout=dropout, l2_reg=l2_reg)
positive_similarity = interaction_layers(positive_embeddings_pair)
negative_similarity = interaction_layers(negative_embeddings_pair)
# The triplet network model, only used for training
triplet_loss = merge([positive_similarity, negative_similarity],
mode=margin_comparator_loss, output_shape=(1,),
name='comparator_loss')
deep_triplet_model = Model(input=[user_input,
positive_item_input,
negative_item_input],
output=triplet_loss)
# The match-score model, only used at inference
deep_match_model = Model(input=[user_input, positive_item_input],
output=positive_similarity)
return deep_match_model, deep_triplet_model
hyper_parameters = dict(
user_dim=32,
item_dim=64,
n_hidden=1,
hidden_size=128,
dropout=0.1,
l2_reg=0
)
deep_match_model, deep_triplet_model = build_models(n_users, n_items,
**hyper_parameters)
deep_triplet_model.compile(loss=identity_loss, optimizer='adam')
fake_y = np.ones_like(pos_data_train['user_id'])
n_epochs = 15
for i in range(n_epochs):
# Sample new negatives to build different triplets at each epoch
triplet_inputs = sample_triplets(pos_data_train, max_item_id,
random_seed=i)
# Fit the model incrementally by doing a single pass over the
# sampled triplets.
deep_triplet_model.fit(triplet_inputs, fake_y, shuffle=True,
batch_size=64, nb_epoch=1, verbose=2)
# Monitor the convergence of the model
test_auc = average_roc_auc(deep_match_model, pos_data_train, pos_data_test)
print("Epoch %d/%d: test ROC AUC: %0.4f"
% (i + 1, n_epochs, test_auc))
```
### Exercise:
Count the number of parameters in `deep_match_model` and `deep_triplet_model`. Which model has the largest number of parameters?
```
print(deep_match_model.summary())
print(deep_triplet_model.summary())
# %load solutions/deep_triplet_parameter_count.py
# Analysis:
#
# Both models have again exactly the same number of parameters,
# namely the parameters of the 2 embeddings:
#
# - user embedding: n_users x user_dim
# - item embedding: n_items x item_dim
#
# and the parameters of the MLP model used to compute the
# similarity score of an (user, item) pair:
#
# - first hidden layer weights: (user_dim + item_dim) * hidden_size
# - first hidden biases: hidden_size
# - extra hidden layers weights: hidden_size * hidden_size
# - extra hidden layers biases: hidden_size
# - output layer weights: hidden_size * 1
# - output layer biases: 1
#
# The triplet model uses the same item embedding layer
# twice and the same MLP instance twice:
# once to compute the positive similarity and the other
# time to compute the negative similarity. However because
# those two lanes in the computation graph share the same
# instances for the item embedding layer and for the MLP,
# their parameters are shared.
#
# Reminder: MLP stands for multi-layer perceptron, which is a
# common short-hand for Feed Forward Fully Connected Neural
# Network.
```
## Possible Extensions
You can implement any of the following ideas if you want to get a deeper understanding of recommender systems.
### Leverage User and Item metadata
As we did for the Explicit Feedback model, it's also possible to extend our models to take additional user and item metadata as side information when computing the match score.
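As a rough sketch of one possible way to do this (not part of the original notebook), one could add extra inputs for item metadata, e.g. a hypothetical multi-hot genre vector with `n_genres` entries per item, project it with a shared `Dense` layer, and concatenate it with the embeddings before the shared MLP. The sketch below reuses `make_interaction_mlp` and `margin_comparator_loss` defined earlier and keeps the same old-style Keras API as the rest of the notebook; names and dimensions are illustrative.
```
# Hedged sketch: item metadata as side information. The metadata is a
# hypothetical multi-hot genre vector of size n_genres per item.
from keras.models import Model
from keras.layers import Embedding, Flatten, Input, Dense, merge

def build_models_with_meta(n_users, n_items, n_genres,
                           user_dim=32, item_dim=64, meta_dim=16,
                           n_hidden=1, hidden_size=64, dropout=0):
    user_input = Input((1,), name='user_input')
    positive_item_input = Input((1,), name='positive_item_input')
    negative_item_input = Input((1,), name='negative_item_input')
    # One metadata vector per positive / negative item
    positive_meta_input = Input((n_genres,), name='positive_meta_input')
    negative_meta_input = Input((n_genres,), name='negative_meta_input')

    user_layer = Embedding(n_users, user_dim, input_length=1,
                           name='user_embedding')
    item_layer = Embedding(n_items, item_dim, input_length=1,
                           name='item_embedding')
    # Shared projection of the raw metadata vector
    meta_layer = Dense(meta_dim, activation='relu', name='meta_projection')

    user_embedding = Flatten()(user_layer(user_input))
    positive_item_embedding = Flatten()(item_layer(positive_item_input))
    negative_item_embedding = Flatten()(item_layer(negative_item_input))
    positive_meta_embedding = meta_layer(positive_meta_input)
    negative_meta_embedding = meta_layer(negative_meta_input)

    # Shared MLP, fed with user + item + metadata representations
    interaction_layers = make_interaction_mlp(
        user_dim + item_dim + meta_dim,
        n_hidden=n_hidden, hidden_size=hidden_size, dropout=dropout)

    positive_pair = merge(
        [user_embedding, positive_item_embedding, positive_meta_embedding],
        mode='concat', name='positive_pair_with_meta')
    negative_pair = merge(
        [user_embedding, negative_item_embedding, negative_meta_embedding],
        mode='concat', name='negative_pair_with_meta')

    positive_similarity = interaction_layers(positive_pair)
    negative_similarity = interaction_layers(negative_pair)

    triplet_loss = merge([positive_similarity, negative_similarity],
                         mode=margin_comparator_loss, output_shape=(1,),
                         name='comparator_loss')

    deep_triplet_model = Model(
        input=[user_input, positive_item_input, negative_item_input,
               positive_meta_input, negative_meta_input],
        output=triplet_loss)
    deep_match_model = Model(
        input=[user_input, positive_item_input, positive_meta_input],
        output=positive_similarity)
    return deep_match_model, deep_triplet_model

# Training would also require per-item metadata arrays aligned with the
# sampled triplets, which this notebook does not build.
```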
### Better Ranking Metrics
In this notebook we evaluated the quality of the ranked recommendations using the ROC AUC metric. This score reflects the ability of the model to correctly rank any pair of items (sampled uniformly at random among all possible items).
In practice recommender systems will only display a few recommendations to the user (typically 1 to 10). It is typically more informative to use an evaluation metric that characterizes the quality of the top-ranked items and attributes less or no importance to items that are not good recommendations for a specific user. Popular ranking metrics therefore include the **Precision at k** and the **Mean Average Precision**.
You can read up online about those metrics and try to implement them here.
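As a hedged starting point (not part of the original notebook), here is one way to compute precision at k for a single user, following the same conventions as `average_roc_auc` above; the function name, the example user id, and the choice of `k` are illustrative only.
```
# Sketch: precision at k for one user, mirroring average_roc_auc() above
import numpy as np

def precision_at_k(match_model, data_train, data_test, user_id, k=10):
    max_item_id = max(data_train['item_id'].max(), data_test['item_id'].max())
    pos_item_train = data_train[data_train['user_id'] == user_id]
    pos_item_test = data_test[data_test['user_id'] == user_id]

    # Rank only the items not already seen in the training set
    all_item_ids = np.arange(1, max_item_id + 1)
    items_to_rank = np.setdiff1d(all_item_ids, pos_item_train['item_id'].values)

    repeated_user_id = np.empty_like(items_to_rank)
    repeated_user_id.fill(user_id)
    predicted = match_model.predict([repeated_user_id, items_to_rank],
                                    batch_size=4096).ravel()

    # Take the k items with the highest predicted match score and count
    # how many of them appear in the positive test feedback of that user.
    top_k_items = items_to_rank[np.argsort(predicted)[::-1][:k]]
    hits = np.in1d(top_k_items, pos_item_test['item_id'].values)
    return hits.mean()

# Example: precision@10 for an arbitrary user; averaging over all users
# (and implementing Mean Average Precision) is left as an exercise.
print(precision_at_k(match_model, pos_data_train, pos_data_test, user_id=42))
```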
### Hard Negatives Sampling
In this experiment we sampled negative items uniformly at random. However, after training the model for a while, it is possible that the vast majority of sampled negatives already have a similarity much lower than the positive pair, so that the margin comparator loss sets the majority of the gradients to zero, effectively wasting a lot of computation.
Given the current state of the recsys model we could instead sample harder negatives, i.e. negatives with a larger predicted similarity, to keep training the model close to its decision boundary. This strategy is implemented in the WARP loss [1].
The main drawback of hard negative sampling is that it increases the risk of severe overfitting if a significant fraction of the labels are noisy.
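For illustration, here is a sketch of one simple hard-negative strategy: draw a pool of random candidate negatives per positive pair and keep the one the current model scores highest. It assumes the same `[user_ids, positive_item_ids, negative_item_ids]` format returned by `sample_triplets` and item ids in `[1, max_item_id]`; the helper name is hypothetical:
```
import numpy as np

def sample_hard_triplets(pos_data, max_item_id, match_model,
                         n_candidates=20, random_seed=0):
    """Hypothetical variant of `sample_triplets` that keeps, for each positive
    pair, the candidate negative the current match model scores highest."""
    rng = np.random.RandomState(random_seed)
    user_ids = pos_data['user_id'].values
    pos_item_ids = pos_data['item_id'].values
    # Random pool of candidate negatives for each positive pair.
    candidates = rng.randint(low=1, high=max_item_id + 1,
                             size=(len(user_ids), n_candidates))
    # Score all candidates with the current match model.
    scores = match_model.predict(
        [np.repeat(user_ids, n_candidates), candidates.ravel()],
        batch_size=8192, verbose=0).reshape(len(user_ids), n_candidates)
    # The "hardest" negative is the candidate with the highest predicted score.
    hard_neg_ids = candidates[np.arange(len(user_ids)), scores.argmax(axis=1)]
    return [user_ids, pos_item_ids, hard_neg_ids]
```
These harder triplets can then be fed to `deep_triplet_model.fit` exactly like the uniformly sampled ones.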
### Factorization Machines
A very popular recommender systems model is called Factorization Machines [2][3]. They too use low-rank vector representations of the inputs, but they do not use a cosine similarity or a neural network to model user/item compatibility.
It would be possible to adapt our previous Keras code to replace the cosine similarities / MLP with the low-rank FM quadratic interactions by reading through [this gentle introduction](http://tech.adroll.com/blog/data-science/2015/08/25/factorization-machines.html).
If you choose to do so, you can compare the quality of the predictions with those obtained by the [pywFM project](https://github.com/jfloff/pywFM) which provides a Python wrapper for the [official libFM C++ implementation](http://www.libfm.org/). Maciej Kula also maintains [lightfm](http://www.libfm.org/), which implements an efficient and well documented variant in Cython and Python.
## References:
[1] Wsabie: Scaling Up To Large Vocabulary Image Annotation
Jason Weston, Samy Bengio, Nicolas Usunier, 2011
https://research.google.com/pubs/pub37180.html
[2] Factorization Machines, Steffen Rendle, 2010
https://www.ismll.uni-hildesheim.de/pub/pdfs/Rendle2010FM.pdf
[3] Factorization Machines with libFM, Steffen Rendle, 2012
in ACM Trans. Intell. Syst. Technol., 3(3), May.
http://doi.acm.org/10.1145/2168752.2168771
|
github_jupyter
|
```
import tensorflow as tf
tf.constant([[1.,2.,3.], [4.,5.,6.]])
tf.constant(42) # scalar
t = tf.constant([[1.,2.,3.], [4.,5.,6.]])
t.shape # TensorShape([2, 3])
t.dtype # tf.float32
t[:, 1:]
t[..., 1, tf.newaxis]
t + 10
tf.square(t) # element-wise square
t @ tf.transpose(t) # tf.transpose returns the transposed matrix
import numpy as np
a = np.array([2., 4., 5.])
tf.constant(a)
# <tf.Tensor: shape=(3,), dtype=float64, numpy=array([2., 4., 5.])>
np.array(t)
# array([[1., 2., 3.],
# [4., 5., 6.]], dtype=float32)
tf.square(a)
# <tf.Tensor: shape=(3,), dtype=float64, numpy=array([ 4., 16., 25.])>
np.square(t)
# array([[ 1., 4., 9.],
# [16., 25., 36.]], dtype=float32)
t2 = tf.constant(40., dtype=tf.float64)
tf.constant(2.0) + tf.cast(t2, tf.float32)
# <tf.Tensor: shape=(), dtype=float32, numpy=42.0>
v = tf.Variable([[1.,2.,3.], [4.,5.,6.]])
v
v.assign(2 * v)
v[0,1].assign(42)
v[:,2].assign([0., 1.])
v.scatter_nd_update(indices=[[0,0], [1,2]], updates=[100., 200.])
def huber_fn(y_true, y_pred):
error = y_true - y_pred
is_small_error = tf.abs(error) < 1
squared_loss = tf.square(error) / 2
linear_loss = tf.abs(error) - 0.5
return tf.where(is_small_error, squared_loss, linear_loss)
model.compile(loss=huber_fn, optimizer='nadam')
model.fit(X_train, y_train, [...])
from tensorflow.keras.models import load_model
model = load_model("my_model_with_a_custom_loss.h5",
custom_objects={"huber_fn": huber_fn})
def create_huber(threshold=1.0):
def huber_fn(y_true, y_pred):
error = y_true - y_pred
is_small_error = tf.abs(error) < threshold
squared_loss = tf.square(error) / 2
linear_loss = threshold * tf.abs(error) - threshold**2 / 2
return tf.where(is_small_error, squared_loss, linear_loss)
return huber_fn
model.compile(loss=create_huber(2.0), optimizer="nadam")
model = load_model("my_model_with_a_custom_loss_threshold_2.h5",
custom_objects={"huber_fn": create_huber(2.0)})
from tensorflow.keras.losses import Loss
class HuberLoss(Loss):
def __init__(self, threshold=1.0, **kwargs):
self.threshold = threshold
super().__init__(**kwargs)
def call(self, y_true, y_pred):
error = y_true - y_pred
is_small_error = tf.abs(error) < self.threshold
squared_loss = tf.square(error) / 2
linear_loss = self.threshold * tf.abs(error) - self.threshold**2 / 2
return tf.where(is_small_error, squared_loss, linear_loss)
def get_config(self):
base_config = super().get_config()
return {**base_config, "threshold": self.threshold}
model.compile(loss=HuberLoss(2.), optimizer="nadam")
model = load_model("my_model_with_a_custom_loss_class.h5",
custom_objects={"HuberLoss": HuberLoss})
import tensorflow as tf
# Custom activation function (equivalent to keras.activations.softplus())
def my_softplus(z):
return tf.math.log(tf.exp(z) + 1.0)
# Custom Glorot initializer (equivalent to keras.initializers.glorot_normal())
def my_glorot_initializer(shape, dtype=tf.float32):
stddev = tf.sqrt(2. / (shape[0] + shape[1]))
return tf.random.normal(shape, stddev=stddev, dtype=dtype)
def my_l1_regularizer(weights):
return tf.reduce_sum(tf.abs(0.01 * weights))
def my_positive_weights(weights):
return tf.where(weights < 0., tf.zeros_like(weights), weights)
from tensorflow.keras.layers import Dense
layer = Dense(30, activation=my_softplus,
              kernel_initializer=my_glorot_initializer,
              kernel_regularizer=my_l1_regularizer,
              kernel_constraint=my_positive_weights)
model.compile(loss='mse', optimizer='nadam', metrics=[create_huber(2.0)])
from tensorflow.keras.metrics import Metric
import tensorflow as tf
class HuberMetric(Metric):
def __init__(self, threshold=1.0, **kwargs):
super().__init__(**kwargs)
self.threshold = threshold
self.huber_fn = create_huber(threshold)
self.total = self.add_weight('total', initializer='zeros')
        self.count = self.add_weight('count', initializer='zeros')
def update_state(self, y_true, y_pred, sample_weight=None):
metric = self.huber_fn(y_true, y_pred)
self.total.assign_add(tf.reduce_sum(metric))
        self.count.assign_add(tf.cast(tf.size(y_true), tf.float32))
def result(self):
return self.total / self.count
def get_config(self):
        base_config = super().get_config()
return {**base_config, "threshold":self.threshold}
from tensorflow.keras.layers import Layer
class MyDense(Layer):
def __init__(self, units, activation=None, **kwargs):
super().__init__(**kwargs)
self.units = units
self.activation = tf.keras.activations.get(activation)
def build(self, batch_input_shape):
self.kernel = self.add_weight(
name='kernel', shape=[batch_input_shape[-1], self.units],
initializer = 'glorot_normal'
)
self.bias = self.add_weight(
name='bias', shape=[self.units], initializer='zeros'
)
super().build(batch_input_shape)
def call(self, X):
return self.activation(X @ self.kernel + self.bias)
    def compute_output_shape(self, batch_input_shape):
return tf.TensorShape(batch_input_shape.as_list()[:-1] + [self.units])
def get_config(self):
base_config = super().get_config()
return {**base_config, 'units':self.units,
'activation': tf.keras.activations.serialize(self.activation)}
class MyMultiLayer(tf.keras.layers.Layer):
def call(self, X):
X1, X2 = X
return [X1 + X2, X1 * X2, X1 / X2]
def compute_output_shape(self, batch_input_shape):
b1, b2 = batch_input_shape
return [b1, b1, b1]
class MyGaussianNoise(tf.keras.layers.Layer):
def __init__(self, stddev, **kwargs):
super().__init__(**kwargs)
self.stddev = stddev
def call(self, X, training=None):
if training:
noise = tf.random.normal(tf.shape(X), stddev=self.stddev)
return X + noise
else:
return X
def compute_output_shape(self, batch_input_shape):
return batch_input_shape
import tensorflow as tf
class ResidualBlock(tf.keras.layers.Layer):
def __init__(self, n_layers, n_neurons, **kwargs):
super().__init__(**kwargs)
self.hidden = [tf.keras.layers.Dense(n_neurons, activation='elu',
kernel_initializer='he_normal')
for _ in range(n_layers)]
def call(self, inputs):
Z = inputs
for layer in self.hidden:
Z = layer(Z)
return inputs + Z
class ResidualRegressor(tf.keras.Model):
def __init__(self, output_dim, **kwargs):
super().__init__(**kwargs)
self.hidden1 = tf.keras.layers.Dense(30, activation='elu',
kernel_initializer='he_normal')
self.block1 = ResidualBlock(2, 30)
self.block2 = ResidualBlock(2, 30)
self.out = tf.keras.layers.Dense(output_dim)
def call(self, inputs):
Z = self.hidden1(inputs)
for _ in range(1 + 3):
Z = self.block1(Z)
Z = self.block2(Z)
return self.out(Z)
class ReconstructingRegressor(tf.keras.Model):
def __init__(self, output_dim, **kwargs):
super().__init__(**kwargs)
self.hidden = [tf.keras.layers.Dense(30, activation='selu',
kernel_initializer='lecun_normal')
for _ in range(5)]
self.out = tf.keras.layers.Dense(output_dim)
def build(self, batch_input_shape):
n_inputs = batch_input_shape[-1]
self.reconstruct = tf.keras.layers.Dense(n_inputs)
super().build(batch_input_shape)
def call(self, inputs):
Z = inputs
for layer in self.hidden:
Z = layer(Z)
reconstruction = self.reconstruct(Z)
recon_loss = tf.reduce_mean(tf.square(reconstruction - inputs))
self.add_loss(0.05 * recon_loss)
return self.out(Z)
def f(w1, w2):
return 3 * w1 ** 2 + 2 * w1 * w2
w1, w2 = 5, 3; eps = 1e-6
print((f(w1 + eps, w2) - f(w1, w2)) / eps) # 36.000003007075065
print((f(w1, w2 + eps) - f(w1, w2)) / eps) # 10.000000003174137
w1, w2 = tf.Variable(5.), tf.Variable(3.)
with tf.GradientTape() as tape:
z = f(w1, w2)
gradients = tape.gradient(z, [w1, w2])
gradients
# [<tf.Tensor: shape=(), dtype=float32, numpy=36.0>,
# <tf.Tensor: shape=(), dtype=float32, numpy=10.0>]
l2_reg = tf.keras.regularizers.l2(0.05)
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(30, activation='elu', kernel_initializer='he_normal',
                          kernel_regularizer=l2_reg),
tf.keras.layers.Dense(1, kernel_regularizer=l2_reg)
])
def random_batch(X, y, batch_size=32):
idx = np.random.randint(len(X), size=batch_size)
return X[idx], y[idx]
def print_status_bar(iteration, total, loss, metrics=None):
metrics = " - ".join(["{}: {:.4f}".format(m.name, m.result())
for m in [loss] + (metrics or [])])
end = "" if iteration < total else "\n"
print("\r{}/{} - ".format(iteration, total) + metrics,
end=end)
n_epochs = 5
batch_size = 32
n_steps = len(X_train) // batch_size
optimizer = tf.keras.optimizers.Nadam(lr=0.01)
loss_fn = tf.keras.losses.mean_squared_error
mean_loss = tf.keras.metrics.Mean()
metrics = [tf.keras.metrics.MeanAbsoluteError()]
for epoch in range(1, n_epochs + 1):
    print('Epoch {}/{}'.format(epoch, n_epochs))
for step in range(1, n_steps + 1):
X_batch, y_batch = random_batch(X_train_scaled, y_train)
with tf.GradientTape() as tape:
y_pred = model(X_batch, training=True)
main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
loss = tf.add_n([main_loss] + model.losses)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
mean_loss(loss)
for metric in metrics:
metric(y_batch, y_pred)
print_status_bar(step * batch_size, len(y_train), mean_loss, metrics)
print_status_bar(len(y_train), len(y_train), mean_loss, metrics)
for metric in [mean_loss] + metrics:
metric.reset_states()
```
|
github_jupyter
|
*A regular expression* is a sequence of characters used to search for and replace text in a string or a file.
Regular expressions use two types of characters:
- special characters: as the name suggests, these characters have special meanings, similar to the * character, which usually means "any character" (but works a bit differently in regular expressions, as discussed below);
- literals (for example: a, b, 1, 2, etc.).
```
# Implemented in the built-in re module
import re
help(re)
re.match() # Try to apply the pattern at the start of the string, returning a match object, or None if no match was found.
re.search() # Scan through string looking for a match to the pattern, returning a match object, or None if no match was found.
re.findall() # Return a list of all non-overlapping matches in the string.
re.split() # Split the source string by the occurrences of the pattern, returning a list containing the resulting substrings.
re.sub() # Return the string obtained by replacing the leftmost non-overlapping occurrences of the pattern in string by the
#replacement repl
re.compile() # To compile a regular expression into a separate reusable object
```
## re.match(pattern, string):
```
result = re.match('AV', 'AV Analytics Vidhya')
print (result)
print (result.start()) # start and end of the matched substring
print (result.end())
result.group(0) # what exactly was matched
result = re.match(r'Analytics', 'AV Analytics Vidhya AV')
print (result)
```
## re.search(pattern, string):
```
result = re.search(r'AV', 'AV Analytics Vidhya AV')
print (result.group(0))
result
```
## re.findall(pattern, string):
```
result = re.findall(r'AV', 'AV Analytics Vidhya AV')
result
```
## re.split(pattern, string, [maxsplit=0]):
```
result = re.split(r'y', 'Analytics')
print (result)
```
```
result = re.split(r'i', 'Analytics Vidhya')
print (result) # all the resulting parts.
result = re.split(r'i', 'Analytics Vidhya',maxsplit=1)
print (result)
```
## re.sub(pattern, repl, string):
```
result = re.sub(r'India', 'the World', 'AV is largest Analytics community of India')
print (result)
```
## re.compile(pattern):
```
pattern = re.compile('AV')
result = pattern.findall('AV Analytics Vidhya AV')
print (result)
result2 = pattern.findall('AV is largest analytics community of India')
print (result2)
```
## Counting the number of matches
```
import re
c = re.compile(r'[0-9]+?')
str = '32 43 23423424'
print(re.findall(c, str))
# Example 1. How to get all the numbers out of a string
price = '324234dfgdg34234DFDJ343'
b = "[a-zA-Z]*" # regular expression for a sequence of letters of any length
nums = re.sub(b,"",price)
print (nums)
```
### Task 1
```
# let's try to extract every character (using .)
# the space did not make it into the final result
# Now let's extract the first word
# And now let's extract the first word
```
### Task 2
```
# using \w, extract two consecutive characters (except whitespace) from each word
# extract two consecutive characters, using the word-boundary anchor (\b)
```
### Task 3
Return the list of domains from a list of e-mail addresses
```
# First, return all the characters after the "@":
str = '[email protected], [email protected], [email protected], [email protected] [email protected]@@'
# If the ".com", ".in", etc. parts did not make it into the result, let's change our code:
# extract only the top-level domain, using grouping ( )
# Get the list of e-mail addresses
```
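One possible solution; since the addresses in the cell above were mangled, the list below is a made-up example:
```
import re

emails = 'abc.test@gmail.com, xyz@test.in, test.first@analyticsvidhya.com, first.test@rest.biz'
# all characters after the "@":
print (re.findall(r'@\w+', emails))
# include the top-level part as well:
print (re.findall(r'@\w+\.\w+', emails))
# only the top-level domain, using grouping ( ):
print (re.findall(r'@\w+\.(\w+)', emails))
# the full e-mail addresses:
print (re.findall(r'[\w.]+@\w+\.\w+', emails))
```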
### Task 4:
```
# Extract the dates from the string
str = 'Amit 34-3456 12-05-2007, XYZ 56-4532 11-11-2011, ABC 67-8945 12-01-2009'
# And now only the years, using parentheses and grouping:
```
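One possible solution (a sketch):
```
import re

s = 'Amit 34-3456 12-05-2007, XYZ 56-4532 11-11-2011, ABC 67-8945 12-01-2009'
# the full dates:
print (re.findall(r'\d{2}-\d{2}-\d{4}', s))
# only the years, with parentheses and grouping:
print (re.findall(r'\d{2}-\d{2}-(\d{4})', s))
```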
### Task 5
```
# Extract all words that start with a vowel. But first, get all the words (\b - word boundaries)
```
### Task 6:
Validate a phone number (the number must be 10 digits long and start with 8 or 9)
```
li = ['9999999999', '999999-999', '99999x9999']
for val in li:
print()
```
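One possible solution (a sketch):
```
import re

li = ['9999999999', '999999-999', '99999x9999']
for val in li:
    # a valid number is exactly 10 digits long and starts with 8 or 9
    if re.match(r'[89]\d{9}$', val):
        print (val, '- valid number')
    else:
        print (val, '- invalid number')
```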
### Task 7:
Split a string on several delimiters
```
line = 'asdf fjdk;afed,fjek,asdf,foo' # String has multiple delimiters (";",","," ").
```
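One possible solution (a sketch):
```
import re

line = 'asdf fjdk;afed,fjek,asdf,foo'
# a character class with all three delimiters, plus any trailing whitespace
print (re.split(r'[;,\s]\s*', line))
# ['asdf', 'fjdk', 'afed', 'fjek', 'asdf', 'foo']
```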
# Greedy vs. non-greedy matching
```
s = '<html><head><title>Title</title>'
print (len(s))
print (re.match('<.*>', s).span())
print (re.match('<.*>', s).group())
```
```
print (re.match('<.*?>', s).group())
c = re.compile(r'\d+')
str = '0123456789'
tuples = re.findall(c, str)
print(tuples)
# But how do we get each digit separately?
```
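One way to get every digit as a separate match (a sketch):
```
import re

s = '0123456789'
print (re.findall(r'\d', s))    # one match per digit
print (re.findall(r'\d+?', s))  # the non-greedy quantifier also stops after a single digit
```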
## Miscellaneous
### Lookahead and lookbehind (lookaround)
```
#(?=...) - positive lookahead
s = "text1, text2, text3 text4"
p = re.compile(r"\w+(?=[,])", re.S | re.I) # all words followed by a comma
print (p.findall(s))
#(?!...) - negative lookahead
import re
s = "text1, text2, text3 text4"
p = re.compile(r"[a-z]+[0-9](?![,])", re.S | re.I) # all words not followed by a comma
print(p.findall(s))
#(?<=...) - positive lookbehind
import re
s = "text1, text2, text3 text4"
p = re.compile(r"(?<=[,][ ])([a-z]+[0-9])", re.S | re.I) # all words preceded by a comma and a space
print (p.findall(s))
#(?<!...) - negative lookbehind
s = "text1, text2, text3 text4"
p = re.compile(r"(?<![,]) ([a-z]+[0-9])", re.S | re.I) # all words preceded by a space but not by a comma
print (p.findall(s))
# Given the text:
str = 'ruby python 456 java 789 j2not clash2win'
# Task: find all mentions of programming languages in the string.
pattern = 'ruby|java|python|c#|fortran|c\+\+'
string = 'ruby python 456 java 789 j2not clash2win'
re.findall(pattern, string)
```
## Decide whether you actually need regular expressions for the task at hand. A different approach may turn out to be much faster.
|
github_jupyter
|
Define the network:
```
import torch # PyTorch base
from torch.autograd import Variable # Tensor class w gradients
import torch.nn as nn # modules, layers, loss fns
import torch.nn.functional as F # Conv,Pool,Loss,Actvn,Nrmlz fns from here
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel, 6 output channels, 5x5 square convolution
# Kernel
self.conv1 = nn.Conv2d(1, 6, 5)
self.conv2 = nn.Conv2d(6, 16, 5)
# an Affine Operation: y = Wx + b
self.fc1 = nn.Linear(16*5*5, 120) # Linear is Dense/Fully-Connected
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# Max pooling over a (2, 2) window
# x = torch.nn.functional.max_pool2d(torch.nn.functional.relu(self.conv1(x)), (2,2))
x = F.max_pool2d(F.relu(self.conv1(x)), (2,2))
# If size is a square you can only specify a single number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
net = Net()
print(net)
```
You just have to define the `forward` function, and the `backward` function (where the gradients are computed) is automatically defined for you using `autograd`. You can use any of the Tensor operations in the `forward` function.
The learnable parameters of a model are returned by `net.parameters()`
```
pars = list(net.parameters())
print(len(pars))
print(pars[0].size()) # conv1's .weight
```
The input to the forward is an `autograd.Variable`, and so is the output. **NOTE**: Expected input size to this net (LeNet) is 32x32. To use this net on the MNIST dataset, please resize the images from the dataset to 32x32.
```
input = Variable(torch.randn(1, 1, 32, 32))
out = net(input)
print(out)
```
Zero the gradient buffers of all parameters and backprops with random gradients:
```
net.zero_grad()
out.backward(torch.randn(1, 10))
```
**NOTE**:
`torch.nn` only supports mini-batches. The entire `torch.nn` package only supports inputs that are a mini-batch of samples, and not a single sample.
For example, `nn.Conv2d` will take in a 4D Tensor of `nSamples x nChannels x Height x Width`.
If you have a single sample, just use `input.unsqueeze(0)` to add a fake batch dimension.
Before proceeding further, let's recap all the classes you've seen so far.
**Recap**:
* `torch.Tensor` - A *multi-dimensional array*.
* `autograd.Variable` - *Wraps a Tensor and records the history of operations* applied to it. Has the same API as a `Tensor`, with some additions like `backward()`. Also *holds the gradient* wrt the tensor.
* `nn.Module` - Neural network module. *Convenient way of encapsulating parameters*, with helpers for moving them to GPU, exporting, loading, etc.
* `nn.Parameter` - A kind of Variable, that is *automatically registered as a parameter when assigned as an attribute to a* `Module`.
* `autograd.Function` - Implements *forward and backward definitions of an autograd operation*. Every `Variable` operation creates at least a single `Function` node that connects to functions that created a `Variable` and *encodes its history*.
**At this point, we covered:**
* Defining a neural network
* Processing inputs and calling backward.
**Still Left:**
* Computing the loss
* Updating the weights of the network
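Both remaining steps can be sketched in a few lines. This is a minimal illustration only: it reuses `net` and `input` from the cells above, invents a random `target`, and picks `MSELoss` with plain SGD purely as an example.
```
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable

# Dummy target with the same shape as the network output (1 x 10)
target = Variable(torch.randn(1, 10))

criterion = nn.MSELoss()                          # loss function
optimizer = optim.SGD(net.parameters(), lr=0.01)  # plain SGD

optimizer.zero_grad()              # clear the gradient buffers
output = net(input)                # forward pass
loss = criterion(output, target)   # compute the loss
loss.backward()                    # backpropagate
optimizer.step()                   # update the weights
print(loss)
```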
|
github_jupyter
|
# EuroSciPy 2018: NumPy tutorial (https://github.com/gertingold/euroscipy-numpy-tutorial)
## Let's do some slicing
```
mylist = list(range(10))
print(mylist)
```
Use slicing to produce the following outputs:
[2, 3, 4, 5]
[0, 1, 2, 3, 4]
[6, 7, 8, 9]
[0, 2, 4, 6, 8]
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
[7, 5, 3]
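One possible set of solutions (a sketch, assuming `mylist` from the cell above):
```
print(mylist[2:6])     # [2, 3, 4, 5]
print(mylist[:5])      # [0, 1, 2, 3, 4]
print(mylist[6:])      # [6, 7, 8, 9]
print(mylist[::2])     # [0, 2, 4, 6, 8]
print(mylist[::-1])    # [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
print(mylist[7:2:-2])  # [7, 5, 3]
```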
## Matrices and lists of lists
```
matrix = [[0, 1, 2],
[3, 4, 5],
[6, 7, 8]]
```
Get the second row by slicing twice
Try to get the second column by slicing. ~~Do not use a list comprehension!~~
## Getting started
Import the NumPy package
## Create an array
```
np.lookfor('create array')
help(np.array) # remember: Shift + Tab gives a pop-up help
```
Press Shift + Tab when your cursor is in a Code cell. This will open a pop-up with some help text.
What happens when Shift + Tab is pressed a second time?
The variable `matrix` contains a list of lists. Turn it into an `ndarray` and assign it to the variable `myarray`. Verify that its type is correct.
For practicing purposes, arrays can conveniently be created with the `arange` method.
```
myarray1 = np.arange(6)
myarray1
def array_attributes(a):
for attr in ('ndim', 'size', 'itemsize', 'dtype', 'shape', 'strides'):
print('{:8s}: {}'.format(attr, getattr(a, attr)))
array_attributes(myarray1)
```
## Data types
Use `np.array()` to create arrays containing
* floats
* complex numbers
* booleans
* strings
and check the `dtype` attribute.
Do you understand what is happening in the following statement?
```
np.arange(1, 160, 10, dtype=np.int8)
```
## Strides/Reshape
```
myarray2 = myarray1.reshape(2, 3)
myarray2
array_attributes(myarray2)
myarray3 = myarray1.reshape(3, 2)
array_attributes(myarray3)
```
## Views
Set the first entry of `myarray1` to a new value, e.g. 42.
What happened to `myarray2`?
What happens when a matrix is transposed?
```
a = np.arange(9).reshape(3, 3)
a
a.T
```
Check the strides!
```
a.strides
a.T.strides
```
## View versus copy
identical object
```
a = np.arange(4)
b = a
id(a), id(b)
```
view: a different object working on the same data
```
b = a[:]
id(a), id(b)
a[0] = 42
a, b
```
an independent copy
```
a = np.arange(4)
b = np.copy(a)
id(a), id(b)
a[0] = 42
a, b
```
## Some array creation routines
### numerical ranges
`arange(`*start*, *stop*, *step*`)`, *stop* is not included in the array
```
np.arange(5, 30, 5)
```
`arange` resembles `range`, but also works for floats
Create the array [1, 1.1, 1.2, 1.3, 1.4, 1.5]
`linspace(`*start*, *stop*, *num*`)` determines the step to produce *num* equally spaced values, *stop* is included by default
Create the array [1., 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.]
For equally spaced values on a logarithmic scale, use `logspace`.
```
np.logspace(-2, 2, 5)
np.logspace(0, 4, 9, base=2)
```
### Application
```
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(0, 10, 100)
y = np.cos(x)
plt.plot(x, y)
```
### Homogeneous data
```
np.zeros((4, 4))
```
Create a 4x4 array with integer zeros
```
np.ones((2, 3, 3))
```
Create a 3x3 array filled with tens
### Diagonal elements
```
np.diag([1, 2, 3, 4])
```
`diag` has an optional argument `k`. Try to find out what its effect is.
Replace the 1d array by a 2d array. What does `diag` do?
```
np.info(np.eye)
```
Create the 3x3 array
```[[2, 1, 0],
[1, 2, 1],
[0, 1, 2]]
```
### Random numbers
What is the effect of np.random.seed?
```
np.random.seed()
np.random.rand(5, 2)
np.random.seed(1234)
np.random.rand(5, 2)
data = np.random.rand(20, 20)
plt.imshow(data, cmap=plt.cm.hot, interpolation='none')
plt.colorbar()
casts = np.random.randint(1, 7, (100, 3))
plt.hist(casts, np.linspace(0.5, 6.5, 7))
```
## Indexing and slicing
### 1d arrays
```
a = np.arange(10)
```
Create the array [7, 8, 9]
Create the array [2, 4, 6, 8]
Create the array [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
### Higher dimensions
```
a = np.arange(40).reshape(5, 8)
```
Create the array [[21, 22, 23], [29, 30, 31], [37, 38, 39]]
Create the array [ 3, 11, 19, 27, 35]
Create the array [11, 12, 13]
Create the array [[ 8, 11, 14], [24, 27, 30]]
## Fancy indexing ‒ Boolean mask
```
a = np.arange(40).reshape(5, 8)
a % 3 == 0
a[a % 3 == 0]
a[(1, 1, 2, 2, 3, 3), (3, 4, 2, 5, 3, 4)]
```
## Axes
Create an array and calculate the sum over all elements
Now calculate the sum along axis 0 ...
and now along axis 1
Identify the axis in the following array
```
a = np.arange(24).reshape(2, 3, 4)
a
```
## Axes in more than two dimensions
Create a three-dimensional array
Produce a two-dimensional array by cutting along axis 0 ...
and axis 1 ...
and axis 2
What do you get by simply using the index `[0]`?
What do you get by using `[..., 0]`?
## Exploring numerical operations
```
a = np.arange(4)
b = np.arange(4, 8)
a, b
a+b
a*b
```
Operations are elementwise. Check this by multiplying two 2d arrays...
... and now do a real matrix multiplication
## Application: Random walk
```
length_of_walk = 10000
realizations = 5
angles = 2*np.pi*np.random.rand(length_of_walk, realizations)
x = np.cumsum(np.cos(angles), axis=0)
y = np.cumsum(np.sin(angles), axis=0)
plt.plot(x, y)
plt.axis('scaled')
plt.plot(np.hypot(x, y))
plt.plot(np.mean(x**2+y**2, axis=1))
plt.axis('scaled')
```
## Let's check the speed
```
%%timeit a = np.arange(1000000)
a**2
%%timeit xvals = range(1000000)
[xval**2 for xval in xvals]
%%timeit a = np.arange(100000)
np.sin(a)
import math
%%timeit xvals = range(100000)
[math.sin(xval) for xval in xvals]
```
## Broadcasting
```
a = np.arange(12).reshape(3, 4)
a
a+1
a+np.arange(4)
a+np.arange(3)
np.arange(3)
np.arange(3).reshape(3, 1)
a+np.arange(3).reshape(3, 1)
```
Create a multiplication table for the numbers from 1 to 10 starting from two appropriately chosen 1d arrays.
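One possible solution (a sketch):
```
import numpy as np

numbers = np.arange(1, 11)
# a (10, 1) column broadcast against a (10,) row gives the (10, 10) table
table = numbers.reshape(10, 1) * numbers
print(table)
```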
As an alternative to `reshape`, one can add additional axes with `np.newaxis` or `None`:
```
a = np.arange(5)
b = a[:, np.newaxis]
```
Check the shapes.
## Functions of two variables
```
x = np.linspace(-40, 40, 200)
y = x[:, np.newaxis]
z = np.sin(np.hypot(x-10, y))+np.sin(np.hypot(x+10, y))
plt.imshow(z, cmap='viridis')
x, y = np.mgrid[-10:10:0.1, -10:10:0.1]
x
y
plt.imshow(np.sin(x*y))
x, y = np.mgrid[-10:10:50j, -10:10:50j]
x
y
plt.imshow(np.arctan2(x, y))
```
It is natural to use broadcasting. Check out what happens when you replace `mgrid` by `ogrid`.
## Linear Algebra in NumPy
```
a = np.arange(4).reshape(2, 2)
eigenvalues, eigenvectors = np.linalg.eig(a)
eigenvalues
eigenvectors
```
Explore whether the eigenvectors are the rows or the columns.
Try out `eigvals` and other functions offered by `linalg` that you are interested in.
## Application: identify entry closest to ½
Create a 2d array containing random numbers and generate a vector containing for each row the entry closest to one-half.
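One possible solution (a sketch):
```
import numpy as np

a = np.random.rand(5, 4)
# column index of the entry closest to 0.5 in each row ...
idx = np.abs(a - 0.5).argmin(axis=1)
# ... and fancy indexing to pick one entry per row
closest = a[np.arange(a.shape[0]), idx]
print(a)
print(closest)
```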
|
github_jupyter
|
## 8.5 Optimization of Basic Blocks
### 8.5.1
> Construct the DAG for the basic block
> ```
d = b * c
e = a + b
b = b * c
a = e - d
```
```
+--+--+
| - | a
+-+++-+
| |
+---+ +---+
| |
+--v--+ +--v--+
e | + | | * | d,b
+-+++-+ +-+-+-+
| | | |
+---+ +--+ +--+ +---+
| | | |
v v v v
a0 b0 c
```
### 8.5.2
> Simplify the three-address code of Exercise 8.5.1, assuming
> a) Only $a$ is live on exit from the block.
```
d = b * c
e = a + b
a = e - d
```
> b) $a$, $b$, and $c$ are live on exit from the block.
```
e = a + b
b = b * c
a = e - b
```
### 8.5.3
> Construct the DAG for the code in block $B_6$ of Fig. 8.9. Do not forget to include the comparison $i \le 10$.
```
+-----+
| []= | a[t0]
++-+-++
| | |
+--------+ | +-------+
| | |
v +--v--+ v +-----+
a t6 | * | 1.0 | <= |
+-+-+-+ +-+-+-+
| | | |
+------+ +--+ +---+ +---+
| | | |
v +-----+ +-----+ v
88 t5 | - | i | + | 10
+-+++-+ +-++-++
| | | |
+-----------------+ |
| | | |
+-----+ +----+-------+
v v
i0 1
```
### 8.5.4
> Construct the DAG for the code in block $B_3$ of Fig. 8.9.
```
+-----+
| []= | a[t4]
++-+-++
| | |
+--------+ | +--------+
| | |
v +--v--+ v
a0 t4 | - | 0.0
+-+-+-+
| |
+---+ +---+
| |
+--v--+ v +-----+
t3 | * | 88 | <= |
+-+-+-+ +---+-+
| | | |
+-----+ +---+ +---+ +---+
| | | |
v +--v--+ +--v--+ v
8 t2 | + | j | + | 10
+-+-+-+ +-+-+-+
| | | |
+---+ +--------+ +---+
| || |
+--v--+ vv v
t1 | * | j0 1
+-+-+-+
| |
+-----+ +---+
v v
10 i
```
### 8.5.5
> Extend Algorithm 8.7 to process three-address statements of the form
> a) `a[i] = b`
> b) `a = b[i]`
> c) `a = *b`
> d) `*a = b`
1. ...
2. set $a$ to "not live" and "no next use".
3. set $b$ (and $i$) to "live" and the next uses of $b$ (and $i$) to this statement.
### 8.5.6
> Construct the DAG for the basic block
> ```
a[i] = b
*p = c
d = a[j]
e = *p
*p = a[i]
```
> on the assumption that
> a) `p` can point anywhere.
Calculate `a[i]` twice.
> b) `p` can point only to `b` or `d`.
```
+-----+ +-----+
| *= | *p e | =* |
+--+--+ +--+--+
| |
| |
| |
+--v--+ +-----+ +--v--+
a[i] | []= | d | []= | *p0 | *= |
++-+-++ ++-+-++ +--+--+
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
+------------------------+ | |
| | | | | | |
+------------------------------+ | |
| | | | | | |
| | | | +------------------------+ |
| | | | | | |
| <--+ +------+ +----+ | |
v v v v v v
d a i j b c
```
### 8.5.7
> Revise the DAG-construction algorithm to take advantage of such situations, and apply your algorithm to the code of Exercise 8.5.6.
```
a[i] = b
d = a[j]
e = c
*p = b
```
### 8.5.8
> Suppose a basic block is formed from the C assignment statements
> ```
x = a + b + c + d + e + f;
y = a + c + e;
```
> a) Give the three-address statements for this block.
```
t1 = a + b
t2 = t1 + c
t3 = t2 + d
t4 = t3 + e
t5 = t4 + f
x = t5
t6 = a + c
t7 = t6 + e
y = t7
```
> b) Use the associative and commutative laws to modify the block to use the fewest possible number of instructions, assuming both `x` and `y` are live on exit from the block.
```
t1 = a + c
t2 = t1 + e
y = t2
t3 = t2 + b
t4 = t3 + d
t5 = t4 + f
x = t5
```
|
github_jupyter
|
*Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python" by [Sebastian Raschka](https://sebastianraschka.com). All code examples are released under the [MIT license](https://github.com/rasbt/deep-learning-book/blob/master/LICENSE). If you find this content useful, please consider supporting the work by buying a [copy of the book](https://leanpub.com/ann-and-deeplearning).*
Other code examples and content are available on [GitHub](https://github.com/rasbt/deep-learning-book). The PDF and ebook versions of the book are available through [Leanpub](https://leanpub.com/ann-and-deeplearning).
```
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
```
# Model Zoo -- Using PyTorch Dataset Loading Utilities for Custom Datasets (Cropped SVHN)
This notebook provides an example for how to load an image dataset, stored as individual PNG files, using PyTorch's data loading utilities. For a more in-depth discussion, please see the official
- [Data Loading and Processing Tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html)
- [torch.utils.data](http://pytorch.org/docs/master/data.html) API documentation
In this example, we are using the cropped version of the **Street View House Numbers (SVHN) Dataset**, which is available at http://ufldl.stanford.edu/housenumbers/.
To execute the following examples, you need to download the 2 ".mat" files
- [train_32x32.mat](http://ufldl.stanford.edu/housenumbers/train_32x32.mat) (ca. 182 Mb, 73,257 images)
- [test_32x32.mat](http://ufldl.stanford.edu/housenumbers/test_32x32.mat) (ca. 65 Mb, 26,032 images)
## Imports
```
import pandas as pd
import numpy as np
import os
import torch
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
from torchvision import transforms
import matplotlib.pyplot as plt
from PIL import Image
import scipy.io as sio
import imageio
```
## Dataset
The following function will convert the images from ".mat" into individual ".png" files. In addition, we will create CSV files containing the image paths and associated class labels.
```
def make_pngs(main_dir, mat_file, label):
if not os.path.exists(main_dir):
os.mkdir(main_dir)
sub_dir = os.path.join(main_dir, label)
if not os.path.exists(sub_dir):
os.mkdir(sub_dir)
data = sio.loadmat(mat_file)
X = np.transpose(data['X'], (3, 0, 1, 2))
y = data['y'].flatten()
with open(os.path.join(main_dir, '%s_labels.csv' % label), 'w') as out_f:
for i, img in enumerate(X):
file_path = os.path.join(sub_dir, str(i) + '.png')
imageio.imwrite(os.path.join(file_path),
img)
out_f.write("%d.png,%d\n" % (i, y[i]))
make_pngs(main_dir='svhn_cropped',
mat_file='train_32x32.mat',
label='train')
make_pngs(main_dir='svhn_cropped',
mat_file='test_32x32.mat',
label='test')
df = pd.read_csv('svhn_cropped/train_labels.csv', header=None, index_col=0)
df.head()
df = pd.read_csv('svhn_cropped/test_labels.csv', header=None, index_col=0)
df.head()
```
## Implementing a Custom Dataset Class
Now, we implement a custom `Dataset` for reading the images. The `__getitem__` method will
1. read a single image from disk based on an `index` (more on batching later)
2. perform a custom image transformation (if a `transform` argument is provided in the `__init__` constructor)
3. return a single image and its corresponding label
```
class SVHNDataset(Dataset):
"""Custom Dataset for loading cropped SVHN images"""
def __init__(self, csv_path, img_dir, transform=None):
df = pd.read_csv(csv_path, index_col=0, header=None)
self.img_dir = img_dir
self.csv_path = csv_path
self.img_names = df.index.values
self.y = df[1].values
self.transform = transform
def __getitem__(self, index):
img = Image.open(os.path.join(self.img_dir,
self.img_names[index]))
if self.transform is not None:
img = self.transform(img)
label = self.y[index]
return img, label
def __len__(self):
return self.y.shape[0]
```
Now that we have created our custom Dataset class, let us add some custom transformations via the `transforms` utilities from `torchvision`. We
1. normalize the images (here: dividing by 255)
2. convert the image arrays into PyTorch tensors
Then, we initialize Dataset instances for the training and test images using the 'svhn_cropped/train_labels.csv' and 'svhn_cropped/test_labels.csv' label files created above.
Finally, we initialize a `DataLoader` that allows us to read from the dataset.
```
# Note that transforms.ToTensor()
# already divides pixels by 255. internally
custom_transform = transforms.Compose([#transforms.Grayscale(),
#transforms.Lambda(lambda x: x/255.),
transforms.ToTensor()])
train_dataset = SVHNDataset(csv_path='svhn_cropped/train_labels.csv',
img_dir='svhn_cropped/train',
transform=custom_transform)
test_dataset = SVHNDataset(csv_path='svhn_cropped/test_labels.csv',
img_dir='svhn_cropped/test',
transform=custom_transform)
BATCH_SIZE=128
train_loader = DataLoader(dataset=train_dataset,
batch_size=BATCH_SIZE,
shuffle=True,
num_workers=4)
test_loader = DataLoader(dataset=test_dataset,
batch_size=BATCH_SIZE,
shuffle=False,
num_workers=4)
```
That's it, now we can iterate over an epoch using the train_loader as an iterator and use the features and labels from the training dataset for model training:
## Iterating Through the Custom Dataset
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
torch.manual_seed(0)
num_epochs = 2
for epoch in range(num_epochs):
for batch_idx, (x, y) in enumerate(train_loader):
print('Epoch:', epoch+1, end='')
print(' | Batch index:', batch_idx, end='')
print(' | Batch size:', y.size()[0])
x = x.to(device)
y = y.to(device)
break
```
Just to make sure that the batches are being loaded correctly, let's print out the dimensions of the last batch:
```
x.shape
```
As we can see, each batch consists of 128 images, just as specified. However, one thing to keep in mind is that
PyTorch uses a different image layout (which is more efficient when working with CUDA); here, the image axes are "num_images x channels x height x width" (NCHW) instead of "num_images x height x width x channels" (NHWC):
To visually check that the images coming out of the data loader are intact, let's swap the axes to NHWC and convert an image from a Torch Tensor to a NumPy array so that we can visualize the image via `imshow`:
```
one_image = x[99].permute(1, 2, 0)
one_image.shape
# note that imshow also works fine with scaled
# images in [0, 1] range.
plt.imshow(one_image.to(torch.device('cpu')));
%watermark -iv
```
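To close the loop, a minimal training sketch over `train_loader` could look like the following. `SimpleCNN` is a hypothetical placeholder model defined only for illustration (it is not part of this notebook), and the snippet reuses `device`, `num_epochs`, and `train_loader` from the cells above.
```
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleCNN(nn.Module):
    """Hypothetical toy model mapping 3x32x32 images to 10 class logits."""
    def __init__(self, num_classes=10):
        super(SimpleCNN, self).__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 16 * 16, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv(x)), 2)   # -> 8 x 16 x 16
        return self.fc(x.view(x.size(0), -1))

model = SimpleCNN().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

model.train()
for epoch in range(num_epochs):
    for features, targets in train_loader:
        features, targets = features.to(device), targets.to(device)
        logits = model(features)
        # SVHN stores digit "0" as label 10, so shift labels into the 0-9 range
        loss = F.cross_entropy(logits, targets - 1)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```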
|
github_jupyter
|

---
## 01. Interpolation of Functions
Eduard Larrañaga ([email protected])
---
## Interpolation
### Summary
This notebook presents some of the techniques for interpolating a function.
---
## Interpolation
Astrophysical data (experimental and synthetic) usually consist of a set of discrete values of the form $(x_j, f_j)$, representing the value of a function $f(x)$ for a finite set of arguments $\{ x_0, x_1, x_2, ..., x_{n} \}$. However, in many cases we need the value of the function at additional points (not belonging to the given set). **Interpolation** is the method that allows us to obtain these values.
By **interpolation** we mean defining a function $g(x)$, using the known discrete information, such that $g(x_j) = f(x_j)$ and such that it approximates the value of the function $f$ at any point $x \in [x_{min}, x_{max}]$, where $x_{min} = \min \{ x_j \}$ and $x_{max} = \max \{ x_j \}$.
On the other hand, **extrapolation** would correspond to approximating the value of the function $f$ at a point $x \notin [x_{min}, x_{max}]$. However, this case will not be discussed here.
---
## Simple Polynomial Interpolation
The simplest interpolation method is called **polynomial interpolation** and consists of finding a polynomial $p_n(x)$ of degree $n$ that passes through the $N = n+1$ points $x_j$, taking the values $p(x_j) = f(x_j)$, where $j=0,1,2,...,n$.
The polynomial is written in the general form
$p_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$
where the $a_i$ are $n+1$ real constants to be determined by the conditions
$\left(
\begin{array}{ccccc}
1&x_0^1&x_0^2&\cdots&x_0^n\\
\vdots&\vdots&\vdots&\vdots&\vdots\\
\vdots&\vdots&\vdots&\vdots&\vdots\\
1&x_n^1&x_n^2&\cdots&x_n^n\\
\end{array}
\right)
\left(\begin{array}{c}
a_0\\
\vdots\\
\vdots\\
a_n
\end{array}\right)
=
\left(\begin{array}{c}
f(x_0)\\
\vdots\\
\vdots\\
f(x_n)
\end{array}\right)$
The solution of this system is easy to obtain in the cases of linear ($n=1$) and quadratic ($n=2$) interpolation, but it can be difficult to find for large values of $n$.
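As a quick illustration (a sketch only, using made-up nodes and values), the coefficients $a_i$ can be obtained by building the Vandermonde matrix of this system and solving it with `np.linalg.solve`; for large $n$ this matrix becomes badly conditioned, which is one reason this direct approach is rarely used in practice.
```
import numpy as np

# Made-up nodes and function values, for illustration only
xj = np.array([0.0, 1.0, 2.0, 3.0])
fj = np.array([1.0, 2.0, 0.0, 5.0])

V = np.vander(xj, increasing=True)   # row i is [1, x_i, x_i^2, ..., x_i^n]
a = np.linalg.solve(V, fj)           # coefficients a_0, ..., a_n
print(a)

# Evaluate the interpolating polynomial at a new point
# (np.polyval expects the highest-degree coefficient first)
print(np.polyval(a[::-1], 1.5))
```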
---
### Linear Interpolation
Linear interpolation ($n=1$) of a function $f(x)$ on an interval
$[x_i,x_{i+1}]$ requires knowing only two points.
Solving the resulting linear system gives the interpolating polynomial
\begin{equation}
p_1(x) = f(x_i) + \frac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i} (x-x_i) + \mathcal{O}(\Delta x^2)
\end{equation}
where $\Delta x = x_{i+1} - x_i$.
The linear interpolation method provides a polynomial with second-order accuracy that can be differentiated once, but this derivative is not continuous at the end points of the interpolation interval, $x_i$ and $x_{i+1}$.
#### Example. Piecewise Linear Interpolation
Below, a data set is read from a .txt file and interpolated linearly between each pair of points (*piecewise interpolation*)
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Reading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
plt.figure()
plt.scatter(x,f)
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.show()
data.shape
def linearInterpolation(x1, x2, f1, f2, x):
p1 = f1 + ((f2-f1)/(x2-x1))*(x-x1)
return p1
N = len(x)
plt.figure(figsize=(7,5))
plt.scatter(x, f, color='black')
for i in range(N-1):
x_interval = np.linspace(x[i],x[i+1],3)
    # Note that the number 3 in the above line indicates the number of
    # points interpolated in each interval
    # (including the extreme points of the interval)
y_interval = linearInterpolation(x[i], x[i+1], f[i], f[i+1], x_interval)
plt.plot(x_interval, y_interval,'r')
plt.title(r'Linear Piecewise Interpolation')
plt.xlabel(r'$x$')
plt.ylabel(r'$p_1(x)$')
plt.show()
```
---
### Quadratic Interpolation
Quadratic interpolation ($n=2$) requires information from three points.
For example, one can take the three points $x_i$, $x_{i+1}$ and $x_{i+2}$ to interpolate the function $f(x)$ in the range $[x_{i},x_{i+1}]$. Solving the corresponding system of linear equations gives the polynomial
$p_2(x) = \frac{(x-x_{i+1})(x-x_{i+2})}{(x_i - x_{i+1})(x_i - x_{i+2})} f(x_i)
+ \frac{(x-x_{i})(x-x_{i+2})}{(x_{i+1} - x_{i})(x_{i+1} - x_{i+2})} f(x_{i+1})
+ \frac{(x-x_i)(x-x_{i+1})}{(x_{i+2} - x_i)(x_{i+2} - x_{i+1})} f(x_{i+2}) + \mathcal{O}(\Delta x^3)$,
where $\Delta x = \max \{ x_{i+2}-x_{i+1},x_{i+1}-x_i \}$.
In this case, the interpolated polynomial can be differentiated twice but, although its first derivative is continuous, the second derivative is not continuous at the end points of the interval.
#### Example. Piecewise Quadratic Interpolation
Below, a data set is read from a .txt file and interpolated quadratically over sub-intervals (*quadratic piecewise interpolation*)
```
import numpy as np
import matplotlib.pyplot as plt
# Reading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
def quadraticInterpolation(x1, x2, x3, f1, f2, f3, x):
p2 = (((x-x2)*(x-x3))/((x1-x2)*(x1-x3)))*f1 +\
(((x-x1)*(x-x3))/((x2-x1)*(x2-x3)))*f2 +\
(((x-x1)*(x-x2))/((x3-x1)*(x3-x2)))*f3
return p2
N = len(x)
plt.figure(figsize=(7,5))
plt.scatter(x, f, color='black')
for i in range(N-2):
x_interval = np.linspace(x[i],x[i+1],6) # 6 interpolate points in each interval
y_interval = quadraticInterpolation(x[i], x[i+1], x[i+2], f[i], f[i+1], f[i+2], x_interval)
plt.plot(x_interval, y_interval,'r')
plt.title(r' Quadratic Polynomial Piecewise Interpolation')
plt.xlabel(r'$x$')
plt.ylabel(r'$p_2(x)$')
plt.show()
```
**Note:** Because of the way the quadratic interpolation is carried out, the last interval is left without information. In this region one can extend the interpolation of the next-to-last interval, or interpolate a linear polynomial instead.
---
## Lagrange Interpolation
**Lagrange interpolation** also looks for a polynomial of degree $n$ using $n+1$ points, but it uses an alternative method to find the coefficients. To understand the idea, we rewrite the linear polynomial found above in the form
\begin{equation}
p_1(x) = \frac{x-x_{i+1}}{x_i - x_{i+1}} f(x_i) + \frac{x-x_i}{x_{i+1}-x_i} f(x_{i+1}) + \mathcal{O}(\Delta x^2),
\end{equation}
or, equivalently,
\begin{equation}
p_1(x) = \sum_{j=i}^{i+1} f(x_j) L_{1j}(x) + \mathcal{O}(\Delta x^2)
\end{equation}
where we have introduced the *Lagrange coefficients*
\begin{equation}
L_{1j}(x) = \frac{x-x_k}{x_j-x_k}\bigg|_{k\ne j}.
\end{equation}
Note that these coefficients ensure that the polynomial passes through the known points, i.e. $p_1(x_i) = f(x_i)$ and $p_1(x_{i+1}) = f(x_{i+1})$.
**Lagrange interpolation** generalizes these expressions to a polynomial of degree $n$ passing through the $n+1$ known points,
\begin{equation}
p_n (x) = \sum_{j=0}^{n} f(x_j) L_{nj}(x) + \mathcal{O}(\Delta x^{n+1})\,, \label{eq:LagrangeInterpolation}
\end{equation}
where the Lagrange coefficients generalize to
\begin{equation}
L_{nj}(x) = \prod_{k\ne j}^{n} \frac{x-x_k}{x_j - x_k}\,.
\end{equation}
Again, note that these coefficients ensure that the polynomial passes through the known points, $p(x_j) = f(x_j)$.
```
# %load lagrangeInterpolation
'''
Eduard Larrañaga
Computational Astrophysics
2020
Lagrange Interpolation Method
'''
import numpy as np
#Lagrange Coefficients
def L(x, xi, j):
'''
------------------------------------------
L(x, xi, j)
------------------------------------------
Returns the Lagrange coefficient for the
interpolation evaluated at points x
Receives as arguments:
x : array of points where the interpolated
polynomial will be evaluated
xi : array of N data points
j : index of the coefficient to be
calculated
------------------------------------------
'''
# Number of points
N = len(xi)
prod = 1
for k in range(N):
if (k != j):
prod = prod * (x - xi[k])/(xi[j] - xi[k])
return prod
# Interpolated Polynomial
def p(x, xi, fi):
'''
------------------------------------------
p(x, xi, fi)
------------------------------------------
Returns the values of the Lagrange
interpolated polynomial in a set of points
defined by x
x : array of points where the interpolated
polynomial will be evaluated
xi : array of N data points points
fi : values of the function to be
interpolated
------------------------------------------
'''
# Number of points
N = len(xi)
summ = 0
for j in range(N):
summ = summ + fi[j]*L(x, xi, j)
return summ
import numpy as np
import matplotlib.pyplot as plt
#import lagrangeInterpolation as lagi
import sys
# Reading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
N = len(x)
# Degree of the polynomial to be interpolated piecewise
n = 3
# Check if the number of point is enough to interpolate such a polynomial
if n>=N:
print('\nThere are not enough points to interpolate this polynomial.')
print(f'Using {N:.0f} points it is possible to interpolate polynomials up to order n={N-1:.0f}')
sys.exit()
plt.figure(figsize=(7,5))
plt.title(f'Lagrange Polynomial Piecewise Interpolation n={n:.0f}')
plt.scatter(x, f, color='black')
# Piecewise Interpolation Loop
for i in range(N-n):
xi = x[i:i+n+1]
fi = f[i:i+n+1]
x_interval = np.linspace(x[i],x[i+1],3*n)
y_interval = p(x_interval,xi,fi)
plt.plot(x_interval, y_interval,'r')
plt.xlabel(r'$x$')
plt.ylabel(r'$p_n(x)$')
plt.show()
```
Note that the last $n$ points are not interpolated. What can be done?
```
import numpy as np
import matplotlib.pyplot as plt
#import lagrangeInterpolation as lagi
import sys
# Reading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
N = len(x)
# Degree of the polynomial to be interpolated piecewise
n = 6
# Check if the number of point is enough to interpolate such a polynomial
if n>=N:
print('\nThere are not enough points to interpolate this polynomial.')
print(f'Using {N:.0f} points it is possible to interpolate polynomials up to order n={N-1:.0f}')
sys.exit()
plt.figure(figsize=(7,5))
plt.title(f'Lagrange Polynomial Piecewise Interpolation n={n:.0f}')
plt.scatter(x, f, color='black')
# Piecewise Interpolation Loop
for i in range(N-n):
xi = x[i:i+n+1]
fi = f[i:i+n+1]
x_interval = np.linspace(x[i],x[i+1],3*n)
y_interval = p(x_interval,xi,fi)
plt.plot(x_interval, y_interval,'r')
# Piecewise Interpolation for the final N-n points,
# using a lower degree polynomial
while n>1:
m = n-1
for i in range(N-n,N-m):
xi = x[i:i+m+1]
fi = f[i:i+m+1]
x_interval = np.linspace(x[i],x[i+1],3*m)
y_interval = p(x_interval,xi,fi)
plt.plot(x_interval, y_interval,'r')
n=n-1
plt.xlabel(r'$x$')
plt.ylabel(r'$p_n(x)$')
plt.show()
```
### Runge's Phenomenon
Why interpolate over sub-intervals? When a large number of known points is available, it is possible to interpolate a high-degree polynomial. However, the behavior of the interpolated polynomial may not be the expected one (especially near the ends of the interpolation interval) because of uncontrolled oscillations. This behavior is called Runge's phenomenon.
For example, for a data set with $20$ points it is possible to interpolate a polynomial of order $n=19$,
```
import numpy as np
import matplotlib.pyplot as plt
import lagrangeInterpolation as lagi
import sys
# Reading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
N = len(x)
# Higher Degree polynomial to be interpolated
n = N-1
plt.figure(figsize=(7,5))
plt.title(f'Lagrange Polynomial Piecewise Interpolation n={n:.0f}')
plt.scatter(x, f, color='black')
#Interpolation of the higher degree polynomial
x_int = np.linspace(x[0],x[N-1],3*n)
y_int = lagi.p(x_int,x,f)
plt.plot(x_int, y_int,'r')
plt.xlabel(r'$x$')
plt.ylabel(r'$p_n(x)$')
plt.show()
```
However, it is clear that the behavior of the interpolated polynomial is not good at the ends of the interval considered. For this reason, it is highly advisable to use piecewise interpolation with low-degree polynomials.
---
## Piecewise Cubic Hermite Interpolation
Hermite interpolation is a particular case of polynomial interpolation that uses a set of known points at which both the value of the function, $f(x_j)$, and its derivative, $f'(x_j)$, are known. By incorporating the first derivative, high-degree polynomials can be interpolated while controlling the unwanted oscillations. In addition, since the first derivative is known, fewer points are needed to perform the interpolation.
Within this type of interpolation, the most widely used is that of third-order polynomials. Thus, on an interval $[x_i , x_{i+1}]$, one needs to know (or evaluate) the values of $f(x_i)$, $f(x_{i+1})$, $f'(x_i)$ and $f'(x_{i+1})$ to obtain the cubic Hermite interpolating polynomial,
\begin{equation}
H_3(x) = f(x_i)\psi_0(z) + f(x_{i+1})\psi_0(1-z)+ f'(x_i)(x_{i+1} - x_{i})\psi_1(z) - f'(x_{i+1})(x_{i+1}-x_i)\psi_1 (1-z),
\end{equation}
where
\begin{equation}
z = \frac{x-x_i}{x_{i+1}-x_i}
\end{equation}
and
\begin{align}
\psi_0(z) =&2z^3 - 3z^2 + 1 \\
\psi_1(z) =&z^3-2z^2+z\,\,.
\end{align}
Note that with this formulation it is possible to interpolate a third-order polynomial on an interval with only two points. Thus, when working with a set of many points, one could interpolate a cubic polynomial between each pair of data points, even in the last sub-interval!
```
# %load HermiteInterpolation
'''
Eduard Larrañaga
Computational Astrophysics
2020
Hermite Interpolation Method
'''
import numpy as np
#Hermite Coefficients
def psi0(z):
'''
------------------------------------------
psi0(z)
------------------------------------------
Returns the Hermite coefficients Psi_0
for the interpolation
Receives as arguments: z
------------------------------------------
'''
psi_0 = 2*z**3 - 3*z**2 + 1
return psi_0
def psi1(z):
'''
------------------------------------------
psi1(z)
------------------------------------------
Returns the Hermite coefficients Psi_1 for
the interpolation
Receives as arguments: z
------------------------------------------
'''
psi_1 = z**3 - 2*z**2 + z
return psi_1
# Interpolated Polynomial
def H3(x, xi, fi, dfidx):
'''
------------------------------------------
H3(x, xi, fi, dfidx)
------------------------------------------
Returns the values of the Cubic Hermite
interpolated polynomial in a set of points
defined by x
x : array of points where the interpolated
polynomial will be evaluated
xi : array of 2 data points
fi : array of values of the function at xi
dfidx : array of values of the derivative
of the function at xi
------------------------------------------
'''
# variable z in the interpolation
    z = (x - xi[0])/(xi[1] - xi[0])
h1 = psi0(z) * fi[0]
h2 = psi0(1-z)*fi[1]
h3 = psi1(z)*(xi[1] - xi[0])*dfidx[0]
h4 = psi1(1-z)*(xi[1] - xi[0])*dfidx[1]
H = h1 + h2 + h3 - h4
return H
import numpy as np
import matplotlib.pyplot as plt
import HermiteInterpolation as heri
def Derivative(x, f):
'''
------------------------------------------
Derivative(x, f)
------------------------------------------
This function returns the numerical
derivative of a discretely-sample function
using one-side derivatives in the extreme
points of the interval and second order
accurate derivative in the middle points.
The data points may be evenly or unevenly
spaced.
------------------------------------------
'''
# Number of points
N = len(x)
dfdx = np.zeros([N, 2])
dfdx[:,0] = x
# Derivative at the extreme points
dfdx[0,1] = (f[1] - f[0])/(x[1] - x[0])
dfdx[N-1,1] = (f[N-1] - f[N-2])/(x[N-1] - x[N-2])
#Derivative at the middle points
for i in range(1,N-1):
h1 = x[i] - x[i-1]
h2 = x[i+1] - x[i]
dfdx[i,1] = h1*f[i+1]/(h2*(h1+h2)) - (h1-h2)*f[i]/(h1*h2) -\
h2*f[i-1]/(h1*(h1+h2))
return dfdx
# Loading the data
data = np.loadtxt('data_points.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
N = len(x)
# Calling the derivative function and chosing only the second column
dfdx = Derivative(x,f)[:,1]
plt.figure(figsize=(7,5))
plt.title(f'Cubic Hermite Polynomial Piecewise Interpolation')
plt.scatter(x, f, color='black')
# Piecewise Hermite Interpolation Loop
for i in range(N-1):
xi = x[i:i+2]
fi = f[i:i+2]
dfidx = dfdx[i:i+2]
x_interval = np.linspace(x[i],x[i+1],4)
y_interval = heri.H3(x_interval, xi, fi, dfidx)
plt.plot(x_interval, y_interval,'r')
plt.xlabel(r'$x$')
plt.ylabel(r'$H_3(x)$')
plt.show()
```
|
github_jupyter
|
## Deploy an ONNX model to an IoT Edge device using ONNX Runtime and the Azure Machine Learning

```
!python -m pip install --upgrade pip
!pip install azureml-core azureml-contrib-iot azure-mgmt-containerregistry azure-cli
!az extension add --name azure-cli-iot-ext
import os
print(os.__file__)
# Check core SDK version number
import azureml.core as azcore
print("SDK version:", azcore.VERSION)
```
## 1. Setup the Azure Machine Learning Environment
### 1.1 AML Workspace : using existing config
```
#Initialize Workspace
from azureml.core import Workspace
ws = Workspace.from_config()
```
### 1.2 AML Workspace : create a new workspace
```
#Initialize Workspace
from azureml.core import Workspace
### Change this cell from markdown to code and run this if you need to create a workspace
### Update the values for your workspace below
ws=Workspace.create(subscription_id="<subscription-id goes here>",
resource_group="<resource group goes here>",
name="<name of the AML workspace>",
location="<location>")
ws.write_config()
```
### 1.3 AML Workspace : initialize an existing workspace
Download the `config.json` file for your AML Workspace from the Azure portal
```
#Initialize Workspace
from azureml.core import Workspace
## existing AML Workspace in config.json
ws = Workspace.from_config('config.json')
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
```
## 2. Setup the trained model to use in this example
### 2.1 Register the trained model in workspace from the ONNX Model Zoo
```
import urllib.request
onnx_model_url = "https://onnxzoo.blob.core.windows.net/models/opset_8/tiny_yolov2/tiny_yolov2.tar.gz"
urllib.request.urlretrieve(onnx_model_url, filename="tiny_yolov2.tar.gz")
!tar xvzf tiny_yolov2.tar.gz
from azureml.core.model import Model
model = Model.register(workspace = ws,
model_path = "./tiny_yolov2/Model.onnx",
model_name = "Model.onnx",
tags = {"data": "Imagenet", "model": "object_detection", "type": "TinyYolo"},
description = "real-time object detection model from ONNX model zoo")
```
### 2.2 Load the model from your workspace model registry
For example, this could be the ONNX model exported from your training experiment
```
from azureml.core.model import Model
model = Model(name='Model.onnx', workspace=ws)
```
## 3. Create the application container image
This container is the IoT Edge module that will be deployed on the UP<sup>2</sup> device.
1. This container is using a pre-built base image for ONNX Runtime.
2. Includes a `score.py` script, which must contain a `run()` and an `init()` function. The `init()` function is the entry point that reads the camera frames from /dev/video0. The `run()` function is a dummy function to satisfy AML SDK checks.
3. An `amlpackage_inference.py` script, which is used to process the input frame and run the inference session, and
4. the ONNX model and label file used by ONNX Runtime.
```
%%writefile score.py
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license. See LICENSE file in the project root for
# full license information.
import sys
import time
import io
import csv
# Imports for inferencing
import onnxruntime as rt
from amlpackage_inference import run_onnx
import numpy as np
import cv2
# Imports for communication w/IOT Hub
from iothub_client import IoTHubModuleClient, IoTHubClientError, IoTHubTransportProvider
from iothub_client import IoTHubMessage, IoTHubMessageDispositionResult, IoTHubError
from azureml.core.model import Model
# Imports for the http server
from flask import Flask, request
import json
# Imports for storage
import os
# from azure.storage.blob import BlockBlobService, PublicAccess, AppendBlobService
import random
import string
import csv
from datetime import datetime
from pytz import timezone
import time
import json
class HubManager(object):
def __init__(
self,
protocol=IoTHubTransportProvider.MQTT):
self.client_protocol = protocol
self.client = IoTHubModuleClient()
self.client.create_from_environment(protocol)
# set the time until a message times out
self.client.set_option("messageTimeout", MESSAGE_TIMEOUT)
# Forwards the message received onto the next stage in the process.
def forward_event_to_output(self, outputQueueName, event, send_context):
self.client.send_event_async(
outputQueueName, event, send_confirmation_callback, send_context)
def send_confirmation_callback(message, result, user_context):
"""
Callback received when the message that we're forwarding is processed.
"""
print("Confirmation[%d] received for message with result = %s" % (user_context, result))
def get_tinyyolo_frame_from_encode(msg):
"""
Formats jpeg encoded msg to frame that can be processed by tiny_yolov2
"""
#inp = np.array(msg).reshape((len(msg),1))
#frame = cv2.imdecode(inp.astype(np.uint8), 1)
frame = cv2.cvtColor(msg, cv2.COLOR_BGR2RGB)
# resize and pad to keep input frame aspect ratio
h, w = frame.shape[:2]
tw = 416 if w > h else int(np.round(416.0 * w / h))
th = 416 if h > w else int(np.round(416.0 * h / w))
frame = cv2.resize(frame, (tw, th))
pad_value=114
top = int(max(0, np.round((416.0 - th) / 2)))
left = int(max(0, np.round((416.0 - tw) / 2)))
bottom = 416 - top - th
right = 416 - left - tw
frame = cv2.copyMakeBorder(frame, top, bottom, left, right,
cv2.BORDER_CONSTANT, value=[pad_value, pad_value, pad_value])
frame = np.ascontiguousarray(np.array(frame, dtype=np.float32).transpose(2, 0, 1)) # HWC -> CHW
frame = np.expand_dims(frame, axis=0)
return frame
def run(msg):
# this is a dummy function required to satisfy AML-SDK requirements.
return msg
def init():
# Choose HTTP, AMQP or MQTT as transport protocol. Currently only MQTT is supported.
PROTOCOL = IoTHubTransportProvider.MQTT
DEVICE = 0 # when device is /dev/video0
LABEL_FILE = "labels.txt"
MODEL_FILE = "Model.onnx"
global MESSAGE_TIMEOUT # setting for IoT Hub Manager
MESSAGE_TIMEOUT = 1000
LOCAL_DISPLAY = "OFF" # flag for local display on/off, default OFF
# Create the IoT Hub Manager to send message to IoT Hub
print("trying to make IOT Hub manager")
hub_manager = HubManager(PROTOCOL)
if not hub_manager:
print("Took too long to make hub_manager, exiting program.")
print("Try restarting IotEdge or this module.")
sys.exit(1)
# Get Labels from labels file
labels_file = open(LABEL_FILE)
labels_string = labels_file.read()
labels = labels_string.split(",")
labels_file.close()
label_lookup = {}
for i, val in enumerate(labels):
label_lookup[val] = i
# get model path from within the container image
model_path=Model.get_model_path(MODEL_FILE)
# Loading ONNX model
print("loading model to ONNX Runtime...")
start_time = time.time()
ort_session = rt.InferenceSession(model_path)
print("loaded after", time.time()-start_time,"s")
# start reading frames from video endpoint
cap = cv2.VideoCapture(DEVICE)
while cap.isOpened():
_, _ = cap.read()
ret, img_frame = cap.read()
if not ret:
print('no video RESETTING FRAMES TO 0 TO RUN IN LOOP')
cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
continue
"""
Handles incoming inference calls for each frame. Gets the frame from the capture and calls the inferencing function on it.
Sends result to IOT Hub.
"""
try:
draw_frame = img_frame
start_time = time.time()
# pre-process the frame to flatten, scale for tiny-yolo
frame = get_tinyyolo_frame_from_encode(img_frame)
# run the inference session for the given input frame
objects = run_onnx(frame, ort_session, draw_frame, labels, LOCAL_DISPLAY)
# LOOK AT OBJECTS AND CHECK PREVIOUS STATUS TO APPEND
num_objects = len(objects)
print("NUMBER OBJECTS DETECTED:", num_objects)
print("PROCESSED IN:",time.time()-start_time,"s")
if num_objects > 0:
output_IOT = IoTHubMessage(json.dumps(objects))
hub_manager.forward_event_to_output("inferenceoutput", output_IOT, 0)
continue
except Exception as e:
print('EXCEPTION:', str(e))
continue
```
### 3.1 Include the dependent packages required by the application scripts
```
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies()
myenv.add_pip_package("azure-iothub-device-client")
myenv.add_pip_package("numpy")
myenv.add_pip_package("opencv-python")
myenv.add_pip_package("requests")
myenv.add_pip_package("pytz")
myenv.add_pip_package("onnx")
with open("myenv.yml", "w") as f:
f.write(myenv.serialize_to_string())
```
### 3.2 Build the custom container image with the ONNX Runtime + OpenVINO base image
This step uses pre-built container images with ONNX Runtime and the different HW execution providers. A complete list of base images is available [here](https://github.com/microsoft/onnxruntime/tree/master/dockerfiles#docker-containers-for-onnx-runtime).
```
from azureml.core.image import ContainerImage
from azureml.core.model import Model
openvino_image_config = ContainerImage.image_configuration(execution_script = "score.py",
runtime = "python",
dependencies=["labels.txt", "amlpackage_inference.py"],
conda_file = "myenv.yml",
description = "TinyYolo ONNX Runtime inference container",
tags = {"demo": "onnx"})
# Use the ONNX Runtime + OpenVINO base image for Intel MovidiusTM USB sticks
openvino_image_config.base_image = "mcr.microsoft.com/azureml/onnxruntime:latest-openvino-myriad"
# For the Intel Movidius VAD-M PCIe card use this:
# openvino_image_config.base_image = "mcr.microsoft.com/azureml/onnxruntime:latest-openvino-vadm"
openvino_image = ContainerImage.create(name = "name-of-image",
# this is the model object
models = [model],
image_config = openvino_image_config,
workspace = ws)
# Alternative: Re-use an image that you have already built from the workspace image registry
# openvino_image = ContainerImage(name = "<name-of-image>", workspace = ws)
openvino_image.wait_for_creation(show_output = True)
if openvino_image.creation_state == 'Failed':
print("Image build log at: " + openvino_image.image_build_log_uri)
if openvino_image.creation_state != 'Failed':
print("Image URI at: " +openvino_image.image_location)
```
## 4. Deploy to the UP<sup>2</sup> device using Azure IoT Edge
### 4.1 Login with the Azure subscription to provision the IoT Hub and the IoT Edge device
```
!az login
!az account set --subscription $ws.subscription_id
# confirm the account
!az account show
```
### 4.2 Specify the IoT Edge device details
```
# Parameter list to configure the IoT Hub and the IoT Edge device
# Pick a name for what you want to call the module you deploy to the camera
module_name = "module-name-here"
# Resource group in Azure
resource_group_name= ws.resource_group
iot_rg=resource_group_name
# Azure region where your services will be provisioned
iot_location="location-here"
# Azure IoT Hub name
iot_hub_name="name-of-IoT-Hub"
# Pick a name for your camera
iot_device_id="name-of-IoT-Edge-device"
# Pick a name for the deployment configuration
iot_deployment_id="Inference Module from AML"
```
### 4.2a Optional: Provision the IoT Hub, create the IoT Edge device and Setup the Intel UP<sup>2</sup> AI Vision Developer Kit
```
!az iot hub create --resource-group $resource_group_name --name $iot_hub_name --sku S1
# Register an IoT Edge device (create a new entry in the Iot Hub)
!az iot hub device-identity create --hub-name $iot_hub_name --device-id $iot_device_id --edge-enabled
!az iot hub device-identity show-connection-string --hub-name $iot_hub_name --device-id $iot_device_id
```
The following steps need to be executed in the device terminal.
1. Open the IoT Edge configuration file on the UP<sup>2</sup> device to update the IoT Edge device *connection string*:
`sudo nano /etc/iotedge/config.yaml`
    provisioning:
      source: "manual"
      device_connection_string: "<ADD DEVICE CONNECTION STRING HERE>"
2. To update the DPS TPM provisioning configuration:
    provisioning:
      source: "dps"
      global_endpoint: "https://global.azure-devices-provisioning.net"
      scope_id: "{scope_id}"
      attestation:
        method: "tpm"
        registration_id: "{registration_id}"
3. Save and close the file: `CTRL + X, Y, Enter`
4. After entering the provisioning information in the configuration file, restart the *iotedge* daemon:
`sudo systemctl restart iotedge`
5. We will show the object detection results from the camera connected (`/dev/video0`) to the UP<sup>2</sup> on the display. Update your .profile file:
`nano ~/.profile`
add the following line to the end of the file:
__xhost +__
### 4.3 Construct the deployment file
```
# create the registry uri
container_reg = ws.get_details()["containerRegistry"]
reg_name=container_reg.split("/")[-1]
container_url = "\"" + openvino_image.image_location + "\","
subscription_id = ws.subscription_id
print('{}'.format(openvino_image.image_location), "<-- this is the URI configured in the IoT Hub for the device")
print('{}'.format(reg_name))
print('{}'.format(subscription_id))
from azure.mgmt.containerregistry import ContainerRegistryManagementClient
from azure.mgmt import containerregistry
client = ContainerRegistryManagementClient(ws._auth,subscription_id)
result= client.registries.list_credentials(resource_group_name, reg_name, custom_headers=None, raw=False)
username = result.username
password = result.passwords[0].value
```
#### Create the `deployment.json` with the AML image registry details
A sample deployment template is provided with this reference implementation.
```
file = open('./AML-deployment.template.json')
contents = file.read()
contents = contents.replace('__AML_MODULE_NAME', module_name)
contents = contents.replace('__AML_REGISTRY_NAME', reg_name)
contents = contents.replace('__AML_REGISTRY_USER_NAME', username)
contents = contents.replace('__AML_REGISTRY_PASSWORD', password)
contents = contents.replace('__AML_REGISTRY_IMAGE_LOCATION', openvino_image.image_location)
with open('./deployment.json', 'wt', encoding='utf-8') as output_file:
output_file.write(contents)
```
### 4.4 Push the *deployment* to the IoT Edge device
```
print("Pushing deployment to IoT Edge device")
print ("Set the deployement")
!az iot edge set-modules --device-id $iot_device_id --hub-name $iot_hub_name --content deployment.json
```
### 4.5 Monitor IoT Hub Messages
```
!az iot hub monitor-events --hub-name $iot_hub_name -y
```
## 5. CLEANUP
```
!rm score.py deployment.json myenv.yml
```
# About this kernel
+ efficientnet_b3
+ CurricularFace
+ Mish() activation
+ Ranger (RAdam + Lookahead) optimizer
+ margin = 0.9
## Imports
```
import sys
sys.path.append('../input/shopee-competition-utils')
sys.path.insert(0,'../input/pytorch-image-models')
import numpy as np
import pandas as pd
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader
import albumentations
from albumentations.pytorch.transforms import ToTensorV2
from custom_scheduler import ShopeeScheduler
from custom_activation import replace_activations, Mish
from custom_optimizer import Ranger
import math
import cv2
import timm
import os
import random
import gc
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import GroupKFold
from sklearn.neighbors import NearestNeighbors
from tqdm.notebook import tqdm
```
## Config
```
class CFG:
DATA_DIR = '../input/shopee-product-matching/train_images'
TRAIN_CSV = '../input/shopee-product-matching/train.csv'
# data augmentation
IMG_SIZE = 512
MEAN = [0.485, 0.456, 0.406]
STD = [0.229, 0.224, 0.225]
SEED = 2021
# data split
N_SPLITS = 5
TEST_FOLD = 0
VALID_FOLD = 1
EPOCHS = 8
BATCH_SIZE = 8
NUM_WORKERS = 4
DEVICE = 'cuda:0'
CLASSES = 6609
SCALE = 30
MARGIN = 0.9
MODEL_NAME = 'efficientnet_b3'
MODEL_PATH = f'{MODEL_NAME}_curricular_face_epoch_{EPOCHS}_bs_{BATCH_SIZE}_margin_{MARGIN}.pt'
FC_DIM = 512
SCHEDULER_PARAMS = {
"lr_start": 1e-5,
"lr_max": 1e-5 * 32,
"lr_min": 1e-6,
"lr_ramp_ep": 5,
"lr_sus_ep": 0,
"lr_decay": 0.8,
}
```
## Augmentations
```
def get_train_transforms():
return albumentations.Compose(
[
albumentations.Resize(CFG.IMG_SIZE,CFG.IMG_SIZE,always_apply=True),
albumentations.HorizontalFlip(p=0.5),
albumentations.VerticalFlip(p=0.5),
albumentations.Rotate(limit=120, p=0.8),
albumentations.RandomBrightness(limit=(0.09, 0.6), p=0.5),
albumentations.Normalize(mean=CFG.MEAN, std=CFG.STD),
ToTensorV2(p=1.0),
]
)
def get_valid_transforms():
return albumentations.Compose(
[
albumentations.Resize(CFG.IMG_SIZE,CFG.IMG_SIZE,always_apply=True),
albumentations.Normalize(mean=CFG.MEAN, std=CFG.STD),
ToTensorV2(p=1.0)
]
)
def get_test_transforms():
return albumentations.Compose(
[
albumentations.Resize(CFG.IMG_SIZE,CFG.IMG_SIZE,always_apply=True),
albumentations.Normalize(mean=CFG.MEAN, std=CFG.STD),
ToTensorV2(p=1.0)
]
)
```
## Reproducibility
```
def seed_everything(seed):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = True # set True to be faster
seed_everything(CFG.SEED)
```
## Dataset
```
class ShopeeDataset(torch.utils.data.Dataset):
"""for training
"""
def __init__(self,df, transform = None):
self.df = df
self.root_dir = CFG.DATA_DIR
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self,idx):
row = self.df.iloc[idx]
img_path = os.path.join(self.root_dir,row.image)
image = cv2.imread(img_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
label = row.label_group
if self.transform:
augmented = self.transform(image=image)
image = augmented['image']
return {
'image' : image,
'label' : torch.tensor(label).long()
}
class ShopeeImageDataset(torch.utils.data.Dataset):
"""for validating and test
"""
def __init__(self,df, transform = None):
self.df = df
self.root_dir = CFG.DATA_DIR
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self,idx):
row = self.df.iloc[idx]
img_path = os.path.join(self.root_dir,row.image)
image = cv2.imread(img_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
label = row.label_group
if self.transform:
augmented = self.transform(image=image)
image = augmented['image']
return image,torch.tensor(1)
```
## CurricularFace + EfficientNet-B3
```
'''
credit : https://github.com/HuangYG123/CurricularFace/blob/8b2f47318117995aa05490c05b455b113489917e/head/metrics.py#L70
'''
def l2_norm(input, axis = 1):
norm = torch.norm(input, 2, axis, True)
output = torch.div(input, norm)
return output
class CurricularFace(nn.Module):
def __init__(self, in_features, out_features, s = 30, m = 0.50):
super(CurricularFace, self).__init__()
print('Using Curricular Face')
self.in_features = in_features
self.out_features = out_features
self.m = m
self.s = s
self.cos_m = math.cos(m)
self.sin_m = math.sin(m)
self.threshold = math.cos(math.pi - m)
self.mm = math.sin(math.pi - m) * m
self.kernel = nn.Parameter(torch.Tensor(in_features, out_features))
self.register_buffer('t', torch.zeros(1))
nn.init.normal_(self.kernel, std=0.01)
def forward(self, embbedings, label):
embbedings = l2_norm(embbedings, axis = 1)
kernel_norm = l2_norm(self.kernel, axis = 0)
cos_theta = torch.mm(embbedings, kernel_norm)
cos_theta = cos_theta.clamp(-1, 1) # for numerical stability
with torch.no_grad():
origin_cos = cos_theta.clone()
target_logit = cos_theta[torch.arange(0, embbedings.size(0)), label].view(-1, 1)
sin_theta = torch.sqrt(1.0 - torch.pow(target_logit, 2))
cos_theta_m = target_logit * self.cos_m - sin_theta * self.sin_m #cos(target+margin)
mask = cos_theta > cos_theta_m
final_target_logit = torch.where(target_logit > self.threshold, cos_theta_m, target_logit - self.mm)
hard_example = cos_theta[mask]
with torch.no_grad():
self.t = target_logit.mean() * 0.01 + (1 - 0.01) * self.t
cos_theta[mask] = hard_example * (self.t + hard_example)
cos_theta.scatter_(1, label.view(-1, 1).long(), final_target_logit)
output = cos_theta * self.s
return output, nn.CrossEntropyLoss()(output,label)
class ShopeeModel(nn.Module):
def __init__(
self,
n_classes = CFG.CLASSES,
model_name = CFG.MODEL_NAME,
fc_dim = CFG.FC_DIM,
margin = CFG.MARGIN,
scale = CFG.SCALE,
use_fc = True,
pretrained = True):
super(ShopeeModel,self).__init__()
print('Building Model Backbone for {} model'.format(model_name))
self.backbone = timm.create_model(model_name, pretrained=pretrained)
if 'efficientnet' in model_name:
final_in_features = self.backbone.classifier.in_features
self.backbone.classifier = nn.Identity()
self.backbone.global_pool = nn.Identity()
elif 'resnet' in model_name:
final_in_features = self.backbone.fc.in_features
self.backbone.fc = nn.Identity()
self.backbone.global_pool = nn.Identity()
elif 'resnext' in model_name:
final_in_features = self.backbone.fc.in_features
self.backbone.fc = nn.Identity()
self.backbone.global_pool = nn.Identity()
elif 'nfnet' in model_name:
final_in_features = self.backbone.head.fc.in_features
self.backbone.head.fc = nn.Identity()
self.backbone.head.global_pool = nn.Identity()
self.pooling = nn.AdaptiveAvgPool2d(1)
self.use_fc = use_fc
if use_fc:
self.dropout = nn.Dropout(p=0.0)
self.fc = nn.Linear(final_in_features, fc_dim)
self.bn = nn.BatchNorm1d(fc_dim)
self._init_params()
final_in_features = fc_dim
self.final = CurricularFace(final_in_features,
n_classes,
s=scale,
m=margin)
def _init_params(self):
nn.init.xavier_normal_(self.fc.weight)
nn.init.constant_(self.fc.bias, 0)
nn.init.constant_(self.bn.weight, 1)
nn.init.constant_(self.bn.bias, 0)
def forward(self, image, label):
feature = self.extract_feat(image)
logits = self.final(feature,label)
return logits
def extract_feat(self, x):
batch_size = x.shape[0]
x = self.backbone(x)
x = self.pooling(x).view(batch_size, -1)
if self.use_fc:
x = self.dropout(x)
x = self.fc(x)
x = self.bn(x)
return x
```
## Engine
```
def train_fn(model, data_loader, optimizer, scheduler, i):
model.train()
fin_loss = 0.0
tk = tqdm(data_loader, desc = "Epoch" + " [TRAIN] " + str(i+1))
for t,data in enumerate(tk):
for k,v in data.items():
data[k] = v.to(CFG.DEVICE)
optimizer.zero_grad()
_, loss = model(**data)
loss.backward()
optimizer.step()
fin_loss += loss.item()
tk.set_postfix({'loss' : '%.6f' %float(fin_loss/(t+1)), 'LR' : optimizer.param_groups[0]['lr']})
scheduler.step()
return fin_loss / len(data_loader)
def eval_fn(model, data_loader, i):
model.eval()
fin_loss = 0.0
tk = tqdm(data_loader, desc = "Epoch" + " [VALID] " + str(i+1))
with torch.no_grad():
for t,data in enumerate(tk):
for k,v in data.items():
data[k] = v.to(CFG.DEVICE)
_, loss = model(**data)
fin_loss += loss.item()
tk.set_postfix({'loss' : '%.6f' %float(fin_loss/(t+1))})
return fin_loss / len(data_loader)
def read_dataset():
df = pd.read_csv(CFG.TRAIN_CSV)
df['matches'] = df.label_group.map(df.groupby('label_group').posting_id.agg('unique').to_dict())
df['matches'] = df['matches'].apply(lambda x: ' '.join(x))
gkf = GroupKFold(n_splits=CFG.N_SPLITS)
df['fold'] = -1
for i, (train_idx, valid_idx) in enumerate(gkf.split(X=df, groups=df['label_group'])):
df.loc[valid_idx, 'fold'] = i
labelencoder= LabelEncoder()
df['label_group'] = labelencoder.fit_transform(df['label_group'])
train_df = df[df['fold']!=CFG.TEST_FOLD].reset_index(drop=True)
train_df = train_df[train_df['fold']!=CFG.VALID_FOLD].reset_index(drop=True)
valid_df = df[df['fold']==CFG.VALID_FOLD].reset_index(drop=True)
test_df = df[df['fold']==CFG.TEST_FOLD].reset_index(drop=True)
train_df['label_group'] = labelencoder.fit_transform(train_df['label_group'])
return train_df, valid_df, test_df
def precision_score(y_true, y_pred):
y_true = y_true.apply(lambda x: set(x.split()))
y_pred = y_pred.apply(lambda x: set(x.split()))
intersection = np.array([len(x[0] & x[1]) for x in zip(y_true, y_pred)])
len_y_pred = y_pred.apply(lambda x: len(x)).values
precision = intersection / len_y_pred
return precision
def recall_score(y_true, y_pred):
y_true = y_true.apply(lambda x: set(x.split()))
y_pred = y_pred.apply(lambda x: set(x.split()))
intersection = np.array([len(x[0] & x[1]) for x in zip(y_true, y_pred)])
len_y_true = y_true.apply(lambda x: len(x)).values
recall = intersection / len_y_true
return recall
def f1_score(y_true, y_pred):
y_true = y_true.apply(lambda x: set(x.split()))
y_pred = y_pred.apply(lambda x: set(x.split()))
intersection = np.array([len(x[0] & x[1]) for x in zip(y_true, y_pred)])
len_y_pred = y_pred.apply(lambda x: len(x)).values
len_y_true = y_true.apply(lambda x: len(x)).values
f1 = 2 * intersection / (len_y_pred + len_y_true)
return f1
def get_valid_embeddings(df, model):
model.eval()
image_dataset = ShopeeImageDataset(df,transform=get_valid_transforms())
image_loader = torch.utils.data.DataLoader(
image_dataset,
batch_size=CFG.BATCH_SIZE,
pin_memory=True,
num_workers = CFG.NUM_WORKERS,
drop_last=False
)
embeds = []
with torch.no_grad():
for img,label in tqdm(image_loader):
img = img.to(CFG.DEVICE)
label = label.to(CFG.DEVICE)
feat,_ = model(img,label)
image_embeddings = feat.detach().cpu().numpy()
embeds.append(image_embeddings)
del model
image_embeddings = np.concatenate(embeds)
print(f'Our image embeddings shape is {image_embeddings.shape}')
del embeds
gc.collect()
return image_embeddings
def get_valid_neighbors(df, embeddings, KNN = 50, threshold = 0.36):
model = NearestNeighbors(n_neighbors = KNN, metric = 'cosine')
model.fit(embeddings)
distances, indices = model.kneighbors(embeddings)
predictions = []
for k in range(embeddings.shape[0]):
idx = np.where(distances[k,] < threshold)[0]
ids = indices[k,idx]
posting_ids = ' '.join(df['posting_id'].iloc[ids].values)
predictions.append(posting_ids)
df['pred_matches'] = predictions
df['f1'] = f1_score(df['matches'], df['pred_matches'])
df['recall'] = recall_score(df['matches'], df['pred_matches'])
df['precision'] = precision_score(df['matches'], df['pred_matches'])
del model, distances, indices
gc.collect()
return df, predictions
```
# Training
```
def run_training():
train_df, valid_df, test_df = read_dataset()
train_dataset = ShopeeDataset(train_df, transform = get_train_transforms())
train_dataloader = torch.utils.data.DataLoader(
train_dataset,
batch_size = CFG.BATCH_SIZE,
pin_memory = True,
num_workers = CFG.NUM_WORKERS,
shuffle = True,
drop_last = True
)
print(train_df['label_group'].nunique())
model = ShopeeModel()
model = replace_activations(model, torch.nn.SiLU, Mish())
model.to(CFG.DEVICE)
optimizer = Ranger(model.parameters(), lr = CFG.SCHEDULER_PARAMS['lr_start'])
#optimizer = torch.optim.Adam(model.parameters(), lr = config.SCHEDULER_PARAMS['lr_start'])
scheduler = ShopeeScheduler(optimizer,**CFG.SCHEDULER_PARAMS)
best_valid_f1 = 0.
for i in range(CFG.EPOCHS):
avg_loss_train = train_fn(model, train_dataloader, optimizer, scheduler, i)
valid_embeddings = get_valid_embeddings(valid_df, model)
valid_df, valid_predictions = get_valid_neighbors(valid_df, valid_embeddings)
valid_f1 = valid_df.f1.mean()
valid_recall = valid_df.recall.mean()
valid_precision = valid_df.precision.mean()
print(f'Valid f1 score = {valid_f1}, recall = {valid_recall}, precision = {valid_precision}')
if valid_f1 > best_valid_f1:
best_valid_f1 = valid_f1
print('Valid f1 score improved, model saved')
torch.save(model.state_dict(),CFG.MODEL_PATH)
run_training()
```
## Best threshold Search
```
train_df, valid_df, test_df = read_dataset()
print("Searching best threshold...")
search_space = np.arange(10, 50, 1)
model = ShopeeModel()
model.eval()
model = replace_activations(model, torch.nn.SiLU, Mish())
model.load_state_dict(torch.load(CFG.MODEL_PATH))
model = model.to(CFG.DEVICE)
valid_embeddings = get_valid_embeddings(valid_df, model)
best_f1_valid = 0.
best_threshold = 0.
for i in search_space:
threshold = i / 100
valid_df, valid_predictions = get_valid_neighbors(valid_df, valid_embeddings, threshold=threshold)
valid_f1 = valid_df.f1.mean()
valid_recall = valid_df.recall.mean()
valid_precision = valid_df.precision.mean()
print(f"threshold = {threshold} -> f1 score = {valid_f1}, recall = {valid_recall}, precision = {valid_precision}")
if (valid_f1 > best_f1_valid):
best_f1_valid = valid_f1
best_threshold = threshold
print("Best threshold =", best_threshold)
print("Best f1 score =", best_f1_valid)
BEST_THRESHOLD = best_threshold
print("Searching best knn...")
search_space = np.arange(40, 80, 2)
best_f1_valid = 0.
best_knn = 0
for knn in search_space:
valid_df, valid_predictions = get_valid_neighbors(valid_df, valid_embeddings, KNN=knn, threshold=BEST_THRESHOLD)
valid_f1 = valid_df.f1.mean()
valid_recall = valid_df.recall.mean()
valid_precision = valid_df.precision.mean()
print(f"knn = {knn} -> f1 score = {valid_f1}, recall = {valid_recall}, precision = {valid_precision}")
if (valid_f1 > best_f1_valid):
best_f1_valid = valid_f1
best_knn = knn
print("Best knn =", best_knn)
print("Best f1 score =", best_f1_valid)
BEST_KNN = best_knn
test_embeddings = get_valid_embeddings(test_df,model)
test_df, test_predictions = get_valid_neighbors(test_df, test_embeddings, KNN = BEST_KNN, threshold = BEST_THRESHOLD)
test_f1 = test_df.f1.mean()
test_recall = test_df.recall.mean()
test_precision = test_df.precision.mean()
print(f'Test f1 score = {test_f1}, recall = {test_recall}, precision = {test_precision}')
```
```
# While in argo environment: Import necessary packages for this notebook
import numpy as np
from matplotlib import pyplot as plt
import xarray as xr
import pandas as pd
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
%matplotlib inline
import glob
```
`!python -m pip install "dask[complete]"`
```
float_id = '9094' # '9094' '9099' '7652' '9125'
rootdir = '../data/raw/LowRes/'
fd = xr.open_mfdataset(rootdir + float_id + 'SOOCNQC.nc')
JULD = pd.to_datetime(fd.JULD.values)
```
#reads float data
file_folder = "../data/raw/WGfloats/"
#file_folder = "../../data/raw/LowRes"
float_number = "5904468" #7900918 #9094
files = sorted(glob.glob(file_folder+"/*"+float_number+"*.nc"))
print(files)
#files = sorted(glob.glob(file_folder+"/*.nc"))
fd = xr.open_mfdataset(file_folder+"/*"+float_number+"*.nc")
JULD = pd.to_datetime(fd.JULD.values)
```
fd
```
#help(xr.open_mfdataset)
rootdir + float_id + 'SOOCNQC.nc'
#Data/LowRes/9099SOOCNQC.nc
#fd
```
# HELPER FUNCTIONS
#define a function that smooths using a boxcar filter (running mean)
# not sure this function is actually used in the notebook??
def smooth(y, box_pts):
box = np.ones(box_pts)/box_pts
y_smooth = np.convolve(y, box, mode='same')
return y_smooth
#interpolate the data onto the standard depth grid given by x_int
def interpolate(x_int, xvals, yvals):
yvals_int = []
for n in range(0, len(yvals)): # len(yvals) = profile number
yvals_int.append(np.interp(x_int, xvals[n, :], yvals[n, :]))
#convert the interpolated data from a list to numpy array
return np.asarray(yvals_int)
# calculate the vertically integrated data column inventory using the composite trapezoidal rule
def integrate(zi, data, depth_range):
n_profs = len(data)
zi_start = abs(zi - depth_range[0]).argmin() # find location of start depth
zi_end = abs(zi - depth_range[1]).argmin() # find location of end depth
zi_struct = np.ones((n_profs, 1)) * zi[zi_start : zi_end] # add +1 to get the 200m value
data = data[:, zi_start : zi_end] # add +1 to get the 200m value
col_inv = []
for n in range(0, len(data)):
col_inv.append(np.trapz(data[n,:][~np.isnan(data[n,:])], zi_struct[n,:][~np.isnan(data[n,:])]))
return col_inv
#fd #(float data)
#fd.Pressure.isel(N_PROF=0).values
# Interpolate nitrate and poc
zi = np.arange(0, 1600, 5) # 5 = 320 depth intervals between 0m to 1595m
nitr_int = interpolate(zi, fd.Pressure[:, ::-1], fd.Nitrate[:, ::-1]) # interpolate nitrate values across zi depth intervals for all 188 profiles
# Integrate nitrate and poc - total nitrate in upper 200m
upperlim=25
lowerlim=200
nitr = np.array(integrate(zi, nitr_int, [upperlim, lowerlim])) # integrate interpolated nitrate values between 25m-200m
print(nitr)
#nitr.shape
# Find winter maximum and summer minimum upper ocean nitrate levels
def find_extrema(data, date_range, find_func):
# Find indices of float profiles in the date range
date_mask = (JULD > date_range[0]) & (JULD < date_range[1])
# Get the index where the data is closest to the find_func
index = np.where(data[date_mask] == find_func(data[date_mask]))[0][0]
# Get the average data for the month of the extrema
month_start = JULD[date_mask][index].replace(day = 1) # .replace just changes the day of max/min to 1
month_dates = (JULD > month_start) & (JULD < month_start + pd.Timedelta(days = 30))
# ^ not sure why this is needed? or what it does? - it is not used later on...
#month_avg = np.mean(data[date_mask]) #average whole winter or summer values
# ^ but it should be just the month of max/min nitrate,
# not the average for the whole season?...
month_mask = (JULD.month[date_mask] == month_start.month)
month_avg = np.mean(data[date_mask][month_mask])
return month_avg, JULD[date_mask][index], data[date_mask][index]
years = [2015, 2016, 2017, 2018]
nitr_extrema = []
nitr_ancp = []
for y in years:
winter_range = [pd.datetime(y, 8, 1), pd.datetime(y, 12, 1)] #4 months
summer_range = [pd.datetime(y, 12, 1), pd.datetime(y + 1, 4, 1)] #4 months
# Find maximum winter and minimum summer nitrate
avg_max_nitr, max_nitr_date, max_nitr = find_extrema(nitr, winter_range, np.max)
avg_min_nitr, min_nitr_date, min_nitr = find_extrema(nitr, summer_range, np.min)
# Convert to annual nitrate drawdown
redfield_ratio = 106.0/16.0 #106C:16NO3-
# Nitrate units: umol/kg --> divide by 1000 to convert to mol/kg
nitr_drawdown = (avg_max_nitr - avg_min_nitr)/1000.0 * redfield_ratio
nitr_ancp.append(nitr_drawdown)
nitr_extrema.append(((max_nitr, max_nitr_date), (min_nitr, min_nitr_date)))
print(y, max_nitr_date, max_nitr, avg_max_nitr)
print(y, min_nitr_date, min_nitr, avg_min_nitr)
# plot ANCP for chosen float over specified time period
fig, ax = plt.subplots(figsize = (10, 5))
ax.plot(years, nitr_ancp)
ax.set_ylabel('ANCP [mol/m$^2$]', size = 12)
ax.set_xticks(years)
ax.set_xticklabels(['2015', '2016', '2017', '2018'])
ax.set_title('ANCP for Float ' + float_id)
# Find winter maximum and summer minimum upper ocean nitrate levels
def find_extrema(data, date_range, find_func):
# Find indices of float profiles in the date range
date_mask = (JULD > date_range[0]) & (JULD < date_range[1])
# Get the index where the data is closest to the find_func
index = np.where(data[date_mask] == find_func(data[date_mask]))[0][0]
# Get the average data for the month of the extrema
month_start = JULD[date_mask][index].replace(day = 1) # .replace just changes the day of max/min to 1
month_dates = (JULD > month_start) & (JULD < month_start + pd.Timedelta(days = 30))
# ^ not sure why this is needed? or what it does? - it is not used later on...
month_avg = np.mean(data[date_mask]) #average whole winter or summer values
# ^ but it should be just the month of max/min nitrate,
# not the average for the whole season?...
# month_mask = (JULD.month[date_mask] == month_start.month)
# month_avg = np.mean(data[date_mask][month_mask])
return month_avg, JULD[date_mask][index], data[date_mask][index]
years = [2015, 2016, 2017, 2018]
nitr_extrema = []
nitr_ancp = []
for y in years:
winter_range = [pd.datetime(y, 8, 1), pd.datetime(y, 12, 1)]
summer_range = [pd.datetime(y, 12, 1), pd.datetime(y + 1, 4, 1)]
# Find maximum winter and minimum summer nitrate
avg_max_nitr, max_nitr_date, max_nitr = find_extrema(nitr, winter_range, np.max)
avg_min_nitr, min_nitr_date, min_nitr = find_extrema(nitr, summer_range, np.min)
# Convert to annual nitrate drawdown
redfield_ratio = 106.0/16.0 #106C:16NO3-
# Nitrate units: umol/kg --> divide by 1000 to convert to mol/kg
nitr_drawdown = (avg_max_nitr - avg_min_nitr)/1000.0 * redfield_ratio
nitr_ancp.append(nitr_drawdown)
nitr_extrema.append(((max_nitr, max_nitr_date), (min_nitr, min_nitr_date)))
print(y, max_nitr_date, max_nitr, avg_max_nitr)
print(y, min_nitr_date, min_nitr, avg_min_nitr)
# plot ANCP for chosen float over specified time period
fig, ax = plt.subplots(figsize = (10, 5))
ax.plot(years, nitr_ancp)
ax.set_ylabel('ANCP [mol/m$^2$]', size = 12)
ax.set_xticks(years)
ax.set_xticklabels(['2015', '2016', '2017', '2018'])
ax.set_title('ANCP for Float ' + float_id)
# Plot values of integrated nitrate (mol/m2)
fig, ax = plt.subplots(figsize = (20, 5))
# Integrate nitrate and poc between given depth range
zi_range = [25, 200]
nitr_v = np.array(integrate(zi, nitr_int, zi_range))/1000.0
# Function to mark the maximum/minimum values of the data for summer and winter
def add_extrema(ax, ydata, extrema):
for i in range(len(years)):
y = years[i]
winter_range = [pd.datetime(y, 8, 1), pd.datetime(y, 12, 1)]
summer_range = [pd.datetime(y, 12, 1), pd.datetime(y + 1, 4, 1)]
plt.axvspan(winter_range[0], winter_range[1], color='grey', alpha=0.1)
plt.axvspan(summer_range[0], summer_range[1], color='y', alpha=0.1)
(nmax, dmax), (nmin, dmin) = extrema[i]
nitr_vmax = ydata[JULD == dmax]
nitr_vmin = ydata[JULD == dmin]
ax.plot([dmax], nitr_vmax, color = 'g', marker='o', markersize=8)
ax.plot([dmin], nitr_vmin, color = 'r', marker='o', markersize=8)
return ax
#ax = plt.subplot(2, 1, 1)
ax.plot(JULD, nitr_v)
add_extrema(ax, nitr_v, nitr_extrema)
ax.set_ylabel('Nitrate [mol/$m^2$]')
ax.set_title('Integrated Nitrate (' + str(zi_range[0]) + '-' + str(zi_range[1]) + 'm)')
ax.set_ylim([4.5, ax.get_ylim()[1]])
```
```
#importing the packages
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import joblib # for saving algorithm and preprocessing objects
from sklearn.linear_model import LinearRegression
# loading the dataset
df = pd.read_csv('pollution_us_2000_2016.csv')
df.head()
df.columns
# dropping all the unnecessary features
df.drop(['Unnamed: 0','State Code', 'County Code', 'Site Num', 'Address', 'County', 'City',
'NO2 Units', 'O3 Units' ,'SO2 Units', 'CO Units',
'NO2 1st Max Hour', 'O3 1st Max Hour', 'SO2 1st Max Hour', 'CO 1st Max Hour'], axis=1, inplace=True)
df.shape
df.describe()
#IQR range
Q1 = df.quantile(0.25)
Q3 = df.quantile(0.75)
IQR = Q3 - Q1
print(IQR)
#removing Outliers
df = df[~((df < (Q1 - 1.5 * IQR)) |(df > (Q3 + 1.5 * IQR))).any(axis=1)]
df.shape
#encoding dates
df.insert(loc=1, column='Year', value=df['Date Local'].apply(lambda year: year.split('-')[0]))
df.drop('Date Local', axis=1, inplace=True)
df['Year']=df['Year'].astype('int')
#filling the FIRST Nan values with the means by the state
for i in df.columns[2:]:
df[i] = df[i].fillna(df.groupby('State')[i].transform('mean'))
df[df["State"]=='Missouri']['NO2 AQI'].plot(kind='density', subplots=True, layout=(1, 2),
sharex=False, figsize=(10, 4));
plt.scatter(df[df['State']=='Missouri']['Year'], df[df['State']=='Missouri']['NO2 AQI']);
# grouped dataset
dfG = df.groupby(['State', 'Year']).mean().reset_index()
dfG.shape
dfG.describe()
plt.scatter(dfG[dfG['State']=='Missouri']['Year'], dfG[dfG['State']=='Missouri']['NO2 AQI']);
#function for inserting a row
def Insert_row_(row_number, df, row_value):
# Slice the upper half of the dataframe
df1 = df[0:row_number]
# Store the result of lower half of the dataframe
df2 = df[row_number:]
# Insert the row in the upper half dataframe
df1.loc[row_number]=row_value
# Concat the two dataframes
df_result = pd.concat([df1, df2])
# Reassign the index labels
df_result.index = [*range(df_result.shape[0])]
# Return the updated dataframe
return df_result
#all the years
year_list = df['Year'].unique()
print(year_list)
#all the states
state_list = df['State'].unique()
print(state_list)
# add more years with NaN values
for state in state_list:
year_diff = set(year_list).difference(list(dfG[dfG['State']==state]['Year']))
for i in year_diff:
row_value = [state, i, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan,np.nan,np.nan,np.nan,np.nan,np.nan]
dfG = Insert_row_(random.randint(1,494), dfG, row_value)
# fill Nan values with means by the state
for i in dfG.columns[2:]:
dfG[i] = dfG[i].fillna(dfG.groupby('State')[i].transform('mean'))
total_AQI = dfG['NO2 AQI'] + dfG['SO2 AQI'] + \
dfG['CO AQI'] + dfG['O3 AQI']
dfG.insert(loc=len(dfG.columns), column='Total_AQI', value=total_AQI)
dfG.head()
plt.scatter(dfG[dfG['State']=='Missouri']['Year'], dfG[dfG['State']=='Missouri']['Total_AQI']);
dfG[dfG["State"]=='Missouri']['Total_AQI'].plot(kind='density', subplots=True, layout=(1, 2),
sharex=False, figsize=(10, 4));
joblib.dump(dfG, "./processed_data.joblib", compress=True)
testing_Data = joblib.load("./processed_data.joblib")
states = list(testing_Data['State'].unique())
print(states)
from sklearn.linear_model import LinearRegression
def state_data(state, data, df):
t = df[df['State']==state].sort_values(by='Year')
clf = LinearRegression()
clf.fit(t[['Year']], t[data])
years = np.arange(2017, 2020, 1)
tt = pd.DataFrame({'Year': years, data: clf.predict(years.reshape(-1, 1))})
pd.concat([t, tt], sort=False).set_index('Year')[data].plot(color='red')
t.set_index('Year')[data].plot(figsize=(15, 5), xticks=(np.arange(2000, 2020, 1)))
return print(clf.predict(years.reshape(-1, 1)))
state_data('Missouri', 'NO2 AQI', testing_Data)
```
# Modeling and Simulation in Python
Chapter 5: Design
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# If you want the figures to appear in the notebook,
# and you want to interact with them, use
# %matplotlib notebook
# If you want the figures to appear in the notebook,
# and you don't want to interact with them, use
# %matplotlib inline
# If you want the figures to appear in separate windows, use
# %matplotlib qt5
# To switch from one to another, you have to select Kernel->Restart
%matplotlib inline
from modsim import *
```
### SIR implementation
We'll use a `State` object to represent the number or fraction of people in each compartment.
```
init = State(S=89, I=1, R=0)
init
```
To convert from number of people to fractions, we divide through by the total.
```
init /= sum(init)
init
```
`make_system` creates a `System` object with the given parameters.
```
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
```
Here's an example with hypothetical values for `beta` and `gamma`.
```
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
```
The update function takes the state during the current time step and returns the state during the next time step.
```
def update1(state, system):
"""Update the SIR model.
state: State with variables S, I, R
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
```
To run a single time step, we call it like this:
```
state = update1(init, system)
state
```
Now we can run a simulation by calling the update function for each time step.
```
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end-1):
state = update_func(state, system)
return state
```
The result is the state of the system at `t_end`
```
run_simulation(system, update1)
```
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected?
Hint: what is the change in `S` between the beginning and the end of the simulation?
```
# Solution goes here
```
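Below is a minimal sketch of one possible approach (not the official solution), assuming the `make_system`, `update1`, and `run_simulation` functions defined above and a class of 90 students in total.
```
# Sketch: tc = 4 days, tr = 5 days, run for the default 14 weeks
tc = 4      # time between contacts in days
tr = 5      # recovery time in days
system = make_system(1/tc, 1/tr)
final_state = run_simulation(system, update1)

# total fraction infected = drop in S over the simulation
frac_infected = system.init.S - final_state.S
print(frac_infected, frac_infected * 90)   # fraction, and approximate number of students
```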
### Using Series objects
If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
```
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add three Series objects to the System: S, I, R
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end-1):
state = update_func(state, system)
S[t+1], I[t+1], R[t+1] = state
system.S = S
system.I = I
system.R = R
```
Here's how we call it.
```
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
run_simulation(system, update1)
```
And then we can plot the results.
```
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', color='blue', label='Susceptible')
plot(I, '-', color='red', label='Infected')
plot(R, ':', color='green', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
```
Here's what they look like.
```
plot_results(system.S, system.I, system.R)
savefig('chap05-fig01.pdf')
```
### Using a DataFrame
Instead of making three `TimeSeries` objects, we can use one `DataFrame`.
We have to use `loc` to indicate which row we want to assign the results to. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
```
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add a DataFrame to the System: results
system: System object
update_func: function that updates state
"""
frame = DataFrame(columns=system.init.index)
frame.loc[system.t0] = system.init
for t in linrange(system.t0, system.t_end-1):
frame.loc[t+1] = update_func(frame.loc[t], system)
system.results = frame
```
Here's how we run it, and what the result looks like.
```
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
run_simulation(system, update1)
system.results.head()
```
We can extract the results and plot them.
```
frame = system.results
plot_results(frame.S, frame.I, frame.R)
```
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 days and plot the results.
```
# Solution goes here
```
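A possible sketch for this exercise, assuming the `DataFrame`-based `run_simulation` defined above; the 14-day run is obtained here by overriding `t_end`.
```
# Sketch: tc = 4 days, tr = 5 days, simulate 14 days and plot
system = make_system(1/4, 1/5)
system.t_end = 14                 # override the default of 14 weeks
run_simulation(system, update1)
frame = system.results
plot_results(frame.S, frame.I, frame.R)
```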
### Metrics
Given the results, we can compute metrics that quantify whatever we are interested in, like the total number of sick students, for example.
```
def calc_total_infected(system):
"""Fraction of population infected during the simulation.
system: System object with results.
returns: fraction of population
"""
frame = system.results
return frame.S[system.t0] - frame.S[system.t_end]
```
Here's an example.
```
system.beta = 0.333
system.gamma = 0.25
run_simulation(system, update1)
print(system.beta, system.gamma, calc_total_infected(system))
```
**Exercise:** Write functions that take a `System` object as a parameter, extract the `results` object from it, and compute the other metrics mentioned in the book:
1. The fraction of students who are sick at the peak of the outbreak.
2. The day the outbreak peaks.
3. The fraction of students who are sick at the end of the semester.
Hint: If you have a `TimeSeries` called `I`, you can compute the largest value of the series like this:
I.max()
And the index of the largest value like this:
I.idxmax()
You can read about these functions in the `Series` [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
```
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
```
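For reference, here is one possible sketch of the three metrics (not the official solutions), assuming the simulation results are stored in `system.results`.
```
def peak_infected_fraction(system):
    """Fraction of students sick at the peak of the outbreak."""
    return system.results.I.max()

def peak_day(system):
    """Day on which the outbreak peaks."""
    return system.results.I.idxmax()

def infected_at_end(system):
    """Fraction of students sick at the end of the semester."""
    return system.results.I[system.t_end]

peak_infected_fraction(system), peak_day(system), infected_at_end(system)
```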
### What if?
We can use this model to evaluate "what if" scenarios. For example, this function models the effect of immunization by moving some fraction of the population from S to R before the simulation starts.
```
def add_immunization(system, fraction):
"""Immunize a fraction of the population.
Moves the given fraction from S to R.
system: System object
fraction: number from 0 to 1
"""
system.init.S -= fraction
system.init.R += fraction
```
Let's start again with the system we used in the previous sections.
```
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
system.beta, system.gamma
```
And run the model without immunization.
```
run_simulation(system, update1)
calc_total_infected(system)
```
Now with 10% immunization.
```
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
run_simulation(system2, update1)
calc_total_infected(system2)
```
10% immunization leads to a drop in infections of 16 percentage points.
Here's what the time series looks like for S, with and without immunization.
```
plot(system.results.S, '-', label='No immunization')
plot(system2.results.S, 'g--', label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction susceptible')
savefig('chap05-fig02.pdf')
```
Now we can sweep through a range of values for the fraction of the population who are immunized.
```
immunize_array = linspace(0, 1, 11)
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
run_simulation(system, update1)
print(fraction, calc_total_infected(system))
```
This function does the same thing and stores the results in a `Sweep` object.
```
def sweep_immunity(immunize_array):
"""Sweeps a range of values for immunity.
immunize_array: array of fraction immunized
returns: Sweep object
"""
sweep = SweepSeries()
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
run_simulation(system, update1)
sweep[fraction] = calc_total_infected(system)
return sweep
```
Here's how we run it.
```
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
```
And here's what the results look like.
```
plot(infected_sweep)
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate',
legend=False)
savefig('chap05-fig03.pdf')
```
If 40% of the population is immunized, less than 4% of the population gets sick.
### Logistic function
To model the effect of a hand-washing campaign, I'll use a [generalized logistic function](https://en.wikipedia.org/wiki/Generalised_logistic_function), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.
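For reference, the generalized logistic function as implemented in the next cell can be written as

$$ f(x) = A + \frac{K - A}{\left(C + Q\,e^{-B\,(x - M)}\right)^{1/\nu}} $$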
```
def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
"""Computes the generalize logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
"""
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
```
The following array represents the range of possible spending.
```
spending = linspace(0, 1200, 21)
spending
```
`compute_factor` computes the reduction in `beta` for a given level of campaign spending.
`M` is chosen so the transition happens around \$500.
`K` is the maximum reduction in `beta`, 20%.
`B` is chosen by trial and error to yield a curve that seems feasible.
```
def compute_factor(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=500, K=0.2, B=0.01)
```
Here's what it looks like.
```
percent_reduction = compute_factor(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
savefig('chap05-fig04.pdf')
```
**Exercise:** Modify the parameters `M`, `K`, and `B`, and see what effect they have on the shape of the curve. Read about the [generalized logistic function on Wikipedia](https://en.wikipedia.org/wiki/Generalised_logistic_function). Modify the other parameters and see what effect they have.
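One quick way to explore the parameters is to plot the curve for a few values at a time; a sketch (with arbitrary example values of `B`) might look like this:
```
# Sketch: compare a few values of B while holding M and K fixed
for B in [0.005, 0.01, 0.02]:
    factor = logistic(spending, M=500, K=0.2, B=B)
    plot(spending, factor * 100, label='B = %g' % B)
decorate(xlabel='Hand-washing campaign spending (USD)',
         ylabel='Percent reduction in infection rate')
```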
### Hand washing
Now we can model the effect of a hand-washing campaign by modifying `beta`
```
def add_hand_washing(system, spending):
"""Modifies system to model the effect of hand washing.
system: System object
spending: campaign spending in USD
"""
factor = compute_factor(spending)
system.beta *= (1 - factor)
```
Let's start with the same values of `beta` and `gamma` we've been using.
```
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
beta, gamma
```
Now we can sweep different levels of campaign spending.
```
spending_array = linspace(0, 1200, 13)
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
run_simulation(system, update1)
print(spending, system.beta, calc_total_infected(system))
```
Here's a function that sweeps a range of spending and stores the results in a `Sweep` object.
```
def sweep_hand_washing(spending_array):
"""Run simulations with a range of spending.
spending_array: array of dollars from 0 to 1200
returns: Sweep object
"""
sweep = SweepSeries()
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
run_simulation(system, update1)
sweep[spending] = calc_total_infected(system)
return sweep
```
Here's how we run it.
```
spending_array = linspace(0, 1200, 20)
infected_sweep = sweep_hand_washing(spending_array)
```
And here's what it looks like.
```
plot(infected_sweep)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Total fraction infected',
title='Effect of hand washing on total infections',
legend=False)
savefig('chap05-fig05.pdf')
```
Now let's put it all together to make some public health spending decisions.
### Optimization
Suppose we have \$1200 to spend on any combination of vaccines and a hand-washing campaign.
```
num_students = 90
budget = 1200
price_per_dose = 100
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses)
max_doses
```
We can sweep through a range of doses from 0 to `max_doses`, model the effects of immunization and the hand-washing campaign, and run simulations.
For each scenario, we compute the fraction of students who get sick.
```
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
run_simulation(system, update1)
print(doses, system.init.S, system.beta, calc_total_infected(system))
```
The following function wraps that loop and stores the results in a `Sweep` object.
```
def sweep_doses(dose_array):
"""Runs simulations with different doses and campaign spending.
dose_array: range of values for number of vaccinations
return: Sweep object with total number of infections
"""
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
run_simulation(system, update1)
sweep[doses] = calc_total_infected(system)
return sweep
```
Now we can compute the number of infected students for each possible allocation of the budget.
```
infected_sweep = sweep_doses(dose_array)
```
And plot the results.
```
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
savefig('chap05-fig06.pdf')
```
**Exercise:** Suppose the price of the vaccine drops to \$50 per dose. How does that affect the optimal allocation of the spending?
**Exercise:** Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious.
How might you incorporate the effect of quarantine in the SIR model?
```
# Solution goes here
```
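As a sketch for the first exercise (not the official solution), we can simply repeat the dose sweep with the lower price. For the quarantine question, one simple option is to increase `gamma`, since quarantine shortens the time an infected student can infect others.
```
# Sketch: repeat the budget sweep with a $50 dose price
price_per_dose = 50
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses)

infected_sweep = sweep_doses(dose_array)
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
         ylabel='Total fraction infected',
         title='Total infections vs. doses',
         legend=False)
```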
## Dependencies
```
import os
import cv2
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
from keras import backend as K
from keras.models import Model
from keras.utils import to_categorical
from keras import optimizers, applications
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback, LearningRateScheduler
from keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input
# Set seeds to make the experiment more reproducible.
from tensorflow import set_random_seed
def seed_everything(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
set_random_seed(0)
seed = 0
seed_everything(seed)
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
```
## Load data
```
hold_out_set = pd.read_csv('../input/aptos-data-split/hold-out.csv')
X_train = hold_out_set[hold_out_set['set'] == 'train']
X_val = hold_out_set[hold_out_set['set'] == 'validation']
test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv')
print('Number of train samples: ', X_train.shape[0])
print('Number of validation samples: ', X_val.shape[0])
print('Number of test samples: ', test.shape[0])
# Preprocess data
X_train["id_code"] = X_train["id_code"].apply(lambda x: x + ".png")
X_val["id_code"] = X_val["id_code"].apply(lambda x: x + ".png")
test["id_code"] = test["id_code"].apply(lambda x: x + ".png")
X_train['diagnosis'] = X_train['diagnosis']
X_val['diagnosis'] = X_val['diagnosis']
display(X_train.head())
```
# Model parameters
```
# Model parameters
BATCH_SIZE = 8
EPOCHS = 40
WARMUP_EPOCHS = 5
LEARNING_RATE = 1e-4
WARMUP_LEARNING_RATE = 1e-3
HEIGHT = 224
WIDTH = 224
CHANNELS = 3
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
```
# Pre-procecess images
```
train_base_path = '../input/aptos2019-blindness-detection/train_images/'
test_base_path = '../input/aptos2019-blindness-detection/test_images/'
train_dest_path = 'base_dir/train_images/'
validation_dest_path = 'base_dir/validation_images/'
test_dest_path = 'base_dir/test_images/'
# Making sure directories don't exist
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
# Creating train, validation and test directories
os.makedirs(train_dest_path)
os.makedirs(validation_dest_path)
os.makedirs(test_dest_path)
def crop_image(img, tol=7):
if img.ndim ==2:
mask = img>tol
return img[np.ix_(mask.any(1),mask.any(0))]
elif img.ndim==3:
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
mask = gray_img>tol
check_shape = img[:,:,0][np.ix_(mask.any(1),mask.any(0))].shape[0]
if (check_shape == 0): # image is too dark so that we crop out everything,
return img # return original image
else:
img1=img[:,:,0][np.ix_(mask.any(1),mask.any(0))]
img2=img[:,:,1][np.ix_(mask.any(1),mask.any(0))]
img3=img[:,:,2][np.ix_(mask.any(1),mask.any(0))]
img = np.stack([img1,img2,img3],axis=-1)
return img
def circle_crop(img):
img = crop_image(img)
height, width, depth = img.shape
x = int(width/2)
y = int(height/2)
r = np.amin((x,y))
circle_img = np.zeros((height, width), np.uint8)
cv2.circle(circle_img, (x,y), int(r), 1, thickness=-1)
img = cv2.bitwise_and(img, img, mask=circle_img)
img = crop_image(img)
return img
def preprocess_image(base_path, save_path, image_id, HEIGHT, WIDTH, sigmaX=10):
image = cv2.imread(base_path + image_id)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = circle_crop(image)
image = cv2.resize(image, (HEIGHT, WIDTH))
image = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0,0), sigmaX), -4 , 128)
cv2.imwrite(save_path + image_id, image)
# Pre-process train set
for i, image_id in enumerate(X_train['id_code']):
preprocess_image(train_base_path, train_dest_path, image_id, HEIGHT, WIDTH)
# Pre-process validation set
for i, image_id in enumerate(X_val['id_code']):
preprocess_image(train_base_path, validation_dest_path, image_id, HEIGHT, WIDTH)
# Pre-process test set
for i, image_id in enumerate(test['id_code']):
preprocess_image(test_base_path, test_dest_path, image_id, HEIGHT, WIDTH)
```
# Data generator
```
datagen=ImageDataGenerator(rescale=1./255,
rotation_range=360,
horizontal_flip=True,
vertical_flip=True,
fill_mode='constant',
cval=0.)
train_generator=datagen.flow_from_dataframe(
dataframe=X_train,
directory=train_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
valid_generator=datagen.flow_from_dataframe(
dataframe=X_val,
directory=validation_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
test_generator=datagen.flow_from_dataframe(
dataframe=test,
directory=test_dest_path,
x_col="id_code",
batch_size=1,
class_mode=None,
shuffle=False,
target_size=(HEIGHT, WIDTH),
seed=seed)
```
# Model
```
def create_model(input_shape):
input_tensor = Input(shape=input_shape)
base_model = applications.ResNet50(weights=None,
include_top=False,
input_tensor=input_tensor)
base_model.load_weights('../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5')
x = GlobalAveragePooling2D()(base_model.output)
x = Dropout(0.5)(x)
x = Dense(2048, activation='relu')(x)
x = Dropout(0.5)(x)
final_output = Dense(1, activation='linear', name='final_output')(x)
model = Model(input_tensor, final_output)
return model
```
# Train top layers
```
model = create_model(input_shape=(HEIGHT, WIDTH, CHANNELS))
for layer in model.layers:
layer.trainable = False
for i in range(-5, 0):
model.layers[i].trainable = True
metric_list = ["accuracy"]
optimizer = optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
verbose=1).history
```
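The warm-up history is captured above but not plotted later (the loss graph further below only concatenates the two fine-tuning histories), so a small optional sketch to inspect the warm-up phase:
```
# Sketch: warm-up loss curves (history_warmup was captured above).
plt.rcParams["figure.figsize"] = (20, 5)
plt.plot(history_warmup['loss'], label='Warm-up train loss')
plt.plot(history_warmup['val_loss'], label='Warm-up validation loss')
plt.legend(loc='best')
plt.title('Warm-up phase')
plt.show()
```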
# Fine-tune the complete model (1st step)
```
# Unfreeze the last 15 layers of the model for this fine-tuning step
for i in range(-15, 0):
    model.layers[i].trainable = True
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
rlrop = ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1)
callback_list = [es, rlrop]
optimizer = optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
history_finetunning = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=int(EPOCHS*0.8),
callbacks=callback_list,
verbose=1).history
```
# Fine-tune the complete model (2nd step)
```
optimizer = optimizers.SGD(lr=LEARNING_RATE, momentum=0.9, nesterov=True)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
history_finetunning_2 = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=int(EPOCHS*0.2),
callbacks=callback_list,
verbose=1).history
```
# Model loss graph
```
history = {'loss': history_finetunning['loss'] + history_finetunning_2['loss'],
'val_loss': history_finetunning['val_loss'] + history_finetunning_2['val_loss'],
'acc': history_finetunning['acc'] + history_finetunning_2['acc'],
'val_acc': history_finetunning['val_acc'] + history_finetunning_2['val_acc']}
sns.set_style("whitegrid")
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 14))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
# Create an empty dataframe to keep the predictions and labels
df_preds = pd.DataFrame(columns=['label', 'pred', 'set'])
train_generator.reset()
valid_generator.reset()
# Add train predictions and labels
for i in range(STEP_SIZE_TRAIN + 1):
im, lbl = next(train_generator)
preds = model.predict(im, batch_size=train_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'train']
# Add validation predictions and labels
for i in range(STEP_SIZE_VALID + 1):
im, lbl = next(valid_generator)
preds = model.predict(im, batch_size=valid_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'validation']
df_preds['label'] = df_preds['label'].astype('int')
```
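Before searching for per-class cutoffs, it helps to see how the raw regression outputs spread within each true grade. A small optional sketch, assuming `df_preds` built above (its columns are created with object dtype, hence the explicit cast):
```
# Sketch: distribution of raw (continuous) predictions per true diagnosis grade.
df_plot = df_preds.copy()
df_plot['pred'] = df_plot['pred'].astype(float)  # columns were created as object dtype
plt.rcParams["figure.figsize"] = (20, 5)
sns.boxplot(x='label', y='pred', hue='set', data=df_plot)
plt.title('Raw regression output per true diagnosis grade')
plt.show()
```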
# Threshold optimization
```
def classify(x):
if x < 0.5:
return 0
elif x < 1.5:
return 1
elif x < 2.5:
return 2
elif x < 3.5:
return 3
return 4
def classify_opt(x):
if x <= (0 + best_thr_0):
return 0
elif x <= (1 + best_thr_1):
return 1
elif x <= (2 + best_thr_2):
return 2
elif x <= (3 + best_thr_3):
return 3
return 4
def find_best_threshold(df, label, label_col='label', pred_col='pred', do_plot=True):
    # Grid-search the cutoff inside the [label, label + 1) interval that maximizes
    # Cohen's kappa when predictions below that cutoff are snapped down to `label`.
    score = []
    thrs = np.arange(0, 1, 0.01)
    for thr in thrs:
        preds_thr = [label if (label <= pred < label + 1) and (pred < label + thr) else classify(pred)
                     for pred in df[pred_col]]
        score.append(cohen_kappa_score(df[label_col].astype('int'), preds_thr))
    score = np.array(score)
    pm = score.argmax()
    best_thr, best_score = thrs[pm], score[pm].item()
    print('Label %s: thr=%.2f, Kappa=%.3f' % (label, best_thr, best_score))
    if do_plot:
        plt.rcParams["figure.figsize"] = (20, 5)
        plt.plot(thrs, score)
        plt.vlines(x=best_thr, ymin=score.min(), ymax=score.max())
        plt.show()
    return best_thr
# Best threshold for label 3
best_thr_3 = find_best_threshold(df_preds, 3)
# Best threshold for label 2
best_thr_2 = find_best_threshold(df_preds, 2)
# Best threshold for label 1
best_thr_1 = find_best_threshold(df_preds, 1)
# Best threshold for label 0
best_thr_0 = find_best_threshold(df_preds, 0)
# Classify predictions
df_preds['predictions'] = df_preds['pred'].apply(lambda x: classify(x))
# Apply optimized thresholds to the predictions
df_preds['predictions_opt'] = df_preds['pred'].apply(lambda x: classify_opt(x))
train_preds = df_preds[df_preds['set'] == 'train']
validation_preds = df_preds[df_preds['set'] == 'validation']
```
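To make the difference between the two rounding schemes concrete, here is a tiny illustrative example. The `raw_pred` value is hypothetical, and the optimized result depends on the thresholds found above.
```
# Sketch: how the two rounding schemes treat the same raw prediction.
raw_pred = 2.43
print('plain rounding     ->', classify(raw_pred))      # 2, since 1.5 <= 2.43 < 2.5
print('optimized rounding ->', classify_opt(raw_pred))  # 2 if best_thr_2 >= 0.43, otherwise 3
```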
# Model Evaluation
## Confusion Matrix
### Original thresholds
```
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']
def plot_confusion_matrix(train, validation, labels=labels):
train_labels, train_preds = train
validation_labels, validation_preds = validation
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=sns.cubehelix_palette(8),ax=ax2).set_title('Validation')
plt.show()
plot_confusion_matrix((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
### Optimized thresholds
```
plot_confusion_matrix((train_preds['label'], train_preds['predictions_opt']), (validation_preds['label'], validation_preds['predictions_opt']))
```
## Quadratic Weighted Kappa
```
def evaluate_model(train, validation):
train_labels, train_preds = train
validation_labels, validation_preds = validation
print("Train Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds, train_labels, weights='quadratic'))
print("Validation Cohen Kappa score: %.3f" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic'))
print("Complete set Cohen Kappa score: %.3f" % cohen_kappa_score(np.append(train_preds, validation_preds), np.append(train_labels, validation_labels), weights='quadratic'))
print(" Original thresholds")
evaluate_model((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
print(" Optimized thresholds")
evaluate_model((train_preds['label'], train_preds['predictions_opt']), (validation_preds['label'], validation_preds['predictions_opt']))
```
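For reference, quadratic weighting penalizes predicting grade j when the true grade is i proportionally to (i - j)^2. Below is a minimal sketch of the metric, assuming the `confusion_matrix` import already used above; it should agree closely with `cohen_kappa_score(..., weights='quadratic')`.
```
def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    # Observed agreement matrix
    O = confusion_matrix(y_true, y_pred, labels=list(range(n_classes))).astype(float)
    # Quadratic penalty for predicting class j when the true class is i
    w = np.array([[(i - j) ** 2 for j in range(n_classes)] for i in range(n_classes)], dtype=float)
    w /= (n_classes - 1) ** 2
    # Expected agreement under independence of the two marginals
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (w * O).sum() / (w * E).sum()
# Should closely match the sklearn scores printed above, e.g.:
# quadratic_weighted_kappa(validation_preds['label'], validation_preds['predictions'])
```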
## Apply model to test set and output predictions
```
def apply_tta(model, generator, steps=10):
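    # Note: test_generator reuses the augmenting `datagen` (random rotations and flips),
    # so each pass over the test set sees a different augmentation; averaging the
    # passes below is what makes this test-time augmentation (TTA).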
step_size = generator.n//generator.batch_size
preds_tta = []
for i in range(steps):
generator.reset()
preds = model.predict_generator(generator, steps=step_size)
preds_tta.append(preds)
return np.mean(preds_tta, axis=0)
preds = apply_tta(model, test_generator)
predictions = [classify(x) for x in preds]
predictions_opt = [classify_opt(x) for x in preds]
# Drop the image file extension from id_code so the submission uses the bare image ids
results = pd.DataFrame({'id_code': test['id_code'], 'diagnosis': predictions})
results['id_code'] = results['id_code'].map(lambda x: str(x)[:-4])
results_opt = pd.DataFrame({'id_code': test['id_code'], 'diagnosis': predictions_opt})
results_opt['id_code'] = results_opt['id_code'].map(lambda x: str(x)[:-4])
# Cleaning created directories
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
```
# Predictions class distribution
```
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 8.7))
sns.countplot(x="diagnosis", data=results, palette="GnBu_d", ax=ax1).set_title('Test')
sns.countplot(x="diagnosis", data=results_opt, palette="GnBu_d", ax=ax2).set_title('Test optimized')
sns.despine()
plt.show()
val_kappa = cohen_kappa_score(validation_preds['label'], validation_preds['predictions'], weights='quadratic')
val_opt_kappa = cohen_kappa_score(validation_preds['label'], validation_preds['predictions_opt'], weights='quadratic')
results_name = 'submission.csv'
results_opt_name = 'submission_opt.csv'
# if val_kappa > val_opt_kappa:
# results_name = 'submission.csv'
# results_opt_name = 'submission_opt.csv'
# else:
# results_name = 'submission_norm.csv'
# results_opt_name = 'submission.csv'
results.to_csv(results_name, index=False)
display(results.head())
results_opt.to_csv(results_opt_name, index=False)
display(results_opt.head())
```