day-3/machine-learning-frameworks.ipynb
###Markdown
Some Machine Learning Frameworks Out There

[PyTorch](https://pytorch.org/)
PyTorch is one of the most popular machine learning frameworks in the world, offering a bit of the best of both worlds in terms of simplicity and flexibility. PyTorch is Facebook's ML library.
###Code
import torch
dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
print(dev)
###Output
_____no_output_____
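###Markdown
A minimal sketch of basic PyTorch usage with the `dev` selected above: create a tensor and move it to that device. The tensor values here are arbitrary.
###Code
x = torch.ones(3, 3)  # a 3x3 tensor of ones, created on the CPU
x = x.to(dev)         # move it to the GPU if one is available
print(x.device)
###Output
_____no_output_____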
###Markdown
[TensorFlow](https://www.tensorflow.org/)
TensorFlow is another of the most popular frameworks in the world. It is a little more complicated than PyTorch, but offers more flexibility. TensorFlow was developed by Google.
###Code
import tensorflow as tf
tf.__version__
###Output
_____no_output_____
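###Markdown
A minimal sketch of eager TensorFlow usage (the constant values are arbitrary): define tensors and multiply them.
###Code
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # a 2x2 constant tensor
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])  # the 2x2 identity
print(tf.matmul(a, b))                     # executes eagerly in TF 2.x
###Output
_____no_output_____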
###Markdown
[TensorFlow.js](https://www.tensorflow.org/js)
If you are into web dev, there are ML libraries like TensorFlow.js that you can use too!

[Keras](https://keras.io/)
Keras is another popular framework; rather than being stand-alone, it is written on top of TensorFlow. It is very easy to use, but does not offer as much flexibility. However, it makes "doing" ML much faster and easier.
###Code
from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=50))  # input shape of 50
model.add(Dense(28, activation='relu'))  # hidden layer with 28 units
model.add(Dense(10, activation='softmax'))
###Output
_____no_output_____
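###Markdown
To actually train this model, we would still need to compile it with a loss and an optimizer. A minimal sketch follows; the optimizer and loss shown are common choices for a softmax classifier, not requirements.
###Code
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',  # matches the softmax output layer
              metrics=['accuracy'])
model.summary()  # prints layer shapes and parameter counts
###Output
_____no_output_____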
###Markdown
[Scikit-learn](https://scikit-learn.org/stable/)
A good library to create models out of your data. It is good all-around, but lacks a little of the convenience that some other libraries provide.
###Code
import sklearn
sklearn.__version__
###Output
_____no_output_____
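###Markdown
A minimal sketch of scikit-learn's uniform fit/predict API on a built-in toy dataset; the k-nearest-neighbors estimator is an arbitrary choice.
###Code
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)         # 150 samples, 4 features
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X, y)                             # every sklearn estimator exposes fit()
print(clf.predict(X[:5]))                 # ...and predict()
###Output
_____no_output_____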
###Markdown
[NLTK](https://www.nltk.org/) (The Natural Language Toolkit)
If you are interested in learning more about natural language processing and computational linguistics, the NLTK library provides a beautiful API to work with natural language and is a good place to start exploring.
###Code
import nltk
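# nltk.download('punkt')  # uncomment on first use: word_tokenize relies on the 'punkt' tokenizer models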
sentence = """At eight o'clock on Thursday morning Arthur didn't feel very good."""
tokens = nltk.word_tokenize(sentence)
print(tokens)
###Output
_____no_output_____
workshop/lessons/02_intro_pymatgen/1 - pymatgen foundations.ipynb
###Markdown
Materials Project Workshop – August 10–12, 2021, Berkeley, California

What is pymatgen?
Pymatgen (Python Materials Genomics) is the code that powers all of the scientific analysis behind the Materials Project. It includes robust and efficient libraries for handling crystallographic structures and molecules, in addition to various mathematical and scientific tools for handling and generating materials data. For more details, see the [pymatgen website](https://pymatgen.org).

How do I install pymatgen?
For the workshop, pymatgen has been pre-installed for use in your Jupyter notebooks. Otherwise, pymatgen can be installed via pip (`pip install pymatgen`) or conda (`conda install --channel matsci pymatgen`). We recommend using Python 3.6 or above. Until 2018, pymatgen was developed simultaneously for Python 2.x and 3.x, but following the rest of the Python community we have phased out support for Python 2.x, and since version 2019.1.1 we are developing exclusively for Python 3.x.

Where can I find help and how do I get involved?
* **For general help:** [pymatgen discourse](https://pymatgen.discourse.group/) is a place to ask questions.
* **To report bugs:** The [Github Issues](https://github.com/materialsproject/pymatgen/issues) page is a good place to report bugs.
* **For Materials Project data and website discussions:** The Materials Project has its community [Materials Project Discussion](https://discuss.materialsproject.org) forum.
* **For more example notebooks:** [matgenb](http://matgenb.materialsvirtuallab.org) is a new resource of Jupyter notebooks demonstrating various pymatgen functionality.

If you want specific new features, you're welcome to ask! We try to respond to community needs. If you're a developer and can add the feature yourself, we actively encourage you to do so by creating a Pull Request on Github with your additional functionality. To date, pymatgen has seen over 19,000 commits and nearly 150 contributors, and we try to have an inclusive and welcoming development community. All contributors are also individually acknowledged on [materialsproject.org/about](https://materialsproject.org/about).

Verify we have pymatgen installed
First, let's verify we have pymatgen installed. The following command should produce no error or warning:
###Code
import pymatgen.core
###Output
_____no_output_____
###Markdown
We can show the specific version of pymatgen installed:
###Code
print(pymatgen.core.__version__)
###Output
_____no_output_____
###Markdown
For a list of new features, bug fixes and other changes, consult the [changelog on pymatgen.org](http://pymatgen.org/change_log.html). You can also see where pymatgen is installed on your computer:
###Code
print(pymatgen.core.__file__)
###Output
_____no_output_____
###Markdown
We can also see which version of the Python programming language we are using:
###Code
import sys
print(sys.version)
###Output
_____no_output_____
###Markdown
If you have problems or need to report bugs when using pymatgen after the workshop, the above information is often very useful to help us identify the problem.

Structures and Molecules
Most of the fundamentals of pymatgen are expressed in terms of [**`Molecule`**](http://pymatgen.org/pymatgen.core.structure.html#pymatgen.core.structure.Molecule) and [**`Structure`**](http://pymatgen.org/pymatgen.core.structure.html#pymatgen.core.structure.Structure) objects. While we will mostly be using `Structure`, `Structure` and `Molecule` are very similar conceptually. The main difference is that `Structure` supports the full periodicity required to describe crystallographic structures.

Creating a `Structure` can be done in one line, even for complicated crystallographic structures. However, we'll start by introducing the somewhat simpler `Molecule` object, and then use this understanding of `Molecule` to introduce `Structure`.

Creating a Molecule
Start by importing `Molecule`:
###Code
from pymatgen.core.structure import Molecule
###Output
_____no_output_____
###Markdown
In a Jupyter notebook, you can show help for any Python object by clicking on the object and pressing **Shift+Tab**. This will give you a list of arguments and keyword arguments necessary to construct the object, as well as the documentation ('docstring') which gives more information on what each argument means.
###Code
Molecule
###Output
_____no_output_____
###Markdown
Molecule takes input **arguments** `species` and `coords`, and input **keyword arguments** `charge`, `spin_multiplicity`, `validate_proximity` and `site_properties`. Keyword arguments come with a default value (the value after the equals sign), and so keyword arguments are optional. Arguments (without default values) are mandatory.
###Code
c_monox = Molecule(["C","O"], [[0.0, 0.0, 0.0], [0.0, 0.0, 1.2]])
print(c_monox)
###Output
_____no_output_____
###Markdown
Alright, now let's use a keyword argument to change a default. How about we make an anion?
###Code
oh_minus = Molecule(["O", "H"], [[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]], charge=-1)
print(oh_minus)
###Output
_____no_output_____
###Markdown
You can also create Molecule objects from files. Let's say you have an \*.xyz file called "water.xyz". You can import that into pymatgen with `Molecule.from_file`, like:
###Code
water = Molecule.from_file("water.xyz")
print(water)
###Output
_____no_output_____
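###Markdown
The reverse also works: `Molecule.to` writes a molecule back out, inferring the format from the file extension. A short sketch; the output filename here is arbitrary.
###Code
water.to(filename="water_copy.xyz")  # format inferred from the .xyz extension
###Output
_____no_output_____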
###Markdown
Exercise: Making Molecules
Try it yourself! Create molecules however you like! In this folder are several example molecules (`methane.xyz`, `furan.xyz`, and `benzene.xyz`). Try loading these files with `Molecule.from_file`. You can also try making a Molecule from a list of species and coordinates. Try changing the default parameters - see what you can and cannot do (for instance, look at varying the charge and the spin multiplicity).

What's in a Molecule? Introducing Sites, Elements and Species
You can access properties of the molecule, such as the Cartesian coordinates of its sites:
###Code
print(c_monox.cart_coords)
###Output
_____no_output_____
###Markdown
or properties that are computed on-the-fly, such as its center of mass:
###Code
print(c_monox.center_of_mass)
###Output
_____no_output_____
###Markdown
To see the full list of available properties and methods, press **Tab** after typing `my_molecule.` in your Jupyter notebook. There are also methods that modify the molecule, and these take additional argument(s). For example, to add a charge to the molecule:
###Code
c_monox.set_charge_and_spin(charge=1)
print(c_monox)
###Output
_____no_output_____
###Markdown
A molecule is essentially a list of `Site` objects. We can access these sites like we would a list in Python. For example, to obtain the total number of sites in the molecule:
###Code
len(c_monox)
###Output
_____no_output_____
###Markdown
Or to access the first site (note that Python is a 0-indexed programming language, so the first site is site 0):
###Code
print(c_monox[0])
###Output
_____no_output_____
###Markdown
And just like a list, I can even change the elements of a molecule.
###Code
c_monox[0] = "O"
c_monox[1] = "C"
print(c_monox)
###Output
_____no_output_____
###Markdown
A site object contains information on the site's identity and position in space.
###Code
site0 = c_monox[0]
site0.coords
site0.specie
###Output
_____no_output_____
###Markdown
Here, because we switched the elements, the site holds the element O. In general, a site can hold an [**`Element`**](http://pymatgen.org/pymatgen.core.periodic_table.html#pymatgen.core.periodic_table.Element), a [**`Specie`**](http://pymatgen.org/pymatgen.core.periodic_table.html#pymatgen.core.periodic_table.Specie) or a [**`Composition`**](http://pymatgen.org/pymatgen.core.composition.html#pymatgen.core.composition.Composition). Let's look at each of these in turn.
###Code
from pymatgen.core.periodic_table import Element, Specie
from pymatgen.core.composition import Composition
###Output
_____no_output_____
###Markdown
An `Element` is simply an element from the Periodic Table.
###Code
carbon = Element('C')
###Output
_____no_output_____
###Markdown
Elements have properties such as atomic mass, average ionic radius and more:
###Code
carbon.average_ionic_radius
###Output
_____no_output_____
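###Markdown
A few more examples of standard `Element` attributes:
###Code
print(carbon.atomic_mass)   # in atomic mass units
print(carbon.Z)             # atomic number
print(carbon.is_metalloid)  # boolean classification flags are available too
###Output
_____no_output_____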
###Markdown
A `Specie` can contain additional information, such as oxidation state:
###Code
o_ion = Specie('O', oxidation_state=-2)
o_ion
###Output
_____no_output_____
###Markdown
Again, we can access both these **`Specie`**-specific properties and more general properties of Elements.
###Code
o_ion.oxi_state
o_ion.atomic_mass
###Output
_____no_output_____
###Markdown
Or, for convenience, we can use strings, which will be interpreted as elements with oxidation states:
###Code
Specie.from_string('O2-')
###Output
_____no_output_____
###Markdown
Finally, a `Composition` is an object that can hold certain amounts of different elements or species. This is most useful in a disordered Structure, and would rarely be used in a Molecule. For example, a site that holds 50% Au and 50% Cu would be set as follows:
###Code
comp = Composition({'Au': 0.5, 'Cu': 0.5})
###Output
_____no_output_____
###Markdown
A **`Composition`** contains more information than either an **`Element`** or a **`Specie`**. Because it can contain multiple **`Elements`**/**`Species`**, you can obtain the formula, or the chemical system.
###Code
print("formula", comp.alphabetical_formula)
print("chemical system", comp.chemical_system)
###Output
_____no_output_____
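###Markdown
A `Composition` also behaves like a dictionary of amounts and exposes derived quantities, for example:
###Code
print(comp['Au'])      # amount of Au in the composition
print(comp.weight)     # composition-weighted molar mass
print(comp.num_atoms)  # total number of atoms (here 1.0)
###Output
_____no_output_____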
###Markdown
When we construct a `Molecule`, the input argument will automatically be converted into one of `Element`, `Specie` or `Composition`. Thus, in the previous example, when we first defined carbon monoxide, the input `['C', 'O']` was converted to `[Element C, Element O]`.

Exercise: Hydrogen Cyanide
Construct the linear HCN molecule where each bond distance is the sum of the two atomic radii. HINT: To do this, you'll probably want to use some Element properties!
###Code
H_rad = Element('H').atomic_radius
C_rad = Element('C').atomic_radius
N_rad = Element('N').atomic_radius
HC_bond_dist = H_rad + C_rad
CN_bond_dist = C_rad + N_rad
H_pos = 0
C_pos = H_pos + HC_bond_dist
N_pos = C_pos + CN_bond_dist
hcn = Molecule(['H', 'C', 'N'], [[H_pos, 0, 0], [C_pos, 0, 0], [N_pos, 0, 0]])
print(hcn)
###Output
_____no_output_____
###Markdown
Creating a Structure and Lattice
Creating a `Structure` is very similar to creating a `Molecule`, except we now also have to specify a `Lattice`.
###Code
from pymatgen.core import Lattice, Structure
###Output
_____no_output_____
###Markdown
A `Lattice` can be created in one of several ways, such as by inputting a 3x3 matrix describing the individual lattice vectors. For example, a cubic lattice of length 5 Ångstrom:
###Code
my_lattice = Lattice([[5, 0, 0], [0, 5, 0], [0, 0, 5]])
print(my_lattice)
###Output
_____no_output_____
###Markdown
Equivalently, we can create it from its lattice parameters:
###Code
my_lattice_2 = Lattice.from_parameters(5, 5, 5, 90, 90, 90)
###Output
_____no_output_____
###Markdown
Or, since we know in this case that we have a cubic lattice (a == b == c and alpha == beta == gamma == 90 degrees), we can simply put:
###Code
my_lattice_3 = Lattice.cubic(5)
###Output
_____no_output_____
###Markdown
We can confirm that these lattices are the same:
###Code
my_lattice == my_lattice_2 == my_lattice_3
###Output
_____no_output_____
###Markdown
Now, we can create a simple crystal structure. Let's start with body-centered-cubic iron:
###Code
bcc_fe = Structure(Lattice.cubic(2.8), ["Fe", "Fe"], [[0, 0, 0], [0.5, 0.5, 0.5]])
print(bcc_fe)
###Output
_____no_output_____
###Markdown
Creating this `Structure` was similar to creating a `Molecule` in that we provided a list of elements and a list of positions. However, there are two key differences: we had to include our `Lattice` object when creating the `Structure` and since we have a lattice, we can define the positions of our sites in *fractional coordinates* with respect to that lattice instead of Cartesian coordinates. It's also possible to create an equivalent `Structure` using Cartesian coordinates:
###Code
bcc_fe_from_cart = Structure(Lattice.cubic(2.8), ["Fe", "Fe"], [[0, 0, 0], [1.4, 1.4, 1.4]],
coords_are_cartesian=True)
print(bcc_fe_from_cart)
###Output
_____no_output_____
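###Markdown
To make the correspondence between the two conventions explicit, a `Lattice` can convert coordinates in either direction. A short sketch using the structure above:
###Code
frac = bcc_fe.lattice.get_fractional_coords([1.4, 1.4, 1.4])  # Cartesian -> fractional
cart = bcc_fe.lattice.get_cartesian_coords([0.5, 0.5, 0.5])   # fractional -> Cartesian
print(frac, cart)
###Output
_____no_output_____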
###Markdown
We can check that both structures are equivalent:
###Code
bcc_fe == bcc_fe_from_cart
###Output
_____no_output_____
###Markdown
As in a `Molecule`, we can access properties of the structure, such as its volume:
###Code
bcc_fe.volume
###Output
_____no_output_____
###Markdown
Creating Structures from Spacegroups
Structures can also be created directly from their spacegroup:
###Code
bcc_fe = Structure.from_spacegroup("Im-3m", Lattice.cubic(2.8), ["Fe"], [[0, 0, 0]])
print(bcc_fe)
nacl = Structure.from_spacegroup("Fm-3m", Lattice.cubic(5.692), ["Na+", "Cl-"],
[[0, 0, 0], [0.5, 0.5, 0.5]])
print(nacl)
###Output
_____no_output_____
###Markdown
And spacegroups can be obtained from a structure:
###Code
nacl.get_space_group_info()
###Output
_____no_output_____
###Markdown
Where 225 is the spacegroup number.

Supercells
Alright, we are now well-versed in the art of creating individual structures. But in some cases you don't just want one unit cell; you want a supercell. Pymatgen provides a very simple interface for creating superstructures. Let's start with the simplest structure we can imagine: polonium, a simple cubic metal.
###Code
polonium = Structure(Lattice.cubic(3.4), ["Po"], [[0.0, 0.0, 0.0]])
print(polonium)
###Output
_____no_output_____
###Markdown
To make a supercell, we can just multiply the structure by a tuple!
###Code
supercell = polonium * (2, 2, 2)
print(supercell)
###Output
_____no_output_____
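###Markdown
Equivalently, `Structure.make_supercell` scales a structure in place, and it also accepts a full 3x3 scaling matrix for non-diagonal supercells. A sketch using a copy so the original stays untouched:
###Code
polonium_copy = polonium.copy()          # make_supercell modifies the structure in place
polonium_copy.make_supercell([2, 2, 2])
print(len(polonium_copy))                # 8 sites, as with the multiplication above
###Output
_____no_output_____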
###Markdown
Exercise: Barium Titanate
Load BaTiO3 from the CIF file provided (`BaTiO3.cif`). Replace the barium with a new element with atomic number equal to the day of the month you were born (e.g. 1st = hydrogen, 31st = gallium). Then, make a supercell of N unit cells in one cartesian direction of your choice, where N is the integer of your birth month (e.g. January = 1, December = 12).
###Code
BaTiO3 = Structure.from_file("BaTiO3.cif")
print(BaTiO3.get_space_group_info())
BaTiO3.replace(0, 'Mg')
BaTiO3 = BaTiO3 * (1, 1, 4)
print(BaTiO3)
print(BaTiO3.get_space_group_info())
###Output
_____no_output_____
###Markdown
Materials Project Workshop – July 28th to July 30th, 2019, Berkeley, California  What is pymatgen?Pymatgen (Python Materials Genomics) is the code that powers all of the scientific analysis behind the Materials Project. It includes a robust and efficient libraries for the handling of crystallographic structures and molecules, in addition to various mathematical and scientific tools for the handling and generation of materials data. For more details, see the [pymatgen website](https://pymatgen.org). How do I install pymatgen?For the workshop, pymatgen has been pre-installed for use in your Jupyter notebooks.Otherwise, pymatgen can be installed via pip:`pip install pymatgen`or conda:`conda install --channel matsci pymatgen`We recommend using Python 3.6 or above. Until 2018, pymatgen was developed simultaneously for Python 2.x and 3.x, but following the rest of the Python community we have phased out support for Python 2.x, and since version 2019.1.1 we are developing exclusively for Python 3.x. Where can I find help and how do I get involved?* **For general help:** [pymatgen discourse](https://pymatgen.discourse.group/) is a place to ask questions.* **To report bugs:** The [Github Issues](https://github.com/materialsproject/pymatgen/issues) page is a good place to report bugs.* **For Materials Project data and website discussions:** The Materials Project has its community [Materials Project Discussion](https://discuss.materialsproject.org) forum. * **For more example notebooks:** [matgenb](http://matgenb.materialsvirtuallab.org) is a new resource of Jupyter notebooks demonstrating various pymatgen functionality.If you want specific new features, you're welcome to ask! We try to respond to community needs. If you're a developer and can add the feature yourself, we actively encourage you to do so by creating a Pull Request on Github with your additional functionality. To date, pymatgen has seen over 19,000 commits and nearly 150 contributors, and we try to have an inclusive and welcoming development community. All contributors are also individually acknowledged on [materialsproject.org/about](https://materialsproject.org/about). Verify we have pymatgen installedFirst, let's verify we have pymatgen installed. The following command should produce no error or warning:
###Code
import pymatgen
###Output
_____no_output_____
###Markdown
We can show the specific version of pymatgen installed:
###Code
print(pymatgen.__version__)
###Output
2020.7.16
###Markdown
For a list of new features, bug fixes and other changes, consult the [changelog on pymatgen.org](http://pymatgen.org/change_log.html).You can also see where pymatgen is installed on your computer:
###Code
print(pymatgen.__file__)
###Output
/Users/shyamd/Dropbox/Codes/pymatgen/pymatgen/__init__.py
###Markdown
We can also see which version of the Python programming language we are using:
###Code
import sys
print(sys.version)
###Output
3.8.3 (default, Jul 2 2020, 11:26:31)
[Clang 10.0.0 ]
###Markdown
If you have problems or need to report bugs when using pymatgen after the workshop, the above information is often very useful to help us identify the problem. Structures and MoleculesMost of the fundamentals of pymatgen are expressed in terms of [**`Molecule`**](http://pymatgen.org/pymatgen.core.structure.htmlpymatgen.core.structure.Molecule) and [**`Structure`**](http://pymatgen.org/pymatgen.core.structure.htmlpymatgen.core.structure.Structure) objects.While we will mostly be using `Structure`, `Stucture` and `Molecule` are very similar conceptually. The main difference is that `Structure` supports full periodicity required to describe crystallographic structures.Creating a `Structure` can be done in one line, even for complicated crystallographic structures. However, we'll start by introducing the somewhat simpler `Molecule` object, and then use this understanding of `Molecule` to introduce `Structure`. Creating a MoleculeStart by importing `Molecule`:
###Code
from pymatgen import Molecule
###Output
_____no_output_____
###Markdown
In a Jupyter notebook, you can show help for any Python object by clicking on the object and pressing **Shift+Tab**. This will give you a list of arguments and keyword arguments necessary to construct the object, as well as the documentation ('docstring') which gives more information on what each argument means.
###Code
Molecule
###Output
_____no_output_____
###Markdown
Molecule takes input **arguments** `species` and `coords`, and input **keyword arguments** `charge`, `spin_multiplicity`, `validate_proximity` and `site_properties`.Keyword arguments come with a default value (the value after the equals sign), and so keyword arguments are optional.Arguments (without default values) are mandatory.
###Code
c_monox = Molecule(["C","O"], [[0.0, 0.0, 0.0], [0.0, 0.0, 1.2]])
print(c_monox)
###Output
Full Formula (C1 O1)
Reduced Formula: CO
Charge = 0.0, Spin Mult = 1
Sites (2)
0 C 0.000000 0.000000 0.000000
1 O 0.000000 0.000000 1.200000
###Markdown
Alright, now let's use a keyword variable to change a default. How about we make an anion?
###Code
oh_minus = Molecule(["O", "H"], [[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]], charge=-1)
print(oh_minus)
###Output
Full Formula (H1 O1)
Reduced Formula: H2O2
Charge = -1, Spin Mult = 1
Sites (2)
0 O 0.000000 0.000000 0.000000
1 H 0.000000 0.000000 1.000000
###Markdown
You can also create Molecule objects from files. Let's say you have an \*.xyz file called "water.xyz". You can import that into pymatgen with `Molecule.from_file`, like:
###Code
water = Molecule.from_file("water.xyz")
print(water)
###Output
Full Formula (H2 O1)
Reduced Formula: H2O
Charge = 0, Spin Mult = 1
Sites (3)
0 O -0.070000 -0.026960 -0.095240
1 H 0.919330 -0.015310 -0.054070
2 H -0.359290 0.231000 0.816010
###Markdown
Exercise: Making MoleculesTry it yourself! Create molecules however you like!In this folder are several example molecules (`methane.xyz`, `furan.xyz`, and `benzene.xyz`). Try loading these files with `Molecule.from_file`. You can also try making a Molecule from a list of species and coordinates. Try changing the default parameters - see what you can and cannot do (for instance, look at varying the charge and the spin multiplicity). What's in a Molecule? Introducing Sites, Elements and Species You can access properties of the molecule, such as the Cartesian coordinates of its sites:
###Code
print(c_monox.cart_coords)
###Output
[[0. 0. 0. ]
[0. 0. 1.2]]
###Markdown
or properties that are computed on-the-fly, such as its center of mass:
###Code
print(c_monox.center_of_mass)
###Output
[0. 0. 0.68544132]
###Markdown
To see the full list of available properties and methods, press **Tab** after typing `my_molecule.` in your Jupyter notebook. There are methods used to modify the molecule and these take additional argument(s). For example, to add a charge to the molecule:
###Code
c_monox.set_charge_and_spin(charge=1)
print(c_monox)
###Output
Full Formula (C1 O1)
Reduced Formula: CO
Charge = 1, Spin Mult = 2
Sites (2)
0 C 0.000000 0.000000 0.000000
1 O 0.000000 0.000000 1.200000
###Markdown
A molecule is essentially a list of `Site` objects. We can access these sites like we would a list in Python. For example, to obtain the total number of sites in the molecule:
###Code
len(c_monox)
###Output
_____no_output_____
###Markdown
Or to access the first site (note that Python is a 0-indexed programming language, so the first site is site 0):
###Code
print(c_monox[0])
###Output
[0. 0. 0.] C
###Markdown
And just like a list, I can even change the elements of a molecule.
###Code
c_monox[0] = "O"
c_monox[1] = "C"
print(c_monox)
###Output
Full Formula (C1 O1)
Reduced Formula: CO
Charge = 1, Spin Mult = 2
Sites (2)
0 O 0.000000 0.000000 0.000000
1 C 0.000000 0.000000 1.200000
###Markdown
A site object contains information on the site's identity and position in space.
###Code
site0 = c_monox[0]
site0.coords
site0.specie
###Output
_____no_output_____
###Markdown
Here, because we switched the elements, the site holds the element O. In general, a site can hold an [**`Element`**](http://pymatgen.org/pymatgen.core.periodic_table.htmlpymatgen.core.periodic_table.Element), a [**`Specie`**](http://pymatgen.org/pymatgen.core.periodic_table.htmlpymatgen.core.periodic_table.Specie) or a [**`Composition`**](http://pymatgen.org/pymatgen.core.composition.htmlpymatgen.core.composition.Composition). Let's look at each of these in turn.
###Code
from pymatgen import Element, Specie, Composition
###Output
_____no_output_____
###Markdown
An `Element` is simply an element from the Periodic Table.
###Code
carbon = Element('C')
###Output
_____no_output_____
###Markdown
Elements have properties such as atomic mass, average ionic radius and more:
###Code
carbon.average_ionic_radius
###Output
_____no_output_____
###Markdown
A `Specie` can contain additional information, such as oxidation state:
###Code
o_ion = Specie('O', oxidation_state=-2)
o_ion
###Output
_____no_output_____
###Markdown
Again, we can access both these **`Specie`**-specific properties and more general properties of Elements.
###Code
o_ion.oxi_state
o_ion.atomic_mass
###Output
_____no_output_____
###Markdown
Or, for convenience, we can use strings, which will be interpreted as elements with oxidation states:
###Code
Specie.from_string('O2-')
###Output
_____no_output_____
###Markdown
Finally, a `Composition` is an object that can hold certain amounts of different elements or species. This is most useful in a disordered Structure, and would rarely be used in a Molecule. For example, a site that holds 50% Au and 50% Cu would be set as follows:
###Code
comp = Composition({'Au': 0.5, 'Cu': 0.5})
###Output
_____no_output_____
###Markdown
A **`Composition`** contains more information than either an **`Element`** or a **`Specie`**. Because it can contain multiple **`Elements`**/**`Species`**, you can obtain the formula, or the chemical system.
###Code
print("formula", comp.alphabetical_formula)
print("chemical system", comp.chemical_system)
###Output
formula Au0.5 Cu0.5
chemical system Au-Cu
###Markdown
When we construct a `Molecule`, the input argument will automatically be converted into one of `Element`, `Specie` or `Composition`. Thus, in the previous example, when we first defined carbon monoxide, the input `['C', 'O']` was converted to `[Element C, Element O]`. Exercise: Hydrogen CyanideConstruct the linear HCN molecule where each bond distance is the sum of the two atomic radii.HINT: To do this, you'll probably want to use some Element properties!
###Code
H_rad = Element('H').atomic_radius
C_rad = Element('C').atomic_radius
N_rad = Element('N').atomic_radius
HC_bond_dist = H_rad + C_rad
CN_bond_dist = C_rad + N_rad
H_pos = 0
C_pos = H_pos + HC_bond_dist
N_pos = C_pos + CN_bond_dist
hcn = Molecule(['H','C','N'], [[H_pos, 0, 0], [C_pos, 0, 0],[N_pos, 0, 0]])
print(hcn)
###Output
Full Formula (H1 C1 N1)
Reduced Formula: HCN
Charge = 0.0, Spin Mult = 1
Sites (3)
0 H 0.000000 0.000000 0.000000
1 C 0.950000 0.000000 0.000000
2 N 2.300000 0.000000 0.000000
###Markdown
Creating a Structure and Lattice Creating a `Structure` is very similar to creating a `Molecule`, except we now also have to specify a `Lattice`.
###Code
from pymatgen import Lattice, Structure
###Output
_____no_output_____
###Markdown
A `Lattice` can be created in one of several ways, such as by inputting a 3x3 matrix describing the individual lattice vectors. For example, a cubic lattice of length 5 Ångstrom:
###Code
my_lattice = Lattice([[5, 0, 0], [0, 5, 0], [0, 0, 5]])
print(my_lattice)
###Output
5.000000 0.000000 0.000000
0.000000 5.000000 0.000000
0.000000 0.000000 5.000000
###Markdown
Equivalently, we can create it from its lattice parameters:
###Code
my_lattice_2 = Lattice.from_parameters(5, 5, 5, 90, 90, 90)
###Output
_____no_output_____
###Markdown
Or, since we know in this case that we have a cubic lattice, a == b == c and alpha == beta == gamma == 90 degrees, so we can simply put:
###Code
my_lattice_3 = Lattice.cubic(5)
###Output
_____no_output_____
###Markdown
We can confirm that these lattices are the same:
###Code
my_lattice == my_lattice_2 == my_lattice_3
###Output
_____no_output_____
###Markdown
Now, we can create a simple crystal structure. Let's start with body-centered-cubic iron:
###Code
bcc_fe = Structure(Lattice.cubic(2.8), ["Fe", "Fe"], [[0, 0, 0], [0.5, 0.5, 0.5]])
print(bcc_fe)
print(bcc_fe)
###Output
Full Formula (Fe2)
Reduced Formula: Fe
abc : 2.800000 2.800000 2.800000
angles: 90.000000 90.000000 90.000000
Sites (2)
# SP a b c
--- ---- --- --- ---
0 Fe 0 0 0
1 Fe 0.5 0.5 0.5
###Markdown
Creating this `Structure` was similar to creating a `Molecule` in that we provided a list of elements and a list of positions. However, there are two key differences: we had to include our `Lattice` object when creating the `Structure` and since we have a lattice, we can define the positions of our sites in *fractional coordinates* with respect to that lattice instead of Cartesian coordinates. It's also possible to create an equivalent `Structure` using Cartesian coordinates:
###Code
bcc_fe_from_cart = Structure(Lattice.cubic(2.8), ["Fe", "Fe"], [[0, 0, 0], [1.4, 1.4, 1.4]],
coords_are_cartesian=True)
print(bcc_fe_from_cart)
###Output
Full Formula (Fe2)
Reduced Formula: Fe
abc : 2.800000 2.800000 2.800000
angles: 90.000000 90.000000 90.000000
Sites (2)
# SP a b c
--- ---- --- --- ---
0 Fe 0 0 0
1 Fe 0.5 0.5 0.5
###Markdown
We can check that both structures are equivalent:
###Code
bcc_fe == bcc_fe_from_cart
###Output
_____no_output_____
###Markdown
As in a `Molecule`, we can access properties of the structure, such as its volume:
###Code
bcc_fe.volume
###Output
_____no_output_____
###Markdown
Creating Structures from Spacegroups Structures can also be created directly from their spacegroup:
###Code
bcc_fe = Structure.from_spacegroup("Im-3m", Lattice.cubic(2.8), ["Fe"], [[0, 0, 0]])
print(bcc_fe)
nacl = Structure.from_spacegroup("Fm-3m", Lattice.cubic(5.692), ["Na+", "Cl-"],
[[0, 0, 0], [0.5, 0.5, 0.5]])
print(nacl)
###Output
Full Formula (Na4 Cl4)
Reduced Formula: NaCl
abc : 5.692000 5.692000 5.692000
angles: 90.000000 90.000000 90.000000
Sites (8)
# SP a b c
--- ---- --- --- ---
0 Na+ 0 0 0
1 Na+ 0 0.5 0.5
2 Na+ 0.5 0 0.5
3 Na+ 0.5 0.5 0
4 Cl- 0.5 0.5 0.5
5 Cl- 0.5 0 0
6 Cl- 0 0.5 0
7 Cl- 0 0 0.5
###Markdown
And spacegroups can be obtained from a structure:
###Code
nacl.get_space_group_info()
###Output
_____no_output_____
###Markdown
Where 225 is the spacegroup number. Supercells Alright, we are now well-versed in the art of creating singular structures. But in some cases, you really don't just want one unit cell; you want a supercell. Pymatgen provides a very simple interface to create superstructures. Let's start with the simplest structure that we can imagine: Polonium, a simple cubic metal.
###Code
polonium = Structure(Lattice.cubic(3.4), ["Po"], [[0.0, 0.0, 0.0]])
print(polonium)
###Output
Full Formula (Po1)
Reduced Formula: Po
abc : 3.400000 3.400000 3.400000
angles: 90.000000 90.000000 90.000000
Sites (1)
# SP a b c
--- ---- --- --- ---
0 Po 0 0 0
###Markdown
To make a supercell, we can just multiply the structure by a tuple!
###Code
supercell = polonium * (2, 2, 2)
print(supercell)
###Output
Full Formula (Po8)
Reduced Formula: Po
abc : 6.800000 6.800000 6.800000
angles: 90.000000 90.000000 90.000000
Sites (8)
# SP a b c
--- ---- --- --- ---
0 Po 0 0 0
1 Po 0 0 0.5
2 Po 0 0.5 0
3 Po 0 0.5 0.5
4 Po 0.5 0 0
5 Po 0.5 0 0.5
6 Po 0.5 0.5 0
7 Po 0.5 0.5 0.5
###Markdown
Exercise: Barium TitanateLoad BaTiO3 from the CIF file provided (`BaTiO3.cif`). Replace the barium with a new element with atomic number equal to the the day of the month you were born (e.g. 1st = hydrogen, 31st = gallium). Then, make a supercell of N unit cells in one cartesian direction of your choice where N is the integer of your birth month (e.g. January = 1, December = 12).
###Code
BaTiO3=Structure.from_file("BaTiO3.cif")
print(BaTiO3.get_space_group_info())
BaTiO3.replace(0,'Mg')
BaTiO3=BaTiO3*(1,1,4)
print(BaTiO3)
print(BaTiO3.get_space_group_info())
###Output
('R3m', 160)
Full Formula (Mg4 Ti4 O12)
Reduced Formula: MgTiO3
abc : 4.077159 4.077159 16.308637
angles: 89.699022 89.699022 89.699022
Sites (20)
# SP a b c
--- ---- -------- -------- --------
0 Mg 0.497155 0.497155 0.124289
1 Mg 0.497155 0.497155 0.374289
2 Mg 0.497155 0.497155 0.624289
3 Mg 0.497155 0.497155 0.874289
4 Ti 0.982209 0.982209 0.245552
5 Ti 0.982209 0.982209 0.495552
6 Ti 0.982209 0.982209 0.745552
7 Ti 0.982209 0.982209 0.995552
8 O 0.524065 0.012736 0.003184
9 O 0.524065 0.012736 0.253184
10 O 0.524065 0.012736 0.503184
11 O 0.524065 0.012736 0.753184
12 O 0.012736 0.012736 0.131016
13 O 0.012736 0.012736 0.381016
14 O 0.012736 0.012736 0.631016
15 O 0.012736 0.012736 0.881016
16 O 0.012736 0.524065 0.003184
17 O 0.012736 0.524065 0.253184
18 O 0.012736 0.524065 0.503184
19 O 0.012736 0.524065 0.753184
('R3m', 160)
###Markdown
Materials Project Workshop – July 28th to July 30th, 2019, Berkeley, California  What is pymatgen?Pymatgen (Python Materials Genomics) is the code that powers all of the scientific analysis behind the Materials Project. It includes a robust and efficient libraries for the handling of crystallographic structures and molecules, in addition to various mathematical and scientific tools for the handling and generation of materials data. For more details, see the [pymatgen website](https://pymatgen.org). How do I install pymatgen?For the workshop, pymatgen has been pre-installed for use in your Jupyter notebooks.Otherwise, pymatgen can be installed via pip:`pip install pymatgen`or conda:`conda install --channel matsci pymatgen`We recommend using Python 3.6 or above. Until 2018, pymatgen was developed simultaneously for Python 2.x and 3.x, but following the rest of the Python community we have phased out support for Python 2.x, and since version 2019.1.1 we are developing exclusively for Python 3.x. Where can I find help and how do I get involved?* **For general help:** [pymatgen discourse](https://pymatgen.discourse.group/) is a place to ask questions.* **To report bugs:** The [Github Issues](https://github.com/materialsproject/pymatgen/issues) page is a good place to report bugs.* **For Materials Project data and website discussions:** The Materials Project has its community [Materials Project Discussion](https://discuss.materialsproject.org) forum. * **For more example notebooks:** [matgenb](http://matgenb.materialsvirtuallab.org) is a new resource of Jupyter notebooks demonstrating various pymatgen functionality.If you want specific new features, you're welcome to ask! We try to respond to community needs. If you're a developer and can add the feature yourself, we actively encourage you to do so by creating a Pull Request on Github with your additional functionality. To date, pymatgen has seen over 19,000 commits and nearly 150 contributors, and we try to have an inclusive and welcoming development community. All contributors are also individually acknowledged on [materialsproject.org/about](https://materialsproject.org/about). Verify we have pymatgen installedFirst, let's verify we have pymatgen installed. The following command should produce no error or warning:
###Code
import pymatgen
###Output
_____no_output_____
###Markdown
We can show the specific version of pymatgen installed:
###Code
print(pymatgen.__version__)
###Output
_____no_output_____
###Markdown
For a list of new features, bug fixes and other changes, consult the [changelog on pymatgen.org](http://pymatgen.org/change_log.html).You can also see where pymatgen is installed on your computer:
###Code
print(pymatgen.__file__)
###Output
_____no_output_____
###Markdown
We can also see which version of the Python programming language we are using:
###Code
import sys
print(sys.version)
###Output
_____no_output_____
###Markdown
If you have problems or need to report bugs when using pymatgen after the workshop, the above information is often very useful to help us identify the problem. Structures and MoleculesMost of the fundamentals of pymatgen are expressed in terms of [**`Molecule`**](http://pymatgen.org/pymatgen.core.structure.htmlpymatgen.core.structure.Molecule) and [**`Structure`**](http://pymatgen.org/pymatgen.core.structure.htmlpymatgen.core.structure.Structure) objects.While we will mostly be using `Structure`, `Stucture` and `Molecule` are very similar conceptually. The main difference is that `Structure` supports full periodicity required to describe crystallographic structures.Creating a `Structure` can be done in one line, even for complicated crystallographic structures. However, we'll start by introducing the somewhat simpler `Molecule` object, and then use this understanding of `Molecule` to introduce `Structure`. Creating a MoleculeStart by importing `Molecule`:
###Code
from pymatgen import Molecule
###Output
_____no_output_____
###Markdown
In a Jupyter notebook, you can show help for any Python object by clicking on the object and pressing **Shift+Tab**. This will give you a list of arguments and keyword arguments necessary to construct the object, as well as the documentation ('docstring') which gives more information on what each argument means.
###Code
Molecule
###Output
_____no_output_____
###Markdown
Molecule takes input **arguments** `species` and `coords`, and input **keyword arguments** `charge`, `spin_multiplicity`, `validate_proximity` and `site_properties`.Keyword arguments come with a default value (the value after the equals sign), and so keyword arguments are optional.Arguments (without default values) are mandatory.
###Code
c_monox = Molecule(["C","O"], [[0.0, 0.0, 0.0], [0.0, 0.0, 1.2]])
print(c_monox)
###Output
_____no_output_____
###Markdown
Alright, now let's use a keyword variable to change a default. How about we make an anion?
###Code
oh_minus = Molecule(["O", "H"], [[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]], charge=-1)
print(oh_minus)
###Output
_____no_output_____
###Markdown
You can also create Molecule objects from files. Let's say you have an \*.xyz file called "water.xyz". You can import that into pymatgen with `Molecule.from_file`, like:
###Code
water = Molecule.from_file("water.xyz")
print(water)
###Output
_____no_output_____
###Markdown
Exercise: Making MoleculesTry it yourself! Create molecules however you like!In this folder are several example molecules (`methane.xyz`, `furan.xyz`, and `benzene.xyz`). Try loading these files with `Molecule.from_file`. You can also try making a Molecule from a list of species and coordinates. Try changing the default parameters - see what you can and cannot do (for instance, look at varying the charge and the spin multiplicity). What's in a Molecule? Introducing Sites, Elements and Species You can access properties of the molecule, such as the Cartesian coordinates of its sites:
###Code
print(c_monox.cart_coords)
###Output
_____no_output_____
###Markdown
or properties that are computed on-the-fly, such as its center of mass:
###Code
print(c_monox.center_of_mass)
###Output
_____no_output_____
###Markdown
To see the full list of available properties and methods, press **Tab** after typing `my_molecule.` in your Jupyter notebook. There are methods used to modify the molecule and these take additional argument(s). For example, to add a charge to the molecule:
###Code
c_monox.set_charge_and_spin(charge=1)
print(c_monox)
###Output
_____no_output_____
###Markdown
A molecule is essentially a list of `Site` objects. We can access these sites like we would a list in Python. For example, to obtain the total number of sites in the molecule:
###Code
len(c_monox)
###Output
_____no_output_____
###Markdown
Or to access the first site (note that Python is a 0-indexed programming language, so the first site is site 0):
###Code
print(c_monox[0])
###Output
_____no_output_____
###Markdown
And just like a list, I can even change the elements of a molecule.
###Code
c_monox[0] = "O"
c_monox[1] = "C"
print(c_monox)
###Output
_____no_output_____
###Markdown
A site object contains information on the site's identity and position in space.
###Code
site0 = c_monox[0]
site0.coords
site0.specie
###Output
_____no_output_____
###Markdown
Here, because we switched the elements, the site holds the element O. In general, a site can hold an [**`Element`**](http://pymatgen.org/pymatgen.core.periodic_table.htmlpymatgen.core.periodic_table.Element), a [**`Specie`**](http://pymatgen.org/pymatgen.core.periodic_table.htmlpymatgen.core.periodic_table.Specie) or a [**`Composition`**](http://pymatgen.org/pymatgen.core.composition.htmlpymatgen.core.composition.Composition). Let's look at each of these in turn.
###Code
from pymatgen import Element, Specie, Composition
###Output
_____no_output_____
###Markdown
An `Element` is simply an element from the Periodic Table.
###Code
carbon = Element('C')
###Output
_____no_output_____
###Markdown
Elements have properties such as atomic mass, average ionic radius and more:
###Code
carbon.average_ionic_radius
###Output
_____no_output_____
###Markdown
A `Specie` can contain additional information, such as oxidation state:
###Code
o_ion = Specie('O', oxidation_state=-2)
o_ion
###Output
_____no_output_____
###Markdown
Again, we can access both these **`Specie`**-specific properties and more general properties of Elements.
###Code
o_ion.oxi_state
o_ion.atomic_mass
###Output
_____no_output_____
###Markdown
Or, for convenience, we can use strings, which will be interpreted as elements with oxidation states:
###Code
Specie.from_string('O2-')
###Output
_____no_output_____
###Markdown
Finally, a `Composition` is an object that can hold certain amounts of different elements or species. This is most useful in a disordered Structure, and would rarely be used in a Molecule. For example, a site that holds 50% Au and 50% Cu would be set as follows:
###Code
comp = Composition({'Au': 0.5, 'Cu': 0.5})
###Output
_____no_output_____
###Markdown
A **`Composition`** contains more information than either an **`Element`** or a **`Specie`**. Because it can contain multiple **`Elements`**/**`Species`**, you can obtain the formula, or the chemical system.
###Code
print("formula", comp.alphabetical_formula)
print("chemical system", comp.chemical_system)
###Output
_____no_output_____
###Markdown
When we construct a `Molecule`, the input argument will automatically be converted into one of `Element`, `Specie` or `Composition`. Thus, in the previous example, when we first defined carbon monoxide, the input `['C', 'O']` was converted to `[Element C, Element O]`. Exercise: Hydrogen CyanideConstruct the linear HCN molecule where each bond distance is the sum of the two atomic radii.HINT: To do this, you'll probably want to use some Element properties!
###Code
H_rad = Element('H').atomic_radius
C_rad = Element('C').atomic_radius
N_rad = Element('N').atomic_radius
HC_bond_dist = H_rad + C_rad
CN_bond_dist = C_rad + N_rad
H_pos = 0
C_pos = H_pos + HC_bond_dist
N_pos = C_pos + CN_bond_dist
hcn = Molecule(['H','C','N'], [[H_pos, 0, 0], [C_pos, 0, 0],[N_pos, 0, 0]])
print(hcn)
###Output
_____no_output_____
###Markdown
Creating a Structure and Lattice Creating a `Structure` is very similar to creating a `Molecule`, except we now also have to specify a `Lattice`.
###Code
from pymatgen import Lattice, Structure
###Output
_____no_output_____
###Markdown
A `Lattice` can be created in one of several ways, such as by inputting a 3x3 matrix describing the individual lattice vectors. For example, a cubic lattice of length 5 Ångstrom:
###Code
my_lattice = Lattice([[5, 0, 0], [0, 5, 0], [0, 0, 5]])
print(my_lattice)
###Output
_____no_output_____
###Markdown
Equivalently, we can create it from its lattice parameters:
###Code
my_lattice_2 = Lattice.from_parameters(5, 5, 5, 90, 90, 90)
###Output
_____no_output_____
###Markdown
Or, since we know in this case that we have a cubic lattice, a == b == c and alpha == beta == gamma == 90 degrees, so we can simply put:
###Code
my_lattice_3 = Lattice.cubic(5)
###Output
_____no_output_____
###Markdown
We can confirm that these lattices are the same:
###Code
my_lattice == my_lattice_2 == my_lattice_3
###Output
_____no_output_____
###Markdown
Now, we can create a simple crystal structure. Let's start with body-centered-cubic iron:
###Code
bcc_fe = Structure(Lattice.cubic(2.8), ["Fe", "Fe"], [[0, 0, 0], [0.5, 0.5, 0.5]])
print(bcc_fe)
print(bcc_fe)
###Output
_____no_output_____
###Markdown
Creating this `Structure` was similar to creating a `Molecule` in that we provided a list of elements and a list of positions. However, there are two key differences: we had to include our `Lattice` object when creating the `Structure` and since we have a lattice, we can define the positions of our sites in *fractional coordinates* with respect to that lattice instead of Cartesian coordinates. It's also possible to create an equivalent `Structure` using Cartesian coordinates:
###Code
bcc_fe_from_cart = Structure(Lattice.cubic(2.8), ["Fe", "Fe"], [[0, 0, 0], [1.4, 1.4, 1.4]],
coords_are_cartesian=True)
print(bcc_fe_from_cart)
###Output
_____no_output_____
###Markdown
We can check that both structures are equivalent:
###Code
bcc_fe == bcc_fe_from_cart
###Output
_____no_output_____
###Markdown
As in a `Molecule`, we can access properties of the structure, such as its volume:
###Code
bcc_fe.volume
###Output
_____no_output_____
###Markdown
Creating Structures from Spacegroups Structures can also be created directly from their spacegroup:
###Code
bcc_fe = Structure.from_spacegroup("Im-3m", Lattice.cubic(2.8), ["Fe"], [[0, 0, 0]])
print(bcc_fe)
nacl = Structure.from_spacegroup("Fm-3m", Lattice.cubic(5.692), ["Na+", "Cl-"],
[[0, 0, 0], [0.5, 0.5, 0.5]])
print(nacl)
###Output
_____no_output_____
###Markdown
And spacegroups can be obtained from a structure:
###Code
nacl.get_space_group_info()
###Output
_____no_output_____
###Markdown
Where 225 is the spacegroup number. Supercells Alright, we are now well-versed in the art of creating singular structures. But in some cases, you really don't just want one unit cell; you want a supercell. Pymatgen provides a very simple interface to create superstructures. Let's start with the simplest structure that we can imagine: Polonium, a simple cubic metal.
###Code
polonium = Structure(Lattice.cubic(3.4), ["Po"], [[0.0, 0.0, 0.0]])
print(polonium)
###Output
_____no_output_____
###Markdown
To make a supercell, we can just multiply the structure by a tuple!
###Code
supercell = polonium * (2, 2, 2)
print(supercell)
###Output
_____no_output_____
###Markdown
Exercise: Barium TitanateLoad BaTiO3 from the CIF file provided (`BaTiO3.cif`). Replace the barium with a new element with atomic number equal to the the day of the month you were born (e.g. 1st = hydrogen, 31st = gallium). Then, make a supercell of N unit cells in one cartesian direction of your choice where N is the integer of your birth month (e.g. January = 1, December = 12).
###Code
BaTiO3=Structure.from_file("BaTiO3.cif")
print(BaTiO3.get_space_group_info())
BaTiO3.replace(0,'Mg')
BaTiO3=BaTiO3*(1,1,4)
print(BaTiO3)
print(BaTiO3.get_space_group_info())
###Output
_____no_output_____
###Markdown
Materials Project Workshop – July 28th to July 30th, 2019, Berkeley, California  What is pymatgen?Pymatgen (Python Materials Genomics) is the code that powers all of the scientific analysis behind the Materials Project. It includes a robust and efficient libraries for the handling of crystallographic structures and molecules, in addition to various mathematical and scientific tools for the handling and generation of materials data. For more details, see the [pymatgen website](https://pymatgen.org). How do I install pymatgen?For the workshop, pymatgen has been pre-installed for use in your Jupyter notebooks.Otherwise, pymatgen can be installed via pip:`pip install pymatgen`or conda:`conda install --channel matsci pymatgen`We recommend using Python 3.6 or above. Until 2018, pymatgen was developed simultaneously for Python 2.x and 3.x, but following the rest of the Python community we have phased out support for Python 2.x, and since version 2019.1.1 we are developing exclusively for Python 3.x. Where can I find help and how do I get involved?* **For general help:** [pymatgen discourse](https://pymatgen.discourse.group/) is a place to ask questions.* **To report bugs:** The [Github Issues](https://github.com/materialsproject/pymatgen/issues) page is a good place to report bugs.* **For Materials Project data and website discussions:** The Materials Project has its community [Materials Project Discussion](https://discuss.materialsproject.org) forum. * **For more example notebooks:** [matgenb](http://matgenb.materialsvirtuallab.org) is a new resource of Jupyter notebooks demonstrating various pymatgen functionality.If you want specific new features, you're welcome to ask! We try to respond to community needs. If you're a developer and can add the feature yourself, we actively encourage you to do so by creating a Pull Request on Github with your additional functionality. To date, pymatgen has seen over 19,000 commits and nearly 150 contributors, and we try to have an inclusive and welcoming development community. All contributors are also individually acknowledged on [materialsproject.org/about](https://materialsproject.org/about). Verify we have pymatgen installedFirst, let's verify we have pymatgen installed. The following command should produce no error or warning:
###Code
import pymatgen.core
###Output
_____no_output_____
###Markdown
We can show the specific version of pymatgen installed:
###Code
print(pymatgen.core.__version__)
###Output
_____no_output_____
###Markdown
For a list of new features, bug fixes and other changes, consult the [changelog on pymatgen.org](http://pymatgen.org/change_log.html).You can also see where pymatgen is installed on your computer:
###Code
print(pymatgen.__file__)
###Output
_____no_output_____
###Markdown
We can also see which version of the Python programming language we are using:
###Code
import sys
print(sys.version)
###Output
_____no_output_____
###Markdown
If you have problems or need to report bugs when using pymatgen after the workshop, the above information is often very useful to help us identify the problem. Structures and MoleculesMost of the fundamentals of pymatgen are expressed in terms of [**`Molecule`**](http://pymatgen.org/pymatgen.core.structure.htmlpymatgen.core.structure.Molecule) and [**`Structure`**](http://pymatgen.org/pymatgen.core.structure.htmlpymatgen.core.structure.Structure) objects.While we will mostly be using `Structure`, `Stucture` and `Molecule` are very similar conceptually. The main difference is that `Structure` supports full periodicity required to describe crystallographic structures.Creating a `Structure` can be done in one line, even for complicated crystallographic structures. However, we'll start by introducing the somewhat simpler `Molecule` object, and then use this understanding of `Molecule` to introduce `Structure`. Creating a MoleculeStart by importing `Molecule`:
###Code
from pymatgen.core.structure import Molecule
###Output
_____no_output_____
###Markdown
In a Jupyter notebook, you can show help for any Python object by clicking on the object and pressing **Shift+Tab**. This will give you a list of arguments and keyword arguments necessary to construct the object, as well as the documentation ('docstring') which gives more information on what each argument means.
###Code
Molecule
###Output
_____no_output_____
###Markdown
Molecule takes input **arguments** `species` and `coords`, and input **keyword arguments** `charge`, `spin_multiplicity`, `validate_proximity` and `site_properties`.Keyword arguments come with a default value (the value after the equals sign), and so keyword arguments are optional.Arguments (without default values) are mandatory.
###Code
c_monox = Molecule(["C","O"], [[0.0, 0.0, 0.0], [0.0, 0.0, 1.2]])
print(c_monox)
###Output
_____no_output_____
###Markdown
Alright, now let's use a keyword variable to change a default. How about we make an anion?
###Code
oh_minus = Molecule(["O", "H"], [[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]], charge=-1)
print(oh_minus)
###Output
_____no_output_____
###Markdown
You can also create Molecule objects from files. Let's say you have an \*.xyz file called "water.xyz". You can import that into pymatgen with `Molecule.from_file`, like:
###Code
water = Molecule.from_file("water.xyz")
print(water)
###Output
_____no_output_____
###Markdown
Exercise: Making MoleculesTry it yourself! Create molecules however you like!In this folder are several example molecules (`methane.xyz`, `furan.xyz`, and `benzene.xyz`). Try loading these files with `Molecule.from_file`. You can also try making a Molecule from a list of species and coordinates. Try changing the default parameters - see what you can and cannot do (for instance, look at varying the charge and the spin multiplicity). What's in a Molecule? Introducing Sites, Elements and Species You can access properties of the molecule, such as the Cartesian coordinates of its sites:
###Code
print(c_monox.cart_coords)
###Output
_____no_output_____
###Markdown
or properties that are computed on-the-fly, such as its center of mass:
###Code
print(c_monox.center_of_mass)
###Output
_____no_output_____
###Markdown
To see the full list of available properties and methods, press **Tab** after typing `my_molecule.` in your Jupyter notebook. There are methods used to modify the molecule and these take additional argument(s). For example, to add a charge to the molecule:
###Code
c_monox.set_charge_and_spin(charge=1)
print(c_monox)
###Output
_____no_output_____
###Markdown
A molecule is essentially a list of `Site` objects. We can access these sites like we would a list in Python. For example, to obtain the total number of sites in the molecule:
###Code
len(c_monox)
###Output
_____no_output_____
###Markdown
Or to access the first site (note that Python is a 0-indexed programming language, so the first site is site 0):
###Code
print(c_monox[0])
###Output
_____no_output_____
###Markdown
And just like a list, I can even change the elements of a molecule.
###Code
c_monox[0] = "O"
c_monox[1] = "C"
print(c_monox)
###Output
_____no_output_____
###Markdown
A site object contains information on the site's identity and position in space.
###Code
site0 = c_monox[0]
site0.coords
site0.specie
###Output
_____no_output_____
###Markdown
Here, because we switched the elements, the site holds the element O. In general, a site can hold an [**`Element`**](http://pymatgen.org/pymatgen.core.periodic_table.htmlpymatgen.core.periodic_table.Element), a [**`Specie`**](http://pymatgen.org/pymatgen.core.periodic_table.htmlpymatgen.core.periodic_table.Specie) or a [**`Composition`**](http://pymatgen.org/pymatgen.core.composition.htmlpymatgen.core.composition.Composition). Let's look at each of these in turn.
###Code
from pymatgen.core.composition import Element, Composition
from pymatgen.core.periodic_table import Specie
###Output
_____no_output_____
###Markdown
An `Element` is simply an element from the Periodic Table.
###Code
carbon = Element('C')
###Output
_____no_output_____
###Markdown
Elements have properties such as atomic mass, average ionic radius and more:
###Code
carbon.average_ionic_radius
###Output
_____no_output_____
###Markdown
A `Specie` can contain additional information, such as oxidation state:
###Code
o_ion = Specie('O', oxidation_state=-2)
o_ion
###Output
_____no_output_____
###Markdown
Again, we can access both these **`Specie`**-specific properties and more general properties of Elements.
###Code
o_ion.oxi_state
o_ion.atomic_mass
###Output
_____no_output_____
###Markdown
Or, for convenience, we can use strings, which will be interpreted as elements with oxidation states:
###Code
Specie.from_string('O2-')
###Output
_____no_output_____
###Markdown
Finally, a `Composition` is an object that can hold certain amounts of different elements or species. This is most useful in a disordered Structure, and would rarely be used in a Molecule. For example, a site that holds 50% Au and 50% Cu would be set as follows:
###Code
comp = Composition({'Au': 0.5, 'Cu': 0.5})
###Output
_____no_output_____
###Markdown
A **`Composition`** contains more information than either an **`Element`** or a **`Specie`**. Because it can contain multiple **`Elements`**/**`Species`**, you can obtain the formula, or the chemical system.
###Code
print("formula", comp.alphabetical_formula)
print("chemical system", comp.chemical_system)
###Output
_____no_output_____
###Markdown
When we construct a `Molecule`, the input argument will automatically be converted into one of `Element`, `Specie` or `Composition`. Thus, in the previous example, when we first defined carbon monoxide, the input `['C', 'O']` was converted to `[Element C, Element O]`. Exercise: Hydrogen CyanideConstruct the linear HCN molecule where each bond distance is the sum of the two atomic radii.HINT: To do this, you'll probably want to use some Element properties!
###Code
H_rad = Element('H').atomic_radius
C_rad = Element('C').atomic_radius
N_rad = Element('N').atomic_radius
HC_bond_dist = H_rad + C_rad
CN_bond_dist = C_rad + N_rad
H_pos = 0
C_pos = H_pos + HC_bond_dist
N_pos = C_pos + CN_bond_dist
hcn = Molecule(['H','C','N'], [[H_pos, 0, 0], [C_pos, 0, 0],[N_pos, 0, 0]])
print(hcn)
###Output
_____no_output_____
###Markdown
Creating a Structure and Lattice Creating a `Structure` is very similar to creating a `Molecule`, except we now also have to specify a `Lattice`.
###Code
from pymatgen.core import Lattice, Structure
###Output
_____no_output_____
###Markdown
A `Lattice` can be created in one of several ways, such as by inputting a 3x3 matrix describing the individual lattice vectors. For example, a cubic lattice of length 5 Ångstrom:
###Code
my_lattice = Lattice([[5, 0, 0], [0, 5, 0], [0, 0, 5]])
print(my_lattice)
###Output
_____no_output_____
###Markdown
Equivalently, we can create it from its lattice parameters:
###Code
my_lattice_2 = Lattice.from_parameters(5, 5, 5, 90, 90, 90)
###Output
_____no_output_____
###Markdown
Or, since we know in this case that we have a cubic lattice, a == b == c and alpha == beta == gamma == 90 degrees, so we can simply put:
###Code
my_lattice_3 = Lattice.cubic(5)
###Output
_____no_output_____
###Markdown
We can confirm that these lattices are the same:
###Code
my_lattice == my_lattice_2 == my_lattice_3
###Output
_____no_output_____
###Markdown
Now, we can create a simple crystal structure. Let's start with body-centered-cubic iron:
###Code
bcc_fe = Structure(Lattice.cubic(2.8), ["Fe", "Fe"], [[0, 0, 0], [0.5, 0.5, 0.5]])
print(bcc_fe)
###Output
_____no_output_____
###Markdown
Creating this `Structure` was similar to creating a `Molecule` in that we provided a list of elements and a list of positions. However, there are two key differences: we had to include our `Lattice` object when creating the `Structure` and since we have a lattice, we can define the positions of our sites in *fractional coordinates* with respect to that lattice instead of Cartesian coordinates. It's also possible to create an equivalent `Structure` using Cartesian coordinates:
###Code
bcc_fe_from_cart = Structure(Lattice.cubic(2.8), ["Fe", "Fe"], [[0, 0, 0], [1.4, 1.4, 1.4]],
coords_are_cartesian=True)
print(bcc_fe_from_cart)
###Output
_____no_output_____
###Markdown
We can check that both structures are equivalent:
###Code
bcc_fe == bcc_fe_from_cart
###Output
_____no_output_____
###Markdown
As in a `Molecule`, we can access properties of the structure, such as its volume:
###Code
bcc_fe.volume
###Output
_____no_output_____
###Markdown
Creating Structures from Spacegroups Structures can also be created directly from their spacegroup:
###Code
bcc_fe = Structure.from_spacegroup("Im-3m", Lattice.cubic(2.8), ["Fe"], [[0, 0, 0]])
print(bcc_fe)
nacl = Structure.from_spacegroup("Fm-3m", Lattice.cubic(5.692), ["Na+", "Cl-"],
[[0, 0, 0], [0.5, 0.5, 0.5]])
print(nacl)
###Output
_____no_output_____
###Markdown
And spacegroups can be obtained from a structure:
###Code
nacl.get_space_group_info()
###Output
_____no_output_____
###Markdown
Where 225 is the spacegroup number. Supercells Alright, we are now well-versed in the art of creating singular structures. But in some cases, you really don't just want one unit cell; you want a supercell. Pymatgen provides a very simple interface to create superstructures. Let's start with the simplest structure that we can imagine: Polonium, a simple cubic metal.
###Code
polonium = Structure(Lattice.cubic(3.4), ["Po"], [[0.0, 0.0, 0.0]])
print(polonium)
###Output
_____no_output_____
###Markdown
To make a supercell, we can just multiply the structure by a tuple!
###Code
supercell = polonium * (2, 2, 2)
print(supercell)
###Output
_____no_output_____
###Markdown
Exercise: Barium TitanateLoad BaTiO3 from the CIF file provided (`BaTiO3.cif`). Replace the barium with a new element with atomic number equal to the day of the month you were born (e.g. 1st = hydrogen, 31st = gallium). Then, make a supercell of N unit cells in one cartesian direction of your choice where N is the integer of your birth month (e.g. January = 1, December = 12).
###Code
BaTiO3 = Structure.from_file("BaTiO3.cif")
print(BaTiO3.get_space_group_info())
BaTiO3.replace(0, 'Mg')  # replace the site at index 0 (Ba) with Mg
BaTiO3 = BaTiO3 * (1, 1, 4)  # supercell of 4 unit cells along c
print(BaTiO3)
print(BaTiO3.get_space_group_info())
###Output
_____no_output_____ |
clase_3.ipynb | ###Markdown
Review of fundamentals Strong law of large numbers Let $X_1, X_2, \ldots, X_N$ be independent and identically distributed (iid) random variables (R.V.) with $$\mathbb{E}[X_i] = \mu$$It holds that their average$$ \bar X = \frac{1}{N} (X_1 + X_2 + \ldots + X_N) \to \mu$$as $N \to \infty$> The average converges to the expected value for large N The strong law tells us that we can approximate $\mu$ with $\bar X$, but it gives no hint about how close $\bar X$ is to $\mu$. That last point is important because in practice $N$ will never be $\infty$ Central limit theorem If $X_1, X_2, \ldots, X_N$ are iid R.V., then their average is distributed as$$ \bar X \sim \mathcal{N}(\mu, \sigma^2/N) $$as $N \to \infty$. That is,> For large N the average (sum) is distributed as a normal centered at $\mu$ with standard deviation $\sigma/\sqrt{N}$ This holds regardless of the original distribution of the R.V., but they must be independent! More importantly,> The rate at which the estimator converges to its expected value is $\frac{1}{\sqrt{N}}$ Example: The distribution of the average of rolling $n$ dice
###Code
%matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib import animation
import numpy as np
import scipy.stats
fig, ax = plt.subplots(figsize=(8, 3), tight_layout=True)
def update_plot(k):
ax.cla()
    ax.set_title("Average of {0} die roll(s)".format(k+1))
dist = scipy.stats.multinomial(n=k+1, p=[1/6]*6)
repeats = dist.rvs(size=1000)
average_dice = np.sum(repeats*range(1, 7)/(k+1), axis=1)
ax.hist(average_dice, bins=12, density=True)
ax.set_xlim([1, 6])
anim = animation.FuncAnimation(fig, update_plot, frames=15, interval=1000,
repeat=True, blit=False)
###Output
_____no_output_____
###Markdown
Monte Carlo integrationMonte Carlo integration consists of estimating the expected value of a function $g(x)$, defined as$$\mathbb{E}[g(X)] = \int g(x) f(x) \,dx$$where $f(x)$ is the density of $x$, by means of$$\mathbb{E}[g(X)] \approx \hat g_N = \frac{1}{N} \sum_{i=1}^N g(x_i) \quad x_i \sim f(x)$$On the other hand, the central limit theorem guarantees that$$\hat g_N \sim \mathcal{N} \left( \mathbb{E}[g(X)], \sigma_N^2/N \right)$$where the sample variance can be estimated as$$\sigma_N^2 = \frac{1}{N-1} \sum_{i=1}^N (g(x_i) - \hat g_N)^2 $$> In short, the estimator converges to its expected value at a rate of $\frac{1}{\sqrt{N}}$, regardless of the dimensionality of the integral. That is, for a complicated integral in many dimensions, Monte Carlo can give us many advantages Markov chainsIn the previous lesson we saw the definition of a stochastic process. In what follows we will assume the stochastic process can only take values from a discrete set $\mathcal{S}$ at times $n>0$ that are also discrete. We will call $\mathcal{S}=\{1, 2, \ldots, M\}$ the set of **states** of the process, and each particular state is usually denoted by a natural number. For a stochastic process to be considered a **Markov chain** it must satisfy $$P(X_{n+1}|X_{n}, X_{n-1}, \ldots, X_{1}) = P(X_{n+1}|X_{n})$$which is known as the Markov property> The future state is independent of the past given the presentIn particular, if the Markov chain has discrete states and is homogeneous, we can write$$P(X_{n+1}=j|X_{n}=i) = P_{ij},$$where homogeneous means that the probability of transitioning from one state to another does not change over time. The probability $P_{ij}$ is usually called the "one-step" transition probability. The set of all possible combinations $P_{ij}$ for $i,j \in \mathcal{S}$ forms a square $M \times M$ matrix known as the transition matrix$$P = \begin{pmatrix} P_{11} & P_{12} & \ldots & P_{1M} \\ P_{21} & P_{22} & \ldots & P_{2M} \\\vdots & \vdots & \ddots & \vdots \\P_{M1} & P_{M2} & \ldots & P_{MM}\end{pmatrix}$$where the rows must always sum to 1$$\sum_{j \in \mathcal{S}} P_{ij} = 1$$In addition, all $P_{ij} \geq 0$ and $P_{ij} \in [0, 1]$. A transition matrix, or stochastic matrix, can be represented as a directed graph where the vertices are the states and the edges are the transition probabilities, or weights. The following is an example graph for a four-state system with all transitions equivalent and equal to $1/2$; transitions with probability $0$ are not shown. Now consider the following example. Note that if we leave state $0$ or state $3$ we can no longer return to them. Such a state is known as a **transient** state. Conversely, the states we can return to are called **recurrent** states. When there are states that cannot be returned to, the chain is said to be **reducible**; conversely, if we can return to every state, the chain is said to be **irreducible**. A reducible chain can be "split" to create irreducible chains. 
In the example above we can separate $\{0\}$, $\{1,2\}$ and $\{3\}$ into three irreducible chains. The previous Markov chain models a problem known as the gambler's ruin; you can read about it [here](https://en.wikipedia.org/wiki/Gambler%27s_ruin) and [here](http://manjeetdahiya.com/posts/markov-chains-gamblers-ruin/) Example: A two-state chainSay we want to forecast the weather in Valdivia using a Markov chain model. We will assume that tomorrow's weather is perfectly predictable from today's weather. Let there be two states- $s_A$ Rainy- $s_B$ SunnyWith conditional probabilities $P(s_A|s_A) = 0.7$, $P(s_B|s_A) = 0.3$, $P(s_A|s_B) = 0.45$ and $P(s_B|s_B) = 0.55$. In this case the transition matrix of the chain is$$ P = \begin{pmatrix} P(s_A|s_A) & P(s_B|s_A) \\ P(s_A|s_B) & P(s_B|s_B) \end{pmatrix} = \begin{pmatrix} 0.7 & 0.3 \\ 0.45 & 0.55 \end{pmatrix} $$which can also be visualized as a transition map. If it is sunny today, what is the probability that it rains tomorrow, in three days, and in one week? We can use the transition matrix to answer this question
###Code
import numpy as np
P = np.array([[0.70, 0.30],
[0.45, 0.55]])
###Output
_____no_output_____
###Markdown
We can create a state vector as an array with `one-hot` encoding
###Code
s0 = np.array([0, 1]) # Sunny state
###Output
_____no_output_____
###Markdown
Then, the probabilities for tomorrow given that today is sunny can be computed as$$s_1 = s_0 P$$which is known as the one-step transition
###Code
np.dot(s0, P)
###Output
_____no_output_____
###Markdown
The probability for three days ahead can be computed as$$s_3 = s_2 P = s_1 P^2 = s_0 P^3$$which is known as the 3-step transition. We only need to cube the matrix and multiply by the initial state
###Code
np.dot(s0, np.linalg.matrix_power(P, 3))
###Output
_____no_output_____
###Markdown
The forecast for one week ahead would then be the 7-step transition
###Code
np.dot(s0, np.linalg.matrix_power(P, 7))
###Output
_____no_output_____
###Markdown
We note that the state of our system starts to converge
###Code
np.dot(s0, np.linalg.matrix_power(P, 1000))
###Output
_____no_output_____
###Markdown
Stationary state of the Markov chainIf the Markov chain converges to a state, that state is called a **stationary state**. A chain can have more than one stationary state. By definition, a stationary state satisfies $$s P = s$$which corresponds to the eigenvalue/eigenvector problem> That is, the stationary states are the eigenvectors of the systemFor the previous example we had$$\begin{pmatrix} s_1 & s_2 \end{pmatrix} P = \begin{pmatrix} s_1 & s_2 \end{pmatrix}$$which results in the following equations$$0.7 s_1 + 0.45 s_2 = s_1 $$$$0.3 s_1 + 0.55 s_2 = s_2$$Both tell us that $s_2 = \frac{2}{3} s_1$; if we also require that $s_1 + s_2 = 1$ we can solve and obtain- $s_1 = 3/5 = 0.6$- $s_2 = 0.4$which is what we saw before. This tells us that it will rain on 60\% of days and be sunny on 40\% n-step transitionsAn interesting question to answer with a Markov chain is: What is the probability of reaching state $j$, given that I am in state $i$, if I take exactly $n$ steps? Consider for example where the transition matrix is clearly$$P = \begin{pmatrix} 1/2 & 1/4 & 1/4 \\ 1/4 & 1/2 & 1/4 \\1/4 & 1/4 & 1/2\end{pmatrix}$$- How many paths lead to $2$ from $0$ in 2 steps?- What is the associated probability?$$0.3125 = P_{00}P_{02} + P_{01}P_{12} + P_{02}P_{22} = \begin{pmatrix} P_{00} & P_{01} & P_{02} \end{pmatrix} \begin{pmatrix} P_{02} \\ P_{12} \\ P_{22} \end{pmatrix}$$ - How many paths lead to $0$ from $0$ in 2 steps?- What is the associated probability?$$0.375 = P_{00}P_{00} + P_{01}P_{10} + P_{02}P_{20} = \begin{pmatrix} P_{00} & P_{01} & P_{02} \end{pmatrix} \begin{pmatrix} P_{00} \\ P_{10} \\ P_{20} \end{pmatrix}$$
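As a quick numerical check (an addition of mine, not part of the original notebook), the stationary state of the weather chain can be computed as the left eigenvector of $P$ with eigenvalue 1:
###Code
import numpy as np

P_weather = np.array([[0.70, 0.30],
                      [0.45, 0.55]])
# left eigenvector of P (right eigenvector of P.T) with eigenvalue 1,
# normalized so its entries sum to one
vals, vecs = np.linalg.eig(P_weather.T)
s = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
print(s / s.sum())  # approximately [0.6, 0.4]
###Output
_____no_output_____
###Markdown
Back to the n-step questions for the three-state chain: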
###Code
P = np.array([[1/2, 1/4, 1/4],
[1/4, 1/2, 1/4],
[1/4, 1/4, 1/2]])
np.dot(P, P)
###Output
_____no_output_____
###Markdown
The probability of reaching state $j$ from state $i$ in $n$ steps is the element in row $i$ and column $j$ of the matrix $P^n$. What happens as $n$ tends to infinity?
###Code
display(np.linalg.matrix_power(P, 3),
np.linalg.matrix_power(P, 5),
np.linalg.matrix_power(P, 100))
###Output
_____no_output_____
###Markdown
All rows converge to the same value; this is known as the stationary distribution of the Markov chain. $P_{ij}^\infty$ gives the probability of being in $j$ after infinitely many steps; the starting point no longer matters. The rows of $P^\infty$ converge only if the chain is irreducible General algorithm to simulate a discrete Markov chainAssuming we have a system with a discrete set of states $\mathcal{S}$ and that we know the transition probability matrix $P$, we can simulate its evolution with the following algorithm1. Set $n=0$ and select an initial state $X_n = i$1. For $n = 1,2,\ldots,T$ 1. Get the row of $P$ corresponding to the current state $X_n$, i.e. $P[X_n, :]$ 1. Draw $X_{n+1}$ from a multinomial distribution with probability vector equal to the selected row Here $T$ is the simulation horizon Simulating a multinomial random variable
###Code
scipy.stats.multinomial.rvs(n=1, p=[0.7, 0.2, 0.1], size=1)
###Output
_____no_output_____
###Markdown
Taking the argmax of 100 realizations and plotting the histogram
###Code
fig, ax = plt.subplots(figsize=(5, 3), tight_layout=True)
ax.hist(np.argmax(scipy.stats.multinomial.rvs(n=1, p=[0.7, 0.2, 0.1], size=100), axis=1));
###Output
_____no_output_____
###Markdown
Simulating 10,000 chains over a horizon of 10 steps for the two-state example
###Code
P = np.array([[0.70, 0.30],
[0.45, 0.55]])
n_chains = 10000
horizon = 10
states = np.zeros(shape=(n_chains, horizon), dtype='int')
states[:, 0] = 1
for i in range(n_chains):
for j in range(1, horizon):
states[i, j] = np.argmax(scipy.stats.multinomial.rvs(n=1, p=P[states[i, j-1], :], size=1))
fig, ax = plt.subplots(figsize=(6, 3), tight_layout=True)
for i in range(3):
ax.plot(states[i, :], '-o', alpha=0.5)
n_states = len(np.unique(states))
hist = np.zeros(shape=(horizon, n_states))
for j in range(horizon):
hist[j, :] = np.array([sum(states[:, j] == s) for s in range(n_states)])
print(hist)
fig, ax = plt.subplots(figsize=(4, 3), tight_layout=True)
ax.plot(np.argmax(hist, axis=1), marker='x')
ax.axhline(1, ls='--', c='k', alpha=0.5)
ax.axhline(0, ls='--', c='k', alpha=0.5)
###Output
_____no_output_____
###Markdown
Class 3: Python fundamentals (II) Conditionals`if`, `elif`, `else`
###Code
x = 4
if x == 4:
    print('Hello')
elif x < 10:
    print('Indeed')
else:
print('No')
###Output
Hola
###Markdown
Write a function that receives an integer. If the number is less than 0, it should print 'Negative'. If the number is greater than 0 and less than 10, it should print 'Positive and less than 10'. If it is an integer greater than 10, it should print nothing.If it is a string, it should print 'It is a text. An integer is required.'If any other data type is passed, it should print 'Invalid data. An integer is required.'
###Code
def entero(a):
    if type(a)==int:
        if a < 0:
            print('Negative')
        elif (a > 0) and (a < 10):
            print('Positive and less than 10.')
        else:
            None
    elif type(a)==str:
        print('It is a text. An integer is required.')
    else:
        print('Invalid data. An integer is required.')
entero(4.5)
###Output
_____no_output_____
###Markdown
The for loop```for i in iterable:`````` loop body ```
###Code
# basic syntax
lista_pares = []
lista_impares = []
for i in range(0,30):
if i == 0:
None
elif i % 2 == 0:
lista_pares.append(i)
else:
lista_impares.append(i)
print(lista_pares)
print(lista_impares)
# nested loops
for i in range(3):
for j in range(3):
print(i,j)
###Output
0 0
0 1
0 2
1 0
1 1
1 2
2 0
2 1
2 2
###Markdown
The while loop
###Code
# basic syntax
contador = 0
while contador < 10:
    print(f'The count is at {contador}')
    contador += 1
contador
# infinite loop
while contador == 10:
    print('This is an infinite loop.')
for i in range(50):
    if i % 3 == 0:
        print(i, 'the code below will not run')
        continue
    elif i > 30:
        break
    print(i, 'the code runs')
###Output
_____no_output_____
###Markdown
List comprehension
###Code
%%time
lista_vacia = []
for i in range(20):
lista_vacia.append(i**2)
%%time
lista_2 = [(i**2,i**3) for i in range(20) if i %2 == 0]
%%time
matriz = [[1,2,3],
[4,5,6],
[7,8,9]]
vec = [valor for lista in matriz for valor in lista]
%%time
vector = []
for lista in matriz:
for valor in lista:
vector.append(valor)
vector
# syntax
###Output
_____no_output_____
###Markdown
Dict comprehension
###Code
dic_vacio = {}
for i in range(20):
dic_vacio[i] = i**2
dic_vacio
# syntax
dic_comp = {i:i**2 for i in range(20)}
###Output
_____no_output_____
###Markdown
The zip and enumerate functions
###Code
# we can iterate over several iterables simultaneously
x = [i for i in range(10)]
y = [i**2 for i in range(10)]
z = [i**3 for i in range(11)]
dic = {}
for i,j,k in zip(x,y,z):
dic[i] = (j,k)
dic
# we can iterate with a position indicator
lista = list(range(0,21,3))
for i in range(len(lista)):
print(i, lista[i])
for i in enumerate(lista):
print(i)
###Output
(0, 0)
(1, 3)
(2, 6)
(3, 9)
(4, 12)
(5, 15)
(6, 18)
###Markdown
Generators: yield
###Code
# basic syntax
def generador(n):
for i in range(n):
yield i
n = 10
a = generador(n)
contador = 0
while contador < n:
print(next(a))
contador += 1
next(a)  # raises StopIteration: the generator was exhausted by the loop above
list(generador(10))
import random as rd
import statistics as st
def gen_1(n):
for i in range(n):
yield rd.normalvariate(0,1)
for i in range(50):
media = st.mean(list(gen_1(200)))
print(media)
b = generador(3)
next(b)
###Output
_____no_output_____
###Markdown
The filter and map functions
###Code
# filter(condition, iterable): keeps the items for which the condition is True
# map(function, iterable): applies the function to every item
def gen_2(n):
for i in range(n):
yield i
list(filter(lambda x: x%2==0, gen_2(50)))
list(map(lambda x: (x**2, x**3), gen_2(15)))
###Output
_____no_output_____
###Markdown
Modules`math`, `statistics`, `random`
###Code
import math
import statistics as st
import random as rd
#from math import *
from math import factorial
###Output
_____no_output_____
###Markdown
$$\text{Factorial}$$$$n!=\prod_{i=1}^n i$$
###Code
math.factorial(6)
math.comb(6,3)
###Output
_____no_output_____
###Markdown
$$\text{Binomial coefficient}$$$$\frac{n!}{k!\cdot(n-k)!}$$
###Code
math.comb(3,2)
###Output
_____no_output_____
###Markdown
$$e^x$$
###Code
math.exp(4)
###Output
_____no_output_____
###Markdown
$$\ln 5$$
###Code
math.log(5)
###Output
_____no_output_____
###Markdown
$$x^n$$
###Code
math.pow(5,2)
pow(5,2)
###Output
_____no_output_____
###Markdown
$$\sqrt{5}$$
###Code
math.sqrt(5)
###Output
_____no_output_____
###Markdown
$$\pi$$
###Code
math.pi
###Output
_____no_output_____
###Markdown
$$e$$
###Code
math.e
math.log(math.e)
lista = [rd.randint(0,26) for i in range(100)]
###Output
_____no_output_____
###Markdown
$$\frac{1}{n}\sum_{i=1}^n x_i$$
###Code
st.mean(lista)
st.median(lista)
st.mode(lista)
st.quantiles(lista)
st.stdev(lista)
st.variance(lista)
rd.randrange(0,50)
rd.randint(0,26)
rd.choice([9,5,6,7])
rd.gauss(1,0.5)
muestra = rd.sample(lista,5)
muestra
rd.shuffle(muestra)
muestra
help(rd)
###Output
_____no_output_____ |
source/AI/KI_testing/ki prototypes/first anaconda python/Untitled.ipynb | ###Markdown
Import dependencies
###Code
import gym
import random
env = gym.make('CartPole-v0')
states = env.observation_space.shape[0]
actions = env.action_space.n
episodes = 10
'''for episode in range (1, episodes +1):
state = env.reset()
done = False
score = 0
while not done:
env.render()
action = random.choice([0,1])
n_state, reward, done, info = env.step(action)
score += reward
print('Episode:{} Score:{}'.format(episode, score))'''
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import Adam
def build_model(states, actions):
model = Sequential()
model.add(Flatten(input_shape=(1, states)))
model.add(Dense(24, activation='relu'))
model.add(Dense(24, activation='relu'))
model.add(Dense(actions, activation='linear'))
return model
model = build_model(states, actions)
model.summary()
from rl.agents import DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory
import tensorflow as tf
def build_agent(model, actions):
policy = BoltzmannQPolicy()
memory = SequentialMemory(limit=50000, window_length=1)
dqn = DQNAgent(model=model, memory=memory, policy=policy,
nb_actions=actions, nb_steps_warmup=10, target_model_update=1e-2)
return dqn
dqn = build_agent(model, actions)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])
dqn.fit(env, nb_steps=50000, visualize=False, verbose=1)
# delete the model (a common workaround when re-building the agent raises a Keras error on re-runs)
del model
###Output
_____no_output_____ |
01_nlp_twitter_sentiment_overview.ipynb | ###Markdown
This project explores the popular Kaggle dataset of airline Twitter sentiment, with the goal of using various sentiment analysis and classification methodologies in order to find the highest accuracy
###Code
# Fist let us import some of the packages we will be using
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('data/Tweets.csv')
df.head()
###Output
_____no_output_____
###Markdown
Let us remove some of the columns we will not be using from this dataframe, such as: 1. `tweet_id` 2. `name` All the other columns will be helpful in our classification methodologies.
###Code
df = df.drop('tweet_id',axis = 1)
df = df.drop('name', axis = 1)
###Output
_____no_output_____
###Markdown
Exploratory analysis Let's create some basic visualizations and see what we can learn about this dataset.
###Code
#This tells us there is 14,640 tweets in this data set
df.shape
# Improve plot to show various counts
plt.figure(figsize = (10, 6))
sns.countplot(x = df['airline_sentiment'])
plt.show()
# Let's look at the positive tweets
# Let's create a dataframe with only positive tweets
p_df = df[df['airline_sentiment'].str.contains('positive')]
# Let's look at the top airlines based on the number of positive tweets
top_airline = p_df[['airline','airline_sentiment_confidence']]
top_airline = top_airline.groupby('airline', as_index = False).count()
top_airline.sort_values('airline_sentiment_confidence', ascending = False)
# We can see that Southwest Airlines has the highest number of positive tweets
# Let's do the same but for negative tweets: which airlines have committed the worst offenses
# against passengers?
n_df = df[df['airline_sentiment'].str.contains('negative')]
bad_airline = n_df[['airline','airline_sentiment_confidence','negativereason']]
bad_airline = bad_airline.groupby('airline', as_index = False).count()
bad_airline.sort_values('airline_sentiment_confidence', ascending = False)
#Seems as if United has been pretty naughty, as well as US Airways
# we should look at which airline has the highest percentage of tweets that are positive
# Let's expand a bit more on the negativereason column, and see what types of complaints
# consumers had
complaints_df = n_df[['airline','negativereason']]
complaints_df = complaints_df.groupby('negativereason', as_index = False).count()
complaints_df.sort_values('airline', ascending = False)
# we can see that Customer Service Issue and Late flights took the bulk of it
#interesting to see what "Can't Tell" is; overall just a bad experience.
#tbh I have to admit, American-based carriers truly aren't the best relative to the world
#in terms of service.
###Output
_____no_output_____
###Markdown
Let's do the NLP work
###Code
# Let us import our NLP plugins; let's break it down differently for NLTK and spaCy
## MORE WORK NEEDED
# Introduce spaCy version
# introduce a stemmer
# clean up the code
# Do sentiment analysis, and compare it to what was labeled; see if it's the same or different with the NLTK version
# Plus spaCy version
# Plus Gensim, to see the difference
# NLTK
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
stop_words = stopwords.words('english')
#Adds stuff to our stop wors list
stop_words.extend(["@","n't",'.',','])
def WN_lemmatizer_NLTK(token):
    return WordNetLemmatizer().lemmatize(token)
def tokenizer_NLTK_1(text):
    #simple version
    return nltk.word_tokenize(text)
def tokenizer_NLTK_2(text):
    #a bit more complex
    #Tokenizes
    the_tokens = nltk.word_tokenize(text)
    #Removes stop words
    the_tokens = [token for token in the_tokens if token not in stop_words]
    #Lemmatizes each token (the original loop discarded the lemmatized results)
    the_tokens = [WN_lemmatizer_NLTK(token) for token in the_tokens]
    return the_tokens
# In order to do a sentiment analysis we need to tokenize our text data
# We need to put our text data in list form
tweet_text_list = []
for i in df['text']:
tweet_text_list.append(tokenizer_NLTK_2(i))
###Output
_____no_output_____
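###Markdown
The to-do list above mentions running a sentiment analysis and comparing it against the given labels. A minimal sketch with NLTK's VADER analyzer (my own addition; it assumes `nltk.download('vader_lexicon')` has been run):
###Code
from nltk.sentiment.vader import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
# score one tweet; compound > 0 suggests positive, < 0 suggests negative
print(df['text'].iloc[0])
print(sia.polarity_scores(df['text'].iloc[0]))
###Output
_____no_output_____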
###Markdown
Once we have completed the pre-processing, we can move on to some classification work :)
###Code
#Let's import some machine learning plugins
# We are going to split our data into train and test sets for our classifiers
from sklearn.model_selection import train_test_split
# Some evaluation metrics
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
# Classifcation Algorithms
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
def convert_sentiment_to_integer(word_sentiment):
return {'negative': 0, 'neutral':1, 'positive': 2}[word_sentiment]
# We need to establish our dependent variable
y = df.airline_sentiment
# The independent variable
X = df.text
count_vectorizer = CountVectorizer(analyzer = 'word')
split = 0.2
X_train, X_test = train_test_split(df, test_size=split)
X_train_tok = []
X_test_tok = []
#X_train = count_vectorizer.fit_transform(X_train)
#X_test = count_vectorizer.fit_transform(X_test)
for i in X_train['text']:
X_train_tok.append(i)
for i in X_test['text']:
X_test_tok.append(i)
X_train_features = count_vectorizer.fit_transform(X_train_tok)
# use transform (not fit_transform) so the test set is encoded with the training vocabulary
X_test_features = count_vectorizer.transform(X_test_tok)
dense_features = X_train_features.toarray()
dense_test = X_test_features.toarray()
DT_model = DecisionTreeClassifier()
#fit = DT_model.fit(dense_features, X_train['airline_sentiment'])
fit = DT_model.fit(X_train_features, X_train['airline_sentiment'])
pred = DT_model.predict(X_test_features)
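# (Added note) the evaluation metrics imported above are never used in the
# original cell; a hedged sketch of how the predictions would be scored:
print(accuracy_score(X_test['airline_sentiment'], pred))
print(classification_report(X_test['airline_sentiment'], pred))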
###Output
_____no_output_____ |
qsar_tut.ipynb | ###Markdown
importing the libraries
###Code
import pandas as pd
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem, Draw, Descriptors
###Output
_____no_output_____
###Markdown
importing and inspecting the raw data
###Code
sirovi_podaci=pd.read_csv('data/solubility.txt',\
delim_whitespace=True,names=['cas','smiles','logs'])
sirovi_podaci.head()
###Output
_____no_output_____
###Markdown
converting SMILES into the mol structure representation
###Code
mol_column=sirovi_podaci.smiles.apply(Chem.MolFromSmiles).rename('mol', inplace=True)
###Output
_____no_output_____
###Markdown
computing the descriptors
###Code
logp=mol_column.apply(Descriptors.MolLogP).rename('logp', inplace=True)
molwt=mol_column.apply(Descriptors.MolWt).rename('molwt', inplace=True)
balabanj=mol_column.apply(Descriptors.BalabanJ).rename('balabanj', inplace=True)
tpsa=mol_column.apply(Descriptors.TPSA).rename('tpsa', inplace=True)
###Output
_____no_output_____
###Markdown
merging the raw data with the descriptors
###Code
final_data=pd.concat([sirovi_podaci, mol_column, logp, molwt, balabanj, tpsa], axis=1)
final_data.head(4)
###Output
_____no_output_____
###Markdown
final_data.to_csv('data/solubility_all_data.csv') splitting the data into training and test sets
###Code
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(final_data, test_size=.2, random_state=42)
train_set.shape, test_set.shape
###Output
_____no_output_____
###Markdown
visualizing the relationships between the variables
###Code
train_set[['logp','molwt','tpsa','balabanj','logs']].corr()
from pandas.plotting import scatter_matrix
%matplotlib inline
scatter_matrix(train_set[['logp','molwt','tpsa','balabanj','logs']], figsize=(10,12));
###Output
_____no_output_____
###Markdown
selecting the best variable
###Code
from sklearn.feature_selection import f_regression
help(f_regression)
featsel=f_regression(X=train_set[['logp','molwt','tpsa','balabanj']], y=train_set['logs'])
F_values=featsel[0]
P_values=featsel[1]
P_values
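# (Added for illustration) the largest F statistic / smallest p-value marks the
# strongest single predictor -- expected to be 'logp', which the model below uses
['logp','molwt','tpsa','balabanj'][int(np.argmax(F_values))]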
###Output
_____no_output_____
###Markdown
training the model
###Code
from sklearn.linear_model import LinearRegression as linreg
xtrain=np.array(train_set.logp)[:, np.newaxis]
ytrain=np.array(train_set.logs)
xtest=np.array(test_set.logp)[:, np.newaxis]
y_true=np.array(test_set.logs)
fitter=linreg().fit(X=xtrain, y=ytrain)
a=fitter.coef_
b=fitter.intercept_
a, b
###Output
_____no_output_____
###Markdown
fitting the model and predicting
###Code
y_pred=fitter.predict(X=xtest)
###Output
_____no_output_____
###Markdown
visualizing the prediction
###Code
import matplotlib.pyplot as plt
fig, ax =plt.subplots(figsize=(10,12))
#scatter plot
plt.scatter(x=test_set.logp, y=test_set.logs, marker='^', label='test set')
# regression line
x_space=np.array([min(train_set.logp),max(train_set.logp)])
y_fit=x_space*a + b
plt.plot(x_space,y_fit, linestyle='--', c='r', lw=3, label="regression line")
#
plt.ylabel('logS', fontsize=20, color='red')
plt.xlabel('logP', fontsize=20, color='green')
plt.title('logS vs. logP', fontsize=20)
plt.legend(fontsize=20)
###Output
_____no_output_____
###Markdown
quality of the prediction
###Code
import seaborn as sns
sns.residplot(y_true,y_pred)
from sklearn.metrics import mean_squared_error as MSE
'MSE:',MSE(y_true, y_pred)
pd.DataFrame((y_pred-y_true)).hist()
fig, ax =plt.subplots(figsize=(10,12))
#scatter plot
plt.scatter(x=train_set.logp, y=train_set.logs, marker='^', label='training set')
# regression line
x_space=np.array([min(train_set.logp),max(train_set.logp)])
y_fit=x_space*fitter.coef_ + fitter.intercept_
ax.plot(x_space,y_fit, linestyle='--', c='r', lw=3, label="regression line")
#
ax.set_ylabel('logS', fontsize=20, color='red')
ax.set_xlabel('logP', fontsize=20, color='green')
ax.set_title('logS vs. logP', fontsize=20)
plt.legend(fontsize=20)
train_set[['logs','logp']].describe()
train_set[['logs','logp']].plot.box(figsize=(5,10), grid=True)
import statsmodels.api as sm
x_sm=sm.add_constant(xtrain)
model = sm.OLS(ytrain, x_sm)
fitted_model = model.fit()
fig, ax = plt.subplots(figsize=(12,8))
fig = sm.graphics.influence_plot(fitted_model , alpha = 0.05, ax=ax, criterion="cooks")
xtrain[941]
xtrain[692]
train_set[train_set.logp<-4.8][['smiles','logp']]
outlier1=train_set.smiles.loc[1056]
outlier2=train_set.smiles.loc[105]
o1_mol=Chem.MolFromSmiles(outlier1)
AllChem.Compute2DCoords(o1_mol)
print(Chem.MolToMolBlock(o1_mol))
Draw.MolToImage(o1_mol)
o2_mol=Chem.MolFromSmiles(outlier2)
AllChem.Compute2DCoords(o2_mol)
Draw.MolToImage(o2_mol)
###Output
_____no_output_____ |
ThermalCircuits.ipynb | ###Markdown
$$R_{eq,atm}= \left(\frac{1}{R_{rad}}+\frac{1}{R_{conv,o}}\right)^{-1}$$
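As a quick numerical aside (my own sketch; the resistance values below are made-up placeholders, not taken from this notebook), the parallel combination behaves as expected:
###Code
# hypothetical thermal resistances (illustrative values only)
R_rad, R_conv_o = 0.30, 0.12
R_eq_atm = 1 / (1 / R_rad + 1 / R_conv_o)
print(R_eq_atm)  # smaller than either branch, as expected for parallel paths
###Output
_____no_output_____
###Markdown
The circuit itself is drawn with SchemDraw: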
###Code
d = schem.Drawing()
d.add(e.DOT, label = r"$T_m$")
d.add(e.RES, d = 'right', label = R[0].name)
d.add(e.DOT, label = r"$T_{s,i}$")
R1 = d.add(e.RES, d = 'right', label = R[1].name)
d.add(e.DOT, label = r"$T_{s,o}$")
d.add(e.RES, d='right', label = r"$R_{eq,atm}$")
d.add(e.DOT, botlabel="$T_\infty$")
L1 = d.add(e.LINE, toplabel = "$q'$", endpts = [[-2.25, 0], [-0.25, 0]])
d.labelI(L1, arrowofst = 0)
d.draw()
###Output
_____no_output_____ |
notebooks/PolynomialHyperelastic.ipynb | ###Markdown
Polynomial Hyperelasticity OverviewThe polynomial hyperelastic model is a model in which the strain energy is a polynomial function of the invariants of the deformation tensor. The polynomial hyperelastic model- is a phenomenological model,- has an arbitrary number of parameters, though most materials use fewer than 4- is typically fit to uniaxial extension/compression, equibiaxial extension/compression, or shear experimental data- usually valid for strains less than 100% See Also- [User Defined Materials](UserMaterial.ipynb)- [Linear Elastic Material](LinearElastic.ipynb)- [Mooney-Rivlin Hyperelasticity](MooneyRivlin.ipynb) Contents1. Fundamental Equations2. Model Implementation3. Model Verification Fundamental EquationsThe strain energy in the polynomial hyperelastic model is defined as$$W = \sum_{i,j=0}^N c_{ij}\left(\overline{I}_1 - 3\right)^i \left(\overline{I}_2 - 3\right)^j + \frac{1}{D_1}\left(J-1\right)^2$$where the $c_{ij}$ and $D_1$ are material parameters, the $\overline{I}_i$ are the isochoric invariants of the right Cauchy deformation tensor $\pmb{C} = \pmb{F}^T{\cdot}\pmb{F}$, where $\pmb{F}$ is the deformation gradient tensor, and $J$ is the determinant of the deformation gradient. The Second Piola-Kirchhoff Stress Tensor$$ \pmb{T} = 2\frac{\partial W}{\partial \pmb{C}}$$For isotropic hyperelasticity $W=W\left(\overline{I}_1, \overline{I}_2, J\right)$ and$$ \pmb{T} = 2\left(\frac{\partial W}{\partial \overline{I}_1}\frac{\partial \overline{I}_1}{\partial \pmb{C}} + \frac{\partial W}{\partial \overline{I}_2}\frac{\partial \overline{I}_2}{\partial \pmb{C}} + \frac{\partial W}{\partial J}\frac{\partial J}{\partial \pmb{C}}\right) = \pmb{A} \cdot \pmb{B}$$where$$ \pmb{A} = \left[\frac{\partial W}{\partial \overline{I}_1} \quad \frac{\partial W}{\partial \overline{I}_2} \quad \frac{\partial W}{\partial J}\right]$$and\begin{align} \pmb{B} &= \left[ \frac{\partial \overline{I}_1}{\partial \pmb{C}} \quad \frac{\partial \overline{I}_2}{\partial \pmb{C}} \quad \frac{\partial J}{\partial \pmb{C}} \right] \\ &= \left[ J^{-2/3}\left(\pmb{I} - \frac{1}{3}I_1 \pmb{C}^{-1} \right) \quad J^{-4/3}\left( I_1\pmb{I} - \pmb{C} - \frac{2}{3}I_2\pmb{C}^{-1} \right)\quad \frac{1}{2}J\pmb{C}^{-1} \right]\end{align} Elastic stiffness in the material frame is given by\begin{align} \mathbb{L} &= 4\frac{\partial^2 W}{\partial\pmb{C}\partial\pmb{C}} = 4\frac{\partial}{\partial \pmb{C}}\left( \pmb{A}\cdot\pmb{B}\right) \\ &= 4\left(\frac{\partial \pmb{A}}{\partial \pmb{C}}\cdot\pmb{B} + \pmb{A}\cdot\frac{\partial \pmb{B}}{\partial \pmb{C}}\right)\end{align}where\begin{equation} \frac{\partial \pmb{A}}{\partial \pmb{C}} = \pmb{H} \cdot \pmb{B}, \quad \pmb{H} = \begin{bmatrix} \frac{\partial^2 W}{\partial \overline{I}_1 \partial\overline{I}_1} & \frac{\partial^2 W}{\partial \overline{I}_1 \partial \overline{I}_2} & \frac{\partial^2 W}{\partial \overline{I}_1 \partial J} \\ \frac{\partial^2 W}{\partial \overline{I}_2 \partial \overline{I}_1} & \frac{\partial^2 W}{\partial \overline{I}_2 \partial \overline{I}_2} & \frac{\partial^2 W}{\partial \overline{I}_2 \partial J} \\ \frac{\partial^2 W}{\partial J \partial \overline{I}_1} & \frac{\partial^2 W}{\partial J \partial \overline{I}_2} & \frac{\partial^2 W}{\partial J \partial J} \end{bmatrix}\end{equation}and\begin{equation} \frac{\partial \pmb{B}}{\partial \pmb{C}} = \begin{Bmatrix} \frac{1}{3}J^{-2/3}\left[ \pmb{I}\pmb{C}^{-1} - \pmb{C}^{-1}\pmb{I} - I_1\left( \pmb{C}^{-1}\odot\pmb{C}^{-1} - \frac{1}{3}\pmb{C}^{-1}\pmb{C}^{-1}\right) \right] \\
\frac{2}{3}J^{-4/3}\left[ \frac{3}{2}\left(\mathbb{I}_1 - \mathbb{I}_2\right) + \left(\pmb{C}^{-1}\pmb{C} + \pmb{C}\pmb{C}^{-1} \right) - I_1\left( \pmb{C}^{-1}\pmb{I} + \pmb{I}\pmb{C}^{-1} \right) - I_2\left( \pmb{C}^{-1}\odot\pmb{C}^{-1} - \frac{2}{3}\pmb{C}^{-1}\pmb{C}^{-1} \right) \right] \\ \frac{1}{4} J\left( \pmb{C}^{-1}\pmb{C}^{-1} - 2\pmb{C}^{-1}\odot\pmb{C}^{-1} \right) \end{Bmatrix}\end{equation}The operator $\odot$ is defined such that$$\mathbb{X}_{ijkl} = \pmb{A} \odot \pmb{B} = (A_{ik} B_{jl} + A_{il} B_{jk}) / 2$$ Requirements of ObjectivityThe constitutive equations for the hyperelastic material evaluate the stressdirectly in the reference configuration. The components of the stress areidentified as the components of the Second Piola-Kirchhoff stress $\pmb{T}$.The push forward of the Second Piola-Kirchhoff stress gives the Cauchy stress$\pmb{\sigma}$ in the spatial configuration, as required by most finiteelement packages. The push forward of the corresponding material stiffness$\mathbb{L}$ does not, however, correspond to the rate of Cauchy stress butthe Truesdell rate of the Cauchy stress. Furthermore, the rate of Cauchystress is not objective, requiring finite element packages to use otherso-called objective stress rates in the incremental solution of the momentumequation. In the following sections, it is demonstrated that the push forwardof the material stiffness does not correspond to the rate of Cauchy stress andequations relating the stiffness corresponding to the Jaumann rate of theKirchhoff stress to the push forward of the material stiffness are developed.The rate of change of Cauchy stress\begin{align} \dot{\pmb{\sigma}} &= \frac{d}{dt}\left( \frac{1}{J}\pmb{F}\cdot\pmb{T}\cdot{\pmb{F}}^T \right) \\ &= \frac{1}{J}\left[ -\frac{\dot{J}}{J}\pmb{F}\cdot\pmb{T}\cdot{\pmb{F}}^T + \dot{\pmb{F}}\cdot\pmb{T}\cdot{\pmb{F}}^T + \pmb{F} \cdot\frac{d\pmb{T}}{d\pmb{E}}{:}\dot{\pmb{E}} \cdot{\pmb{F}}^T + \pmb{F}\cdot\pmb{T}\cdot\dot{\pmb{F}}^T \right]\end{align}With the following identities\begin{equation} \mathrm{tr}{\pmb{d}} = \frac{\dot{J}}{J} \quad \dot{\pmb{F}} = \pmb{L}\cdot\pmb{F} \quad \dot{\pmb{E}} = \pmb{F}^T\cdot\pmb{d}\cdot\pmb{F} \quad \dot{\pmb{F}}^T = \pmb{F}^T\cdot\pmb{L}^T\end{equation} \begin{align} \dot{\pmb{\sigma}} &= \frac{1}{J}\left[ -\mathrm{tr}{\pmb{d}}\pmb{\sigma} + \pmb{L}\cdot\pmb{F}\cdot\pmb{T}\cdot{\pmb{F}}^T + \pmb{F}\cdot\mathbb{L}{:}\left( \pmb{F}^T\cdot\pmb{d}\cdot\pmb{F} \right)\cdot\pmb{F}^T + \pmb{F}\cdot\pmb{T}\cdot\pmb{F}^T\cdot\pmb{L}^T \right] \\ &= -\mathrm{tr}{\pmb{d}}\pmb{\sigma} + \pmb{L}\cdot\pmb{\sigma} + \frac{1}{J}\left( \pmb{F}\cdot\pmb{F}\cdot \mathbb{L} \cdot\pmb{F}^T\cdot\pmb{F}^T \right){:}\pmb{d} + \pmb{\sigma}\cdot\pmb{L}^T \\ &= -\mathrm{tr}{\pmb{d}}\pmb{\sigma} + \pmb{L}\cdot\pmb{\sigma} + \mathbb{C}{:}\pmb{d} + \pmb{\sigma}\cdot\pmb{L}^T\end{align}rearranging\begin{align} \dot{\pmb{\sigma}} - \pmb{\sigma}\cdot\pmb{L}^T - \pmb{L}\cdot\pmb{\sigma} + \mathrm{tr}{\pmb{d}}\pmb{\sigma} &= \mathbb{C}{:}\pmb{d} \\ \mathring{\pmb{\sigma}} &= \mathbb{C}{:}\pmb{d}\end{align}where\begin{align} \mathring{\pmb{\sigma}} = \dot{\pmb{\sigma}} - \pmb{\sigma}\cdot\pmb{L}^T - \pmb{L}\cdot\pmb{\sigma} + \mathrm{tr}{\pmb{d}}\pmb{\sigma}\end{align}is the Truesdell rate of the Cauchy stress. 
The Truesdell rate of the Cauchystress is related to the Lie derivative of the Kirchhoff stress by\begin{equation} \mathring{\pmb{\sigma}} = \frac{1}{J}\pmb{F}\cdot\dot{\pmb{T}}\pmb{F}^T = \frac{1}{J}\pmb{F}\cdot\left[\frac{d}{dt}\left( J\pmb{F}^{-1}\cdot\pmb{\sigma}\pmb{F}^{-T} \right)\right]\cdot\pmb{F}^T = \frac{1}{J}\pmb{F}\cdot\left[\frac{d}{dt}\left( \pmb{F}^{-1}\cdot\pmb{\tau}\pmb{F}^{-T} \right)\right]\cdot\pmb{F}^{T} = \frac{1}{J}\mathscr{L}\left({\pmb{\tau}}\right)\end{equation}Thus, the push forward of the material time derivative of $\pmb{T}$ is theTruesdell rate of the Cauchy stress. Correspondingly, the Truesdell rate ofthe Kirchhoff stress is related to the push forward of the material stiffness$\mathbb{L}$ by\begin{equation} \mathscr{L}\left({\pmb{\tau}}\right) = J\mathbb{C}{:}\pmb{d}\end{equation} In many finite element packages, the Jaumann rate of the Kirchhoff stress, andnot the Truesdell rate, is required. Thus, the elastic stiffness mustcorrespond to the Jaummann rate.. The Jaumann rate of the Kirchhoff stress isgiven by\begin{align} \stackrel{\nabla}{\pmb{\tau}} &= { \dot{\pmb{\tau}} + \pmb{\tau}\cdot\pmb{w} - \pmb{w}\cdot\pmb{\tau} } \\ &= \dot{J}\pmb{\sigma} + J\dot{\pmb{\sigma}} + J\pmb{\sigma}\cdot\pmb{w} - \pmb{w}\cdot J\pmb{\sigma} \\ &= J\left(\mathrm{tr}{\pmb{d}}\pmb{\sigma} + \dot{\pmb{\sigma}} + \pmb{\sigma}\cdot\pmb{w} - \pmb{w}\cdot J\pmb{\sigma}\right) \\ &= J\mathbb{D}{:}\pmb{d}\end{align}where $\mathbb{D}$ is the stiffness corresponding to the Jaumann rate.Subtracting $\mathring{\pmb{\tau}}$ from $\stackrel{\nabla}{\pmb{\tau}}$, the Jaumann stiffnesscan be cast in terms of $\mathbb{C}$\begin{align} \left(\mathbb{D} - \mathbb{C}\right){:}\pmb{d} &= \left(\mathrm{tr}{\pmb{d}}\pmb{\sigma} + \dot{\pmb{\sigma}} + \pmb{\sigma}\cdot\pmb{w} - \pmb{w}\cdot\pmb{\sigma}\right) - \left(\dot{\pmb{\sigma}} - \pmb{\sigma}\cdot{\pmb{L}}^T - \pmb{L}\cdot\pmb{\sigma} + \mathrm{tr}{\pmb{d}}\pmb{\sigma}\right) \\ &= \pmb{\sigma}\cdot\pmb{w} + \pmb{\sigma}\cdot{\pmb{L}}^T + \pmb{L}\cdot\pmb{\sigma} - \pmb{w}\cdot\pmb{\sigma} \\ &= \pmb{\sigma}\cdot\pmb{d} + \pmb{d}\cdot\pmb{\sigma} \\\end{align}Using indicial notation, and the fact that $\pmb{d}$ and $\pmb{\sigma}$are symmetric,\begin{align} \left(D_{ijkl} - C_{ijkl}\right)d_{kl} &= \sigma_{im}d_{mj} + d_{im}\sigma_{mj} \\ &= \left( \sigma_{ik}\delta_{jl} + \delta_{il}\sigma_{jk} \right)d_{kl}\end{align}from which\begin{align} \left( D_{ijkl} - C_{ijkl} - \sigma_{ik}\delta_{jl} - \delta_{il}\sigma_{jk} \right)d_{kl} = 0_{ij}\end{align} Since the preceding equation must hold for all $d_{kl}$, the stiffnesscorresponding to the Jaumman rate of the Kirchhoff stress is related to thestiffness corresponding to the Truesdell rate of the Kirchhoff stress as\begin{align} D_{ijkl} = C_{ijkl} + \sigma_{ik}\delta_{jl} + \delta_{il}\sigma_{jk}\end{align}Note that the stiffness must be made minorsymmetric\begin{align} D_{ijkl} = C_{ijkl} + \frac{1}{2}\left( \sigma_{ik}\delta_{jl} + \sigma_{il}\delta_{jk} + \delta_{il}\sigma_{jk} + \delta_{ik}\sigma_{jl} \right)\end{align}In symbolic notation,$$\mathbb{D} = \mathbb{C} + \pmb{\sigma}\odot\pmb{I} + \pmb{I}\odot\pmb{\sigma}$$which is the correct stiffness corresponding to the Jaumann rate of the Kirchhoffstress, in terms of the push forward of the material stiffness. Equations relating $\mathbb{C}$ to stiffnesses corresponding to other objective rates are derived similarly. 
Matmodlab ImplementationA polynomial material model is implemented as a standard Matmodlab material in `matmodlab2/materials/polyhyper.py` as a subclass of `Material` class. The model defines the following (required) attributes and methods- `name`: name by which the model is referenced - `eval`: method the updates the material stateAdditionally, several helper functions are imported from various locations in Matmodlab:- `matmodlab.materials.tensor` - `symsq`: Computes $\pmb{F}^T{\cdot}\pmb{F}$ and returns the result as a 6D first order tensor - `det`, `inv`: Computes the determinant and inverse of second order tensors (stored as matrix or array) - `invariants`: Computes the invariants of a second order tensor - `push`: Performs the push forward operation (for both second and fourth order tensors) - `dyad`: Computes the dyadic product of two vectors or second order tensors - `symshuffle`: Computes the product $X_{ijkl} = .5 (A_{ik} B_{jl} + A_{il} B_{jk}$) - `II1`, `II5`: Fourth order identity tensors - `I6`: Identity tensor stored as an array of length 6 The model implementation can be viewed by executing the following cell.
###Code
%pycat ../matmodlab2/materials/polyhyper.py
%pylab inline
from bokeh.io import output_notebook
from bokeh.plotting import *
from matmodlab2 import *
output_notebook()
def create_figure(**kwargs):
TOOLS = ('resize,crosshair,pan,wheel_zoom,box_zoom,'
'reset,box_select,lasso_select')
TOOLS = 'resize,pan,wheel_zoom,box_zoom,reset,save'
return figure(tools=TOOLS, **kwargs)
###Output
Populating the interactive namespace from numpy and matplotlib
Setting up the Matmodlab notebook environment
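###Markdown
As an aside (my own sketch, not the Matmodlab source), the $\odot$ product from the helper list above can be written compactly with `einsum`:
###Code
import numpy as np

def odot(A, B):
    """X_ijkl = 0.5 * (A_ik * B_jl + A_il * B_jk) -- the 'odot' product described above."""
    return 0.5 * (np.einsum('ik,jl->ijkl', A, B) + np.einsum('il,jk->ijkl', A, B))
###Output
_____no_output_____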
###Markdown
Verification Uniaxial StressFor an incompressible isotropic material, uniaxial stress is produced by the following deformation state$$[F] = \begin{bmatrix}\lambda && \\ & \frac{1}{\sqrt{\lambda}} & \\ & & \frac{1}{\sqrt{\lambda}} \end{bmatrix}$$The stress difference $\sigma_{\text{axial}} - \sigma_{\text{lateral}}$ is given by
###Code
from sympy import Symbol, Matrix, Rational, symbols, sqrt
lam = Symbol('lambda')
F = Matrix(3, 3, [lam, 0, 0, 0, 1/sqrt(lam), 0, 0, 0, 1/sqrt(lam)])
B = Matrix(3, 3, F.dot(F.T))
Bsq = Matrix(3, 3, B.dot(B))
I = Matrix(3, 3, lambda i,j: 1 if i==j else 0)
I1 = B.trace()
I2 = ((B.trace()) ** 2 - Bsq.trace()) / 2
J = F.det()
X = J ** Rational(1, 3)
C1, C2, D1 = symbols('C10 C01 D1')
I1B = I1 / X ** 2
I2B = I2 / X ** 4
S = 2 / J * (1 / X ** 2 * (C1 + I1B * C2) * B - 1 / X ** 4 * C2 * Bsq) \
+ (2 / D1 * (J - 1) - 2 * (C1 * I1B + 2 * C2 * I2B) / 3) * I
(S[0,0] - S[1,1]).simplify()
###Output
_____no_output_____
###Markdown
We now exercise the Mooney-Rivlin material model using Matmodlab
###Code
# Hyperelastic parameters; D1 is set to a small number (i.e., a large bulk modulus 2/D1) to force near-incompressibility
parameters = {'D1': 1.e-5, 'C10': 1e6, 'C01': .1e6}
# stretch to 300%
lam = linspace(.5, 3, 50)
# Set up the simulator
mps = MaterialPointSimulator('test1')
mps.material = PolynomialHyperelasticMaterial(**parameters)
# Drive the *incompressible* material through a path of uniaxial stress by
# prescribing the deformation gradient.
Fij = lambda x: (x, 0, 0, 0, 1/sqrt(x), 0, 0, 0, 1/sqrt(x))
mps.run_step('F', components=Fij(lam[0]), frames=10)
mps.run_step('F', components=Fij(1), frames=1)
mps.run_step('F', components=Fij(lam[-1]), frames=20)
# plot the analytic solution and the simulation
p = create_figure(x_axis_label='Stretch', y_axis_label='Stress')
C10, C01 = parameters['C10'], parameters['C01']
# analytic solution for true and engineering stress
s = 2*C01*lam - 2*C01/lam**2 + 2*C10*lam**2 - 2*C10/lam
# plot the analytic solutions
p.line(lam, s, color='blue', legend='True', line_width=2)
p.line(lam, s/lam, color='green', legend='Engineering', line_width=2)
lam_ = np.exp(mps.get('E.XX'))
ss = mps.get('S.XX') - mps.get('S.ZZ')
p.circle(lam_, ss, color='orange', legend='Simulation, True')
p.circle(lam_, ss/lam_, color='red', legend='Simulation, Engineering')
p.legend.location = 'top_left'
show(p)
# check the actual solutions
assert abs(amax(ss) - amax(s)) / amax(s) < 1e-6
assert abs(amin(ss) - amin(s)) / amin(s) < 1e-6
###Output
_____no_output_____ |
stats-279/SLU04 - Basic Stats with Pandas/Exercise notebook.ipynb | ###Markdown
SLU04 - Basic Stats with Pandas: Exercises notebookIn these exercises we'll use a real dataset about house prices in King County in the US to learn how to obtain basic statistics from the data. This dataset comes from [Kaggle](https://www.kaggle.com/harlfoxem/housesalesprediction). ObjectivesIn these exercises the objective is for you to learn how to use pandas to obtain simple statistics of datasets. Dataset informationThis dataset contains house sale prices for King County, which includes Seattle. It includes homes sold between May 2014 and May 2015.The columns in the dataset are the following:- `id`: a notation for a house- `date`: Date house was sold- `price`: Price is prediction target- `bedrooms`: Number of Bedrooms/House- `bathrooms`: Number of bathrooms/House- `sqft_living`: square footage of the home- `sqft_lot`: square footage of the lot- `floors`: total floors in house- `waterfront`: house which has a view to a waterfront- `view`: has been viewed- `condition`: how good the condition is (overall)- `grade`: overall grade given to the housing unit, based on King County grading system- `sqft_above`: square footage of house apart from basement- `sqft_basements`: square footage of the basement- `yr_built`: built Year- `yr_renovated`: Year when house was renovated- `zipcode`: zip- `lat`: latitude coordinate- `long`: longitude coordinate- `sqft_living15`: Living room area in 2015 (implies-- some renovations) This might or might not have affected the lotsize area- `sqft_lot15`: lotSize area in 2015 (implies-- some renovations)
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
import hashlib # this is used to validate an answer, if you don't have it do !pip3 install hashlib
data = pd.read_csv('data/kc_house_data.csv')
data.head()
###Output
_____no_output_____
###Markdown
---- Exercise 1 Ok, let's start with very basic statistics. - How much is the most expensive house? What is its `id`? - How much is the cheapest house? What is its `id`?
###Code
# price_most_expensive = ...
# id_most_expensive = ...
# price_cheapest = ...
# id_cheapest = ...
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Asserts
###Code
assert price_most_expensive == 7700000.0
assert id_most_expensive == 6762700020
assert price_cheapest == 75000.0
assert id_cheapest == 3421079032
###Output
_____no_output_____
###Markdown
--- Exercise 2 Let's check the number of bedrooms.- What is the maximum and minimum number of bedrooms in the dataset?- What is the most common number of bedrooms?- What is the average number of bedrooms?- What is the median?- What is the standard deviation?
###Code
# maximum = ...
# minimum = ...
# most_common = ... # Hint: you should return a number, not a pandas Series :)
# average = ...
# median = ...
# standard_deviation = ...
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Asserts
###Code
assert maximum == 33
assert minimum == 0
assert most_common == 3
assert math.isclose(average, 3.37, abs_tol = 0.01)
assert median == 3
assert math.isclose(standard_deviation, 0.93, abs_tol = 0.01)
###Output
_____no_output_____
###Markdown
---- Exercise 3 In the previous exercise, you calculated some basic statistics for the number of bedrooms of the houses in the dataset. Have a look at the numbers you got again. - Are the minimum and maximum close to the mean and median? - Is the median smaller or larger than the mean? Did you expect that? - Given your answers to the questions above, do you expect the distribution of the number of bedrooms to be skewed? If so, do you expect it to be positively or negatively skewed? Plot two histograms for the number of bedrooms, one with 10 bins and another with 30 bins. Can you visually confirm if the distribution is skewed?Compute the skewness measure for the number of bedrooms. Is it positive or negative?
###Code
## plot two histograms, side-by-side
# plt.figure(1, figsize=(12,4), dpi=200)
# plt.subplot(121)
## plot first histogram
# ...
# plt.subplot(122)
## plot first histogram
# ...
## compute skew measure
# skew = ...
# YOUR CODE HERE
raise NotImplementedError()
print("The skewness measure is {:.3f}.".format(skew))
###Output
_____no_output_____
###Markdown
Asserts
###Code
assert math.isclose(skew, 1.974, abs_tol=0.001)
###Output
_____no_output_____
###Markdown
--- Exercise 4 Now let's look at the area of the homes and compare it to the area of the homes excluding the basement. - Do you expect the distributions of the areas to be skewed? If so, positively or negatively? - Which of the distributions do you expect to have more kurtosis (i.e., a longer "tail")? Why? Verify the answers to the questions above by plotting a histogram (with 30 bins) for each of the distributions and by computing the skewness and kurtosis measures for each.
###Code
# plt.figure(1, figsize=(12,4), dpi=200)
# plt.subplot(121)
# ...
# plt.subplot(122)
# ...
# skew_living = ...
# skew_above = ...
# kurt_living = ...
# kurt_above = ...
# YOUR CODE HERE
raise NotImplementedError()
print("The skewness measure for the area of the home distribution is {:.3f}.".format(skew_living))
print("The skewness measure for the area minus the basement distribution is {:.3f}.".format(skew_above))
print("The kurtosis measure for the area of the home distribution is {:.3f}.".format(kurt_living))
print("The kurtosis measure for the area minus the basement distribution is {:.3f}.".format(kurt_above))
###Output
_____no_output_____
###Markdown
Asserts
###Code
assert(math.isclose(skew_living, 1.472, abs_tol=0.01))
assert(math.isclose(skew_above, 1.447, abs_tol=0.01))
assert(math.isclose(kurt_living, 5.243, abs_tol=0.01))
assert(math.isclose(kurt_above, 3.402, abs_tol=0.01))
###Output
_____no_output_____
###Markdown
--- Exercise 5 Find the quartiles of the area of the home and the area of the home minus basement.
###Code
# output quartiles as pandas Series
# living_quartiles = ...
# above_quartiles = ...
# YOUR CODE HERE
raise NotImplementedError()
display(pd.DataFrame({'Area of home Quantiles': living_quartiles,
'Area minus basement Quantiles': above_quartiles}))
###Output
_____no_output_____
###Markdown
Asserts
###Code
assert (living_quartiles.sum() == 19717)
assert (above_quartiles.sum() == 14660)
assert list(living_quartiles.index) == [0.0, 0.25, 0.5, 0.75, 1.0]
###Output
_____no_output_____
###Markdown
--- Exercise 6 In this exercise, obtain:* a list of all the years during which the houses were built;* a list of all the years during which the houses were renovated.*Reminder: none of the houses in this dataset was built and/or renovated when Jesus Christ was born.*
###Code
# list_years_built = ...
# list_years_renovated = ...
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Asserts
###Code
assert isinstance(list_years_built, (list,))
assert len(list_years_built) == 116
assert isinstance(list_years_renovated, (list,))
assert len(list_years_renovated) == 69
###Output
_____no_output_____
###Markdown
---- Exercise 7 In Exercise 4, we saw that the area of the homes had a skewed distribution, with certainly a few outliers (*really huge houses!*).Let's plot the histogram for the area again.
###Code
data.sqft_living.plot.hist(bins=30, figsize=(10,6));
plt.xlim(0);
plt.xlabel('area of the home (in square feet)');
###Output
_____no_output_____
###Markdown
How to deal with the outliers?In the Learning Notebook, you learned a few ways to deal with the outliers, in case they are negatively affecting your Machine Learning models. In this exercise, let's explore the **log transformation** and see if it helps us in this case.Do the following:* obtain the mean and the median of the areas of the homes; which one is greater?* create a new field called `log_sqrt_living` with the log of the areas;* obtain the mean and the median of the log of the areas; are they very different from each other?* obtain the skewness measure for the distribution of the log of the areas; how does it compare to the one you obtained in Exercise 4?* plot an histogram with 30 bins of the log of the areas.What do you think? Were the outliers dealt with? How about the skewness of the distribution?
###Code
## Compute mean and median of the areas of the homes
# area_mean = ...
# area_median = ...
## Create a new field called `log_sqft_living` with the log of the areas
# ...
## Compute mean and median of the log of the areas
# log_area_mean = ...
# log_area_median = ...
## Compute the skewness measure of the distribution of the log of the areas
# log_area_skew = ...
## Plot a histogram (with 30 bins) of the log of the areas
## For the x-axis, have the label 'log(area)'
# ...
# YOUR CODE HERE
raise NotImplementedError()
print('The home areas have mean %0.1f and median %0.1f' % (area_mean, area_median))
print('The log of areas have mean %0.3f and median %0.3f' % (log_area_mean, log_area_median))
print('The distribution of log of areas have a skewness measure of %0.3f' % log_area_skew)
###Output
_____no_output_____
###Markdown
Asserts
###Code
assert math.isclose(area_mean, 2079.9, abs_tol=0.1)
assert math.isclose(area_median, 1910.0, abs_tol=0.1)
assert math.isclose(log_area_mean, 7.55, abs_tol=0.01)
assert math.isclose(log_area_median, 7.55, abs_tol=0.01)
assert math.isclose(log_area_skew, -0.035, abs_tol=0.001)
assert math.isclose(data['log_sqft_living'].sum(), 163185.38, abs_tol=0.1)
###Output
_____no_output_____ |
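###Markdown
A possible sketch of the steps above (the natural log is assumed, which matches a mean of about 7.55 for areas near 1,900 square feet):
###Code
import numpy as np
import matplotlib.pyplot as plt
area_mean = data['sqft_living'].mean()
area_median = data['sqft_living'].median()
# log-transform the areas into a new column
data['log_sqft_living'] = np.log(data['sqft_living'])
log_area_mean = data['log_sqft_living'].mean()
log_area_median = data['log_sqft_living'].median()
log_area_skew = data['log_sqft_living'].skew()
data['log_sqft_living'].plot.hist(bins=30)
plt.xlabel('log(area)');
###Output
_____no_output_____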
Hood To Coast 2016 Leg Algorithm.ipynb | ###Markdown
Hood To Coast 2016: Leg Assignments for Team Midwest Express, based on stated preferences:
###Code
# imports used by this cell and the assignment algorithm below
import random
import numpy as np
import matplotlib.pyplot as plt

legs=[1,2,3,4,5,6,7,8,9,10,11,12]
preferences={'adam':None,
'audrey':[7,8,12],
'brian':[10,7,3],
'chris':[2,5,6],
'dan':[5,9,6],
'ellen':[6,8,10],
'emily':[4,1,11],
'jaro':None,
'mike':None,
'nora':[6,8,3],#,10,7,11,12,1],
'rob':[6,5,10],
'robert':[2,3,12]}
assigned_legs = assign_legs(legs,preferences)
van_image = plt.imread('/Users/danieljdenman/Desktop/h2c_vans_empy-01.png')
xpos = [450,450,450,600,600,600,460,460,460,610,610,610]
ypos = [90,140,180,85,130,165,380,430,465,375,425,460]
plt.imshow(van_image)
for runner,leg in assigned_legs.items():
plt.text(xpos[leg-1], ypos[leg-1], str(leg)+': '+runner, ha='left', rotation=0)
ax=plt.gca();ax.set_frame_on(False);ax.set_xticklabels('',visible=False);ax.set_xticks([]);ax.set_yticklabels('',visible=False);ax.set_yticks([])
f=plt.title('Midwest Express')
###Output
_____no_output_____
###Markdown
the algorithm used above:
###Code
def assign_legs(legs, preferences):
    assigned_legs = {}
    # first, deal with people who specified legs
    runners = [runner for runner, prefs in preferences.items() if prefs is not None]
    # randomly choosing who gets choice preference
    random.shuffle(runners)
    for runner in runners:
        preferred_legs = preferences[runner]
        for preference in preferred_legs:
            if preference in legs:
                assigned_legs[runner] = preference
                legs.remove(preference)
                break
        if runner not in assigned_legs:
            print('could not assign ' + runner + ' a desired leg. assigned a leftover leg instead, sorry.')
            preferences[runner] = None
    # after that, give everybody else a random remaining leg
    for runner in [runner for runner, prefs in preferences.items() if prefs is None]:
        leg = legs.pop(int(np.floor(np.random.rand() * len(legs))))
        assigned_legs[runner] = int(leg)
    return assigned_legs
###Output
_____no_output_____ |
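###Markdown
Because the algorithm shuffles runners and draws leftover legs at random, each run can assign legs differently; seeding both generators before calling `assign_legs` makes a draw reproducible (a sketch, seed value arbitrary):
###Code
import random
import numpy as np
random.seed(2016)     # fixes the shuffle order
np.random.seed(2016)  # fixes the leftover-leg draws
###Output
_____no_output_____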
Machine Learning Projects/flight delay prediction/flight_delay_prediction.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
df=pd.read_csv('FlightData.csv') #Importing the dataset
#Viewing the dataset as a DataFrame
df.head()
df.columns
df.shape
df['MONTH'].unique(),df['YEAR'].unique()
df.info()
#Finding the sum of null values in each column
df.isnull().sum()
#Dropping out the useless columns from the DataFrame
df.drop(['Unnamed: 25'],axis=1,inplace=True)
df.head()
df.shape
df.info()
#Keep the columns which are useful for the model
df=df[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK','ORIGIN','DEST','CRS_ARR_TIME','ARR_DEL15']]
df.head()
df.info()
#Checking the null values again in our new dataframe
df[df.isnull().values==True]
#Fill the null values in the ARR_DEL15 column with 1 (i.e., treat a missing arrival status as delayed)
#Note: I didn't try to remove the complete rows containing the null values, as I didn't want to lose data that may be useful
df.fillna(1,inplace=True)
#visualizing ARR_DEL15 column
fig, axs = plt.subplots(2)
df2=df[df['ARR_DEL15']==0]  # ARR_DEL15 == 0 means the flight arrived on time
Y=[len(df2[df2['MONTH']==i]) for i in df2['MONTH'].unique()]
axs[0].bar(df['MONTH'].unique(),Y)
axs[0].set(xlabel='MONTH', ylabel='No. of Flights on time')
df2=df[df['ARR_DEL15']==1]  # ARR_DEL15 == 1 means the flight was delayed 15+ minutes
Y=[len(df2[df2['MONTH']==i]) for i in df2['MONTH'].unique()]
axs[1].bar(df['MONTH'].unique(),Y)
axs[1].set(xlabel='MONTH', ylabel='No. of Flights late')
df.info()
df.head()
#Extract the hour of day from the hhmm-formatted scheduled arrival time
df['CRS_ARR_TIME']=(df['CRS_ARR_TIME']/100).astype(int) # coarser hourly buckets help the model generalize
df.head()
df['ORIGIN'].unique(),df['DEST'].unique()
df=pd.get_dummies(df,columns=['ORIGIN','DEST'])
df.head()
df.shape
#building the machine learning model
x=df.iloc[:,[0,1,2,3,5,6,7,8,9,10,11,12,13,14]].values
y=df.iloc[:,[4]].values
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.2,random_state=0)
from sklearn.ensemble import RandomForestClassifier
classifier=RandomForestClassifier(n_estimators=10,criterion='entropy',random_state=0)
classifier.fit(x_train,y_train)
x_train.shape
y_train.shape,y_test.shape
y_pred=classifier.predict(x_test)
res=classifier.score(x_test,y_test)
res # the accuracy score alone is not the best way to judge this model (the classes are imbalanced); other metrics follow below
#confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,y_pred)
#using ROC AUC
from sklearn.metrics import roc_auc_score
probab=classifier.predict_proba(x_test)
probab[:,1]
roc_auc_score(y_test, probab[:, 1])
#using recall
from sklearn.metrics import recall_score
recall_score(y_test,y_pred) # it may be low because the number of false negatives may be high
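# Complementary view (sketch): precision and F1 computed from the same predictions
from sklearn.metrics import precision_score, f1_score
precision_score(y_test, y_pred), f1_score(y_test, y_pred)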
from sklearn.metrics import roc_curve
fpr, tpr , _ = roc_curve(y_test, probab[:, 1])
plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], color='grey', linestyle='--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
df.columns
def predict_delay(dep_date_time,origin,destination):
from datetime import datetime
try:
dep_date_time_parsed = datetime.strptime(dep_date_time, '%m/%d/%Y %H:%M:%S')
except ValueError as e:
print('Error parsing date/time - {}'.format(e))
date=dep_date_time_parsed.day
month=dep_date_time_parsed.month
day_of_week=dep_date_time_parsed.isoweekday()
hour=dep_date_time_parsed.hour
origin=origin.upper()
destination=destination.upper()
input_1=[{'MONTH':month,'DAY_OF_MONTH':
date,'DAY_OF_WEEK':day_of_week,
              'CRS_ARR_TIME':hour,
'ORIGIN_ATL': 1 if origin=='ATL' else 0,
'ORIGIN_DTW': 1 if origin == 'DTW' else 0,
'ORIGIN_JFK': 1 if origin == 'JFK' else 0,
'ORIGIN_MSP': 1 if origin == 'MSP' else 0,
'ORIGIN_SEA': 1 if origin == 'SEA' else 0,
'DEST_ATL': 1 if destination == 'ATL' else 0,
'DEST_DTW': 1 if destination == 'DTW' else 0,
'DEST_JFK': 1 if destination == 'JFK' else 0,
'DEST_MSP': 1 if destination == 'MSP' else 0,
'DEST_SEA': 1 if destination == 'SEA' else 0 }]
return classifier.predict_proba(pd.DataFrame(input_1))[0][0] #It returns the probability of on time flight arrival
predict_delay('10/01/2018 21:45:00', 'JFK', 'ATL')
from datetime import datetime
dep_date_time='10/01/2018 21:45:00'
dep_date_time_parsed = datetime.strptime(dep_date_time, '%m/%d/%Y %H:%M:%S')
dep_date_time_parsed.date
#visualization
x_label=['Oct 1','Oct 2','Oct 3','Oct 4']
values=(predict_delay('10/01/2018 21:45:00', 'JFK', 'ATL'),predict_delay('10/02/2018 21:45:00', 'JFK', 'ATL'),
predict_delay('10/03/2018 21:45:00', 'JFK', 'ATL'),predict_delay('10/04/2018 21:45:00', 'JFK', 'ATL'))
plt.bar(x_label,values)
###Output
_____no_output_____ |
notebooks/end-to-end-structured/labs/.ipynb_checkpoints/3b_bqml_linear_transform_babyweight-checkpoint.ipynb | ###Markdown
LAB 3b: BigQuery ML Model Linear Feature Engineering/Transform.**Learning Objectives**1. Create and evaluate linear model with BigQuery's ML.FEATURE_CROSS1. Create and evaluate linear model with BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE1. Create and evaluate linear model with ML.TRANSFORM Introduction In this notebook, we will create multiple linear models to predict the weight of a baby before it is born, using increasing levels of feature engineering using BigQuery ML. If you need a refresher, you can go back and look how we made a baseline model in the previous notebook [BQML Baseline Model](../solutions/3a_bqml_baseline_babyweight.ipynb).We will create and evaluate a linear model using BigQuery's ML.FEATURE_CROSS, create and evaluate a linear model using BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE, and create and evaluate a linear model using BigQuery's ML.TRANSFORM.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/3b_bqml_linear_transform_babyweight.ipynb). Load necessary libraries Check that the Google BigQuery library is installed and if not, install it.
###Code
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1
###Output
_____no_output_____
###Markdown
Verify tables existRun the following cells to verify that we previously created the dataset and data tables. If not, go back to lab [1b_prepare_data_babyweight](../solutions/1b_prepare_data_babyweight.ipynb) to create them.
###Code
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
###Output
_____no_output_____
###Markdown
Lab Task 1: Model 1: Apply the ML.FEATURE_CROSS clause to categorical features. BigQuery ML now has ML.FEATURE_CROSS, a pre-processing clause that performs a feature cross, with syntax ML.FEATURE_CROSS(STRUCT(features), degree), where features are comma-separated categorical columns and degree is the highest degree of all combinations. Create the model with a feature cross.
###Code
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_1
OPTIONS (
MODEL_TYPE="LINEAR_REG",
INPUT_LABEL_COLS=["weight_pounds"],
L2_REG=0.1,
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
# TODO: Add base features and label
ML.FEATURE_CROSS(
# TODO: Cross categorical features
) AS gender_plurality_cross
FROM
babyweight.babyweight_data_train
###Output
_____no_output_____
###Markdown
Create two SQL statements to evaluate the model.
###Code
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_1,
(
SELECT
# TODO: Add same features and label as training
FROM
babyweight.babyweight_data_eval
))
%%bigquery
SELECT
# TODO: Select just the calculated RMSE
FROM
ML.EVALUATE(MODEL babyweight.model_1,
(
SELECT
# TODO: Add same features and label as training
FROM
babyweight.babyweight_data_eval
))
###Output
_____no_output_____
###Markdown
Lab Task 2: Model 2: Apply the BUCKETIZE Function ML.BUCKETIZE is a pre-processing function that creates "buckets" (i.e., bins): it converts a continuous numerical feature into a string feature whose values are bucket names. The syntax is ML.BUCKETIZE(feature, split_points), where split_points is an array of numerical points that determine the bucket bounds. Apply the BUCKETIZE function within FEATURE_CROSS.* Hint: Create a model_2.
###Code
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_2
OPTIONS (
MODEL_TYPE="LINEAR_REG",
INPUT_LABEL_COLS=["weight_pounds"],
L2_REG=0.1,
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
ML.BUCKETIZE(
# TODO: Bucketize mother_age
) AS bucketed_mothers_age,
plurality,
ML.BUCKETIZE(
# TODO: Bucketize gestation_weeks
) AS bucketed_gestation_weeks
)
) AS crossed
FROM
babyweight.babyweight_data_train
###Output
_____no_output_____
###Markdown
Create three SQL statements to EVALUATE the model.Let's now retrieve the training statistics and evaluate the model.
###Code
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_2)
###Output
_____no_output_____
###Markdown
We now evaluate our model on our eval dataset:
###Code
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_2,
(
SELECT
# TODO: Add same features and label as training
FROM
babyweight.babyweight_data_eval))
###Output
_____no_output_____
###Markdown
Let's select the `mean_squared_error` from the evaluation table we just computed and take its square root to obtain the RMSE.
###Code
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_2,
(
SELECT
# TODO: Add same features and label as training
FROM
babyweight.babyweight_data_eval))
###Output
_____no_output_____
###Markdown
Lab Task 3: Model 3: Apply the TRANSFORM clauseBefore we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause. This way we can have the same transformations applied for training and prediction without modifying the queries. Let's apply the TRANSFORM clause to the model_3 and run the query.
###Code
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_3
TRANSFORM(
# TODO: Add base features and label as you would in select
# TODO: Add transformed features as you would in select
)
OPTIONS (
MODEL_TYPE="LINEAR_REG",
INPUT_LABEL_COLS=["weight_pounds"],
L2_REG=0.1,
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
*
FROM
babyweight.babyweight_data_train
###Output
_____no_output_____
###Markdown
Let's retrieve the training statistics:
###Code
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_3)
###Output
_____no_output_____
###Markdown
We now evaluate our model on our eval dataset:
###Code
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_3,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
###Output
_____no_output_____
###Markdown
Let's select the `mean_squared_error` from the evaluation table we just computed and take its square root to obtain the RMSE.
###Code
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_3,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
###Output
_____no_output_____ |
Python/MachineLearning_Ng/andrew_summary.ipynb | ###Markdown
Tom Mitchell: a program is said to learn from experience E with respect to task T and performance measure P if its performance on T, as measured by P, improves with experience E. Starting from univariate linear regression. Number of features: $$1$$ Number of training examples: $$m$$ Feature/input variable: $$x$$ Target/output variable: $$y$$ A training example: $$(x,y)$$ The i-th training example: $$(x^{(i)},y^{(i)})$$ Hypothesis function: $$h$$ Univariate linear regression: $$h_\theta(x)=\theta_0+\theta_1x$$ Cost function (modeling error): $$J(\theta_0,\theta_1)=\cfrac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2$$ Our goal is to fit a model h that minimizes the cost function J, i.e.: $$\underbrace{minimize}_{\theta_0,\theta_1}J(\theta_0,\theta_1)$$ Batch gradient descent (updating both thetas simultaneously): $$\theta_j:=\theta_j-\alpha\cfrac{\partial}{\partial\theta_j}J(\theta_0,\theta_1)$$ (The update works through the derivative term: the sign of the slope guarantees that J(theta) decreases whether theta grows or shrinks, and as the slope of J(theta) flattens, the steps shrink as well; near a local minimum the derivative approaches 0, so gradient descent automatically takes smaller steps.) Implementing gradient descent therefore hinges on the derivative of the cost function J: $$\cfrac{\partial}{\partial\theta_j}J(\theta_0,\theta_1)=\cfrac{\partial}{\partial\theta_j}\cfrac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2$$ For j=0: $$\cfrac{\partial}{\partial\theta_0}J(\theta_0,\theta_1)=\cfrac{1}{m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})$$ For j=1: $$\cfrac{\partial}{\partial\theta_1}J(\theta_0,\theta_1)=\cfrac{1}{m}\sum_{i=1}^m((h_\theta(x^{(i)})-y^{(i)})\cdot x^{(i)})$$ The normal equation instead finds the parameters that minimize J by solving directly (not applicable when the matrix is non-invertible): $$\cfrac{\partial}{\partial\theta_j}J(\theta_j)=0$$ Let X be the feature matrix of the training set and y the vector of targets; then: $$\theta=(X^TX)^{-1}X^Ty$$
###Code
import numpy as np
def normalEqn(X,y):
theta=np.linalg.inv(X.T@X)@X.T@y
return theta
###Output
_____no_output_____
###Markdown
The multivariate case. General setting: number of features: $$n$$ Each training example is a row vector: $$x^{(i)}$$ The j-th feature of the i-th training example: $$x_j^{(i)}$$ X is an m×n-style matrix, where m is the number of examples and n the number of features. The multivariate hypothesis: $$h_\theta(x)=\theta_0+\theta_1x_1+\theta_2x_2+...+\theta_nx_n$$ To simplify the formula, introduce $x_0=1$: $$h_\theta(x)=\theta_0x_0+\theta_1x_1+\theta_2x_2+...+\theta_nx_n$$ that is: $$h_\theta(x)=\theta^TX$$ Our goal is the same as in the univariate case, just with more thetas: $$\theta_0=\theta_0-\alpha\cfrac{1}{m}\sum_{i=1}^m((h_\theta(x^{(i)})-y^{(i)})\cdot x_0^{(i)})$$$$\theta_1=\theta_1-\alpha\cfrac{1}{m}\sum_{i=1}^m((h_\theta(x^{(i)})-y^{(i)})\cdot x_1^{(i)})$$$$...$$
###Code
import numpy as np
def computeCost(X,y,theta):
inner=np.power(((X*theta.T)-y),2)
return np.sum(inner)/(2*len(X))
    # note: len(X) is m (the number of training examples), not n!
###Output
_____no_output_____
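###Markdown
A minimal batch gradient descent sketch matching the simultaneous update rule above (it assumes `X` is an m×(n+1) `np.matrix` that already contains the $x_0=1$ column and `theta` is a 1×(n+1) `np.matrix`, the same conventions as `computeCost`):
###Code
import numpy as np
def gradientDescent(X, y, theta, alpha, iters):
    m = len(X)
    for _ in range(iters):
        error = (X * theta.T) - y                      # h_theta(x) - y for every example
        theta = theta - (alpha / m) * (error.T * X)    # update all theta_j simultaneously
    return theta
###Output
_____no_output_____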
###Markdown
Optimization. Feature scaling: when two features have very different ranges, e.g. 0-5 versus 0-2000, the contour plot of the cost function becomes very elongated and gradient descent needs a great many iterations to converge. The remedy is to rescale all features to roughly the interval -1 to 1: $$x_n=\cfrac{x_n-\mu_n}{s_n}$$ with mean $\mu_n$ and standard deviation $s_n$. Learning rate: gradient descent's iterations are governed by the learning rate alpha. If it is too small, convergence requires a very large number of iterations; if it is too large, a step may fail to decrease the cost function and overshoot the local minimum, so the algorithm never converges. Suggested learning rates: $$\alpha=0.01,0.03,0.1,0.3,1,3,10$$
###Code
import numpy as np
def sigmoid(z):
return 1/(1+np.exp(-z))
###Output
_____no_output_____
###Markdown
我们预测的规则是h>0.5则y=1,反之y=0;又因为g(z)的0.5分界在于z=0,所以$\theta^Tx?0$是判断的边界。故决策边界(曲线)方程即是:$$\theta^Tx=0$$线性回归的代价函数:$$J(\theta)=\cfrac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2$$逻辑回归的代价函数:$$J(\theta)=\cfrac{1}{m}\sum_{i=1}^mCost(h_\theta(x^{(i)})-y^{(i)})$$y=1时:$$Cost(h_\theta(x),y)=-log(h_\theta(x))$$y=0时:$$Cost(h_\theta(x),y)=-log(1-h_\theta(x))$$简化之:$$J(\theta)=-\cfrac{1}{m}\sum_{i=1}^m[y^{(i)}log(h_\theta(x^{(i)}))+(1-y^{(i)})log(1-h_\theta(x^{(i)}))]$$
###Code
import numpy as np
def cost(theta,X,y):
theta=np.matrix(theta)
X=np.matrix(X)
y=np.matrix(y)
first=np.multiply(y,np.log(sigmoid(X*theta.T)))
    second=np.multiply((1-y),np.log(1-sigmoid(X*theta.T)))
return np.sum(first+second)/-(len(X))
###Output
_____no_output_____ |
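###Markdown
Following the h ≥ 0.5 decision rule above, a small prediction helper consistent with the `sigmoid` defined earlier (a sketch, same `np.matrix` conventions):
###Code
def predict(theta, X):
    # h_theta(x) is the probability that y = 1; predict 1 when it reaches 0.5
    probability = sigmoid(X * theta.T)
    return [1 if p >= 0.5 else 0 for p in probability]
###Output
_____no_output_____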
examples/Marks/Pyplot/Map.ipynb | ###Markdown
Basic Map
###Code
# bqplot imports assumed by all cells below (the captured notebook omits its import cell)
from bqplot import Orthographic, Mercator, AlbersUSA, ColorScale, Tooltip
import bqplot.pyplot as plt

fig = plt.figure(title="Basic Map")
plt.geo(map_data="WorldMap")
fig
###Output
_____no_output_____
###Markdown
Advanced Map and Projection
###Code
fig = plt.figure(title="Advanced Map")
geo_scale = Orthographic(scale_factor=375, center=[0, 25], rotate=(-50, 0))
plt.scales(scales={"projection": geo_scale})
map_mark = plt.geo(
map_data="WorldMap",
colors={682: "Green", 356: "Red", 643: "#0000ff", "default_color": "DarkOrange"},
)
fig
geo_scale.scale = 350
###Output
_____no_output_____
###Markdown
Choropleth
###Code
fig = plt.figure(title="Choropleth")
plt.scales(scales={"projection": Mercator(), "color": ColorScale(scheme="YlOrRd")})
chloro_map = plt.geo(
map_data="WorldMap",
color={643: 105.0, 4: 21.0, 398: 23.0, 156: 42.0, 124: 78.0, 76: 98.0},
colors={"default_color": "Grey"},
)
fig
###Output
_____no_output_____
###Markdown
USA State Map
###Code
fig = plt.figure(title="US States Map")
plt.scales(scales={"projection": AlbersUSA()})
states_map = plt.geo(map_data="USStatesMap")
fig
###Output
_____no_output_____
###Markdown
Europe Country Map
###Code
fig = plt.figure(title="Europe Countries Map")
plt.scales(scales={"projection": Mercator(scale_factor=450)})
euro_map = plt.geo(map_data="EuropeMap")
fig
###Output
_____no_output_____
###Markdown
Interactions
###Code
fig = plt.figure(title="Interactions")
def_tt = Tooltip(fields=["id", "name"])
map_mark = plt.geo(
map_data="WorldMap",
tooltip=def_tt,
interactions={"click": "select", "hover": "tooltip"},
)
fig
# clicking on any country will update the 'selected' attribute of map_mark
map_mark.selected
###Output
_____no_output_____
notebooks/FF_III_final.ipynb | ###Markdown
ASSIGNMENT 3 - BDE STREAMING WITH KAFKA AND SPARK STREAMING TEAM - FINAL FANTASY III
###Code
# Importing required packages and libraries
from pyspark.sql import SparkSession
from IPython.display import display, clear_output
import time
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
import confluent_kafka
import random as rd
import numpy as np
from IPython.display import clear_output
import matplotlib.pyplot as plt
#Instantiate a local spark session and saved variable
spark = SparkSession.builder \
.appName('kafka') \
.getOrCreate()
#Subscribe to the topic stock-trades from the Kafka broker and read latest data into a Spark dataframe called stream_df
stream_df = spark \
.readStream \
.format("kafka") \
.option("kafka.bootstrap.servers", "broker:29092") \
.option("startingOffsets", "latest") \
.option("subscribe", "stock-trades") \
.load()
stream_df.printSchema()
raw_stream = stream_df \
.writeStream \
.format("memory") \
.queryName("raw_stocktrade_view") \
.start()
clear_output(wait=True)
display(spark.sql('SELECT * FROM raw_stocktrade_view').show(20))
time.sleep(10)
raw_stream.stop()
#Create string stream from Stock Trade data
string_stream_df = stream_df \
.withColumn("key", stream_df["key"].cast(StringType())) \
.withColumn("value", stream_df["value"].cast(StringType()))
string_stream = string_stream_df \
.writeStream \
.format("memory") \
.queryName("string_stocktrade_view") \
.start()
#Show 5 second output
clear_output(wait=True)
display(spark.sql('SELECT * FROM string_stocktrade_view').show(20, False))
time.sleep(5)
string_stream.stop()
#Define values in the JSON from streaming
schema_stocktrade = StructType([
StructField('payload', StructType([
StructField("side", StringType(), True),
StructField("quantity", IntegerType(), True),
StructField("symbol", StringType(), True),
StructField("price", IntegerType(), True),
StructField("account", StringType(), True),
StructField("userid", StringType(), True)
]))
])
###Output
_____no_output_____
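###Markdown
As a sanity check, the schema above can be exercised on a single hand-written message (the field values below are illustrative, copied from the sample output later in this notebook):
###Code
# Parse one example JSON value with the schema defined above
sample = '{"payload": {"side": "BUY", "quantity": 3648, "symbol": "ZJZZT", "price": 925, "account": "ABC123", "userid": "User_7"}}'
spark.createDataFrame([(sample,)], ["value"]) \
    .select(F.from_json("value", schema_stocktrade).alias("value")) \
    .select("value.payload.*") \
    .show()
###Output
_____no_output_____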
###Markdown
PART 3a - STREAM CREATION
###Code
#Stream 1 - JSON value stream
json_stream_df = string_stream_df\
.withColumn("value", F.from_json("value", schema_stocktrade))
json_stream_df.printSchema()
#Stream 2 - Define Stock Trade values stream
stocktrade_stream_df = json_stream_df \
.select( \
F.col("key").alias("event_key"), \
F.col("topic").alias("event_topic"), \
F.col("timestamp").alias("event_timestamp"), \
"value.payload.account", \
"value.payload.symbol", \
"value.payload.side", \
"value.payload.price", \
"value.payload.quantity", \
"value.payload.userid"
)
stocktrade_stream_df.printSchema()
#Create base Stock Trade View stream
stocktrade_stream = stocktrade_stream_df \
.writeStream \
.format("memory") \
.queryName("stocktrade_view") \
.start()
clear_output(wait=True)
display(spark.sql('SELECT * FROM stocktrade_view').show(20))
time.sleep(1)
#Create an aggregated view: trade count and average price/quantity per event key
clear_output(wait=True)
display(spark.sql('SELECT event_key, COUNT(1) AS count, round(mean(price),0) as price, round(mean(quantity),0) as qty FROM stocktrade_view GROUP BY 1').show(20))
time.sleep(1)
stocktrade_stream.stop()
###Output
_____no_output_____
###Markdown
Part 3b - WINDOWED STREAM WITH WATERMARK
###Code
#Set Window parameters
window_duration = '60 seconds'
slide_duration = '10 seconds'
#Create window view from stock_trade stream
windowed_count_df = stocktrade_stream_df \
.withWatermark("event_timestamp", "1 minutes") \
.groupBy(F.window(stocktrade_stream_df.event_timestamp, window_duration, slide_duration), stocktrade_stream_df.symbol) \
.count()
###Output
_____no_output_____
###Markdown
Part 3c - SPARK QUERY
###Code
#Count view = count of trades per symbol within each sliding window
count_stream = windowed_count_df \
.writeStream \
.format("memory") \
.outputMode("Complete") \
.queryName("count_view") \
.start()
while True:
clear_output(wait=True)
display(spark.sql('SELECT * FROM count_view LIMIT 20').show())
time.sleep(1)
count_stream.stop()
###Output
_____no_output_____
###Markdown
PART 3e - VISUALISATIONS
###Code
# Graphing real-time data
# loading packages
import time
import random as rd
import numpy as np
from IPython.display import clear_output
import matplotlib.pyplot as plt
# sample data from stocktrade_view
x_y = spark.sql('''SELECT * FROM stocktrade_view order by event_timestamp desc limit 2000''')
x_y_df = x_y.toPandas()
x_y_df
# The following routine displays 3 graphs (1 scatter plot, 1 line graph and 1 column chart)
def bunchofplots(x1, y1, title1, x2, data2, labels2, title2, x3, y3, title3):
clear_output(wait=True)
# Placing the plots in the plane
fig = plt.figure(figsize = (15,10))
plot1 = plt.subplot2grid((4, 9), (0, 0), rowspan=2, colspan=3)
plot2 = plt.subplot2grid((4, 9), (0, 4), rowspan=4, colspan=5)
plot3 = plt.subplot2grid((4, 9), (2, 0), rowspan=2, colspan=3)
    # x values from 0 to 10 (kept from the original layout; unused below)
    x = list(range(0, 11))
# scatter plot
plot1.scatter(x1, y1, color='k') # black dots
m, b = np.polyfit(x1, y1, 1)
plot1.plot(x1, [x * m for x in x1] + b) # showing the line of best-fit
plot1.set(xlabel = 'Price', ylabel = 'Quantity')
plot1.set_title(title1)
# column chart
plot2.bar(x2 + 0.00, data2[0], color = 'g', width = 0.25)
plot2.bar(x2 + 0.25, data2[1], color = 'r', width = 0.25)
plot2.legend(labels=['BUY', 'SELL'])
plot2.set_xticks(x2)
plot2.set_xticklabels(labels2)
plot2.set_title(title2)
plot2.grid(True)
plot2.set(xlabel = 'Stock symbol', ylabel = 'Traded amount in $')
# line graph
plot3.plot(x3, y3, color='r')
plot3.set_title(title3)
plot3.set(xlabel = 'Time', ylabel = 'Price')
# Packing all the plots and displaying them
plt.tight_layout()
plt.show()
# combining and updating 3 graphs simultaneously
stocktrade_stream.stop()
stocktrade_stream = stocktrade_stream_df \
.writeStream \
.format("memory") \
.queryName("stocktrade_view") \
.start()
while True:
# aggregating streamed data and displaying in a scatter plot
# plotting Quantity vs Price
x_y1 = spark.sql('''SELECT * FROM stocktrade_view order by event_timestamp desc limit 200''')
x_y1_df = x_y1.toPandas()
if x_y1_df.shape[0] > 0:
x1 = x_y1_df.price
y1 = x_y1_df.quantity
title1 = f'Quantity vs Price in last 200 transactions before {max(x_y1_df.event_timestamp)}'
# aggregating the streamed data and displaying in a column plot
# plotting traded amount vs stock (symbol)
x_y2= spark.sql('''SELECT
symbol
, side
, sum(price*quantity)/100 as traded_amount_in_dollars
, max(event_timestamp) as max_event_timestamp
FROM (select * from stocktrade_view order by event_timestamp desc limit 200) as current
group by
side
, symbol
''')
x_y2_df = x_y2.toPandas()
labels2 = list(x_y2_df.symbol.unique())
data2 = [ x_y2_df[x_y2_df['side'] == 'BUY']['traded_amount_in_dollars']
,x_y2_df[x_y2_df['side'] == 'SELL']['traded_amount_in_dollars']
]
x2 = np.arange(len(x_y2_df.symbol.unique()))
title2 = 'Traded amount of last 200 transactions'
# aggregating the streamed data and displaying in a line plot
x_y3= spark.sql('''SELECT
price
, event_timestamp
FROM stocktrade_view
where side = 'SELL'
and symbol = 'ZTEST'
order by event_timestamp desc limit 10
''')
x_y3_df = x_y3.toPandas()
x3 = x_y3_df.event_timestamp
y3 = x_y3_df.price
title3 = 'Last 10 sell-prices for the stock, ZTEST'
print('Preparing streamed data for plots - refreshing every 10 s')
if (x_y1_df.shape[0] > 0):
bunchofplots(x1, y1, title1, x2, data2, labels2, title2, x3, y3, title3)
# updating every 10 s
time.sleep(10)
###Output
_____no_output_____
###Markdown
PART 3d - PARQUET SINK
###Code
# Create dataframe for training and test data, then export to parquet file
from pyspark import SparkFiles
from pyspark.sql import SQLContext
sc = spark.sparkContext
sqlContext = SQLContext(sc)
# Capture training data - Automatically stops when rows exceed 10000 (checking every 30 seconds).
df = sqlContext.sql('SELECT * FROM stocktrade_view')
while(df.count() < 10000):
time.sleep(30)
df = sqlContext.sql('SELECT * FROM stocktrade_view')
df.count()
# Set whole streaming dataset to 10000 rows
df_train = sqlContext.sql('SELECT * FROM stocktrade_view order by event_timestamp limit 8000')
df_test = sqlContext.sql('SELECT * FROM stocktrade_view order by event_timestamp desc limit 2000')
df_concat = df_train.union(df_test)
df_concat.count()
# Sink all data and traning data to parquet files separately
df_train.write.parquet('./query2_trainingdata.parquet',mode='append')
df_concat.write.parquet('./query2_alldata.parquet',mode='append')
# Read parquet file for visualisation and machine learning
query2_df = sqlContext.read.parquet('./query2_alldata.parquet')
query2_df.show(20)
###Output
+---------+------------+--------------------+-------+------+----+-----+--------+------+
|event_key| event_topic| event_timestamp|account|symbol|side|price|quantity|userid|
+---------+------------+--------------------+-------+------+----+-----+--------+------+
| ZVZZT|stock-trades|2021-06-13 07:07:...| XYZ789| ZVZZT|SELL| 411| 1897|User_7|
| ZVZZT|stock-trades|2021-06-13 07:07:...| LMN456| ZVZZT| BUY| 253| 3377|User_4|
| ZJZZT|stock-trades|2021-06-13 07:07:...| ABC123| ZJZZT| BUY| 925| 3648|User_7|
| ZXZZT|stock-trades|2021-06-13 07:07:...| XYZ789| ZXZZT|SELL| 543| 1046|User_1|
| ZVV|stock-trades|2021-06-13 07:07:...| ABC123| ZVV| BUY| 62| 3178|User_8|
| ZTEST|stock-trades|2021-06-13 07:07:...| LMN456| ZTEST|SELL| 270| 3418|User_9|
| ZTEST|stock-trades|2021-06-13 07:07:...| ABC123| ZTEST| BUY| 329| 4134|User_8|
| ZVZZT|stock-trades|2021-06-13 07:07:...| XYZ789| ZVZZT| BUY| 668| 3429|User_6|
| ZTEST|stock-trades|2021-06-13 07:07:...| LMN456| ZTEST| BUY| 745| 4448|User_5|
| ZXZZT|stock-trades|2021-06-13 07:07:...| ABC123| ZXZZT| BUY| 706| 2733|User_4|
| ZBZX|stock-trades|2021-06-13 07:07:...| LMN456| ZBZX|SELL| 308| 387|User_4|
| ZTEST|stock-trades|2021-06-13 07:07:...| LMN456| ZTEST|SELL| 920| 380|User_2|
| ZVV|stock-trades|2021-06-13 07:07:...| ABC123| ZVV|SELL| 966| 3548|User_4|
| ZJZZT|stock-trades|2021-06-13 07:07:...| XYZ789| ZJZZT|SELL| 846| 2400|User_2|
| ZVZZT|stock-trades|2021-06-13 07:07:...| LMN456| ZVZZT|SELL| 618| 831|User_2|
| ZVZZT|stock-trades|2021-06-13 07:07:...| LMN456| ZVZZT| BUY| 478| 1898|User_9|
| ZBZX|stock-trades|2021-06-13 07:07:...| ABC123| ZBZX|SELL| 660| 2327|User_6|
| ZTEST|stock-trades|2021-06-13 07:07:...| XYZ789| ZTEST|SELL| 506| 2575|User_6|
| ZVV|stock-trades|2021-06-13 07:07:...| ABC123| ZVV|SELL| 624| 1706|User_8|
| ZXZZT|stock-trades|2021-06-13 07:07:...| LMN456| ZXZZT| BUY| 737| 1972|User_8|
+---------+------------+--------------------+-------+------+----+-----+--------+------+
only showing top 20 rows
###Markdown
PART 4a - ML WITH SPARK
###Code
# Setup pipelines to train regression model to predict "quantity"
from pyspark.ml.regression import LinearRegression
from pyspark.ml import Pipeline
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.feature import StringIndexer,VectorAssembler,VectorIndexer
from pyspark.ml.feature import OneHotEncoder
cat_cols = ['userid','account','symbol','side']
stages = []
for cat_col in cat_cols:
col_indexer = StringIndexer(inputCol=cat_col, outputCol =f"{cat_col}_ind")
col_encoder = OneHotEncoder(inputCols=[f"{cat_col}_ind"],outputCols=[f"{cat_col}_ohe"])
stages += [col_indexer, col_encoder]
num_cols = ['price']
cat_cols_ohe = [f"{cat_col}_ohe" for cat_col in cat_cols]
assembler = VectorAssembler(inputCols=cat_cols_ohe+num_cols, outputCol="features")
regression = LinearRegression(labelCol='quantity',regParam = 0, maxIter = 10)
stages += [assembler,regression]
print(stages)
pipeline = Pipeline(stages=stages)
pipeline = pipeline.fit(query2_df)
train_assessment = pipeline.transform(query2_df)
#Trained regression model evaluation- RMSE
lr_evaluator = RegressionEvaluator(metricName="rmse", labelCol=regression.getLabelCol(), \
predictionCol=regression.getPredictionCol())
RMSE = lr_evaluator.evaluate(train_assessment)
print(RMSE)
train_assessment.show(10)
train_assessment.count()
test_assessment = pipeline.transform(df_test)
#Regression model test evaluation- RMSE
lr_evaluator = RegressionEvaluator(metricName="rmse", labelCol=regression.getLabelCol(), \
predictionCol=regression.getPredictionCol())
RMSE = lr_evaluator.evaluate(test_assessment)
print(RMSE)
# Create pipelines for RF classification model to predict "side"
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
cols_list = ['side','userid','account','symbol','quantity','price']
cat_cols = ['userid','account','symbol']
stages = []
for cat_col in cat_cols:
col_indexer = StringIndexer(inputCol=cat_col, outputCol =f"{cat_col}_ind")
col_encoder = OneHotEncoder(inputCols=[f"{cat_col}_ind"],outputCols=[f"{cat_col}_ohe"])
stages += [col_indexer, col_encoder]
target_indexer = StringIndexer(inputCol='side', outputCol='label')
num_cols = ['quantity','price']
cat_cols_ohe = [f"{cat_col}_ohe" for cat_col in cat_cols]
assembler = VectorAssembler(inputCols= cat_cols_ohe + num_cols, outputCol="features")
stages += [target_indexer,assembler]
print(stages)
pipeline = Pipeline(stages=stages)
pipeline_model = pipeline.fit(query2_df)
df = pipeline_model.transform(query2_df)
df = df.select(['label','features'] + cols_list)
df.show()
train_data,test_data = df.randomSplit([0.8,0.2], seed = 1)
rf = RandomForestClassifier(featuresCol='features', labelCol='label')
rf_model = rf.fit(train_data)
train_preds = rf_model.transform(train_data)
test_preds = rf_model.transform(test_data)
#Preview the training predictions with class probabilities
train_preds.select(cols_list + ['prediction','probability']).show(10)
#RF classification - AUROC
from pyspark.ml.evaluation import BinaryClassificationEvaluator, MulticlassClassificationEvaluator
evaluator = BinaryClassificationEvaluator(metricName= 'areaUnderROC')
print(f"[AUROC] train:{evaluator.evaluate(train_preds)} - test: {evaluator.evaluate(test_preds)}")
#RF classification - Accuracy
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
evaluator_multi = MulticlassClassificationEvaluator()
acc_metric = {evaluator_multi.metricName: "accuracy"}
print(f"[Accuracy] train:{evaluator_multi.evaluate(train_preds,acc_metric)} - test: {evaluator_multi.evaluate(test_preds, acc_metric)}")
spark.stop()
###Output
_____no_output_____ |
Notebooks/Cleaning_New.ipynb | ###Markdown
Start Up: Before running the cell below, you must ensure that these have been run in Terminal **IN ORDER**: - conda update -n base -c defaults conda - cd SageMaker - cd yelp-dataset-challenge-1-ds - conda env create -f environment.yml - source activate ydc1 - pip install python-decouple - pip install pprintpp If you have any questions, message me on Slack. - Haden Moore Imports:
###Code
import pandas as pd
import s3
from pprintpp import pprint as pp
from sklearn.externals import joblib
import json
# Load in Bucket
bucket = s3.Bucket('yelpchallenge1')
# Look inside Bucket
bucket.contents
### ***** DO NOT RUN. ******* ####
### ***** ALREADY INSTALLED. ****** ###
# Installs the File 'Locally' on SageMaker Instance / Only have to run these once:
bucket.get('datasets/df.csv', 'df.csv')
# Installing .json Files 'Locally'
bucket.get('datasets/user.json', 'user.json')
bucket.get('datasets/review.json', 'review.json')
###Output
_____no_output_____
###Markdown
Cleaning Data: Complete as of ***8:14 PM : 12/19/2019*** Cleaning df.csv & saving Cleaned df.csv
###Code
### ***** DO NOT RUN. ******* ####
### ***** ALREADY COMPLETE. ****** ###
# Further Cleaning of df.csv:
# Import
df = pd.read_csv('df.csv')
# Dropping Columns:
#df = df.drop(columns=['Unnamed: 0', 'stars'])
# Dropping all Missing / Na Values from Entire Dataframe:
df = df.dropna()
# Saving Cleaned df.csv
df.to_csv(index=True)
df.to_csv(r'df.csv')
###Output
_____no_output_____
###Markdown
Converting user_json & to Pandas DataFrame / Saving as user.csv & review.csv
###Code
# ******* DO NOT RUN! ******* #
# ***** ALREADY COMPLETE. ****** #
# import user.json
with open('user.json') as f:
user = json.loads("[" +
f.read().replace("}\n{", "},\n{") +
"]")
# convert user.json files to pandas DataFrame 'user_df'
user_df = pd.DataFrame(user)
# Saving user_df as csv file.
user_df.to_csv(index=True)
user_df.to_csv(r'user.csv')
# Import review.json
with open('review.json') as f:
review = json.loads("[" +
f.read().replace("}\n{", "},\n{") +
"]")
# convert review.json files to pandas DataFrame 'review_df'
review_df = pd.DataFrame(review)
# Saving user_df as csv file.
review_df.to_csv(index=True)
review_df.to_csv(r'review.csv')
###Output
_____no_output_____
###Markdown
Data Merging: Complete as of ***8:14 PM : 12/19/2019***
###Code
# ***** New DTM DF HAS BEEN CREATED. DO NOT RUN THIS CELL **** #
# Read-in dtm.csv (Original)
dtm = pd.read_csv('dtm.csv')
# Taking Stars Column
stars = df['stars']
# Adding stars column to dtm
dtm['stars']=df['stars']
# Shifting 'Stars' Column to front of Df,
cols = list(dtm.columns)
cols = [cols[-1]] + cols[:-1]
dtm = dtm[cols]
# Dropping "-PRON-", 'year -PRON-', and ' ' Columns
dtm = dtm.drop(columns=[' ', ' -PRON-', 'year -PRON-'])
# Cut to 135,000 rows of df['stars'] to fix a memory error.
# Label as "stars"
stars = df.stars[0:135000]
stars.shape
# Adding stars to dtm2
dtm2['stars']=df['stars'][0:135000]
# ***** New DTM2 DF HAS BEEN CREATED. DO NOT RUN THIS CELL **** #
# Read-in dtm2.csv(Old)
dtm2 = pd.read_csv('dtm2.csv')
# Taking Stars Column
stars = df['stars']
# Adding stars column to dtm
dtm2['stars']=df['stars']
# Shifting 'Stars' Column to front of Df,
cols = list(dtm2.columns)
cols = [cols[-1]] + cols[:-1]
dtm2 = dtm2[cols]
dtm2 = dtm2.drop(columns=['stars'])
# Dropping columns:
dtm2 = dtm2.drop(columns=[' ' , ' '])
dtm2 = dtm2.drop(columns=[' -PRON-',' i', ' the', ' this', '$', "'s"])
# Saving dtm2.csv
dtm2.to_csv(index=True)
dtm2.to_csv(r'dtm2.csv')
# Cut 135,000 Rows of df['stars'] Column to fix Memory Error.
# Label as "stars"
stars = df.stars[0:135000]
stars.shape
# Adding stars to dtm2
dtm2['stars']=df['stars'][0:135000]
# Saving Final df as 'dtm_final'
dtm_final = dtm2
# Saving dtm_final.csv
dtm_final.to_csv(index=True)
dtm_final.to_csv(r'dtm_final.csv')
# Read-in dtm_final.csv (FINAL)
#dtm_final = pd.read_csv('dtm_final.csv')
###Output
_____no_output_____
###Markdown
Clean / Analyze user.csv: Complete as of ***10:14 PM 12/19/2019***
###Code
# Imports
# Read-in user.csv
user = pd.read_csv('user.csv')
# Read-in review.csv
review = pd.read_csv('review.csv')
# Check Read-in of df_user
# Checking Null Values and Shape
pp(user.isna().sum())
pp(user.shape)
###Output
_____no_output_____
###Markdown
Three Problems: **Problem 1:** user['Unnamed: 0'] should not exist. **Problem 2:** user['elite'] has 1,565,761 missing values. **Problem 3:** user['name'] has 3 missing values. **Solution?:** Drop the user['Unnamed: 0', 'elite'] columns. Drop missing values.
###Code
# Solution:
user = user.drop(columns=['Unnamed: 0', 'elite' ])
user = user.dropna()
# Save Cleaned user_df.csv
user.to_csv(index=True)
user.to_csv(r'user.csv')
user.columns
# drop unused columns from user_df
user = user.drop(columns=['average_stars', 'compliment_cool', 'compliment_cute',
'compliment_funny', 'compliment_hot', 'compliment_list',
'compliment_more', 'compliment_note', 'compliment_photos',
'compliment_plain', 'compliment_profile', 'compliment_writer', 'cool', 'friends', 'funny', 'useful'])
# Save Cleaned user_df.csv
user.to_csv(index=True)
user.to_csv(r'user.csv')
###Output
_____no_output_____
###Markdown
Clean / Analyze review.csv:
###Code
# Check Read-in of review.csv
# Checking Null Values and Shape
pp(review.isna().sum())
pp(review.shape)
###Output
_____no_output_____
###Markdown
**Minor Problem(s) with a Simple Solution** **Problems?:** The review['date', 'funny', 'review_id', 'stars', 'text', 'useful'] columns have NaNs. review['Unnamed: 0'] is not supposed to be there. **Solution?:** Drop missing values from the review DataFrame.
###Code
# Solution:
review = review.dropna()
review = review.drop(columns=['Unnamed: 0', 'stars', 'business_id'])
# Save Cleaned review_df.csv
review.to_csv(index=True)
review.to_csv(r'review.csv')
review = review.drop(columns=['text'])
review.to_csv(index=True)
review.to_csv(r'review.csv')
#Adding df['text', 'tokens'] to review.csv
review['text'] = df['text']
review['tokens'] = df['tokens']
#review = review.drop(columns=['tokens'])
review.head(5)
###Output
_____no_output_____
###Markdown
Combining review.csv & df.csv **Description**: Combining based on their *unique account IDs.* The end product will be one DataFrame consisting of, for each account: - **Name**, - **User_ID**, - **Review_ID**, - **Text**, - **that user's respective review(s)**, - **interactions with that review (i.e., Cool, Funny, Useful)**. The goal of the model is to let a user type in the review they are wanting to post on Yelp and predict what type of interaction they would potentially receive, plus the total number of each interaction. The model accuracy will be displayed beside the prediction.
###Code
# Changing Layout of Columns
final_df = review[['user_id', 'date', 'review_id', 'useful', 'funny', 'cool', 'text']]
#Saving Final_df
final_df.to_csv(index=True)
final_df.to_csv(r'final.csv')
# Checking Null Values and Shape
pp(final_df.isna().sum())
pp(final_df.shape)
# Dropping Null Values from [text] column
final = final_df.dropna()
# Checking Null Values and Shape
pp(final.isna().sum())
pp(final.shape)
#Saving Final
final.to_csv(index=True)
final.to_csv(r'final.csv')
final.dtypes
final['cool'] = final.cool.astype(float)
final.dtypes
#Saving Final
final.to_csv(index=True)
final.to_csv(r'final.csv')
final = pd.read_csv('final.csv')
final = final.drop(columns=['Unnamed: 0'])
final.columns
#Saving Final
final.to_csv(index=True)
final.to_csv(r'final.csv')
###Output
_____no_output_____
###Markdown
Testing a Another Dataframe
###Code
# Imports
df = pd.read_csv('df.csv')
df = df.drop(columns=['Unnamed: 0','Unnamed: 0.1'])
final = pd.read_csv('final.csv')
final = final.drop(columns=['Unnamed: 0'])
final.head(5)
# Number of Stopwords in 'text'
import gensim
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS
from gensim import corpora
STOPWORDS = set(STOPWORDS).union(set(['I', 'We', 'i', 'we', 'it', "it's",
'it', 'the', 'this', 'they', 'They',
'he', 'He', 'she', 'She', '\n', '\n\n']))
final['stopwords'] = final['text'].apply(lambda x: len([x for x in x.split() if x in STOPWORDS]))
final[['text', 'stopwords']].head(5)
# Number of Special Characters
final['astrics'] = final['text'].apply(lambda x: len([x for x in x.split() if x.startswith('*')]))
final[['text','astrics']].head(5)
# Turning all ['text'] into lower case to avoid having multiple copies of the same word:
final['text'] = final['text'].apply(lambda x: " ".join(x.lower() for x in x.split()))
final['text'].head(5)
final = final.drop(columns=['stopwords', 'astrics'])
final.columns
final['total_votes'] = final['useful'] + final['funny'] + final['cool']
# Changing Layout of Columns
final = final[['user_id', 'date', 'review_id', 'total_votes', 'useful', 'funny', 'cool', 'text']]
#Saving Final_df
final.to_csv(index=True)
final.to_csv(r'final.csv')
###Output
_____no_output_____
###Markdown
Some Visualizations:
###Code
# Imports
import seaborn as sns
sns.set_style("whitegrid")
import matplotlib.pyplot as plt
# Code for hiding seaborn warnings
import warnings
warnings.filterwarnings("ignore")
plt.figure(figsize=(12.8,6))
sns.distplot(final['useful']).set_title('Useful Interaction Distribution');
###Output
_____no_output_____
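###Markdown
A companion plot for the combined vote count could look like this (sketch; `final['total_votes']` was built above as useful + funny + cool):
###Code
plt.figure(figsize=(12.8,6))
sns.distplot(final['total_votes']).set_title('Total Votes Distribution');
###Output
_____no_output_____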
POC.ipynb | ###Markdown
BERTerReads: Proof of concept notebookThis notebook outlines how the BERTerReads app works. The process is simple:1. A GoodReads book URL is provided as input1. The first page of the book's reviews are scraped live on the spot1. The reviews are divided into their individual sentences1. Each sentence is transformed into a 768-dimensional vector with DistilBERT1. The set of vectors is run through a K-means clustering algorithm, dividing the sentences into 3 clusters1. The vector closest to each cluster centre is identified1. The sentences corresponding to these 3 vectors are displayed back to the user Imports
###Code
# Imports
import numpy as np
import pandas as pd
import requests
from bs4 import BeautifulSoup
from nltk.tokenize import sent_tokenize
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
# Load DistilBERT model
model = SentenceTransformer('distilbert-base-nli-stsb-mean-tokens')
###Output
_____no_output_____
###Markdown
1. Retrieve URL from user
###Code
url = input('Input GoodReads book URL:')
###Output
Input GoodReads book URL: https://www.goodreads.com/book/show/51791252-the-vanishing-half
###Markdown
2. Scrape reviews from URL
###Code
def get_reviews(url):
'''
Function to scrape all the reviews from the first page of a GoodReads book URL
'''
r = requests.get(url)
soup = BeautifulSoup(r.content, features='html.parser')
reviews_src = soup.find_all('div', class_='reviewText stacked')
reviews = []
for review in reviews_src:
reviews.append(review.text)
df = pd.DataFrame(reviews, columns=['review'])
return df
reviews_df = get_reviews(url)
###Output
_____no_output_____
###Markdown
3. Divide reviews into individual sentences
###Code
def clean_reviews(df):
'''
Function to clean review text and divide into individual sentences
'''
# Define spoiler marker & "...more" strings, and remove from all reviews
spoiler_str_gr = ' This review has been hidden because it contains spoilers. To view it,\n click here.\n\n\n'
more_str = '\n...more\n\n'
df['review'] = df['review'].str.replace(spoiler_str_gr, '')
df['review'] = df['review'].str.replace(more_str, '')
# Scraped reviews from GoodReads typically repeat the first ~500 characters
# The following loop removes these repeated characters
# Loop through each row in dataframe
for i in range(len(df)):
# Save review and review's first ~250 characters to variables
review = df.iloc[i]['review']
review_start = review[2:250]
# Loop through all of review's subsequent character strings
for j in range(3, len(review)):
# Check if string starts with same sequence as review start
if review[j:].startswith(review_start):
# If so, chop off all previous characters from review
df.at[i, 'review'] = review[j:]
# Replace all new line characters
df['review'] = df['review'].str.replace('\n', ' ')
    # Append a space to sentence-ending punctuation so sentences split cleanly
    df['review'] = (df['review'].str.replace('.', '. ', regex=False)
                                .str.replace('!', '! ', regex=False)
                                .str.replace('?', '? ', regex=False))
# Initialize dataframe to store review sentences, and counter
sentences_df = pd.DataFrame()
# Loop through each review
for i in range(len(df)):
# Save row and review to variables
row = df.iloc[i]
review = row.loc['review']
# Tokenize review into sentences
sentences = sent_tokenize(review)
# Loop through each sentence in list of tokenized sentences
for sentence in sentences:
# Add row for sentence to sentences dataframe
new_row = row.copy()
new_row.at['review'] = sentence
sentences_df = sentences_df.append(new_row, ignore_index=True)
sentences_df.rename(columns={'review':'sentence'}, inplace=True)
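    # Keep only sentences between 5 and 50 words; very short fragments and
    # rambling run-ons make poor representative opinions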
lower_thresh = 5
upper_thresh = 50
# Remove whitespaces at the start and end of sentences
sentences_df['sentence'] = sentences_df['sentence'].str.strip()
# Create list of sentence lengths
sentence_lengths = sentences_df['sentence'].str.split(' ').map(len)
num_short = (sentence_lengths <= lower_thresh).sum()
num_long = (sentence_lengths >= upper_thresh).sum()
num_sents = num_short + num_long
# Filter sentences
sentences_df = sentences_df[
(sentence_lengths > lower_thresh) & (sentence_lengths < upper_thresh)]
sentences_df.reset_index(drop=True, inplace=True)
return sentences_df['sentence']
sentences = clean_reviews(reviews_df)
###Output
_____no_output_____
###Markdown
4. Transform each sentence into a vector
###Code
sentence_vectors = model.encode(sentences)
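# Quick shape check (sketch): expect (num_sentences, 768) for DistilBERT base
print(sentence_vectors.shape)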
###Output
_____no_output_____
###Markdown
5. Cluster sentences and print sentences closest to each cluster centre
###Code
def get_opinions(sentences, sentence_vectors, k=3, n=1):
'''
Function to extract the n most representative sentences from k clusters, with density scores
'''
# Instantiate the model
kmeans_model = KMeans(n_clusters=k, random_state=24)
# Fit the model
kmeans_model.fit(sentence_vectors);
# Set the number of cluster centre points to look at when calculating density score
centre_points = int(len(sentences) * 0.02)
# Initialize list to store mean inner product value for each cluster
cluster_density_scores = []
# Initialize dataframe to store cluster centre sentences
df = pd.DataFrame()
# Loop through number of clusters
for i in range(k):
# Define cluster centre
centre = kmeans_model.cluster_centers_[i]
# Calculate inner product of cluster centre and sentence vectors
ips = np.inner(centre, sentence_vectors)
# Find the sentences with the highest inner products
top_index = pd.Series(ips).nlargest(n).index
top_sentence = sentences[top_index].iloc[0]
centre_ips = pd.Series(ips).nlargest(centre_points)
density_score = round(np.mean(centre_ips), 5)
        # Create new row with cluster's top sentence and its density score
new_row = pd.Series([top_sentence, density_score])
# Append new row to master dataframe
df = df.append(new_row, ignore_index=True)
# Rename dataframe columns
df.columns = ['sentence', 'density']
# Sort dataframe by density score, from highest to lowest
df = df.sort_values(by='density', ascending=False).reset_index(drop=True)
for i in range(len(df)):
print(f"Opinion #{i+1}: {df['sentence'][i]}\n")
get_opinions(sentences, sentence_vectors, 3)
###Output
Opinion #1: I found this to be a beautifully written and thought-provoking book.
Opinion #2: While racial identity is the core of the story, there are so many other layers here with characters that the author portrays in such a way that I got a sense of who they were, even if at times they questioned their own identities.
Opinion #3: Nearly broken from her sister’s choice to leave her, she never gives up hope of finding Stella until it’s nearly too late.
###Markdown
Quick run
###Code
url = input('Input GoodReads book URL:')
reviews_df = get_reviews(url)
print('\nScraped reviews!')
sentences = clean_reviews(reviews_df)
print('Cleaned reviews!')
sentence_vectors = model.encode(sentences)
print('Embedded sentences!\n')
get_opinions(sentences, sentence_vectors, 3)
###Output
Input GoodReads book URL: https://www.goodreads.com/book/show/48570454-transcendent-kingdom
###Markdown
Testing code
###Code
prob_dist = model.prob_classify(rw_to_vec(
"The flair bartenders are absolutely amazing!"
))
for k in prob_dist.samples():
print(k, prob_dist.prob(k))
prob_dist.max()
model.classify(test_set[50][0])
###Output
_____no_output_____
###Markdown
ConeOpt
###Code
print("ER (train) = {}".format(sum(y_train)/len(y_train)))
print("ER (test) = {}".format(sum(y_test)/len(y_test)))
#np.select(y_test == 1, y_test)
# get current point and reference point
idx_y_test_pos = np.argwhere(y_test == 1).flatten()
idx_y_test_neg = np.argwhere(y_test == 0).flatten()
idx_curr = idx_y_test_pos[5]
idx_ref = idx_y_test_neg[4]
print("=" * 80)
X_curr = X_test[idx_curr:idx_curr+1, :]
print("Y (curr) has prob = ", lr.predict_proba(X_curr)[:, 1])
print("X (curr) = ", X_curr)
print("=" * 80)
X_ref = X_test[idx_ref:idx_ref+1, :]
print("Y (ref) has prob = ", lr.predict_proba(X_ref)[:, 1])
print("X (ref) = ", X_ref)
#https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#r108fc14fa019-1
def run_coneopt(X_curr, X_ref, max_step = 0.3, fixed_features = []):
print("=" * 80)
X_cone = X_ref - X_curr
print("Cone = ", X_cone)
bounds = list(zip(X_curr.flatten(), (X_curr + X_cone * max_step).flatten()))
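    # differential_evolution requires (low, high) ordering, so swap any
    # bound pair where the cone step points in the negative direction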
for b in range(len(bounds)):
bound = bounds[b]
if bound[0] > bound[1]:
bounds[b] = bound[1], bound[0]
for idx_feat in fixed_features:
bounds[idx_feat] = (X_curr[0][idx_feat], X_curr[0][idx_feat])
print("Bounds = ", bounds)
#print(X_curr, X_curr + X_cone * max_step)
def my_predict_proba(x, method):
return method.predict_proba(x.reshape(1, len(x)))[:, 1]
result = differential_evolution(
func=my_predict_proba,
bounds=bounds,
args=[lr],
disp=True,
seed=0)
X_opt = result.x.reshape(1, len(result.x))
print("=" * 80)
print("CURR")
print("Y (curr) has prob = ", lr.predict_proba(X_curr)[:, 1])
print("X (curr) = ", X_curr)
print("=" * 80)
print("OPT")
print("Y (opt) has prob = ", lr.predict_proba(X_opt)[:, 1])
print("X (opt) = ", X_opt)
print("=" * 80)
print("REF")
print("Y (ref) has prob = ", lr.predict_proba(X_ref)[:, 1])
print("X (ref) = ", X_ref)
print("=" * 80)
return X_opt
#https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#r108fc14fa019-1
def run_coneopt2(X_curr, X_ref, max_step = 0.3, fixed_features = []):
print("=" * 80)
X_cone = X_ref - X_curr
print("Cone = ", X_cone)
bounds = list(zip(X_curr.flatten(), (X_curr + X_cone * max_step).flatten()))
for b in range(len(bounds)):
bound = bounds[b]
if bound[0] > bound[1]:
bounds[b] = bound[1], bound[0]
bounds2 = []
fixed_x = []
non_fixed_features = []
for b in range(len(bounds)):
if b not in set(fixed_features):
bounds2.append(bounds[b])
non_fixed_features.append(b)
else:
fixed_x.append(X_curr[0][b])
num_features = len(bounds)
bounds = bounds2
num_features_active = len(bounds)
print("Bounds = ", bounds)
print("fixed_features = ", fixed_features)
print("fixed_x = ", fixed_x)
print("non_fixed_features = ", non_fixed_features)
#print(X_curr, X_curr + X_cone * max_step)
    def get_full_x(non_fixed_x, fixed_features, non_fixed_features, fixed_x):
        # Reassemble the full feature vector: optimized values go into the
        # non-fixed slots, X_curr values go into the fixed slots
        full_x = [b for b in range(len(fixed_features) + len(non_fixed_features))]
        for b in range(len(non_fixed_features)):
            full_x[non_fixed_features[b]] = non_fixed_x[b]
        for b in range(len(fixed_features)):
            full_x[fixed_features[b]] = fixed_x[b]
        return full_x
    def my_predict_proba(non_fixed_x, method, fixed_features, non_fixed_features, fixed_x):
        # If nothing is held fixed, the candidate already is the full vector;
        # otherwise rebuild the full feature vector before scoring
        if fixed_features == []:
            return method.predict_proba(non_fixed_x.reshape(1, len(non_fixed_x)))[:, 1]
        else:
            full_x = get_full_x(non_fixed_x, fixed_features, non_fixed_features, fixed_x)
            return method.predict_proba(np.array(full_x).reshape(1, len(full_x)))[:, 1]
result = differential_evolution(
func=my_predict_proba,
bounds=bounds,
args=[lr, fixed_features, non_fixed_features, fixed_x],
disp=True,
seed=0)
full_x = get_full_x(result.x, fixed_features, non_fixed_features, fixed_x)
X_opt = np.array(full_x).reshape(1, len(full_x))
print("=" * 80)
print("CURR")
print("Y (curr) has prob = ", lr.predict_proba(X_curr)[:, 1])
print("X (curr) = ", X_curr)
print("=" * 80)
print("OPT")
print("Y (opt) has prob = ", lr.predict_proba(X_opt)[:, 1])
print("X (opt) = ", X_opt)
print("=" * 80)
print("REF")
print("Y (ref) has prob = ", lr.predict_proba(X_ref)[:, 1])
print("X (ref) = ", X_ref)
print("=" * 80)
return X_opt
def identify_fixed_features(
X_curr,
X_opt,
influential_features_percentage = 0.5,
delta_feature_eps = 0.0001):
# Identify the most influential features -- 50% of the most important features.
#influential_features_percentage = 0.5
#delta_feature_eps = 0.0001
num_features = X_curr.shape[1]
diff = list(map(abs, X_opt.flatten() - X_curr.flatten()))
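    # Jitter exact-zero diffs with a tiny random value so the cutoff
    # ranking below never ties at zero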
for i in range(len(diff)):
if diff[i] == 0:
diff[i] += random.randrange(100)*delta_feature_eps
num_features_changed = sum(np.array(diff) > delta_feature_eps)
num_target_features = int(max(1, influential_features_percentage * num_features_changed))
print("Will use [{}] feautres for the analysis".format(num_target_features))
#print("diff", diff)
#print("list(map(abs, X_curr))", list(map(abs, X_curr)))
delta_changes = np.divide(diff, list(map(abs, X_curr)))[0]
print("delta_changes = ", delta_changes)
cutoff_feature_value = sorted(delta_changes, reverse = True)[num_target_features - 1]
print("Cutoff feature values (only feature with values >= cutoff will be included) = {}".format(cutoff_feature_value))
flag_required_feature = delta_changes >= cutoff_feature_value
#print(idx_required_feature)
assert(sum(flag_required_feature) == num_target_features)
return [i for i in range(num_features) if flag_required_feature[i]==False]
max_step = 0.35
X_opt_init = run_coneopt(X_curr, X_ref, max_step=max_step, fixed_features=[])
fixed_features = identify_fixed_features(X_curr, X_opt_init)
print(fixed_features)
#X_opt = run_coneopt(X_curr, X_ref, max_step=max_step, fixed_features=fixed_features)
X_opt = run_coneopt2(X_curr, X_ref, max_step=max_step, fixed_features=fixed_features)
#X_opt = run_coneopt(X_curr, X_ref, max_step=max_step, fixed_features=fixed_features)
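# Post-hoc check (sketch): how far did each feature actually move?
print("Delta (opt - curr) =", X_opt - X_curr)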
###Output
================================================================================
Cone = [[ 3.8590803 -2.59719 -1.7566043 -1.5680219 0. -0.01268381
-0.55541945 0.93659806]]
Bounds = [(-1.1405786, 0.21009946), (1.223208, 2.1322243), (1.1882036, 1.8030151), (-0.2676677, 0.28113997), (-0.69355917, -0.69355917), (0.28849986, 0.2929392), (-0.3051005, -0.1107037), (0.6625124, 0.99032176)]
differential_evolution step 1: f(x)= 0.535521
differential_evolution step 2: f(x)= 0.533366
differential_evolution step 3: f(x)= 0.533366
differential_evolution step 4: f(x)= 0.531819
differential_evolution step 5: f(x)= 0.531819
differential_evolution step 6: f(x)= 0.530634
differential_evolution step 7: f(x)= 0.52867
differential_evolution step 8: f(x)= 0.525298
differential_evolution step 9: f(x)= 0.524621
differential_evolution step 10: f(x)= 0.523298
differential_evolution step 11: f(x)= 0.523298
differential_evolution step 12: f(x)= 0.519777
differential_evolution step 13: f(x)= 0.519777
differential_evolution step 14: f(x)= 0.519777
differential_evolution step 15: f(x)= 0.519777
differential_evolution step 16: f(x)= 0.519777
differential_evolution step 17: f(x)= 0.519777
differential_evolution step 18: f(x)= 0.519777
================================================================================
CURR
Y (curr) has prob = [0.73912398]
X (curr) = [[-1.1405786 2.1322243 1.8030151 0.28113997 -0.69355917 0.2929392
-0.1107037 0.6625124 ]]
================================================================================
OPT
Y (opt) has prob = [0.51977657]
X (opt) = [[-1.13587187 1.22822682 1.79190315 -0.16958951 -0.69355917 0.28972474
-0.29757597 0.66395002]]
================================================================================
REF
Y (ref) has prob = [0.54667594]
X (ref) = [[ 2.7185016 -0.4649656 0.04641078 -1.2868819 -0.69355917 0.28025538
-0.66612315 1.5991105 ]]
================================================================================
Will use [4] features for the analysis
delta_changes = [4.12663700e-03 4.23969228e-01 6.16298924e-03 1.60322093e+00
2.88367610e-04 1.09730683e-02 1.68803996e+00 2.16991633e-03]
Cutoff feature value (only features with values >= cutoff will be included) = 0.0109730682680633
[0, 2, 4, 7]
================================================================================
Cone = [[ 3.8590803 -2.59719 -1.7566043 -1.5680219 0. -0.01268381
-0.55541945 0.93659806]]
Bounds = [(1.223208, 2.1322243), (-0.2676677, 0.28113997), (0.28849986, 0.2929392), (-0.3051005, -0.1107037)]
fixed_features = [0, 2, 4, 7]
fixed_x = [-1.1405786, 1.8030151, -0.69355917, 0.6625124]
non_fixed_features = [1, 3, 5, 6]
differential_evolution step 1: f(x)= 0.521164
differential_evolution step 2: f(x)= 0.517264
differential_evolution step 3: f(x)= 0.517264
differential_evolution step 4: f(x)= 0.517264
differential_evolution step 5: f(x)= 0.517264
differential_evolution step 6: f(x)= 0.517264
differential_evolution step 7: f(x)= 0.517264
differential_evolution step 8: f(x)= 0.517264
================================================================================
CURR
Y (curr) has prob = [0.73912398]
X (curr) = [[-1.1405786 2.1322243 1.8030151 0.28113997 -0.69355917 0.2929392
-0.1107037 0.6625124 ]]
================================================================================
OPT
Y (opt) has prob = [0.51599731]
X (opt) = [[-1.14057863 1.22320795 1.80301511 -0.26766771 -0.69355917 0.28849986
-0.3051005 0.66251242]]
================================================================================
REF
Y (ref) has prob = [0.54667594]
X (ref) = [[ 2.7185016 -0.4649656 0.04641078 -1.2868819 -0.69355917 0.28025538
-0.66612315 1.5991105 ]]
================================================================================
|
mm4.ipynb | ###Markdown
TOC: [1](./mm1.ipynb) [2](./mm2.ipynb) [4](./mm4.ipynb) [5](./mm5.ipynb) [6](./mm6.ipynb) A & B MODULES T & E MODULES S MODULE
###Code
# build a Python dictionary of volumes
from math import sqrt
φ = (1 + sqrt(5))/2
volumes = dict((("tetrahedron", 1),
("cube", 3),
("octahedron", 4),
("rhombic dodecahedron", 6)))
modules = dict((("A module", 1/24),
("B module", 1/24),
("T module", 1/24),
("E module", (sqrt(2)/8) * (φ ** -3)),
("S module", (φ **-5)/2)))
volumes.update(modules)
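# Sanity check (sketch): 24 A modules (each 1/24) rebuild the unit tetrahedron
assert abs(24 * volumes["A module"] - volumes["tetrahedron"]) < 1e-9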
template = "| {s:30} | {v:7} |"
print(template.format(s = "SHAPE", v = "VOLUME"))
print("-" * 45)
template = "| {s:30} | {v:6.5f} |"
# sorted by volume, not shape name
for shape, volume in sorted(tuple(volumes.items()), key=lambda x: x[1]):
print(template.format(s=shape, v=volume))
###Output
| SHAPE | VOLUME |
---------------------------------------------
| A module | 0.04167 |
| B module | 0.04167 |
| T module | 0.04167 |
| E module | 0.04173 |
| S module | 0.04508 |
| tetrahedron | 1.00000 |
| cube | 3.00000 |
| octahedron | 4.00000 |
| rhombic dodecahedron | 6.00000 |
|
t81_558_class_12_04_atari.ipynb | ###Markdown
T81-558: Applications of Deep Neural Networks**Module 12: Deep Learning and Security*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 12 Video Material* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=uwcXWe_Fra0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=Ya1gYt63o3M&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=t2yIu6cRa38&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)* Part 12.5: How Alpha Zero used Reinforcement Learning to Master Chess [[Video]](https://www.youtube.com/watch?v=ikDgyD7nVI8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_alpha_zero.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow.
###Code
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
###Output
_____no_output_____
###Markdown
Part 12.4: Atari Games with Keras Neural NetworksThe Atari 2600 is a home video game console from Atari, Inc. Released on September 11, 1977. It is credited with popularizing the use of microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games physically built into the unit. The 2600 was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).Atari emulators are popular and allow many of the old Atari video games to be played on modern computers. They are even available as JavaScript.* [Virtual Atari](http://www.virtualatari.org/listP.html)Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/).  Installing Atari Emulator```pip install gym[atari]``` Actual Atari 2600 Specs* CPU: 1.19 MHz MOS Technology 6507* Audio + Video processor: Television Interface Adapter (TIA)* Playfield resolution: 40 x 192 pixels (NTSC). Uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels that are 1/4 the width of playfield pixels (unless stretched).* Ball and missile sprites: 1 x 192 pixels (NTSC).* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is only somewhat achievable with programming tricks that combine sprite pixels with playfield pixels.* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid line is possible but not common due to resource limitations.* 2 channels of 1-bit monaural sound with 4-bit volume control. OpenAI Lab Atari BreakoutOpenAI Gym can be used with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30) On Mac/Linux installation is as easy as:(from Wikipedia)Breakout begins with eight rows of bricks, with each two rows a different color. The color order from the bottom up is yellow, green, orange and red. Using a single ball, the player must knock down as many bricks as possible by using the walls and/or the paddle below to ricochet the ball against the bricks and eliminate them. If the player's paddle misses the ball's rebound, he or she will lose a turn. The player has three turns to try to clear two screens of bricks. Yellow bricks earn one point each, green bricks earn three points, orange bricks earn five points and the top-level red bricks score seven points each. The paddle shrinks to one-half its size after the ball has broken through the red row and hit the upper wall. Ball speed increases at specific intervals: after four hits, after twelve hits, and after making contact with the orange and red rows.The highest score achievable for one player is 896; this is done by eliminating two screens of bricks worth 448 points per screen. Once the second screen of bricks is destroyed, the ball in play harmlessly bounces off empty walls until the player restarts the game, as no additional screens are provided. However, a secret way to score beyond the 896 maximum is to play the game in two-player mode. 
If "Player One" completes the first screen on his or her third and last ball, then immediately and deliberately allows the ball to "drain", Player One's second screen is transferred to "Player Two" as a third screen, allowing Player Two to score a maximum of 1,344 points if he or she is adept enough to keep the third ball in play that long. Once the third screen is eliminated, the game is over.The original arcade cabinet of Breakout featured artwork that revealed the game's plot to be that of a prison escape. According to this release, the player is actually playing as one of a prison's inmates attempting to knock a ball and chain into a wall of their prison cell with a mallet. If the player successfully destroys the wall in-game, their inmate escapes with others following. Breakout (BreakoutDeterministic-v4) Specs:* BreakoutDeterministic-v4* State size (RGB): (210, 160, 3)* Actions: 4 (discrete)The video for this course demonstrated playing Breakout. The following [example code](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py) was used. The following code can be used to probe an environment to see the shape of its states and actions.
###Code
import gym
env = gym.make("BreakoutDeterministic-v4")
print(f"Obesrvation space: {env.observation_space}")
print(f"Action space: {env.action_space}")
###Output
Observation space: Box(210, 160, 3)
Action space: Discrete(4)
###Markdown
T81-558: Applications of Deep Neural Networks**Module 12: Reinforcement Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 12 Video Material* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=A3sYFcJY3lA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=qy1SJmsRhvM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=co0SwPWoZh0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)* Part 12.5: Application of Reinforcement Learning [[Video]](https://www.youtube.com/watch?v=1jQPP3RfwMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_apply_rl.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow, and has the necessary Python libraries installed.
###Code
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
if COLAB:
!sudo apt-get install -y xvfb ffmpeg
!pip install -q 'gym==0.10.11'
!pip install -q 'imageio==2.4.0'
!pip install -q PILLOW
!pip install -q 'pyglet==1.3.2'
!pip install -q pyvirtualdisplay
!pip install -q --upgrade tensorflow-probability
!pip install -q tf-agents
###Output
_____no_output_____
###Markdown
Part 12.4: Atari Games with Keras Neural NetworksThe Atari 2600 is a home video game console from Atari, Inc. Released on September 11, 1977. It is credited with popularizing the use of microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games physically built into the unit. The 2600 was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).Atari emulators are popular and allow many of the old Atari video games to be played on modern computers. They are even available as JavaScript.* [Virtual Atari](http://www.virtualatari.org/listP.html)Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/). The Atari 2600 is shown in Figure 12.ATARI.**Figure 12.ATARI: The Atari 2600** Actual Atari 2600 Specs* CPU: 1.19 MHz MOS Technology 6507* Audio + Video processor: Television Interface Adapter (TIA)* Playfield resolution: 40 x 192 pixels (NTSC). Uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels that are 1/4 the width of playfield pixels (unless stretched).* Ball and missile sprites: 1 x 192 pixels (NTSC).* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is only somewhat achievable with programming tricks that combine sprite pixels with playfield pixels.* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid line is possible but not common due to resource limitations.* 2 channels of 1-bit monaural sound with 4-bit volume control. OpenAI Lab Atari PongOpenAI Gym can be used with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30). This chapter demonstrates playing [Atari Pong](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py). Pong is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically across the left or right side of the screen. They can compete against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth. The goal is for each player to reach eleven points before the opponent; you earn points when one fails to return it to the other. For the Atari 2600 version of Pong, a computer player (controlled by the 2600) is the opposing player.This section shows how to adapt TF-Agents to an Atari game. Some changes are necessary when compared to the pole-cart game presented earlier in this chapter. You can quickly adapt this example to any Atari game by simply changing the environment name. However, I tuned the code presented here for Pong, and it may not perform as well for other games. Some tuning will likely be necessary to produce a good agent for other games.We begin by importing the needed Python packages.
###Code
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym, suite_atari
from tf_agents.environments import tf_py_environment, batched_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network, network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
from tf_agents.agents.categorical_dqn import categorical_dqn_agent
from tf_agents.networks import categorical_q_network
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
###Output
_____no_output_____
###Markdown
HyperparametersThe hyperparameter names are the same as the previous DQN example; however, I tuned the numeric values for the more complex Atari game.
###Code
num_iterations = 250000
initial_collect_steps = 200
collect_steps_per_iteration = 10
replay_buffer_max_length = 100000
batch_size = 32
learning_rate = 2.5e-3
log_interval = 5000
num_eval_episodes = 5
eval_interval = 25000
###Output
_____no_output_____
###Markdown
The algorithm needs more iterations for an Atari game. I also found that increasing the number of collection steps helped the algorithm to train. Atari EnvironmentsYou must handle Atari environments differently than games like cart-pole. Atari games typically use their 2D displays as the environment state. AI Gym represents Atari games as either a 3D (height by width by color) state space based on their screens, or a vector representing the state of the game's computer RAM. To preprocess Atari games for greater computational efficiency, we generally skip several frames, decrease the resolution, and discard color information. The following code shows how we can set up an Atari environment.
###Code
! wget http://www.atarimania.com/roms/Roms.rar
! mkdir /content/ROM/
! unrar e /content/Roms.rar /content/ROM/
! python -m atari_py.import_roms /content/ROM/
#env_name = 'Breakout-v4'
env_name = 'Pong-v0'
#env_name = 'BreakoutDeterministic-v4'
#env = suite_gym.load(env_name)
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
max_episode_frames=108000 # ALE frames
env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
#env = batched_py_environment.BatchedPyEnvironment([env])
###Output
_____no_output_____
###Markdown
We can now reset the environment and display one step. The following image shows how the Pong game environment appears to a user.
###Code
env.reset()
PIL.Image.fromarray(env.render())
###Output
_____no_output_____
###Markdown
We are now ready to load and wrap the two environments for TF-Agents. The algorithm uses the first environment for evaluation, and the second to train.
###Code
train_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
eval_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
###Output
_____no_output_____
###Markdown
AgentI used the following class, from TF-Agents examples, to wrap the regular Q-network class. The AtariQNetwork class ensures that the pixel values from the Atari screen are divided by 255. This division assists the neural network by normalizing the pixel values to between 0 and 1.
###Code
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
class AtariCategoricalQNetwork(network.Network):
"""CategoricalQNetwork subclass that divides observations by 255."""
def __init__(self, input_tensor_spec, action_spec, **kwargs):
super(AtariCategoricalQNetwork, self).__init__(
input_tensor_spec, state_spec=())
input_tensor_spec = tf.TensorSpec(
dtype=tf.float32, shape=input_tensor_spec.shape)
self._categorical_q_network = categorical_q_network.CategoricalQNetwork(
input_tensor_spec, action_spec, **kwargs)
@property
def num_atoms(self):
return self._categorical_q_network.num_atoms
def call(self, observation, step_type=None, network_state=()):
state = tf.cast(observation, tf.float32)
# We divide the grayscale pixel values by 255 here rather than storing
    # normalized values because uint8s are 4x cheaper to store than float32s.
# TODO(b/129805821): handle the division by 255 for train_eval_atari.py in
# a preprocessing layer instead.
state = state / 255
return self._categorical_q_network(
state, step_type=step_type, network_state=network_state)
###Output
_____no_output_____
###Markdown
Next, we introduce two hyperparameters that are specific to the neural network we are about to define.
###Code
fc_layer_params = (512,)
conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))
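# Each conv_layer_params entry reads (filters, kernel_size, stride);
# see the explanation in the markdown after this cell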
q_net = AtariCategoricalQNetwork(
train_env.observation_spec(),
train_env.action_spec(),
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params)
###Output
_____no_output_____
###Markdown
Convolutional neural networks are usually made up of several alternating pairs of convolution and max-pooling layers, ultimately culminating in one or more dense layers. These layers are the same types as previously seen in this course. The QNetwork accepts two parameters that define the convolutional neural network structure. The simpler of the two parameters is **fc_layer_params**. This parameter specifies the size of each of the dense layers. A tuple specifies the size of each of the layers in a list. The second parameter, named **conv_layer_params**, is a list of convolution layer parameters, where each item is a length-three tuple indicating (filters, kernel_size, stride). This implementation of QNetwork supports only convolution layers. If you desire a more complex convolutional neural network, you must define your variant of the QNetwork.The QNetwork defined here is not the agent; instead, the QNetwork is used by the DQN agent to implement the actual neural network. This allows flexibility, as you can set your own class if needed.Next, we define the optimizer. For this example, I used RMSPropOptimizer. However, AdamOptimizer is another popular choice. We also create the DQN agent and reference the Q-network we just created.
###Code
optimizer = tf.compat.v1.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.95,
momentum=0.0,
epsilon=0.00001,
centered=True)
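# AdamOptimizer is noted above as another popular choice; a minimal swap
# (sketch, untuned for this game) would be:
# optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)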
train_step_counter = tf.Variable(0)
observation_spec = tensor_spec.from_spec(train_env.observation_spec())
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.from_spec(train_env.action_spec())
target_update_period=32000 # ALE frames
update_period=16 # ALE frames
_update_period = update_period / ATARI_FRAME_SKIP
agent = categorical_dqn_agent.CategoricalDqnAgent(
time_step_spec,
action_spec,
categorical_q_network=q_net,
optimizer=optimizer,
#epsilon_greedy=epsilon,
n_step_update=1.0,
target_update_tau=1.0,
target_update_period=(
target_update_period / ATARI_FRAME_SKIP / _update_period),
gamma=0.99,
reward_scale_factor=1.0,
gradient_clipping=None,
debug_summaries=False,
summarize_grads_and_vars=False)
"""
agent = dqn_agent.DqnAgent(
time_step_spec,
action_spec,
q_network=q_net,
optimizer=optimizer,
epsilon_greedy=0.01,
n_step_update=1.0,
target_update_tau=1.0,
target_update_period=(
target_update_period / ATARI_FRAME_SKIP / _update_period),
td_errors_loss_fn=common.element_wise_huber_loss,
gamma=0.99,
reward_scale_factor=1.0,
gradient_clipping=None,
debug_summaries=False,
summarize_grads_and_vars=False,
train_step_counter=_global_step)
"""
agent.initialize()
q_net.input_tensor_spec
train_env.observation_spec()
train_py_env.observation_spec()
train_py_env
###Output
_____no_output_____
###Markdown
Metrics and EvaluationThere are many different ways to measure the effectiveness of a model trained with reinforcement learning. The loss function of the internal Q-network is not a good measure of the entire DQN algorithm's overall fitness. The network loss function measures how closely the Q-network fits the collected data and does not indicate how effective the DQN is at maximizing rewards. The method used for this example tracks the average reward received over several episodes.
###Code
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# See also the metrics module for standard implementations of
# different metrics.
# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics
###Output
_____no_output_____
###Markdown
Replay BufferDQN works by training a neural network to predict the Q-values for every possible environment-state. A neural network needs training data, so the algorithm accumulates this training data as it runs episodes. The replay buffer is where this data is stored. Only the most recent episodes are stored, older episode data rolls off the queue as the queue accumulates new data.
###Code
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_max_length)
# Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
###Output
_____no_output_____
###Markdown
Random CollectionThe algorithm must prime the pump. Training cannot begin on an empty replay buffer. The following code performs a predefined number of steps to generate initial training data.
###Code
random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
train_env.action_spec())
def collect_step(environment, policy, buffer):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
buffer.add_batch(traj)
def collect_data(env, policy, buffer, steps):
for _ in range(steps):
collect_step(env, policy, buffer)
collect_data(train_env, random_policy, replay_buffer, \
steps=initial_collect_steps)
###Output
_____no_output_____
###Markdown
Training the agentWe are now ready to train the DQN. This process can take many hours, depending on how many episodes you wish to run through. As training occurs, this code will report both the loss and the average return. As training becomes more successful, the average return should increase. The loss reported reflects the average loss for individual training batches.
###Code
iterator = iter(dataset)
# (Optional) Optimize by wrapping some of the code in a graph
# using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, \
num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
for _ in range(collect_steps_per_iteration):
collect_step(train_env, agent.collect_policy, replay_buffer)
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, agent.policy, \
num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
###Output
_____no_output_____
###Markdown
VisualizationThe notebook can plot the average return over training iterations. The average return should increase as the program performs more training iterations.
###Code
iterations = range(0, num_iterations + 1, eval_interval)
plt.plot(iterations, returns)
plt.ylabel('Average Return')
plt.xlabel('Iterations')
plt.ylim(top=10)
###Output
_____no_output_____
###Markdown
VideosWe now have a trained model and observed its training progress on a graph. Perhaps the most compelling way to view an Atari game's results is a video that allows us to see the agent play the game. The following functions are defined so that we can watch the agent play the game in the notebook.
###Code
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):
filename = filename + ".mp4"
with imageio.get_writer(filename, fps=fps) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
return embed_mp4(filename)
###Output
_____no_output_____
###Markdown
First, we will observe the trained agent play the game.
###Code
create_policy_eval_video(agent.policy, "trained-agent")
###Output
_____no_output_____
###Markdown
For comparison, we observe a random agent play. While the trained agent is far from perfect, it does outperform the random agent by a considerable amount.
###Code
create_policy_eval_video(random_policy, "random-agent")
###Output
_____no_output_____
###Markdown
T81-558: Applications of Deep Neural Networks**Module 12: Reinforcement Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 12 Video Material* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=A3sYFcJY3lA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=qy1SJmsRhvM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=co0SwPWoZh0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)* Part 12.5: Application of Reinforcement Learning [[Video]](https://www.youtube.com/watch?v=1jQPP3RfwMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_apply_rl.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow, and has the necessary Python libraries installed.
###Code
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
if COLAB:
!sudo apt-get install -y xvfb ffmpeg
!pip install -q 'gym==0.10.11'
!pip install -q 'imageio==2.4.0'
!pip install -q PILLOW
!pip install -q 'pyglet==1.3.2'
!pip install -q pyvirtualdisplay
!pip install -q --upgrade tensorflow-probability
!pip install -q tf-agents
###Output
Note: using Google CoLab
Reading package lists... Done
Building dependency tree
Reading state information... Done
ffmpeg is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
xvfb is already the newest version (2:1.19.6-1ubuntu4.4).
0 upgraded, 0 newly installed, 0 to remove and 31 not upgraded.
###Markdown
Part 12.4: Atari Games with Keras Neural NetworksThe Atari 2600 is a home video game console from Atari, Inc. Released on September 11, 1977. It is credited with popularizing the use of microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games physically built into the unit. The 2600 was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).Atari emulators are popular and allow many of the old Atari video games to be played on modern computers. They are even available as JavaScript.* [Virtual Atari](http://www.virtualatari.org/listP.html)Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/). The Atari 2600 is shown in Figure 12.ATARI.**Figure 12.ATARI: The Atari 2600** Actual Atari 2600 Specs* CPU: 1.19 MHz MOS Technology 6507* Audio + Video processor: Television Interface Adapter (TIA)* Playfield resolution: 40 x 192 pixels (NTSC). Uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels that are 1/4 the width of playfield pixels (unless stretched).* Ball and missile sprites: 1 x 192 pixels (NTSC).* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is only somewhat achievable with programming tricks that combine sprite pixels with playfield pixels.* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid line is possible but not common due to resource limitations.* 2 channels of 1-bit monaural sound with 4-bit volume control. OpenAI Lab Atari PongOpenAI Gym can be used with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30). This chapter demonstrates playing [Atari Pong](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py). Pong is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically across the left or right side of the screen. They can compete against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth. The goal is for each player to reach eleven points before the opponent; you earn points when one fails to return it to the other. For the Atari 2600 version of Pong, a computer player (controlled by the 2600) is the opposing player.This section shows how to adapt TF-Agents to an Atari game. Some changes are necessary when compared to the pole-cart game presented earlier in this chapter. You can quickly adapt this example to any Atari game by simply changing the environment name. However, I tuned the code presented here for Pong, and it may not perform as well for other games. Some tuning will likely be necessary to produce a good agent for other games.We begin by importing the needed Python packages.
###Code
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym, suite_atari
from tf_agents.environments import tf_py_environment, batched_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
###Output
_____no_output_____
###Markdown
HyperparametersThe hyperparameter names are the same as the previous DQN example; however, I tuned the numeric values for the more complex Atari game.
###Code
num_iterations = 250000
initial_collect_steps = 200
collect_steps_per_iteration = 10
replay_buffer_max_length = 100000
batch_size = 32
learning_rate = 2.5e-3
log_interval = 5000
num_eval_episodes = 5
eval_interval = 25000
###Output
_____no_output_____
###Markdown
The algorithm needs more iterations for an Atari game. I also found that increasing the number of collection steps helped the algorithm to train. Atari EnvironmentsYou must handle Atari environments differently than games like cart-pole. Atari games typically use their 2D displays as the environment state. AI Gym represents Atari games as either a 3D (height by width by color) state space based on their screens, or a vector representing the state of the game's computer RAM. To preprocess Atari games for greater computational efficiency, we generally skip several frames, decrease the resolution, and discard color information. The following code shows how we can set up an Atari environment.
###Code
#env_name = 'Breakout-v4'
env_name = 'Pong-v0'
#env_name = 'BreakoutDeterministic-v4'
#env = suite_gym.load(env_name)
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
max_episode_frames=108000 # ALE frames
env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
#env = batched_py_environment.BatchedPyEnvironment([env])
###Output
_____no_output_____
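###Markdown
The default wrapper stack performs the preprocessing described above. As a quick sanity check (an illustrative aside, assuming the standard TF-Agents Atari wrappers, which downsample to 84 x 84 grayscale and stack four frames), we can print the specs to confirm the observation shape the network will receive.
###Code
# Inspect the preprocessed observation and action specs. With the default
# stacking wrappers, the observation shape is expected to be (84, 84, 4):
# 84 x 84 grayscale pixels, four frames deep.
print(env.observation_spec())
print(env.action_spec())
###Output
_____no_output_____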
###Markdown
We can now reset the environment and display one step. The following image shows how the Pong game environment appears to a user.
###Code
env.reset()
PIL.Image.fromarray(env.render())
###Output
_____no_output_____
###Markdown
We are now ready to load and wrap the two environments for TF-Agents. The algorithm uses the first environment for evaluation, and the second to train.
###Code
train_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
eval_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
###Output
_____no_output_____
###Markdown
AgentI used the following class, from TF-Agents examples, to wrap the regular Q-network class. The AtariQNetwork class ensures that the pixel values from the Atari screen are divided by 255. This division assists the neural network by normalizing the pixel values to between 0 and 1.
###Code
class AtariQNetwork(q_network.QNetwork):
"""QNetwork subclass that divides observations by 255."""
def call(self,
observation,
step_type=None,
network_state=(),
training=False):
state = tf.cast(observation, tf.float32)
# We divide the grayscale pixel values by 255 here rather than storing
# normalized values because uint8s are 4x cheaper to store than float32s.
state = state / 255
return super(AtariQNetwork, self).call(
state, step_type=step_type, network_state=network_state,
training=training)
###Output
_____no_output_____
###Markdown
Next, we introduce two hyperparameters that are specific to the neural network we are about to define.
###Code
fc_layer_params = (512,)
conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))
q_net = AtariQNetwork(
train_env.observation_spec(),
train_env.action_spec(),
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params)
###Output
_____no_output_____
###Markdown
Convolutional neural networks are usually made up of several alternating pairs of convolution and max-pooling layers, ultimately culminating in one or more dense layers. These layers are the same types as previously seen in this course. The QNetwork accepts two parameters that define the convolutional neural network structure. The simpler of the two parameters is **fc_layer_params**. This parameter is a tuple that specifies the number of units in each of the dense layers. The second parameter, named **conv_layer_params**, is a list of convolution layer parameters, where each item is a length-three tuple indicating (filters, kernel_size, stride). This implementation of QNetwork supports only convolution layers. If you desire a more complex convolutional neural network, you must define your own variant of the QNetwork.The QNetwork defined here is not the agent; instead, the QNetwork is used by the DQN agent to implement the actual neural network. This allows flexibility, as you can supply your own class if needed. The sketch below illustrates how these two parameters map onto layers.
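To make these two parameters concrete, the following is a minimal sketch, for intuition only, of roughly how **conv_layer_params** and **fc_layer_params** would translate into standard Keras layers; it is not the network TF-Agents actually builds.
###Code
# A rough Keras equivalent of the structure the QNetwork assembles from
# conv_layer_params and fc_layer_params (illustrative sketch only).
sketch = tf.keras.Sequential()
for filters, kernel_size, stride in conv_layer_params:
    sketch.add(tf.keras.layers.Conv2D(
        filters, kernel_size, strides=stride, activation='relu'))
sketch.add(tf.keras.layers.Flatten())
for units in fc_layer_params:
    sketch.add(tf.keras.layers.Dense(units, activation='relu'))
# The real QNetwork ends with a dense layer that outputs one Q-value
# per action.
###Output
_____no_output_____
###Markdown
Next, we define the optimizer. For this example, I used RMSPropOptimizer. However, AdamOptimizer is another popular choice. We also create the DQN agent and reference the Q-network we just created.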
###Code
optimizer = tf.compat.v1.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.95,
momentum=0.0,
epsilon=0.00001,
centered=True)
train_step_counter = tf.Variable(0)
observation_spec = tensor_spec.from_spec(train_env.observation_spec())
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.from_spec(train_env.action_spec())
target_update_period=32000 # ALE frames
update_period=16 # ALE frames
_update_period = update_period / ATARI_FRAME_SKIP
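# The agent expects target_update_period in training steps, so we convert
# from ALE frames: 32000 frames / 4 (frame skip) / 4 (steps per update)
# = 2000 training steps.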
_global_step = tf.compat.v1.train.get_or_create_global_step()
agent = dqn_agent.DqnAgent(
time_step_spec,
action_spec,
q_network=q_net,
optimizer=optimizer,
epsilon_greedy=0.01,
n_step_update=1.0,
target_update_tau=1.0,
target_update_period=(
target_update_period / ATARI_FRAME_SKIP / _update_period),
td_errors_loss_fn=common.element_wise_huber_loss,
gamma=0.99,
reward_scale_factor=1.0,
gradient_clipping=None,
debug_summaries=False,
summarize_grads_and_vars=False,
train_step_counter=_global_step)
agent.initialize()
###Output
_____no_output_____
###Markdown
Metrics and EvaluationThere are many different ways to measure the effectiveness of a model trained with reinforcement learning. The loss function of the internal Q-network is not a good measure of the entire DQN algorithm's overall fitness. The network loss function measures how closely the Q-network fits the collected data; it does not indicate how effective the DQN is at maximizing rewards. The method used for this example tracks the average reward received over several episodes.
###Code
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# See also the metrics module for standard implementations of
# different metrics.
# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics
###Output
_____no_output_____
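###Markdown
As a quick illustration (a hedged aside; even a single Pong episode takes a while to play out), we can score the untrained policy with this function. The result should land near Pong's minimum score of -21.
###Code
# Baseline for the untrained policy; in Pong this is expected to be
# close to the minimum score of -21.
compute_avg_return(eval_env, agent.policy, num_episodes=1)
###Output
_____no_output_____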
###Markdown
Replay BufferDQN works by training a neural network to predict the Q-values for every possible environment state. A neural network needs training data, so the algorithm accumulates this training data as it runs episodes. The replay buffer is where this data is stored. Only the most recent episodes are stored; older episode data rolls off the queue as new data accumulates.
###Code
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_max_length)
# Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
###Output
_____no_output_____
###Markdown
Random CollectionThe algorithm must prime the pump. Training cannot begin on an empty replay buffer. The following code performs a predefined number of steps to generate initial training data.
###Code
random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
train_env.action_spec())
def collect_step(environment, policy, buffer):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
buffer.add_batch(traj)
def collect_data(env, policy, buffer, steps):
for _ in range(steps):
collect_step(env, policy, buffer)
collect_data(train_env, random_policy, replay_buffer, \
steps=initial_collect_steps)
###Output
_____no_output_____
###Markdown
Training the agentWe are now ready to train the DQN. This process can take many hours, depending on how many episodes you wish to run through. As training occurs, this code will print updates on both the loss and the average return. As training becomes more successful, the average return should increase. The losses reported reflect the average loss for individual training batches.
###Code
iterator = iter(dataset)
# (Optional) Optimize by wrapping some of the code in a graph
# using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, \
num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
for _ in range(collect_steps_per_iteration):
collect_step(train_env, agent.collect_policy, replay_buffer)
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, agent.policy, \
num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
###Output
step = 5000: loss = 0.005372311919927597
step = 10000: loss = 0.029342571273446083
step = 15000: loss = 0.023372460156679153
step = 20000: loss = 0.012967261485755444
step = 25000: loss = 0.03114483878016472
step = 25000: Average Return = -20.0
step = 30000: loss = 0.015883663669228554
step = 35000: loss = 0.022952664643526077
step = 40000: loss = 0.024018988013267517
step = 45000: loss = 0.015258202329277992
step = 50000: loss = 0.01642722450196743
step = 50000: Average Return = -18.399999618530273
step = 55000: loss = 0.024171829223632812
step = 60000: loss = 0.010190263390541077
step = 65000: loss = 0.005736709106713533
step = 70000: loss = 0.01117132231593132
step = 75000: loss = 0.005509796552360058
step = 75000: Average Return = -12.800000190734863
step = 80000: loss = 0.009709298610687256
step = 85000: loss = 0.009705539792776108
step = 90000: loss = 0.006236877758055925
step = 95000: loss = 0.017611663788557053
step = 100000: loss = 0.00873786024749279
step = 100000: Average Return = -10.800000190734863
step = 105000: loss = 0.019388657063245773
step = 110000: loss = 0.0040118759498000145
step = 115000: loss = 0.006819932255893946
step = 120000: loss = 0.028965750709176064
step = 125000: loss = 0.015978489071130753
step = 125000: Average Return = -9.600000381469727
step = 130000: loss = 0.023571692407131195
step = 135000: loss = 0.006761073134839535
step = 140000: loss = 0.005080501548945904
step = 145000: loss = 0.013759403489530087
step = 150000: loss = 0.02108653262257576
step = 150000: Average Return = -5.599999904632568
step = 155000: loss = 0.01754268817603588
step = 160000: loss = 0.008789192885160446
step = 165000: loss = 0.012145541608333588
step = 170000: loss = 0.00911545380949974
step = 175000: loss = 0.008846037089824677
step = 175000: Average Return = -5.199999809265137
step = 180000: loss = 0.020279696211218834
step = 185000: loss = 0.012781327590346336
step = 190000: loss = 0.01562594249844551
step = 195000: loss = 0.015836259350180626
step = 200000: loss = 0.017415495589375496
step = 200000: Average Return = 3.5999999046325684
step = 205000: loss = 0.007518010213971138
step = 210000: loss = 0.028996415436267853
step = 215000: loss = 0.01371004804968834
step = 220000: loss = 0.007023532874882221
step = 225000: loss = 0.004790903069078922
step = 225000: Average Return = -4.400000095367432
step = 230000: loss = 0.006244136951863766
step = 235000: loss = 0.025019707158207893
step = 240000: loss = 0.02555653266608715
step = 245000: loss = 0.012253865599632263
step = 250000: loss = 0.004736536182463169
step = 250000: Average Return = 2.4000000953674316
###Markdown
VisualizationThe notebook can plot the average return over training iterations. The average return should increase as the program performs more training iterations.
###Code
iterations = range(0, num_iterations + 1, eval_interval)
plt.plot(iterations, returns)
plt.ylabel('Average Return')
plt.xlabel('Iterations')
plt.ylim(top=10)
###Output
_____no_output_____
###Markdown
VideosWe now have a trained model and observed its training progress on a graph. Perhaps the most compelling way to view an Atari game's results is a video that allows us to see the agent play the game. The following functions are defined so that we can watch the agent play the game in the notebook.
###Code
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):
filename = filename + ".mp4"
with imageio.get_writer(filename, fps=fps) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
return embed_mp4(filename)
###Output
_____no_output_____
###Markdown
First, we will observe the trained agent play the game.
###Code
create_policy_eval_video(agent.policy, "trained-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
For comparison, we observe a random agent play. While the trained agent is far from perfect, it does outperform the random agent by a considerable amount.
###Code
create_policy_eval_video(random_policy, "random-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
T81-558: Applications of Deep Neural Networks**Module 12: Reinforcement Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 12 Video Material* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_01_ai_gym.ipynb)* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=A3sYFcJY3lA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_02_qlearningreinforcement.ipynb)* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=qy1SJmsRhvM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_03_keras_reinforce.ipynb)* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=co0SwPWoZh0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_04_atari.ipynb)* Part 12.5: Application of Reinforcement Learning [[Video]](https://www.youtube.com/watch?v=1jQPP3RfwMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_05_apply_rl.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow, and has the necessary Python libraries installed.
###Code
# HIDE OUTPUT
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
if COLAB:
!sudo apt-get install -y xvfb ffmpeg
!pip install -q ale-py
!pip install -q 'gym==0.17.3'
!pip install -q 'imageio==2.4.0'
!pip install -q PILLOW
!pip install -q 'pyglet==1.3.2'
!pip install -q pyvirtualdisplay
!pip install -q --upgrade tensorflow-probability
!pip install -q 'tf-agents==0.12.0'
###Output
Note: using Google CoLab
Reading package lists... Done
Building dependency tree
Reading state information... Done
ffmpeg is already the newest version (7:3.4.8-0ubuntu0.2).
The following NEW packages will be installed:
xvfb
0 upgraded, 1 newly installed, 0 to remove and 39 not upgraded.
Need to get 784 kB of archives.
After this operation, 2,271 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 xvfb amd64 2:1.19.6-1ubuntu4.10 [784 kB]
Fetched 784 kB in 0s (7,462 kB/s)
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 1.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin:
Selecting previously unselected package xvfb.
(Reading database ... 156210 files and directories currently installed.)
Preparing to unpack .../xvfb_2%3a1.19.6-1ubuntu4.10_amd64.deb ...
Unpacking xvfb (2:1.19.6-1ubuntu4.10) ...
Setting up xvfb (2:1.19.6-1ubuntu4.10) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
[K |████████████████████████████████| 1.6 MB 11.4 MB/s
[K |████████████████████████████████| 3.3 MB 15.2 MB/s
[?25h Building wheel for imageio (setup.py) ... [?25l[?25hdone
[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
albumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.[0m
[K |████████████████████████████████| 1.0 MB 16.5 MB/s
[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
gym 0.17.3 requires pyglet<=1.5.0,>=1.4.0, but you have pyglet 1.3.2 which is incompatible.[0m
[K |████████████████████████████████| 1.3 MB 15.4 MB/s
[K |████████████████████████████████| 1.0 MB 53.4 MB/s
[?25h
###Markdown
Part 12.4: Atari Games with Keras Neural NetworksThe Atari 2600 is a home video game console from Atari, Inc., released on September 11, 1977. Most credit the Atari with popularizing microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games built into the unit. Atari bundled their console with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).Atari emulators are popular and allow gamers to play many old Atari video games on modern computers. These emulators are even available as JavaScript.* [Virtual Atari](http://www.virtualatari.org/listP.html)Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/). You can see the Atari 2600 in Figure 12.ATARI.**Figure 12.ATARI: The Atari 2600** Actual Atari 2600 Specs* CPU: 1.19 MHz MOS Technology 6507* Audio + Video processor: Television Interface Adapter (TIA)* Playfield resolution: 40 x 192 pixels (NTSC). It uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels 1/4 the width of playfield pixels (unless stretched).* Ball and missile sprites: 1 x 192 pixels (NTSC).* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is achievable only with programming tricks that combine sprite pixels with playfield pixels.* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid-line is possible but not common due to resource limitations.* 2 channels of 1-bit monaural sound with 4-bit volume control. OpenAI Lab Atari PongYou can use OpenAI Gym with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30). This chapter demonstrates playing [Atari Pong](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py). Pong is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically across the left or right side of the screen. They can compete against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth. The goal is for each player to reach eleven points before the opponent; a player earns a point when the opponent fails to return the ball. For the Atari 2600 version of Pong, a computer player (controlled by the Atari 2600) is the opposing player.This section shows how to adapt TF-Agents to an Atari game. You can quickly adapt this example to any Atari game by simply changing the environment name. However, I tuned the code presented here for Pong, and it may not perform as well for other games. Some tuning will likely be necessary to produce a good agent for other games. Some changes are required compared to the cart-pole game presented earlier in this chapter.We begin by importing the needed Python packages.
###Code
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym, suite_atari
from tf_agents.environments import tf_py_environment
from tf_agents.environments import batched_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network, network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
from tf_agents.agents.categorical_dqn import categorical_dqn_agent
from tf_agents.networks import categorical_q_network
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
###Output
_____no_output_____
###Markdown
HyperparametersThe hyperparameter names are the same as in the previous DQN example; however, I tuned the numeric values for the more complex Atari game.
###Code
# 10K already takes a while to complete, with minimal results.
# To get an effective agent requires much more.
num_iterations = 10000
initial_collect_steps = 200
collect_steps_per_iteration = 10
replay_buffer_max_length = 100000
batch_size = 32
learning_rate = 2.5e-3
log_interval = 1000
num_eval_episodes = 5
eval_interval = 25000
###Output
_____no_output_____
###Markdown
The algorithm needs more iterations for an Atari game. I also found that increasing the number of collection steps helped the algorithm to train. Atari EnvironmentYou must handle Atari environments differently than games like cart-pole. Atari games typically use their 2D displays as the environment state. AI Gym represents Atari games as either a 3D (height by width by color) state space based on their screens or a vector representing the game's computer RAM state. To preprocess Atari games for greater computational efficiency, we skip several frames, decrease the resolution, and discard color information. The following code shows how we can set up an Atari environment.
###Code
# HIDE OUTPUT
! wget http://www.atarimania.com/roms/Roms.rar
! mkdir /content/ROM/
! unrar e -o+ /content/Roms.rar /content/ROM/
! python -m atari_py.import_roms /content/ROM/
#env_name = 'Breakout-v4'
env_name = 'Pong-v0'
#env_name = 'BreakoutDeterministic-v4'
#env = suite_gym.load(env_name)
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
max_episode_frames=108000 # ALE frames
env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
#env = batched_py_environment.BatchedPyEnvironment([env])
###Output
/usr/local/lib/python3.7/dist-packages/ale_py/roms/__init__.py:94: DeprecationWarning: Automatic importing of atari-py roms won't be supported in future releases of ale-py. Please migrate over to using `ale-import-roms` OR an ALE-supported ROM package. To make this warning disappear you can run `ale-import-roms --import-from-pkg atari_py.atari_roms`.For more information see: https://github.com/mgbellemare/Arcade-Learning-Environment#rom-management
_RESOLVED_ROMS = _resolve_roms()
/usr/local/lib/python3.7/dist-packages/gym/logger.py:30: UserWarning: [33mWARN: obs_type "image" should be replaced with the image type, one of: rgb, grayscale[0m
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
###Markdown
We can now reset the environment and display one step. The following image shows how the Pong game environment appears to a user.
###Code
env.reset()
PIL.Image.fromarray(env.render())
###Output
_____no_output_____
###Markdown
We are now ready to load and wrap the two environments for TF-Agents. The algorithm uses the first environment for evaluation and the second to train.
###Code
train_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
eval_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
###Output
/usr/local/lib/python3.7/dist-packages/gym/logger.py:30: UserWarning: [33mWARN: obs_type "image" should be replaced with the image type, one of: rgb, grayscale[0m
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
###Markdown
AgentFrom TF-Agents examples, I used the following class to wrap the categorical Q-network class. The AtariCategoricalQNetwork class ensures that the pixel values from the Atari screen are divided by 255. This division assists the neural network by normalizing the pixel values to between 0 and 1.
###Code
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
class AtariCategoricalQNetwork(network.Network):
"""CategoricalQNetwork subclass that divides observations by 255."""
def __init__(self, input_tensor_spec, action_spec, **kwargs):
super(AtariCategoricalQNetwork, self).__init__(
input_tensor_spec, state_spec=())
input_tensor_spec = tf.TensorSpec(
dtype=tf.float32, shape=input_tensor_spec.shape)
self._categorical_q_network = \
categorical_q_network.CategoricalQNetwork(
input_tensor_spec, action_spec, **kwargs)
@property
def num_atoms(self):
return self._categorical_q_network.num_atoms
def call(self, observation, step_type=None, network_state=()):
state = tf.cast(observation, tf.float32)
# We divide the grayscale pixel values by 255 here rather than
# storing normalized values because uint8s are 4x cheaper to
# store than float32s.
# TODO(b/129805821): handle the division by 255 for
# train_eval_atari.py in
# a preprocessing layer instead.
state = state / 255
return self._categorical_q_network(
state, step_type=step_type, network_state=network_state)
###Output
_____no_output_____
###Markdown
Next, we introduce two hyperparameters that are specific to the neural network we are about to define.
###Code
fc_layer_params = (512,)
conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))
q_net = AtariCategoricalQNetwork(
train_env.observation_spec(),
train_env.action_spec(),
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params)
###Output
_____no_output_____
###Markdown
Convolutional neural networks usually comprise several alternating pairs of convolution and max-pooling layers, ultimately culminating in one or more dense layers. These layers are the same types as previously seen in this course. The **QNetwork** accepts two parameters that define the convolutional neural network structure. The simpler of the two parameters is **fc_layer_params**. This parameter is a tuple that specifies the number of units in each of the dense layers. The second parameter, named **conv_layer_params**, is a list of convolution layer parameters, where each item is a length-three tuple indicating (filters, kernel_size, stride). This implementation of QNetwork supports only convolution layers. If you desire a more complex convolutional neural network, you must define your own variant of the **QNetwork**.The **QNetwork** defined here is not the agent. Instead, the **QNetwork** is used by the DQN agent to implement the actual neural network. This technique allows flexibility, as you can supply your own class if needed.
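Unlike the plain QNetwork, the categorical network predicts a discrete probability distribution over possible returns ("atoms") for each action rather than a single Q-value. As a quick check (a hedged aside; the count depends on the CategoricalQNetwork defaults, which I believe use 51 atoms), we can query how many atoms the network distributes probability over.
###Code
# Number of return "atoms" the categorical Q-network spreads probability
# mass over (the CategoricalQNetwork default is assumed to be 51).
print(q_net.num_atoms)
###Output
_____no_output_____
###Markdown
Next, we define the optimizer. For this example, I used RMSPropOptimizer. However, AdamOptimizer is another popular choice. We also create the DQN agent and reference the Q-network we just defined.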
###Code
optimizer = tf.compat.v1.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.95,
momentum=0.0,
epsilon=0.00001,
centered=True)
train_step_counter = tf.Variable(0)
observation_spec = tensor_spec.from_spec(train_env.observation_spec())
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.from_spec(train_env.action_spec())
target_update_period=32000 # ALE frames
update_period=16 # ALE frames
_update_period = update_period / ATARI_FRAME_SKIP
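# The agent expects target_update_period in training steps, so we convert
# from ALE frames: 32000 frames / 4 (frame skip) / 4 (steps per update)
# = 2000 training steps.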
agent = categorical_dqn_agent.CategoricalDqnAgent(
time_step_spec,
action_spec,
categorical_q_network=q_net,
optimizer=optimizer,
#epsilon_greedy=epsilon,
n_step_update=1.0,
target_update_tau=1.0,
target_update_period=(
target_update_period / ATARI_FRAME_SKIP / _update_period),
gamma=0.99,
reward_scale_factor=1.0,
gradient_clipping=None,
debug_summaries=False,
summarize_grads_and_vars=False)
agent.initialize()
###Output
_____no_output_____
###Markdown
Metrics and EvaluationThere are many different ways to measure the effectiveness of a model trained with reinforcement learning. The loss function of the internal Q-network is not a good measure of the entire DQN algorithm's overall fitness. The network loss function measures how closely the Q-network fits the collected data; it does not indicate how effective the DQN is at maximizing rewards. The method used for this example tracks the average reward received over several episodes.
###Code
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# See also the metrics module for standard implementations of
# different metrics.
# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics
###Output
_____no_output_____
###Markdown
Replay BufferDQN works by training a neural network to predict the Q-values for every possible environment state. A neural network needs training data, so the algorithm accumulates this training data as it runs episodes. The replay buffer is where this data is stored. Only the most recent episodes are stored; older episode data rolls off the queue as new data accumulates.
###Code
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_max_length)
# Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
###Output
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py:377: ReplayBuffer.get_next (from tf_agents.replay_buffers.replay_buffer) is deprecated and will be removed in a future version.
Instructions for updating:
Use `as_dataset(..., single_deterministic_pass=False) instead.
###Markdown
Random CollectionThe algorithm must prime the pump. Training cannot begin on an empty replay buffer. The following code performs a predefined number of steps to generate initial training data.
###Code
random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
train_env.action_spec())
def collect_step(environment, policy, buffer):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
buffer.add_batch(traj)
def collect_data(env, policy, buffer, steps):
for _ in range(steps):
collect_step(env, policy, buffer)
collect_data(train_env, random_policy, replay_buffer, \
steps=initial_collect_steps)
###Output
_____no_output_____
###Markdown
Training the agentWe are now ready to train the DQN. Depending on how many episodes you wish to run through, this process can take many hours. This code will print updates on both the loss and the average return as training occurs. As training becomes more successful, the average return should increase. The losses reported reflect the average loss for individual training batches.
###Code
iterator = iter(dataset)
# (Optional) Optimize by wrapping some of the code in a graph
# using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, \
num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
for _ in range(collect_steps_per_iteration):
collect_step(train_env, agent.collect_policy, replay_buffer)
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, agent.policy, \
num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
###Output
step = 1000: loss = 3.9279017448425293
step = 2000: loss = 3.9280214309692383
step = 3000: loss = 3.924931526184082
step = 4000: loss = 3.9209065437316895
step = 5000: loss = 3.919551134109497
step = 6000: loss = 3.919588327407837
step = 7000: loss = 3.9074008464813232
step = 8000: loss = 3.8954014778137207
step = 9000: loss = 3.8865578174591064
step = 10000: loss = 3.895845890045166
###Markdown
VisualizationThe notebook can plot the average return over training iterations. The average return should increase as the program performs more training iterations.
###Code
iterations = range(0, num_iterations + 1, eval_interval)
plt.plot(iterations, returns)
plt.ylabel('Average Return')
plt.xlabel('Iterations')
plt.ylim(top=10)
###Output
_____no_output_____
###Markdown
VideosPerhaps the most compelling way to view an Atari game's results is a video that allows us to see the agent play the game. We now have a trained model and observed its training progress on a graph. The following functions are defined to watch the agent play the game in the notebook.
###Code
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):
filename = filename + ".mp4"
with imageio.get_writer(filename, fps=fps) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
return embed_mp4(filename)
###Output
_____no_output_____
###Markdown
First, we will observe the trained agent play the game.
###Code
# HIDE OUTPUT
create_policy_eval_video(agent.policy, "trained-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
For comparison, we observe a random agent play. While the trained agent is far from perfect, with enough training, it does outperform the random agent considerably.
###Code
# HIDE OUTPUT
create_policy_eval_video(random_policy, "random-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
T81-558: Applications of Deep Neural Networks**Module 12: Reinforcement Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 12 Video Material* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=A3sYFcJY3lA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=qy1SJmsRhvM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=co0SwPWoZh0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)* Part 12.5: Application of Reinforcement Learning [[Video]](https://www.youtube.com/watch?v=1jQPP3RfwMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_apply_rl.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow, and has the necessary Python libraries installed.
###Code
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
if COLAB:
!sudo apt-get install -y xvfb ffmpeg
!pip install -q 'gym==0.10.11'
!pip install -q 'imageio==2.4.0'
!pip install -q PILLOW
!pip install -q 'pyglet==1.3.2'
!pip install -q pyvirtualdisplay
!pip install -q --upgrade tensorflow-probability
!pip install -q tf-agents
###Output
Note: using Google CoLab
Reading package lists... Done
Building dependency tree
Reading state information... Done
ffmpeg is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
xvfb is already the newest version (2:1.19.6-1ubuntu4.4).
0 upgraded, 0 newly installed, 0 to remove and 31 not upgraded.
###Markdown
Part 12.4: Atari Games with Keras Neural NetworksThe Atari 2600 is a home video game console from Atari, Inc., released on September 11, 1977. It is credited with popularizing the use of microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games physically built into the unit. The 2600 was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).Atari emulators are popular and allow many of the old Atari video games to be played on modern computers. They are even available as JavaScript.* [Virtual Atari](http://www.virtualatari.org/listP.html)Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/). The Atari 2600 is shown in Figure 12.ATARI.**Figure 12.ATARI: The Atari 2600** Actual Atari 2600 Specs* CPU: 1.19 MHz MOS Technology 6507* Audio + Video processor: Television Interface Adapter (TIA)* Playfield resolution: 40 x 192 pixels (NTSC). Uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels that are 1/4 the width of playfield pixels (unless stretched).* Ball and missile sprites: 1 x 192 pixels (NTSC).* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is achievable only with programming tricks that combine sprite pixels with playfield pixels.* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid-line is possible but not common due to resource limitations.* 2 channels of 1-bit monaural sound with 4-bit volume control. OpenAI Lab Atari PongOpenAI Gym can be used with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30). This chapter demonstrates playing [Atari Pong](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py). Pong is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically across the left or right side of the screen. They can compete against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth. The goal is for each player to reach eleven points before the opponent; a player earns a point when the opponent fails to return the ball. For the Atari 2600 version of Pong, a computer player (controlled by the 2600) is the opposing player.This section shows how to adapt TF-Agents to an Atari game. Some changes are necessary compared to the cart-pole game presented earlier in this chapter. You can quickly adapt this example to any Atari game by simply changing the environment name. However, I tuned the code presented here for Pong, and it may not perform as well for other games. Some tuning will likely be necessary to produce a good agent for other games.We begin by importing the needed Python packages.
###Code
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym, suite_atari
from tf_agents.environments import tf_py_environment, batched_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
###Output
_____no_output_____
###Markdown
HyperparametersThe hyperparameter names are the same as in the previous DQN example; however, I tuned the numeric values for the more complex Atari game.
###Code
num_iterations = 250000
initial_collect_steps = 200
collect_steps_per_iteration = 10
replay_buffer_max_length = 100000
batch_size = 32
learning_rate = 2.5e-3
log_interval = 5000
num_eval_episodes = 5
eval_interval = 25000
###Output
_____no_output_____
###Markdown
The algorithm needs more iterations for an Atari game. I also found that increasing the number of collection steps helped the algorithm to train. Atari EnvironmentsYou must handle Atari environments differently than games like cart-pole. Atari games typically use their 2D displays as the environment state. AI Gym represents Atari games as either a 3D (height by width by color) state space based on their screens, or a vector representing the state of the game's computer RAM. To preprocess Atari games for greater computational efficiency, we generally skip several frames, decrease the resolution, and discard color information. The following code shows how we can set up an Atari environment.
###Code
#env_name = 'Breakout-v4'
env_name = 'Pong-v0'
#env_name = 'BreakoutDeterministic-v4'
#env = suite_gym.load(env_name)
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
max_episode_frames=108000 # ALE frames
env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
#env = batched_py_environment.BatchedPyEnvironment([env])
###Output
_____no_output_____
###Markdown
We can now reset the environment and display one step. The following image shows how the Pong game environment appears to a user.
###Code
env.reset()
PIL.Image.fromarray(env.render())
###Output
_____no_output_____
###Markdown
We are now ready to load and wrap the two environments for TF-Agents. The algorithm uses the first environment for evaluation, and the second to train.
###Code
train_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
eval_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
###Output
_____no_output_____
###Markdown
AgentI used the following class, from TF-Agents examples, to wrap the regular Q-network class. The AtariQNetwork class ensures that the pixel values from the Atari screen are divided by 255. This division assists the neural network by normalizing the pixel values to between 0 and 1.
###Code
class AtariQNetwork(q_network.QNetwork):
"""QNetwork subclass that divides observations by 255."""
def call(self,
observation,
step_type=None,
network_state=(),
training=False):
state = tf.cast(observation, tf.float32)
# We divide the grayscale pixel values by 255 here rather than storing
# normalized values because uint8s are 4x cheaper to store than float32s.
state = state / 255
return super(AtariQNetwork, self).call(
state, step_type=step_type, network_state=network_state,
training=training)
###Output
_____no_output_____
###Markdown
Next, we introduce two hyperparameters that are specific to the neural network we are about to define.
###Code
fc_layer_params = (512,)
conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))
q_net = AtariQNetwork(
train_env.observation_spec(),
train_env.action_spec(),
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params)
###Output
_____no_output_____
###Markdown
Convolutional neural networks are usually made up of several alternating pairs of convolution and max-pooling layers, ultimately culminating in one or more dense layers. These layers are the same types as previously seen in this course. The QNetwork accepts two parameters that define the convolutional neural network structure. The simpler of the two parameters is **fc_layer_params**. This parameter is a tuple that specifies the number of units in each of the dense layers. The second parameter, named **conv_layer_params**, is a list of convolution layer parameters, where each item is a length-three tuple indicating (filters, kernel_size, stride). This implementation of QNetwork supports only convolution layers. If you desire a more complex convolutional neural network, you must define your own variant of the QNetwork.The QNetwork defined here is not the agent; instead, the QNetwork is used by the DQN agent to implement the actual neural network. This allows flexibility, as you can supply your own class if needed.Next, we define the optimizer. For this example, I used RMSPropOptimizer. However, AdamOptimizer is another popular choice. We also create the DQN agent and reference the Q-network we just created.
###Code
optimizer = tf.compat.v1.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.95,
momentum=0.0,
epsilon=0.00001,
centered=True)
train_step_counter = tf.Variable(0)
observation_spec = tensor_spec.from_spec(train_env.observation_spec())
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.from_spec(train_env.action_spec())
target_update_period=32000 # ALE frames
update_period=16 # ALE frames
_update_period = update_period / ATARI_FRAME_SKIP
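# The agent expects target_update_period in training steps, so we convert
# from ALE frames: 32000 frames / 4 (frame skip) / 4 (steps per update)
# = 2000 training steps.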
_global_step = tf.compat.v1.train.get_or_create_global_step()
agent = dqn_agent.DqnAgent(
time_step_spec,
action_spec,
q_network=q_net,
optimizer=optimizer,
epsilon_greedy=0.01,
n_step_update=1.0,
target_update_tau=1.0,
target_update_period=(
target_update_period / ATARI_FRAME_SKIP / _update_period),
td_errors_loss_fn=common.element_wise_huber_loss,
gamma=0.99,
reward_scale_factor=1.0,
gradient_clipping=None,
debug_summaries=False,
summarize_grads_and_vars=False,
train_step_counter=_global_step)
agent.initialize()
###Output
_____no_output_____
###Markdown
Metrics and EvaluationThere are many different ways to measure the effectiveness of a model trained with reinforcement learning. The loss function of the internal Q-network is not a good measure of the entire DQN algorithm's overall fitness. The network loss function measures how closely the Q-network fits the collected data; it does not indicate how effective the DQN is at maximizing rewards. The method used for this example tracks the average reward received over several episodes.
###Code
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# See also the metrics module for standard implementations of different metrics.
# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics
###Output
_____no_output_____
###Markdown
Replay BufferDQN works by training a neural network to predict the Q-values for every possible environment state. A neural network needs training data, so the algorithm accumulates this training data as it runs episodes. The replay buffer is where this data is stored. Only the most recent episodes are stored; older episode data rolls off the queue as new data accumulates.
###Code
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_max_length)
# Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
###Output
_____no_output_____
###Markdown
Random CollectionThe algorithm must prime the pump. Training cannot begin on an empty replay buffer. The following code performs a predefined number of steps to generate initial training data.
###Code
random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
train_env.action_spec())
def collect_step(environment, policy, buffer):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
buffer.add_batch(traj)
def collect_data(env, policy, buffer, steps):
for _ in range(steps):
collect_step(env, policy, buffer)
collect_data(train_env, random_policy, replay_buffer, steps=initial_collect_steps)
###Output
_____no_output_____
###Markdown
Training the agentWe are now ready to train the DQN. This process can take many hours, depending on how many episodes you wish to run through. As training occurs, this code will print updates on both the loss and the average return. As training becomes more successful, the average return should increase. The losses reported reflect the average loss for individual training batches.
###Code
iterator = iter(dataset)
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
for _ in range(collect_steps_per_iteration):
collect_step(train_env, agent.collect_policy, replay_buffer)
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
###Output
step = 5000: loss = 0.005372311919927597
step = 10000: loss = 0.029342571273446083
step = 15000: loss = 0.023372460156679153
step = 20000: loss = 0.012967261485755444
step = 25000: loss = 0.03114483878016472
step = 25000: Average Return = -20.0
step = 30000: loss = 0.015883663669228554
step = 35000: loss = 0.022952664643526077
step = 40000: loss = 0.024018988013267517
step = 45000: loss = 0.015258202329277992
step = 50000: loss = 0.01642722450196743
step = 50000: Average Return = -18.399999618530273
step = 55000: loss = 0.024171829223632812
step = 60000: loss = 0.010190263390541077
step = 65000: loss = 0.005736709106713533
step = 70000: loss = 0.01117132231593132
step = 75000: loss = 0.005509796552360058
step = 75000: Average Return = -12.800000190734863
step = 80000: loss = 0.009709298610687256
step = 85000: loss = 0.009705539792776108
step = 90000: loss = 0.006236877758055925
step = 95000: loss = 0.017611663788557053
step = 100000: loss = 0.00873786024749279
step = 100000: Average Return = -10.800000190734863
step = 105000: loss = 0.019388657063245773
step = 110000: loss = 0.0040118759498000145
step = 115000: loss = 0.006819932255893946
step = 120000: loss = 0.028965750709176064
step = 125000: loss = 0.015978489071130753
step = 125000: Average Return = -9.600000381469727
step = 130000: loss = 0.023571692407131195
step = 135000: loss = 0.006761073134839535
step = 140000: loss = 0.005080501548945904
step = 145000: loss = 0.013759403489530087
step = 150000: loss = 0.02108653262257576
step = 150000: Average Return = -5.599999904632568
step = 155000: loss = 0.01754268817603588
step = 160000: loss = 0.008789192885160446
step = 165000: loss = 0.012145541608333588
step = 170000: loss = 0.00911545380949974
step = 175000: loss = 0.008846037089824677
step = 175000: Average Return = -5.199999809265137
step = 180000: loss = 0.020279696211218834
step = 185000: loss = 0.012781327590346336
step = 190000: loss = 0.01562594249844551
step = 195000: loss = 0.015836259350180626
step = 200000: loss = 0.017415495589375496
step = 200000: Average Return = 3.5999999046325684
step = 205000: loss = 0.007518010213971138
step = 210000: loss = 0.028996415436267853
step = 215000: loss = 0.01371004804968834
step = 220000: loss = 0.007023532874882221
step = 225000: loss = 0.004790903069078922
step = 225000: Average Return = -4.400000095367432
step = 230000: loss = 0.006244136951863766
step = 235000: loss = 0.025019707158207893
step = 240000: loss = 0.02555653266608715
step = 245000: loss = 0.012253865599632263
step = 250000: loss = 0.004736536182463169
step = 250000: Average Return = 2.4000000953674316
###Markdown
VisualizationThe notebook can plot the average return over the training iterations. The average return should increase as training progresses.
###Code
iterations = range(0, num_iterations + 1, eval_interval)
plt.plot(iterations, returns)
plt.ylabel('Average Return')
plt.xlabel('Iterations')
plt.ylim(top=10)
###Output
_____no_output_____
###Markdown
VideosWe now have a trained model and have observed its training progress on a graph. Perhaps the most compelling way to view an Atari agent's results is a video of it playing the game. The following functions let us watch the agent play inside the notebook.
###Code
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):
filename = filename + ".mp4"
with imageio.get_writer(filename, fps=fps) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
return embed_mp4(filename)
###Output
_____no_output_____
###Markdown
First, we will observe the trained agent play the game.
###Code
create_policy_eval_video(agent.policy, "trained-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
For comparison, we observe a random agent play. While the trained agent is far from perfect, it does outperform the random agent by a considerable amount.
###Code
create_policy_eval_video(random_policy, "random-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
T81-558: Applications of Deep Neural Networks**Module 12: Reinforcement Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 12 Video Material* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=uwcXWe_Fra0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=Ya1gYt63o3M&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=t2yIu6cRa38&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)* Part 12.5: How Alpha Zero used Reinforcement Learning to Master Chess [[Video]](https://www.youtube.com/watch?v=ikDgyD7nVI8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_alpha_zero.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow, and has the necessary Python libraries installed.
###Code
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
if COLAB:
!sudo apt-get install -y xvfb ffmpeg
!pip install -q 'gym==0.10.11'
!pip install -q 'imageio==2.4.0'
!pip install -q PILLOW
!pip install -q 'pyglet==1.3.2'
!pip install -q pyvirtualdisplay
!pip install -q --upgrade tensorflow-probability
!pip install -q tf-agents
###Output
Note: using Google CoLab
Reading package lists... Done
Building dependency tree
Reading state information... Done
ffmpeg is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
xvfb is already the newest version (2:1.19.6-1ubuntu4.4).
0 upgraded, 0 newly installed, 0 to remove and 31 not upgraded.
###Markdown
Part 12.4: Atari Games with Keras Neural NetworksThe Atari 2600 is a home video game console from Atari, Inc., released on September 11, 1977. It is credited with popularizing the use of microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games physically built into the unit. The 2600 was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).Atari emulators are popular and allow many of the old Atari video games to be played on modern computers. They are even available in JavaScript.* [Virtual Atari](http://www.virtualatari.org/listP.html)Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/). The Atari 2600 is shown in Figure 12.ATARI.**Figure 12.ATARI: The Atari 2600** Actual Atari 2600 Specs* CPU: 1.19 MHz MOS Technology 6507* Audio + Video processor: Television Interface Adapter (TIA)* Playfield resolution: 40 x 192 pixels (NTSC). Uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels that are 1/4 the width of playfield pixels (unless stretched).* Ball and missile sprites: 1 x 192 pixels (NTSC).* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is only somewhat achievable with programming tricks that combine sprite pixels with playfield pixels.* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid line is possible but not common due to resource limitations.* 2 channels of 1-bit monaural sound with 4-bit volume control. OpenAI Lab Atari PongOpenAI Gym can be used with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30). This chapter demonstrates playing [Atari Pong](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py). Pong is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically along the left or right side of the screen and competes against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth. The goal is to reach eleven points before the opponent; a player earns a point when the opponent fails to return the ball. For the Atari 2600 version of Pong, a computer player (controlled by the 2600) is the opposing player.This section shows how to adapt TF-Agents to an Atari game. Some changes are necessary compared to the cart-pole game presented earlier in this chapter. You can quickly adapt this example to any Atari game by simply changing the environment name. However, I tuned the code presented here for Pong, and it may not perform as well for other games. Some tuning will likely be necessary to produce a good agent for other games.We begin by importing the needed Python packages.
###Code
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym, suite_atari
from tf_agents.environments import tf_py_environment, batched_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
###Output
_____no_output_____
###Markdown
HyperparametersThe hyperparameter names are the same as the previous DQN example; however, I tuned the numeric values for the more complex Atari game.
###Code
num_iterations = 250000
initial_collect_steps = 5000
collect_steps_per_iteration = 10
replay_buffer_max_length = 100000
batch_size = 32
learning_rate = 2.5e-4
log_interval = 5000
num_eval_episodes = 5
eval_interval = 25000
###Output
_____no_output_____
###Markdown
The algorithm needs more iterations for an Atari game. I also found that increasing the number of collection steps helped the algorithm to train. Atari EnvironmentsYou must handle Atari environments differently than games like cart-pole. Atari games typically use their 2D displays as the environment state. OpenAI Gym represents Atari games as either a 3D (height by width by color) state space based on their screens, or a vector representing the state of the game's computer RAM. To preprocess Atari games for greater computational efficiency, we generally skip several frames, decrease the resolution, and discard color information. The following code shows how we can set up an Atari environment.
###Code
#env_name = 'Breakout-v4'
env_name = 'Pong-v0'
#env_name = 'BreakoutDeterministic-v4'
#env = suite_gym.load(env_name)
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
max_episode_frames=108000 # ALE frames
env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
#env = batched_py_environment.BatchedPyEnvironment([env])
###Output
_____no_output_____
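###Markdown
The stacking wrappers hide the preprocessing details. Conceptually, each raw 210 x 160 RGB frame is converted to grayscale, downsampled (typically to 84 x 84), and the most recent four frames are stacked so the network can infer motion. The following NumPy sketch illustrates the idea; it is an assumption for clarity, not the actual TF-Agents wrapper code.
###Code
import numpy as np

def to_grayscale(frame_rgb):
    # luminance-weighted grayscale; the real wrappers perform a similar reduction
    return frame_rgb @ np.array([0.299, 0.587, 0.114])

def stack_frames(history, new_frame, depth=4):
    # keep only the most recent `depth` frames as the network's observation
    history = (history + [new_frame])[-depth:]
    return history, np.stack(history, axis=-1)
###Output
_____no_output_____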
###Markdown
We can now reset the environment and display one step. The following image shows how the Pong game environment appears to a user.
###Code
env.reset()
PIL.Image.fromarray(env.render())
###Output
/usr/local/lib/python3.6/dist-packages/gym/logger.py:30: UserWarning: [33mWARN: <class 'tf_agents.environments.atari_preprocessing.AtariPreprocessing'> doesn't implement 'reset' method, which is required for wrappers derived directly from Wrapper. Deprecated default implementation is used.[0m
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
###Markdown
We are now ready to load and wrap the two environments for TF-Agents. The algorithm uses the first environment to train, and the second to evaluate.
###Code
train_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
eval_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
###Output
_____no_output_____
###Markdown
AgentI used the following class, from TF-Agents examples, to wrap the regular Q-network class. The AtariQNetwork class ensures that the pixel values from the Atari screen are divided by 255. This division assists the neural network by normalizing the pixel values to between 0 and 1.
###Code
class AtariQNetwork(q_network.QNetwork):
"""QNetwork subclass that divides observations by 255."""
def call(self,
observation,
step_type=None,
network_state=(),
training=False):
state = tf.cast(observation, tf.float32)
# We divide the grayscale pixel values by 255 here rather than storing
# normalized values because uint8s are 4x cheaper to store than float32s.
state = state / 255
return super(AtariQNetwork, self).call(
state, step_type=step_type, network_state=network_state,
training=training)
###Output
_____no_output_____
###Markdown
Next, we introduce two hyperparameters that are specific to the neural network we are about to define.
###Code
fc_layer_params = (512,)
conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))
q_net = AtariQNetwork(
train_env.observation_spec(),
train_env.action_spec(),
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params)
###Output
_____no_output_____
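###Markdown
Before unpacking these parameters, it may help to see the rough shape of the network they describe. The following is a plain-Keras sketch assumed for intuition only; it is not the object that QNetwork actually constructs internally.
###Code
import tensorflow as tf

# Hedged illustration: roughly the classic DQN stack implied by
# conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))
# and fc_layer_params=(512,).
sketch = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (8, 8), strides=4, activation='relu'),
    tf.keras.layers.Conv2D(64, (4, 4), strides=2, activation='relu'),
    tf.keras.layers.Conv2D(64, (3, 3), strides=1, activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(6)  # one Q-value per action; Pong-v0 exposes 6 discrete actions
])
###Output
_____no_output_____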
###Markdown
Convolutional neural networks are usually made up of several alternating pairs of convolution and max-pooling layers, ultimately culminating in one or more dense layers. These layers are the same types as previously seen in this course. The QNetwork accepts two parameters that define the convolutional neural network structure. The simpler of the two is **fc_layer_params**, a tuple that specifies the size of each dense layer. The second parameter, **conv_layer_params**, is a list of convolution layer parameters, where each item is a length-three tuple of (filters, kernel_size, stride). This implementation of QNetwork supports only convolution layers; if you desire a more complex convolutional neural network, you must define your own variant of QNetwork.The QNetwork defined here is not the agent; instead, the DQN agent uses the QNetwork to implement the actual neural network. This separation allows flexibility, as you can supply your own class if needed.Next, we define the optimizer. For this example, I used RMSPropOptimizer; however, AdamOptimizer is another popular choice. We also create the DQN and reference the Q-network we just created.
###Code
optimizer = tf.compat.v1.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.95,
momentum=0.0,
epsilon=0.00001,
centered=True)
train_step_counter = tf.Variable(0)
observation_spec = tensor_spec.from_spec(train_env.observation_spec())
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.from_spec(train_env.action_spec())
target_update_period=32000 # ALE frames
update_period=16 # ALE frames
_update_period = update_period / ATARI_FRAME_SKIP
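# With these values: 32000 ALE frames / 4 (frame skip) / 4 (_update_period in
# agent steps) = 2000 train steps between target-network updates.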
_global_step = tf.compat.v1.train.get_or_create_global_step()
agent = dqn_agent.DqnAgent(
time_step_spec,
action_spec,
q_network=q_net,
optimizer=optimizer,
epsilon_greedy=0.01,
n_step_update=1.0,
target_update_tau=1.0,
target_update_period=(
target_update_period / ATARI_FRAME_SKIP / _update_period),
td_errors_loss_fn=common.element_wise_huber_loss,
gamma=0.99,
reward_scale_factor=1.0,
gradient_clipping=None,
debug_summaries=False,
summarize_grads_and_vars=False,
train_step_counter=_global_step)
agent.initialize()
###Output
_____no_output_____
###Markdown
Metrics and Evaluation
###Code
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# See also the metrics module for standard implementations of different metrics.
# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics
###Output
_____no_output_____
###Markdown
Replay Buffer
###Code
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_max_length)
# Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
###Output
_____no_output_____
###Markdown
Random Collection
###Code
random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
train_env.action_spec())
def collect_step(environment, policy, buffer):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
buffer.add_batch(traj)
def collect_data(env, policy, buffer, steps):
for _ in range(steps):
collect_step(env, policy, buffer)
collect_data(train_env, random_policy, replay_buffer, steps=initial_collect_steps)
###Output
_____no_output_____
###Markdown
Training the agent
###Code
iterator = iter(dataset)
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
for _ in range(collect_steps_per_iteration):
collect_step(train_env, agent.collect_policy, replay_buffer)
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
###Output
step = 5000: loss = 0.010096915997564793
step = 10000: loss = 0.029444240033626556
step = 15000: loss = 0.0049832831136882305
step = 20000: loss = 0.01718844845890999
step = 25000: loss = 0.009340734221041203
step = 25000: Average Return = -21.0
step = 30000: loss = 0.006228224840015173
step = 35000: loss = 0.004158324096351862
step = 40000: loss = 0.014960331842303276
step = 45000: loss = 0.013988891616463661
step = 50000: loss = 0.0053611560724675655
step = 50000: Average Return = -19.399999618530273
step = 55000: loss = 0.011948538944125175
step = 60000: loss = 0.013576810248196125
step = 65000: loss = 0.014345762319862843
step = 70000: loss = 0.013468008488416672
step = 75000: loss = 0.019563401117920876
step = 75000: Average Return = -17.600000381469727
step = 80000: loss = 0.006855367217212915
step = 85000: loss = 0.02645200304687023
step = 90000: loss = 0.028687894344329834
step = 95000: loss = 0.0077339960262179375
step = 100000: loss = 0.008796805515885353
step = 100000: Average Return = -15.399999618530273
step = 105000: loss = 0.007690057158470154
step = 110000: loss = 0.009481238201260567
step = 115000: loss = 0.01592394895851612
step = 120000: loss = 0.03809519112110138
step = 125000: loss = 0.017644602805376053
step = 125000: Average Return = -18.399999618530273
step = 130000: loss = 0.016021577641367912
step = 135000: loss = 0.025841666385531425
step = 140000: loss = 0.01505729928612709
step = 145000: loss = 0.0053392802365124226
step = 150000: loss = 0.010156139731407166
step = 150000: Average Return = -14.199999809265137
step = 155000: loss = 0.011142105795443058
step = 160000: loss = 0.008239060640335083
step = 165000: loss = 0.011521872133016586
step = 170000: loss = 0.010998032987117767
step = 175000: loss = 0.010871205478906631
step = 175000: Average Return = -16.0
step = 180000: loss = 0.02065308950841427
step = 185000: loss = 0.015561554580926895
step = 190000: loss = 0.009061465971171856
step = 195000: loss = 0.008069772273302078
step = 200000: loss = 0.03638923540711403
step = 200000: Average Return = -13.0
step = 205000: loss = 0.008515113033354282
step = 210000: loss = 0.016671765595674515
step = 215000: loss = 0.01088558416813612
step = 220000: loss = 0.03991032764315605
step = 225000: loss = 0.04335315525531769
step = 225000: Average Return = -11.199999809265137
step = 230000: loss = 0.003953642677515745
step = 235000: loss = 0.009820730425417423
step = 240000: loss = 0.01306198351085186
step = 245000: loss = 0.020174827426671982
step = 250000: loss = 0.01625116355717182
step = 250000: Average Return = -11.199999809265137
###Markdown
Visualization Plots
###Code
iterations = range(0, num_iterations + 1, eval_interval)
plt.plot(iterations, returns)
plt.ylabel('Average Return')
plt.xlabel('Iterations')
plt.ylim(top=10)
###Output
_____no_output_____
###Markdown
Videos
###Code
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):
filename = filename + ".mp4"
with imageio.get_writer(filename, fps=fps) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
return embed_mp4(filename)
create_policy_eval_video(agent.policy, "trained-agent")
create_policy_eval_video(random_policy, "random-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
T81-558: Applications of Deep Neural Networks**Module 12: Reinforcement Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 12 Video Material* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=A3sYFcJY3lA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=qy1SJmsRhvM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=co0SwPWoZh0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)* Part 12.5: Application of Reinforcement Learning [[Video]](https://www.youtube.com/watch?v=1jQPP3RfwMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_apply_rl.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow, and has the necessary Python libraries installed.
###Code
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
if COLAB:
!sudo apt-get install -y xvfb ffmpeg
!pip install -q 'gym==0.10.11'
!pip install -q 'imageio==2.4.0'
!pip install -q PILLOW
!pip install -q 'pyglet==1.3.2'
!pip install -q pyvirtualdisplay
!pip install -q --upgrade tensorflow-probability
!pip install -q tf-agents
###Output
Note: using Google CoLab
Reading package lists... Done
Building dependency tree
Reading state information... Done
ffmpeg is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
xvfb is already the newest version (2:1.19.6-1ubuntu4.4).
0 upgraded, 0 newly installed, 0 to remove and 31 not upgraded.
###Markdown
Part 12.4: Atari Games with Keras Neural NetworksThe Atari 2600 is a home video game console from Atari, Inc., released on September 11, 1977. It is credited with popularizing the use of microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games physically built into the unit. The 2600 was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).Atari emulators are popular and allow many of the old Atari video games to be played on modern computers. They are even available in JavaScript.* [Virtual Atari](http://www.virtualatari.org/listP.html)Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/). The Atari 2600 is shown in Figure 12.ATARI.**Figure 12.ATARI: The Atari 2600** Actual Atari 2600 Specs* CPU: 1.19 MHz MOS Technology 6507* Audio + Video processor: Television Interface Adapter (TIA)* Playfield resolution: 40 x 192 pixels (NTSC). Uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels that are 1/4 the width of playfield pixels (unless stretched).* Ball and missile sprites: 1 x 192 pixels (NTSC).* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is only somewhat achievable with programming tricks that combine sprite pixels with playfield pixels.* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid line is possible but not common due to resource limitations.* 2 channels of 1-bit monaural sound with 4-bit volume control. OpenAI Lab Atari PongOpenAI Gym can be used with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30). This chapter demonstrates playing [Atari Pong](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py). Pong is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically along the left or right side of the screen and competes against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth. The goal is to reach eleven points before the opponent; a player earns a point when the opponent fails to return the ball. For the Atari 2600 version of Pong, a computer player (controlled by the 2600) is the opposing player.This section shows how to adapt TF-Agents to an Atari game. Some changes are necessary compared to the cart-pole game presented earlier in this chapter. You can quickly adapt this example to any Atari game by simply changing the environment name. However, I tuned the code presented here for Pong, and it may not perform as well for other games. Some tuning will likely be necessary to produce a good agent for other games.We begin by importing the needed Python packages.
###Code
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym, suite_atari
from tf_agents.environments import tf_py_environment, batched_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
###Output
_____no_output_____
###Markdown
HyperparametersThe hyperparameter names are the same as the previous DQN example; however, I tuned the numeric values for the more complex Atari game.
###Code
num_iterations = 250000
initial_collect_steps = 200
collect_steps_per_iteration = 10
replay_buffer_max_length = 100000
batch_size = 32
learning_rate = 2.5e-3
log_interval = 5000
num_eval_episodes = 5
eval_interval = 25000
###Output
_____no_output_____
###Markdown
The algorithm needs more iterations for an Atari game. I also found that increasing the number of collection steps helped the algorithm to train. Atari EnvironmentsYou must handle Atari environments differently than games like cart-pole. Atari games typically use their 2D displays as the environment state. OpenAI Gym represents Atari games as either a 3D (height by width by color) state space based on their screens, or a vector representing the state of the game's computer RAM. To preprocess Atari games for greater computational efficiency, we generally skip several frames, decrease the resolution, and discard color information. The following code shows how we can set up an Atari environment.
###Code
#env_name = 'Breakout-v4'
env_name = 'Pong-v0'
#env_name = 'BreakoutDeterministic-v4'
#env = suite_gym.load(env_name)
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
max_episode_frames=108000 # ALE frames
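# 108000 ALE frames / 4 (frame skip) = 27000 agent steps per episode cap,
# i.e., 30 minutes of emulator time at 60 frames per second.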
env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
#env = batched_py_environment.BatchedPyEnvironment([env])
###Output
_____no_output_____
###Markdown
We can now reset the environment and display one step. The following image shows how the Pong game environment appears to a user.
###Code
env.reset()
PIL.Image.fromarray(env.render())
###Output
_____no_output_____
###Markdown
We are now ready to load and wrap the two environments for TF-Agents. The algorithm uses the first environment to train, and the second to evaluate.
###Code
train_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
eval_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
###Output
_____no_output_____
###Markdown
AgentI used the following class, from TF-Agents examples, to wrap the regular Q-network class. The AtariQNetwork class ensures that the pixel values from the Atari screen are divided by 255. This division assists the neural network by normalizing the pixel values to between 0 and 1.
###Code
class AtariQNetwork(q_network.QNetwork):
"""QNetwork subclass that divides observations by 255."""
def call(self,
observation,
step_type=None,
network_state=(),
training=False):
state = tf.cast(observation, tf.float32)
# We divide the grayscale pixel values by 255 here rather than storing
# normalized values because uint8s are 4x cheaper to store than float32s.
state = state / 255
return super(AtariQNetwork, self).call(
state, step_type=step_type, network_state=network_state,
training=training)
###Output
_____no_output_____
###Markdown
Next, we introduce two hyperparameters that are specific to the neural network we are about to define.
###Code
fc_layer_params = (512,)
conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))
q_net = AtariQNetwork(
train_env.observation_spec(),
train_env.action_spec(),
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params)
###Output
_____no_output_____
###Markdown
Convolutional neural networks are usually made up of several alternating pairs of convolution and max-pooling layers, ultimately culminating in one or more dense layers. These layers are the same types as previously seen in this course. The QNetwork accepts two parameters that define the convolutional neural network structure. The simpler of the two is **fc_layer_params**, a tuple that specifies the size of each dense layer. The second parameter, **conv_layer_params**, is a list of convolution layer parameters, where each item is a length-three tuple of (filters, kernel_size, stride). This implementation of QNetwork supports only convolution layers; if you desire a more complex convolutional neural network, you must define your own variant of QNetwork.The QNetwork defined here is not the agent; instead, the DQN agent uses the QNetwork to implement the actual neural network. This separation allows flexibility, as you can supply your own class if needed.Next, we define the optimizer. For this example, I used RMSPropOptimizer; however, AdamOptimizer is another popular choice. We also create the DQN and reference the Q-network we just created.
###Code
optimizer = tf.compat.v1.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.95,
momentum=0.0,
epsilon=0.00001,
centered=True)
train_step_counter = tf.Variable(0)
observation_spec = tensor_spec.from_spec(train_env.observation_spec())
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.from_spec(train_env.action_spec())
target_update_period=32000 # ALE frames
update_period=16 # ALE frames
_update_period = update_period / ATARI_FRAME_SKIP
_global_step = tf.compat.v1.train.get_or_create_global_step()
agent = dqn_agent.DqnAgent(
time_step_spec,
action_spec,
q_network=q_net,
optimizer=optimizer,
epsilon_greedy=0.01,
n_step_update=1.0,
target_update_tau=1.0,
target_update_period=(
target_update_period / ATARI_FRAME_SKIP / _update_period),
td_errors_loss_fn=common.element_wise_huber_loss,
gamma=0.99,
reward_scale_factor=1.0,
gradient_clipping=None,
debug_summaries=False,
summarize_grads_and_vars=False,
train_step_counter=_global_step)
agent.initialize()
###Output
_____no_output_____
###Markdown
Metrics and EvaluationThere are many different ways to measure the effectiveness of a model trained with reinforcement learning. The loss function of the internal Q-network is not a good measure of the entire DQN algorithm's overall fitness: it measures how closely the Q-network fits the collected data, not how effective the DQN is at maximizing rewards. The method used for this example tracks the average reward received over several episodes.
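Formally, over $N$ evaluation episodes the code below computes $\bar{G} = \frac{1}{N}\sum_{i=1}^{N}\sum_{t} r^{(i)}_t$, the mean undiscounted episode return.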
###Code
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# See also the metrics module for standard implementations of different metrics.
# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics
###Output
_____no_output_____
###Markdown
Replay BufferDQN works by training a neural network to predict the Q-values for every possible environment state. A neural network needs training data, so the algorithm accumulates this data as it runs episodes. The replay buffer is where this data is stored. Only the most recent episodes are kept; older episode data rolls off the queue as new data accumulates.
###Code
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_max_length)
# Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
###Output
_____no_output_____
###Markdown
Random CollectionThe algorithm must prime the pump: training cannot begin on an empty replay buffer. The following code performs a predefined number of random steps to generate the initial training data.
###Code
random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
train_env.action_spec())
def collect_step(environment, policy, buffer):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
buffer.add_batch(traj)
def collect_data(env, policy, buffer, steps):
for _ in range(steps):
collect_step(env, policy, buffer)
collect_data(train_env, random_policy, replay_buffer, steps=initial_collect_steps)
###Output
_____no_output_____
###Markdown
Training the agentWe are now ready to train the DQN. This process can take many hours, depending on how many iterations you wish to run. As training occurs, this code reports both the loss and the average return. As training becomes more successful, the average return should increase; in Pong the return is the agent's score minus the opponent's, so values near -21 mean the agent loses nearly every point. The losses reported reflect the average loss for individual training batches.
###Code
iterator = iter(dataset)
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
for _ in range(collect_steps_per_iteration):
collect_step(train_env, agent.collect_policy, replay_buffer)
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
###Output
step = 5000: loss = 0.005372311919927597
step = 10000: loss = 0.029342571273446083
step = 15000: loss = 0.023372460156679153
step = 20000: loss = 0.012967261485755444
step = 25000: loss = 0.03114483878016472
step = 25000: Average Return = -20.0
step = 30000: loss = 0.015883663669228554
step = 35000: loss = 0.022952664643526077
step = 40000: loss = 0.024018988013267517
step = 45000: loss = 0.015258202329277992
step = 50000: loss = 0.01642722450196743
step = 50000: Average Return = -18.399999618530273
step = 55000: loss = 0.024171829223632812
step = 60000: loss = 0.010190263390541077
step = 65000: loss = 0.005736709106713533
step = 70000: loss = 0.01117132231593132
step = 75000: loss = 0.005509796552360058
step = 75000: Average Return = -12.800000190734863
step = 80000: loss = 0.009709298610687256
step = 85000: loss = 0.009705539792776108
step = 90000: loss = 0.006236877758055925
step = 95000: loss = 0.017611663788557053
step = 100000: loss = 0.00873786024749279
step = 100000: Average Return = -10.800000190734863
step = 105000: loss = 0.019388657063245773
step = 110000: loss = 0.0040118759498000145
step = 115000: loss = 0.006819932255893946
step = 120000: loss = 0.028965750709176064
step = 125000: loss = 0.015978489071130753
step = 125000: Average Return = -9.600000381469727
step = 130000: loss = 0.023571692407131195
step = 135000: loss = 0.006761073134839535
step = 140000: loss = 0.005080501548945904
step = 145000: loss = 0.013759403489530087
step = 150000: loss = 0.02108653262257576
step = 150000: Average Return = -5.599999904632568
step = 155000: loss = 0.01754268817603588
step = 160000: loss = 0.008789192885160446
step = 165000: loss = 0.012145541608333588
step = 170000: loss = 0.00911545380949974
step = 175000: loss = 0.008846037089824677
step = 175000: Average Return = -5.199999809265137
step = 180000: loss = 0.020279696211218834
step = 185000: loss = 0.012781327590346336
step = 190000: loss = 0.01562594249844551
step = 195000: loss = 0.015836259350180626
step = 200000: loss = 0.017415495589375496
step = 200000: Average Return = 3.5999999046325684
step = 205000: loss = 0.007518010213971138
step = 210000: loss = 0.028996415436267853
step = 215000: loss = 0.01371004804968834
step = 220000: loss = 0.007023532874882221
step = 225000: loss = 0.004790903069078922
step = 225000: Average Return = -4.400000095367432
step = 230000: loss = 0.006244136951863766
step = 235000: loss = 0.025019707158207893
step = 240000: loss = 0.02555653266608715
step = 245000: loss = 0.012253865599632263
step = 250000: loss = 0.004736536182463169
step = 250000: Average Return = 2.4000000953674316
###Markdown
VisualizationThe notebook can plot the average return over the training iterations. The average return should increase as training progresses.
###Code
iterations = range(0, num_iterations + 1, eval_interval)
plt.plot(iterations, returns)
plt.ylabel('Average Return')
plt.xlabel('Iterations')
plt.ylim(top=10)
###Output
_____no_output_____
###Markdown
VideosWe now have a trained model and have observed its training progress on a graph. Perhaps the most compelling way to view an Atari agent's results is a video of it playing the game. The following functions let us watch the agent play inside the notebook.
###Code
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):
filename = filename + ".mp4"
with imageio.get_writer(filename, fps=fps) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
return embed_mp4(filename)
###Output
_____no_output_____
###Markdown
First, we will observe the trained agent play the game.
###Code
create_policy_eval_video(agent.policy, "trained-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
For comparison, we observe a random agent play. While the trained agent is far from perfect, it does outperform the random agent by a considerable amount.
###Code
create_policy_eval_video(random_policy, "random-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
T81-558: Applications of Deep Neural Networks**Module 12: Reinforcement Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 12 Video Material* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=uwcXWe_Fra0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=Ya1gYt63o3M&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=t2yIu6cRa38&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)* Part 12.5: How Alpha Zero used Reinforcement Learning to Master Chess [[Video]](https://www.youtube.com/watch?v=ikDgyD7nVI8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_alpha_zero.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow, and has the necessary Python libraries installed.
###Code
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
if COLAB:
!sudo apt-get install -y xvfb ffmpeg
!pip install -q 'gym==0.10.11'
!pip install -q 'imageio==2.4.0'
!pip install -q PILLOW
!pip install -q 'pyglet==1.3.2'
!pip install -q pyvirtualdisplay
!pip install -q --upgrade tensorflow-probability
!pip install -q tf-agents
###Output
Note: using Google CoLab
Reading package lists... Done
Building dependency tree
Reading state information... Done
ffmpeg is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
xvfb is already the newest version (2:1.19.6-1ubuntu4.4).
0 upgraded, 0 newly installed, 0 to remove and 31 not upgraded.
###Markdown
Part 12.4: Atari Games with Keras Neural NetworksThe Atari 2600 is a home video game console from Atari, Inc., released on September 11, 1977. It is credited with popularizing the use of microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games physically built into the unit. The 2600 was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).Atari emulators are popular and allow many of the old Atari video games to be played on modern computers. They are even available in JavaScript.* [Virtual Atari](http://www.virtualatari.org/listP.html)Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/). The Atari 2600 is shown in Figure 12.ATARI.**Figure 12.ATARI: The Atari 2600** Actual Atari 2600 Specs* CPU: 1.19 MHz MOS Technology 6507* Audio + Video processor: Television Interface Adapter (TIA)* Playfield resolution: 40 x 192 pixels (NTSC). Uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels that are 1/4 the width of playfield pixels (unless stretched).* Ball and missile sprites: 1 x 192 pixels (NTSC).* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is only somewhat achievable with programming tricks that combine sprite pixels with playfield pixels.* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid line is possible but not common due to resource limitations.* 2 channels of 1-bit monaural sound with 4-bit volume control. OpenAI Lab Atari PongOpenAI Gym can be used with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30). This chapter demonstrates playing [Atari Pong](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py). Pong is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically along the left or right side of the screen and competes against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth. The goal is to reach eleven points before the opponent; a player earns a point when the opponent fails to return the ball. For the Atari 2600 version of Pong, a computer player (controlled by the 2600) is the opposing player.This section shows how to adapt TF-Agents to an Atari game. Some changes are necessary compared to the cart-pole game presented earlier in this chapter. You can quickly adapt this example to any Atari game by simply changing the environment name. However, I tuned the code presented here for Pong, and it may not perform as well for other games. Some tuning will likely be necessary to produce a good agent for other games.We begin by importing the needed Python packages.
###Code
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym, suite_atari
from tf_agents.environments import tf_py_environment, batched_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
###Output
_____no_output_____
###Markdown
HyperparametersThe hyperparameter names are the same as the previous DQN example; however, I tuned the numeric values for the more complex Atari game.
###Code
num_iterations = 250000
initial_collect_steps = 200
collect_steps_per_iteration = 10
replay_buffer_max_length = 100000
batch_size = 32
learning_rate = 2.5e-3
log_interval = 5000
num_eval_episodes = 5
eval_interval = 25000
###Output
_____no_output_____
###Markdown
The algorithm needs more iterations for an Atari game. I also found that increasing the number of collection steps helped the algorithm to train. Atari EnvironmentsYou must handle Atari environments differently than games like cart-pole. Atari games typically use their 2D displays as the environment state. OpenAI Gym represents Atari games as either a 3D (height by width by color) state space based on their screens, or a vector representing the state of the game's computer RAM. To preprocess Atari games for greater computational efficiency, we generally skip several frames, decrease the resolution, and discard color information. The following code shows how we can set up an Atari environment.
###Code
#env_name = 'Breakout-v4'
env_name = 'Pong-v0'
#env_name = 'BreakoutDeterministic-v4'
#env = suite_gym.load(env_name)
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
max_episode_frames=108000 # ALE frames
env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
#env = batched_py_environment.BatchedPyEnvironment([env])
###Output
_____no_output_____
###Markdown
We can now reset the environment and display one step. The following image shows how the Pong game environment appears to a user.
###Code
env.reset()
PIL.Image.fromarray(env.render())
###Output
_____no_output_____
###Markdown
We are now ready to load and wrap the two environments for TF-Agents. The algorithm uses the first environment to train, and the second to evaluate.
###Code
train_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
eval_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
###Output
_____no_output_____
###Markdown
AgentI used the following class, from TF-Agents examples, to wrap the regular Q-network class. The AtariQNetwork class ensures that the pixel values from the Atari screen are divided by 255. This division assists the neural network by normalizing the pixel values to between 0 and 1.
###Code
class AtariQNetwork(q_network.QNetwork):
"""QNetwork subclass that divides observations by 255."""
def call(self,
observation,
step_type=None,
network_state=(),
training=False):
state = tf.cast(observation, tf.float32)
# We divide the grayscale pixel values by 255 here rather than storing
# normalized values because uint8s are 4x cheaper to store than float32s.
state = state / 255
return super(AtariQNetwork, self).call(
state, step_type=step_type, network_state=network_state,
training=training)
###Output
_____no_output_____
###Markdown
Next, we introduce two hyperparameters that are specific to the neural network we are about to define.
###Code
fc_layer_params = (512,)
conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))
q_net = AtariQNetwork(
train_env.observation_spec(),
train_env.action_spec(),
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params)
###Output
_____no_output_____
###Markdown
Convolutional neural networks are usually made up of several alternating pairs of convolution and max-pooling layers, ultimately culminating in one or more dense layers. These layers are the same types as previously seen in this course. The QNetwork accepts two parameters that define the convolutional neural network structure. The simpler of the two is **fc_layer_params**, a tuple that specifies the size of each of the dense layers, one entry per layer. The second parameter, named **conv_layer_params**, is a list of convolution layer parameters, where each item is a length-three tuple indicating (filters, kernel_size, stride). This implementation of QNetwork supports only convolution layers. If you desire a more complex convolutional neural network, you must define your variant of the QNetwork.The QNetwork defined here is not the agent; instead, the DQN agent uses the QNetwork to implement the actual neural network. This allows flexibility, as you can supply your own class if needed.Next, we define the optimizer. For this example, I used RMSPropOptimizer. However, AdamOptimizer is another popular choice. We also create the DQN agent and reference the Q-network we just created.
###Code
optimizer = tf.compat.v1.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.95,
momentum=0.0,
epsilon=0.00001,
centered=True)
train_step_counter = tf.Variable(0)
observation_spec = tensor_spec.from_spec(train_env.observation_spec())
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.from_spec(train_env.action_spec())
target_update_period=32000 # ALE frames
update_period=16 # ALE frames
_update_period = update_period / ATARI_FRAME_SKIP
_global_step = tf.compat.v1.train.get_or_create_global_step()
agent = dqn_agent.DqnAgent(
time_step_spec,
action_spec,
q_network=q_net,
optimizer=optimizer,
epsilon_greedy=0.01,
n_step_update=1.0,
target_update_tau=1.0,
target_update_period=(
target_update_period / ATARI_FRAME_SKIP / _update_period),
td_errors_loss_fn=common.element_wise_huber_loss,
gamma=0.99,
reward_scale_factor=1.0,
gradient_clipping=None,
debug_summaries=False,
summarize_grads_and_vars=False,
train_step_counter=_global_step)
agent.initialize()
###Output
_____no_output_____
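###Markdown
For intuition, the **conv_layer_params** and **fc_layer_params** values above correspond roughly to the Keras stack below. This is an illustrative sketch only, not the network TF-Agents builds internally, and it assumes the default 84 x 84 x 4 stacked grayscale input.
###Code
from tensorflow import keras
# Sketch of the layer configuration implied by
# conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1)),
# where each tuple is (filters, kernel_size, stride), followed by
# fc_layer_params=(512,).
sketch = keras.Sequential([
    keras.layers.Conv2D(32, (8, 8), strides=4, activation='relu',
                        input_shape=(84, 84, 4)),
    keras.layers.Conv2D(64, (4, 4), strides=2, activation='relu'),
    keras.layers.Conv2D(64, (3, 3), strides=1, activation='relu'),
    keras.layers.Flatten(),
    keras.layers.Dense(512, activation='relu'),
])
sketch.summary()
###Output
_____no_output_____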
###Markdown
Metrics and EvaluationThere are many different ways to measure the effectiveness of a model trained with reinforcement learning. The loss function of the internal Q-network is not a good measure of the entire DQN algorithm's overall fitness. The network loss function measures how closely the Q-network fits the collected data and does not indicate how effective the DQN is at maximizing rewards. The method used for this example tracks the average reward received over several episodes.
###Code
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# See also the metrics module for standard implementations of different metrics.
# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics
###Output
_____no_output_____
###Markdown
Replay BufferDQN works by training a neural network to predict the Q-values for every possible environment state. A neural network needs training data, so the algorithm accumulates this training data as it runs episodes. The replay buffer is where this data is stored. Only the most recent episodes are stored; older episode data rolls off the queue as the queue accumulates new data.
###Code
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_max_length)
# Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
###Output
_____no_output_____
###Markdown
Random CollectionThe algorithm must prime the pump. Training cannot begin on an empty replay buffer. The following code performs a predefined number of steps to generate initial training data.
###Code
random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
train_env.action_spec())
def collect_step(environment, policy, buffer):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
buffer.add_batch(traj)
def collect_data(env, policy, buffer, steps):
for _ in range(steps):
collect_step(env, policy, buffer)
collect_data(train_env, random_policy, replay_buffer, steps=initial_collect_steps)
###Output
_____no_output_____
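###Markdown
As a quick sanity check, we can sample one mini-batch now that the buffer holds some data. This is a sketch; the expected shape assumes the default 84 x 84 x 4 observations and the two-step trajectories requested above.
###Code
# Sample a single batch; shape should be roughly (batch_size, 2, 84, 84, 4).
sample_experience, sample_info = next(iter(dataset))
print(sample_experience.observation.shape)
###Output
_____no_output_____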
###Markdown
Training the agentWe are now ready to train the DQN. This process can take many hours, depending on how many episodes you wish to run through. As training occurs, this code reports both the loss and the average return. As training becomes more successful, the average return should increase. The losses reported reflect the average loss for individual training batches.
###Code
iterator = iter(dataset)
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
for _ in range(collect_steps_per_iteration):
collect_step(train_env, agent.collect_policy, replay_buffer)
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
###Output
step = 5000: loss = 0.005372311919927597
step = 10000: loss = 0.029342571273446083
step = 15000: loss = 0.023372460156679153
step = 20000: loss = 0.012967261485755444
step = 25000: loss = 0.03114483878016472
step = 25000: Average Return = -20.0
step = 30000: loss = 0.015883663669228554
step = 35000: loss = 0.022952664643526077
step = 40000: loss = 0.024018988013267517
step = 45000: loss = 0.015258202329277992
step = 50000: loss = 0.01642722450196743
step = 50000: Average Return = -18.399999618530273
step = 55000: loss = 0.024171829223632812
step = 60000: loss = 0.010190263390541077
step = 65000: loss = 0.005736709106713533
step = 70000: loss = 0.01117132231593132
step = 75000: loss = 0.005509796552360058
step = 75000: Average Return = -12.800000190734863
step = 80000: loss = 0.009709298610687256
step = 85000: loss = 0.009705539792776108
step = 90000: loss = 0.006236877758055925
step = 95000: loss = 0.017611663788557053
step = 100000: loss = 0.00873786024749279
step = 100000: Average Return = -10.800000190734863
step = 105000: loss = 0.019388657063245773
step = 110000: loss = 0.0040118759498000145
step = 115000: loss = 0.006819932255893946
step = 120000: loss = 0.028965750709176064
step = 125000: loss = 0.015978489071130753
step = 125000: Average Return = -9.600000381469727
step = 130000: loss = 0.023571692407131195
step = 135000: loss = 0.006761073134839535
step = 140000: loss = 0.005080501548945904
step = 145000: loss = 0.013759403489530087
step = 150000: loss = 0.02108653262257576
step = 150000: Average Return = -5.599999904632568
step = 155000: loss = 0.01754268817603588
step = 160000: loss = 0.008789192885160446
step = 165000: loss = 0.012145541608333588
step = 170000: loss = 0.00911545380949974
step = 175000: loss = 0.008846037089824677
step = 175000: Average Return = -5.199999809265137
step = 180000: loss = 0.020279696211218834
step = 185000: loss = 0.012781327590346336
step = 190000: loss = 0.01562594249844551
step = 195000: loss = 0.015836259350180626
step = 200000: loss = 0.017415495589375496
step = 200000: Average Return = 3.5999999046325684
step = 205000: loss = 0.007518010213971138
step = 210000: loss = 0.028996415436267853
step = 215000: loss = 0.01371004804968834
step = 220000: loss = 0.007023532874882221
step = 225000: loss = 0.004790903069078922
step = 225000: Average Return = -4.400000095367432
step = 230000: loss = 0.006244136951863766
step = 235000: loss = 0.025019707158207893
step = 240000: loss = 0.02555653266608715
step = 245000: loss = 0.012253865599632263
step = 250000: loss = 0.004736536182463169
step = 250000: Average Return = 2.4000000953674316
###Markdown
VisualizationThe notebook can plot the average return over training iterations. The average return should increase as the program performs more training iterations.
###Code
iterations = range(0, num_iterations + 1, eval_interval)
plt.plot(iterations, returns)
plt.ylabel('Average Return')
plt.xlabel('Iterations')
plt.ylim(top=10)
###Output
_____no_output_____
###Markdown
VideosWe now have a trained model and observed its training progress on a graph. Perhaps the most compelling way to view an Atari game's results is a video that allows us to see the agent play the game. The following functions are defined so that we can watch the agent play the game in the notebook.
###Code
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):
filename = filename + ".mp4"
with imageio.get_writer(filename, fps=fps) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
return embed_mp4(filename)
###Output
_____no_output_____
###Markdown
First, we will observe the trained agent play the game.
###Code
create_policy_eval_video(agent.policy, "trained-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
For comparison, we observe a random agent play. While the trained agent is far from perfect, it does outperform the random agent by a considerable amount.
###Code
create_policy_eval_video(random_policy, "random-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
T81-558: Applications of Deep Neural Networks**Module 12: Reinforcement Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 12 Video Material* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_01_ai_gym.ipynb)* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=A3sYFcJY3lA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_02_qlearningreinforcement.ipynb)* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=qy1SJmsRhvM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_03_keras_reinforce.ipynb)* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=co0SwPWoZh0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_04_atari.ipynb)* Part 12.5: Application of Reinforcement Learning [[Video]](https://www.youtube.com/watch?v=1jQPP3RfwMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_05_apply_rl.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow, and has the necessary Python libraries installed.
###Code
# HIDE OUTPUT
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
if COLAB:
!sudo apt-get install -y xvfb ffmpeg
!pip install -q ale-py
!pip install -q 'gym==0.17.3'
!pip install -q 'imageio==2.4.0'
!pip install -q PILLOW
!pip install -q 'pyglet==1.3.2'
!pip install -q pyvirtualdisplay
!pip install -q --upgrade tensorflow-probability
!pip install -q 'tf-agents==0.12.0'
###Output
Note: using Google CoLab
Reading package lists... Done
Building dependency tree
Reading state information... Done
ffmpeg is already the newest version (7:3.4.8-0ubuntu0.2).
The following NEW packages will be installed:
xvfb
0 upgraded, 1 newly installed, 0 to remove and 39 not upgraded.
Need to get 784 kB of archives.
After this operation, 2,271 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 xvfb amd64 2:1.19.6-1ubuntu4.10 [784 kB]
Fetched 784 kB in 0s (7,462 kB/s)
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 1.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin:
Selecting previously unselected package xvfb.
(Reading database ... 156210 files and directories currently installed.)
Preparing to unpack .../xvfb_2%3a1.19.6-1ubuntu4.10_amd64.deb ...
Unpacking xvfb (2:1.19.6-1ubuntu4.10) ...
Setting up xvfb (2:1.19.6-1ubuntu4.10) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
[K |████████████████████████████████| 1.6 MB 11.4 MB/s
[K |████████████████████████████████| 3.3 MB 15.2 MB/s
[?25h Building wheel for imageio (setup.py) ... [?25l[?25hdone
[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
albumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.[0m
[K |████████████████████████████████| 1.0 MB 16.5 MB/s
[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
gym 0.17.3 requires pyglet<=1.5.0,>=1.4.0, but you have pyglet 1.3.2 which is incompatible.[0m
[K |████████████████████████████████| 1.3 MB 15.4 MB/s
[K |████████████████████████████████| 1.0 MB 53.4 MB/s
[?25h
###Markdown
Part 12.4: Atari Games with Keras Neural NetworksThe Atari 2600 is a home video game console from Atari, Inc., released on September 11, 1977. Most credit the Atari with popularizing microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games built into the unit. Atari bundled their console with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).Atari emulators are popular and allow gamers to play many old Atari video games on modern computers. These emulators are even available as JavaScript.* [Virtual Atari](http://www.virtualatari.org/listP.html)Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/). You can see the Atari 2600 in Figure 12.ATARI.**Figure 12.ATARI: The Atari 2600** Actual Atari 2600 Specs* CPU: 1.19 MHz MOS Technology 6507* Audio + Video processor: Television Interface Adapter (TIA)* Playfield resolution: 40 x 192 pixels (NTSC). It uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels 1/4 the width of playfield pixels (unless stretched).* Ball and missile sprites: 1 x 192 pixels (NTSC).* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is achievable only with programming tricks that combine sprite pixels with playfield pixels.* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid-line is possible but not common due to resource limitations.* 2 channels of 1-bit monaural sound with 4-bit volume control. OpenAI Lab Atari PongYou can use OpenAI Gym with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30). This chapter demonstrates playing [Atari Pong](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py). Pong is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically across the left or right side of the screen. They can compete against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth. The goal is for each player to reach eleven points before the opponent; you earn a point when your opponent fails to return the ball. For the Atari 2600 version of Pong, a computer player (controlled by the Atari 2600) is the opposing player.This section shows how to adapt TF-Agents to an Atari game. You can quickly adapt this example to any Atari game by simply changing the environment name. However, I tuned the code presented here for Pong, and it may not perform as well for other games. Some tuning will likely be necessary to produce a good agent for other games. Compared to the cart-pole game presented earlier in this chapter, some changes are required.We begin by importing the needed Python packages.
###Code
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym, suite_atari
from tf_agents.environments import tf_py_environment
from tf_agents.environments import batched_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network, network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
from tf_agents.agents.categorical_dqn import categorical_dqn_agent
from tf_agents.networks import categorical_q_network
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
###Output
_____no_output_____
###Markdown
HyperparametersThe hyperparameter names are the same as the previous DQN example; however, I tuned the numeric values for the more complex Atari game.
###Code
# 10K iterations already take a while to complete, with minimal results.
# Training an effective agent requires many more.
num_iterations = 10000
initial_collect_steps = 200
collect_steps_per_iteration = 10
replay_buffer_max_length = 100000
batch_size = 32
learning_rate = 2.5e-3
log_interval = 1000
num_eval_episodes = 5
eval_interval = 25000
###Output
_____no_output_____
###Markdown
The algorithm needs more iterations for an Atari game. I also found that increasing the number of collection steps helped the algorithm train. Atari EnvironmentYou must handle Atari environments differently than games like cart-pole. Atari games typically use their 2D displays as the environment state. AI Gym represents Atari games as either a 3D (height by width by color) state space based on their screens or a vector representing the game's computer RAM state. To preprocess Atari games for greater computational efficiency, we skip several frames, decrease the resolution, and discard color information. The following code shows how we can set up an Atari environment.
###Code
# HIDE OUTPUT
! wget http://www.atarimania.com/roms/Roms.rar
! mkdir /content/ROM/
! unrar e -o+ /content/Roms.rar /content/ROM/
! python -m atari_py.import_roms /content/ROM/
#env_name = 'Breakout-v4'
env_name = 'Pong-v0'
#env_name = 'BreakoutDeterministic-v4'
#env = suite_gym.load(env_name)
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
max_episode_frames=108000 # ALE frames
env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
#env = batched_py_environment.BatchedPyEnvironment([env])
###Output
/usr/local/lib/python3.7/dist-packages/ale_py/roms/__init__.py:94: DeprecationWarning: Automatic importing of atari-py roms won't be supported in future releases of ale-py. Please migrate over to using `ale-import-roms` OR an ALE-supported ROM package. To make this warning disappear you can run `ale-import-roms --import-from-pkg atari_py.atari_roms`.For more information see: https://github.com/mgbellemare/Arcade-Learning-Environment#rom-management
_RESOLVED_ROMS = _resolve_roms()
/usr/local/lib/python3.7/dist-packages/gym/logger.py:30: UserWarning: [33mWARN: obs_type "image" should be replaced with the image type, one of: rgb, grayscale[0m
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
###Markdown
We can now reset the environment and display one step. The following image shows how the Pong game environment appears to a user.
###Code
env.reset()
PIL.Image.fromarray(env.render())
###Output
_____no_output_____
###Markdown
We are now ready to load and wrap the two environments for TF-Agents. The algorithm uses the first environment for evaluation and the second to train.
###Code
train_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
eval_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
###Output
/usr/local/lib/python3.7/dist-packages/gym/logger.py:30: UserWarning: [33mWARN: obs_type "image" should be replaced with the image type, one of: rgb, grayscale[0m
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
###Markdown
AgentI used the following code from the TF-Agents examples to wrap the categorical Q-network class. The AtariCategoricalQNetwork class ensures that the pixel values from the Atari screen are divided by 255. This division assists the neural network by normalizing the pixel values between 0 and 1.
###Code
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
class AtariCategoricalQNetwork(network.Network):
"""CategoricalQNetwork subclass that divides observations by 255."""
def __init__(self, input_tensor_spec, action_spec, **kwargs):
super(AtariCategoricalQNetwork, self).__init__(
input_tensor_spec, state_spec=())
input_tensor_spec = tf.TensorSpec(
dtype=tf.float32, shape=input_tensor_spec.shape)
self._categorical_q_network = \
categorical_q_network.CategoricalQNetwork(
input_tensor_spec, action_spec, **kwargs)
@property
def num_atoms(self):
return self._categorical_q_network.num_atoms
def call(self, observation, step_type=None, network_state=()):
state = tf.cast(observation, tf.float32)
# We divide the grayscale pixel values by 255 here rather than
# storing normalized values because uint8s are 4x cheaper to
# store than float32s.
# TODO(b/129805821): handle the division by 255 for
# train_eval_atari.py in
# a preprocessing layer instead.
state = state / 255
return self._categorical_q_network(
state, step_type=step_type, network_state=network_state)
###Output
_____no_output_____
###Markdown
Next, we introduce two hyperparameters specific to the neural network we are about to define.
###Code
fc_layer_params = (512,)
conv_layer_params = ((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))
q_net = AtariCategoricalQNetwork(
train_env.observation_spec(),
train_env.action_spec(),
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params)
###Output
_____no_output_____
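###Markdown
Unlike the plain Q-network, the categorical network predicts a discrete distribution over returns for each action rather than a single Q-value. As a quick check, we can print the number of support atoms; treat the exact value as implementation-dependent, though 51 atoms is the usual C51 default.
###Code
# The categorical head models a distribution over returns across num_atoms bins.
print('num_atoms:', q_net.num_atoms)
###Output
_____no_output_____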
###Markdown
Convolutional neural networks usually comprise several alternating pairs of convolution and max-pooling layers, ultimately culminating in one or more dense layers. These layers are the same types as previously seen in this course. The **QNetwork** accepts two parameters that define the convolutional neural network structure. The simpler of the two is **fc_layer_params**, a tuple that specifies the size of each of the dense layers, one entry per layer. The second parameter, named **conv_layer_params**, is a list of convolution layer parameters, where each item is a length-three tuple indicating (filters, kernel_size, stride). This implementation of QNetwork supports only convolution layers. If you desire a more complex convolutional neural network, you must define your variant of the **QNetwork**.The **QNetwork** defined here is not the agent. Instead, the DQN agent uses the **QNetwork** to implement the actual neural network. This technique allows flexibility, as you can substitute your own class if needed.Next, we define the optimizer. For this example, I used RMSPropOptimizer. However, AdamOptimizer is another popular choice. We also create the categorical DQN agent and reference the Q-network we just created.
###Code
optimizer = tf.compat.v1.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.95,
momentum=0.0,
epsilon=0.00001,
centered=True)
train_step_counter = tf.Variable(0)
observation_spec = tensor_spec.from_spec(train_env.observation_spec())
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.from_spec(train_env.action_spec())
target_update_period = 32000 # ALE frames
update_period = 16 # ALE frames
_update_period = update_period / ATARI_FRAME_SKIP
agent = categorical_dqn_agent.CategoricalDqnAgent(
time_step_spec,
action_spec,
categorical_q_network=q_net,
optimizer=optimizer,
# epsilon_greedy=epsilon,
n_step_update=1.0,
target_update_tau=1.0,
target_update_period=(
target_update_period / ATARI_FRAME_SKIP / _update_period),
gamma=0.99,
reward_scale_factor=1.0,
gradient_clipping=None,
debug_summaries=False,
summarize_grads_and_vars=False)
agent.initialize()
###Output
_____no_output_____
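###Markdown
The target-network update period above is expressed in ALE frames and converted to agent train steps. Working through the arithmetic: 32,000 frames divided by the frame skip of 4 gives 8,000 environment steps, and dividing again by the update period of 4 environment steps gives 2,000 train steps between target-network updates.
###Code
# Worked arithmetic for the conversion used in the agent definition above.
print(target_update_period / ATARI_FRAME_SKIP / _update_period)  # 2000.0
###Output
_____no_output_____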
###Markdown
Metrics and EvaluationThere are many different ways to measure the effectiveness of a model trained with reinforcement learning. The loss function of the internal Q-network is not a good measure of the entire DQN algorithm's overall fitness. The network loss function measures how closely the Q-network fits the collected data and does not indicate how effectively the DQN maximizes rewards. The method used for this example tracks the average reward received over several episodes.
###Code
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# See also the metrics module for standard implementations of
# different metrics.
# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics
###Output
_____no_output_____
###Markdown
Replay BufferDQN works by training a neural network to predict the Q-values for every possible environment state. A neural network needs training data, so the algorithm accumulates this training data as it runs episodes. The replay buffer is where this data is stored. Only the most recent episodes are stored; older episode data rolls off the queue as the queue accumulates new data.
###Code
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_max_length)
# Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
###Output
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py:377: ReplayBuffer.get_next (from tf_agents.replay_buffers.replay_buffer) is deprecated and will be removed in a future version.
Instructions for updating:
Use `as_dataset(..., single_deterministic_pass=False) instead.
###Markdown
Random CollectionThe algorithm must prime the pump. Training cannot begin on an empty replay buffer. The following code performs a predefined number of steps to generate initial training data.
###Code
random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
train_env.action_spec())
def collect_step(environment, policy, buffer):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step,\
next_time_step)
# Add trajectory to the replay buffer
buffer.add_batch(traj)
def collect_data(env, policy, buffer, steps):
for _ in range(steps):
collect_step(env, policy, buffer)
collect_data(train_env, random_policy, replay_buffer,
steps=initial_collect_steps)
###Output
_____no_output_____
###Markdown
Training the AgentWe are now ready to train the DQN. Depending on how many episodes you wish to run through, this process can take many hours. This code reports both the loss and the average return as training occurs. As training becomes more successful, the average return should increase. The losses reported reflect the average loss for individual training batches.
###Code
iterator = iter(dataset)
# (Optional) Optimize by wrapping some of the code in a graph
# using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy,
num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few steps using collect_policy and
# save to the replay buffer.
for _ in range(collect_steps_per_iteration):
collect_step(train_env, agent.collect_policy, replay_buffer)
# Sample a batch of data from the buffer and
# update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, agent.policy,
num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
###Output
step = 1000: loss = 3.9279017448425293
step = 2000: loss = 3.9280214309692383
step = 3000: loss = 3.924931526184082
step = 4000: loss = 3.9209065437316895
step = 5000: loss = 3.919551134109497
step = 6000: loss = 3.919588327407837
step = 7000: loss = 3.9074008464813232
step = 8000: loss = 3.8954014778137207
step = 9000: loss = 3.8865578174591064
step = 10000: loss = 3.895845890045166
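###Markdown
Because **eval_interval** (25,000) exceeds **num_iterations** (10,000) in this shortened run, the loop above never evaluated the policy. As a quick check we can evaluate it once manually; this is a sketch, and a single episode gives only a noisy estimate.
###Code
# One-off evaluation of the briefly trained policy.
print('Average return (1 episode):',
      compute_avg_return(eval_env, agent.policy, num_episodes=1))
###Output
_____no_output_____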
###Markdown
VideosPerhaps the most compelling way to view an Atari game's results is a video that allows us to see the agent play the game. We now have a trained model and have observed its loss in the training log above. The following functions are defined to watch the agent play the game in the notebook.
###Code
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename, 'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):
filename = filename + ".mp4"
with imageio.get_writer(filename, fps=fps) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
return embed_mp4(filename)
###Output
_____no_output_____
###Markdown
First, we will observe the trained agent play the game.
###Code
# HIDE OUTPUT
create_policy_eval_video(agent.policy, "trained-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
For comparison, we observe a random agent play. While the trained agent is far from perfect, with enough training, it does outperform the random agent considerably.
###Code
# HIDE OUTPUT
create_policy_eval_video(random_policy, "random-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
T81-558: Applications of Deep Neural Networks**Module 12: Reinforcement Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 12 Video Material* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=A3sYFcJY3lA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=qy1SJmsRhvM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=co0SwPWoZh0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)* Part 12.5: Application of Reinforcement Learning [[Video]](https://www.youtube.com/watch?v=1jQPP3RfwMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_apply_rl.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow, and has the necessary Python libraries installed.
###Code
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
if COLAB:
!sudo apt-get install -y xvfb ffmpeg
!pip install -q 'gym==0.10.11'
!pip install -q 'imageio==2.4.0'
!pip install -q PILLOW
!pip install -q 'pyglet==1.3.2'
!pip install -q pyvirtualdisplay
!pip install -q --upgrade tensorflow-probability
!pip install -q tf-agents
###Output
Note: using Google CoLab
Reading package lists... Done
Building dependency tree
Reading state information... Done
ffmpeg is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
xvfb is already the newest version (2:1.19.6-1ubuntu4.4).
0 upgraded, 0 newly installed, 0 to remove and 31 not upgraded.
###Markdown
Part 12.4: Atari Games with Keras Neural NetworksThe Atari 2600 is a home video game console from Atari, Inc., released on September 11, 1977. It is credited with popularizing the use of microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games physically built into the unit. The 2600 was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).Atari emulators are popular and allow many of the old Atari video games to be played on modern computers. They are even available as JavaScript.* [Virtual Atari](http://www.virtualatari.org/listP.html)Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/). The Atari 2600 is shown in Figure 12.ATARI.**Figure 12.ATARI: The Atari 2600** Actual Atari 2600 Specs* CPU: 1.19 MHz MOS Technology 6507* Audio + Video processor: Television Interface Adapter (TIA)* Playfield resolution: 40 x 192 pixels (NTSC). Uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels that are 1/4 the width of playfield pixels (unless stretched).* Ball and missile sprites: 1 x 192 pixels (NTSC).* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is achievable only with programming tricks that combine sprite pixels with playfield pixels.* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid-line is possible but not common due to resource limitations.* 2 channels of 1-bit monaural sound with 4-bit volume control. OpenAI Lab Atari PongOpenAI Gym can be used with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30). This chapter demonstrates playing [Atari Pong](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py). Pong is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically across the left or right side of the screen. They can compete against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth. The goal is for each player to reach eleven points before the opponent; you earn a point when your opponent fails to return the ball. For the Atari 2600 version of Pong, a computer player (controlled by the 2600) is the opposing player.This section shows how to adapt TF-Agents to an Atari game. Some changes are necessary when compared to the cart-pole game presented earlier in this chapter. You can quickly adapt this example to any Atari game by simply changing the environment name. However, I tuned the code presented here for Pong, and it may not perform as well for other games. Some tuning will likely be necessary to produce a good agent for other games.We begin by importing the needed Python packages.
###Code
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym, suite_atari
from tf_agents.environments import tf_py_environment, batched_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
###Output
_____no_output_____
###Markdown
HyperparametersThe hyperparameter names are the same as the previous DQN example; however, I tuned the numeric values for the more complex Atari game.
###Code
num_iterations = 250000
initial_collect_steps = 200
collect_steps_per_iteration = 10
replay_buffer_max_length = 100000
batch_size = 32
learning_rate = 2.5e-3
log_interval = 5000
num_eval_episodes = 5
eval_interval = 25000
###Output
_____no_output_____
###Markdown
The algorithm needs more iterations for an Atari game. I also found that increasing the number of collection steps helped the algorithm to train. Atari EnvironmentYou must handle Atari environments differently than games like cart-pole. Atari games typically use their 2D displays as the environment state. AI Gym represents Atari games as either a 3D (height by width by color) state space based on their screens, or a vector representing the state of the game's computer RAM. To preprocess Atari games for greater computational efficiency, we generally skip several frames, decrease the resolution, and discard color information. The following code shows how we can set up an Atari environment.
###Code
#env_name = 'Breakout-v4'
env_name = 'Pong-v0'
#env_name = 'BreakoutDeterministic-v4'
#env = suite_gym.load(env_name)
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
max_episode_frames=108000 # ALE frames
env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
#env = batched_py_environment.BatchedPyEnvironment([env])
###Output
_____no_output_____
###Markdown
We can now reset the environment and display one step. The following image shows how the Pong game environment appears to a user.
###Code
env.reset()
PIL.Image.fromarray(env.render())
###Output
_____no_output_____
###Markdown
We are now ready to load and wrap the two environments for TF-Agents. The algorithm uses the first environment for evaluation, and the second to train.
###Code
train_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
eval_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
###Output
_____no_output_____
###Markdown
AgentI used the following class, from TF-Agents examples, to wrap the regular Q-network class. The AtariQNetwork class ensures that the pixel values from the Atari screen are divided by 255. This division assists the neural network by normalizing the pixel values to between 0 and 1.
###Code
class AtariQNetwork(q_network.QNetwork):
"""QNetwork subclass that divides observations by 255."""
def call(self,
observation,
step_type=None,
network_state=(),
training=False):
state = tf.cast(observation, tf.float32)
# We divide the grayscale pixel values by 255 here rather than storing
# normalized values because uint8s are 4x cheaper to store than float32s.
state = state / 255
return super(AtariQNetwork, self).call(
state, step_type=step_type, network_state=network_state,
training=training)
###Output
_____no_output_____
###Markdown
Next, we introduce two hyperparameters that are specific to the neural network we are about to define.
###Code
fc_layer_params = (512,)
conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))
q_net = AtariQNetwork(
train_env.observation_spec(),
train_env.action_spec(),
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params)
###Output
_____no_output_____
###Markdown
Convolutional neural networks are usually made up of several alternating pairs of convolution and max-pooling layers, ultimately culminating in one or more dense layers. These layers are the same types as previously seen in this course. The QNetwork accepts two parameters that define the convolutional neural network structure. The simpler of the two is **fc_layer_params**, a tuple that specifies the size of each of the dense layers, one entry per layer. The second parameter, named **conv_layer_params**, is a list of convolution layer parameters, where each item is a length-three tuple indicating (filters, kernel_size, stride). This implementation of QNetwork supports only convolution layers. If you desire a more complex convolutional neural network, you must define your variant of the QNetwork.The QNetwork defined here is not the agent; instead, the DQN agent uses the QNetwork to implement the actual neural network. This allows flexibility, as you can supply your own class if needed.Next, we define the optimizer. For this example, I used RMSPropOptimizer. However, AdamOptimizer is another popular choice. We also create the DQN agent and reference the Q-network we just created.
###Code
optimizer = tf.compat.v1.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.95,
momentum=0.0,
epsilon=0.00001,
centered=True)
train_step_counter = tf.Variable(0)
observation_spec = tensor_spec.from_spec(train_env.observation_spec())
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.from_spec(train_env.action_spec())
target_update_period=32000 # ALE frames
update_period=16 # ALE frames
_update_period = update_period / ATARI_FRAME_SKIP
_global_step = tf.compat.v1.train.get_or_create_global_step()
agent = dqn_agent.DqnAgent(
time_step_spec,
action_spec,
q_network=q_net,
optimizer=optimizer,
epsilon_greedy=0.01,
n_step_update=1.0,
target_update_tau=1.0,
target_update_period=(
target_update_period / ATARI_FRAME_SKIP / _update_period),
td_errors_loss_fn=common.element_wise_huber_loss,
gamma=0.99,
reward_scale_factor=1.0,
gradient_clipping=None,
debug_summaries=False,
summarize_grads_and_vars=False,
train_step_counter=_global_step)
agent.initialize()
###Output
_____no_output_____
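###Markdown
Before training, it can help to see how a policy is queried. The following sketch asks the agent's greedy policy for a single action; at this point the network is untrained, so the chosen action is effectively arbitrary.
###Code
# Query the greedy policy for one action on a fresh time step.
example_time_step = eval_env.reset()
example_action_step = agent.policy.action(example_time_step)
print('chosen action:', example_action_step.action.numpy())
###Output
_____no_output_____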
###Markdown
Metrics and EvaluationThere are many different ways to measure the effectiveness of a model trained with reinforcement learning. The loss function of the internal Q-network is not a good measure of the entire DQN algorithm's overall fitness. The network loss function measures how closely the Q-network fits the collected data and does not indicate how effective the DQN is at maximizing rewards. The method used for this example tracks the average reward received over several episodes.
###Code
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# See also the metrics module for standard implementations of
# different metrics.
# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics
###Output
_____no_output_____
###Markdown
Replay BufferDQN works by training a neural network to predict the Q-values for every possible environment state. A neural network needs training data, so the algorithm accumulates this training data as it runs episodes. The replay buffer is where this data is stored. Only the most recent episodes are stored; older episode data rolls off the queue as the queue accumulates new data.
###Code
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_max_length)
# Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
###Output
_____no_output_____
###Markdown
Random CollectionThe algorithm must prime the pump. Training cannot begin on an empty replay buffer. The following code performs a predefined number of steps to generate initial training data.
###Code
random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
train_env.action_spec())
def collect_step(environment, policy, buffer):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
buffer.add_batch(traj)
def collect_data(env, policy, buffer, steps):
for _ in range(steps):
collect_step(env, policy, buffer)
collect_data(train_env, random_policy, replay_buffer, \
steps=initial_collect_steps)
###Output
_____no_output_____
###Markdown
Training the agentWe are now ready to train the DQN. This process can take many hours, depending on how many episodes you wish to run through. As training occurs, this code reports both the loss and the average return. As training becomes more successful, the average return should increase. The losses reported reflect the average loss for individual training batches.
###Code
iterator = iter(dataset)
# (Optional) Optimize by wrapping some of the code in a graph
# using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, \
num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
for _ in range(collect_steps_per_iteration):
collect_step(train_env, agent.collect_policy, replay_buffer)
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
###Output
step = 5000: loss = 0.005372311919927597
step = 10000: loss = 0.029342571273446083
step = 15000: loss = 0.023372460156679153
step = 20000: loss = 0.012967261485755444
step = 25000: loss = 0.03114483878016472
step = 25000: Average Return = -20.0
step = 30000: loss = 0.015883663669228554
step = 35000: loss = 0.022952664643526077
step = 40000: loss = 0.024018988013267517
step = 45000: loss = 0.015258202329277992
step = 50000: loss = 0.01642722450196743
step = 50000: Average Return = -18.399999618530273
step = 55000: loss = 0.024171829223632812
step = 60000: loss = 0.010190263390541077
step = 65000: loss = 0.005736709106713533
step = 70000: loss = 0.01117132231593132
step = 75000: loss = 0.005509796552360058
step = 75000: Average Return = -12.800000190734863
step = 80000: loss = 0.009709298610687256
step = 85000: loss = 0.009705539792776108
step = 90000: loss = 0.006236877758055925
step = 95000: loss = 0.017611663788557053
step = 100000: loss = 0.00873786024749279
step = 100000: Average Return = -10.800000190734863
step = 105000: loss = 0.019388657063245773
step = 110000: loss = 0.0040118759498000145
step = 115000: loss = 0.006819932255893946
step = 120000: loss = 0.028965750709176064
step = 125000: loss = 0.015978489071130753
step = 125000: Average Return = -9.600000381469727
step = 130000: loss = 0.023571692407131195
step = 135000: loss = 0.006761073134839535
step = 140000: loss = 0.005080501548945904
step = 145000: loss = 0.013759403489530087
step = 150000: loss = 0.02108653262257576
step = 150000: Average Return = -5.599999904632568
step = 155000: loss = 0.01754268817603588
step = 160000: loss = 0.008789192885160446
step = 165000: loss = 0.012145541608333588
step = 170000: loss = 0.00911545380949974
step = 175000: loss = 0.008846037089824677
step = 175000: Average Return = -5.199999809265137
step = 180000: loss = 0.020279696211218834
step = 185000: loss = 0.012781327590346336
step = 190000: loss = 0.01562594249844551
step = 195000: loss = 0.015836259350180626
step = 200000: loss = 0.017415495589375496
step = 200000: Average Return = 3.5999999046325684
step = 205000: loss = 0.007518010213971138
step = 210000: loss = 0.028996415436267853
step = 215000: loss = 0.01371004804968834
step = 220000: loss = 0.007023532874882221
step = 225000: loss = 0.004790903069078922
step = 225000: Average Return = -4.400000095367432
step = 230000: loss = 0.006244136951863766
step = 235000: loss = 0.025019707158207893
step = 240000: loss = 0.02555653266608715
step = 245000: loss = 0.012253865599632263
step = 250000: loss = 0.004736536182463169
step = 250000: Average Return = 2.4000000953674316
###Markdown
VisualizationThe notebook can plot the average return over training iterations. The average return should increase as the program performs more training iterations.
###Code
iterations = range(0, num_iterations + 1, eval_interval)
plt.plot(iterations, returns)
plt.ylabel('Average Return')
plt.xlabel('Iterations')
plt.ylim(top=10)
###Output
_____no_output_____
###Markdown
VideosWe now have a trained model and observed its training progress on a graph. Perhaps the most compelling way to view an Atari game's results is a video that allows us to see the agent play the game. The following functions are defined so that we can watch the agent play the game in the notebook.
###Code
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):
filename = filename + ".mp4"
with imageio.get_writer(filename, fps=fps) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
return embed_mp4(filename)
###Output
_____no_output_____
###Markdown
First, we will observe the trained agent play the game.
###Code
create_policy_eval_video(agent.policy, "trained-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
For comparison, we observe a random agent play. While the trained agent is far from perfect, it does outperform the random agent by a considerable amount.
###Code
create_policy_eval_video(random_policy, "random-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
T81-558: Applications of Deep Neural Networks**Module 12: Deep Learning and Security*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module Video MaterialMain video lecture:* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/playlist?list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_reinforcement.ipynb)* Part 12.2: Introduction to Q-Learning for Keras [[Video]](https://www.youtube.com/playlist?list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_reinforcement.ipynb)* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/playlist?list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_reinforcement.ipynb)* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/playlist?list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_reinforcement.ipynb)* 12.5: How Alpha Zero used Reinforcement Learning to Master Chess [[Video]](https://www.youtube.com/playlist?list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_reinforcement.ipynb) Part 12.4: Atari Games with Keras Neural NetworksThe Atari 2600 is a home video game console from Atari, Inc. Released on September 11, 1977. It is credited with popularizing the use of microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games physically built into the unit. The 2600 was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).Atari emulators are popular and allow many of the old Atari video games to be played on modern computers. They are even available as JavaScript.* [Virtual Atari](http://www.virtualatari.org/listP.html)Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/).  Installing Atari Emulator```pip install gym[atari]``` Actual Atari 2600 Specs* CPU: 1.19 MHz MOS Technology 6507* Audio + Video processor: Television Interface Adapter (TIA)* Playfield resolution: 40 x 192 pixels (NTSC). Uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels that are 1/4 the width of playfield pixels (unless stretched).* Ball and missile sprites: 1 x 192 pixels (NTSC).* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is only somewhat achievable with programming tricks that combine sprite pixels with playfield pixels.* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid line is possible but not common due to resource limitations.* 2 channels of 1-bit monaural sound with 4-bit volume control. 
OpenAI Lab Atari BreakoutOpenAI Gym can be used with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30) On Mac/Linux, installation is as easy as ```pip install gym[atari]```. (from Wikipedia)Breakout begins with eight rows of bricks, with each two rows a different color. The color order from the bottom up is yellow, green, orange and red. Using a single ball, the player must knock down as many bricks as possible by using the walls and/or the paddle below to ricochet the ball against the bricks and eliminate them. If the player's paddle misses the ball's rebound, he or she will lose a turn. The player has three turns to try to clear two screens of bricks. Yellow bricks earn one point each, green bricks earn three points, orange bricks earn five points and the top-level red bricks score seven points each. The paddle shrinks to one-half its size after the ball has broken through the red row and hit the upper wall. Ball speed increases at specific intervals: after four hits, after twelve hits, and after making contact with the orange and red rows.The highest score achievable for one player is 896; this is done by eliminating two screens of bricks worth 448 points per screen. Once the second screen of bricks is destroyed, the ball in play harmlessly bounces off empty walls until the player restarts the game, as no additional screens are provided. However, a secret way to score beyond the 896 maximum is to play the game in two-player mode. If "Player One" completes the first screen on his or her third and last ball, then immediately and deliberately allows the ball to "drain", Player One's second screen is transferred to "Player Two" as a third screen, allowing Player Two to score a maximum of 1,344 points if he or she is adept enough to keep the third ball in play that long. Once the third screen is eliminated, the game is over.The original arcade cabinet of Breakout featured artwork that revealed the game's plot to be that of a prison escape. According to this release, the player is actually playing as one of a prison's inmates attempting to knock a ball and chain into a wall of their prison cell with a mallet. If the player successfully destroys the wall in-game, their inmate escapes with others following. Breakout (BreakoutDeterministic-v4) Specs:* BreakoutDeterministic-v4* State size (RGB): (210, 160, 3)* Actions: 4 (discrete)The video for this course demonstrated playing Breakout. The following [example code](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py) was used. The following code can be used to probe an environment to see the shape of its states and actions.
###Code
import gym
env = gym.make("BreakoutDeterministic-v4")
print(f"Obesrvation space: {env.observation_space}")
print(f"Action space: {env.action_space}")
###Output
Observation space: Box(210, 160, 3)
Action space: Discrete(4)
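###Markdown
We can also ask the emulator what each of the four discrete actions means. This probe is a small sketch that assumes the standard Gym Atari wrapper, which exposes `get_action_meanings()` on the unwrapped environment.
###Code
# List the human-readable meaning of each discrete action index
print(env.unwrapped.get_action_meanings())
###Output
_____no_output_____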
###Markdown
T81-558: Applications of Deep Neural Networks**Module 12: Deep Learning and Security*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 12 Video Material* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=uwcXWe_Fra0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=Ya1gYt63o3M&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=t2yIu6cRa38&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)* Part 12.5: How Alpha Zero used Reinforcement Learning to Master Chess [[Video]](https://www.youtube.com/watch?v=ikDgyD7nVI8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_alpha_zero.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow.
###Code
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
###Output
_____no_output_____
###Markdown
Part 12.4: Atari Games with Keras Neural NetworksThe Atari 2600 is a home video game console from Atari, Inc. Released on September 11, 1977. It is credited with popularizing the use of microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games physically built into the unit. The 2600 was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).Atari emulators are popular and allow many of the old Atari video games to be played on modern computers. They are even available as JavaScript.* [Virtual Atari](http://www.virtualatari.org/listP.html)Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/). The Atari 2600 is shown in Figure 12.ATARI.**Figure 12.ATARI: The Atari 2600** Installing Atari Emulator```pip install gym[atari]``` Actual Atari 2600 Specs* CPU: 1.19 MHz MOS Technology 6507* Audio + Video processor: Television Interface Adapter (TIA)* Playfield resolution: 40 x 192 pixels (NTSC). Uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels that are 1/4 the width of playfield pixels (unless stretched).* Ball and missile sprites: 1 x 192 pixels (NTSC).* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is only somewhat achievable with programming tricks that combine sprite pixels with playfield pixels.* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid line is possible but not common due to resource limitations.* 2 channels of 1-bit monaural sound with 4-bit volume control. OpenAI Lab Atari BreakoutOpenAI Gym can be used with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30) Figure 12.BREAKOUT shows the Atari Breakout Game.**Figure 12.BREAKOUT: Atari Breakout**(from Wikipedia)Breakout begins with eight rows of bricks, with each two rows a different color. The color order from the bottom up is yellow, green, orange and red. Using a single ball, the player must knock down as many bricks as possible by using the walls and/or the paddle below to ricochet the ball against the bricks and eliminate them. If the player's paddle misses the ball's rebound, he or she will lose a turn. The player has three turns to try to clear two screens of bricks. Yellow bricks earn one point each, green bricks earn three points, orange bricks earn five points and the top-level red bricks score seven points each. The paddle shrinks to one-half its size after the ball has broken through the red row and hit the upper wall. Ball speed increases at specific intervals: after four hits, after twelve hits, and after making contact with the orange and red rows.The highest score achievable for one player is 896; this is done by eliminating two screens of bricks worth 448 points per screen. Once the second screen of bricks is destroyed, the ball in play harmlessly bounces off empty walls until the player restarts the game, as no additional screens are provided. 
However, a secret way to score beyond the 896 maximum is to play the game in two-player mode. If "Player One" completes the first screen on his or her third and last ball, then immediately and deliberately allows the ball to "drain", Player One's second screen is transferred to "Player Two" as a third screen, allowing Player Two to score a maximum of 1,344 points if he or she is adept enough to keep the third ball in play that long. Once the third screen is eliminated, the game is over.The original arcade cabinet of Breakout featured artwork that revealed the game's plot to be that of a prison escape. According to this release, the player is actually playing as one of a prison's inmates attempting to knock a ball and chain into a wall of their prison cell with a mallet. If the player successfully destroys the wall in-game, their inmate escapes with others following. Breakout (BreakoutDeterministic-v4) Specs:* BreakoutDeterministic-v4* State size (RGB): (210, 160, 3)* Actions: 4 (discrete)The video for this course demonstrated playing Breakout. The following [example code](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py) was used. The following code can be used to probe an environment to see the shape of its states and actions.
###Code
import gym
env = gym.make("BreakoutDeterministic-v4")
print(f"Obesrvation space: {env.observation_space}")
print(f"Action space: {env.action_space}")
###Output
Observation space: Box(210, 160, 3)
Action space: Discrete(4)
###Markdown
T81-558: Applications of Deep Neural Networks**Module 12: Reinforcement Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 12 Video Material* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=uwcXWe_Fra0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=Ya1gYt63o3M&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=t2yIu6cRa38&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)* Part 12.5: How Alpha Zero used Reinforcement Learning to Master Chess [[Video]](https://www.youtube.com/watch?v=ikDgyD7nVI8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_alpha_zero.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow, and has the necessary Python libraries installed.
###Code
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
if COLAB:
!sudo apt-get install -y xvfb ffmpeg
!pip install -q 'gym==0.10.11'
!pip install -q 'imageio==2.4.0'
!pip install -q PILLOW
!pip install -q 'pyglet==1.3.2'
!pip install -q pyvirtualdisplay
!pip install -q --upgrade tensorflow-probability
!pip install -q tf-agents
###Output
Note: using Google CoLab
Reading package lists... Done
Building dependency tree
Reading state information... Done
ffmpeg is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
xvfb is already the newest version (2:1.19.6-1ubuntu4.4).
0 upgraded, 0 newly installed, 0 to remove and 31 not upgraded.
###Markdown
Part 12.4: Atari Games with Keras Neural NetworksThe Atari 2600 is a home video game console from Atari, Inc. Released on September 11, 1977. It is credited with popularizing the use of microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games physically built into the unit. The 2600 was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).Atari emulators are popular and allow many of the old Atari video games to be played on modern computers. They are even available as JavaScript.* [Virtual Atari](http://www.virtualatari.org/listP.html)Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/). The Atari 2600 is shown in Figure 12.ATARI.**Figure 12.ATARI: The Atari 2600** Actual Atari 2600 Specs* CPU: 1.19 MHz MOS Technology 6507* Audio + Video processor: Television Interface Adapter (TIA)* Playfield resolution: 40 x 192 pixels (NTSC). Uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels that are 1/4 the width of playfield pixels (unless stretched).* Ball and missile sprites: 1 x 192 pixels (NTSC).* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is only somewhat achievable with programming tricks that combine sprite pixels with playfield pixels.* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid line is possible but not common due to resource limitations.* 2 channels of 1-bit monaural sound with 4-bit volume control. OpenAI Lab Atari PongOpenAI Gym can be used with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30). This chapter demonstrates playing [Atari Pong](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py). Pong is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically across the left or right side of the screen. They can compete against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth. The goal is for each player to reach eleven points before the opponent; a player earns a point when the opponent fails to return the ball. For the Atari 2600 version of Pong, a computer player (controlled by the 2600) is the opposing player.This section shows how to adapt TF-Agents to an Atari game. Some changes are necessary when compared to the cart-pole game presented earlier in this chapter. You can quickly adapt this example to any Atari game by simply changing the environment name. However, I tuned the code presented here for Pong, and it may not perform as well for other games. Some tuning will likely be necessary to produce a good agent for other games.We begin by importing the needed Python packages.
###Code
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym, suite_atari
from tf_agents.environments import tf_py_environment, batched_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
###Output
_____no_output_____
###Markdown
HyperparametersThe hyperparameter names are the same as the previous DQN example; however, I tuned the numeric values for the more complex Atari game.
###Code
num_iterations = 250000
initial_collect_steps = 200
collect_steps_per_iteration = 10
replay_buffer_max_length = 100000
batch_size = 32
learning_rate = 2.5e-3
log_interval = 5000
num_eval_episodes = 5
eval_interval = 25000
###Output
_____no_output_____
###Markdown
The algorithm needs more iterations for an Atari game. I also found that increasing the number of collection steps helped the algorithm to train. Atari EnvironmentsYou must handle Atari environments differently than games like cart-pole. Atari games typically use their 2D displays as the environment state. AI Gym represents Atari games as either a 3D (height by width by color) state space based on their screens, or a vector representing the state of the game's computer RAM. To preprocess Atari games for greater computational efficiency, we generally skip several frames, decrease the resolution, and discard color information. The following code shows how we can set up an Atari environment.
###Code
#env_name = 'Breakout-v4'
env_name = 'Pong-v0'
#env_name = 'BreakoutDeterministic-v4'
#env = suite_gym.load(env_name)
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
max_episode_frames=108000 # ALE frames
env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
#env = batched_py_environment.BatchedPyEnvironment([env])
###Output
_____no_output_____
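###Markdown
Because the preprocessing wrapper repeats each chosen action for four ALE frames, any limit expressed in ALE frames must be divided by the frame skip. A quick sanity check (a sketch) of the per-episode step budget implied by the values above:
###Code
# 108000 ALE frames at a frame skip of 4 -> 27000 agent steps per episode
print(max_episode_frames / ATARI_FRAME_SKIP)
###Output
_____no_output_____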
###Markdown
We can now reset the environment and display one step. The following image shows how the Pong game environment appears to a user.
###Code
env.reset()
PIL.Image.fromarray(env.render())
###Output
_____no_output_____
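###Markdown
It is worth confirming what the agent actually sees after preprocessing. With the default stacking wrappers, the observation is typically a stack of four downscaled grayscale frames (roughly an 84 x 84 x 4 uint8 array) rather than the raw 210 x 160 x 3 RGB screen rendered above; the probe below is a sketch to verify that assumption.
###Code
# Inspect the preprocessed observation and action specs of the wrapped environment
print(env.observation_spec())
print(env.action_spec())
###Output
_____no_output_____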
###Markdown
We are now ready to load and wrap the two environments for TF-Agents. The algorithm uses the first environment to train, and the second for evaluation.
###Code
train_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
eval_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
###Output
_____no_output_____
###Markdown
AgentI used the following class, from TF-Agents examples, to wrap the regular Q-network class. The AtariQNetwork class ensures that the pixel values from the Atari screen are divided by 255. This division assists the neural network by normalizing the pixel values to between 0 and 1.
###Code
class AtariQNetwork(q_network.QNetwork):
"""QNetwork subclass that divides observations by 255."""
def call(self,
observation,
step_type=None,
network_state=(),
training=False):
state = tf.cast(observation, tf.float32)
# We divide the grayscale pixel values by 255 here rather than storing
# normalized values because uint8s are 4x cheaper to store than float32s.
state = state / 255
return super(AtariQNetwork, self).call(
state, step_type=step_type, network_state=network_state,
training=training)
###Output
_____no_output_____
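###Markdown
A quick standalone demonstration of the normalization the network performs (a sketch, independent of the class above): uint8 pixels in [0, 255] become float32 values in [0.0, 1.0].
###Code
import numpy as np
import tensorflow as tf
pixels = np.array([0, 64, 255], dtype=np.uint8)
# Cast to float32 first, then scale, just as AtariQNetwork.call does
print(tf.cast(pixels, tf.float32) / 255)
###Output
_____no_output_____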
###Markdown
Next, we introduce two hyperparameters that are specific to the neural network we are about to define.
###Code
fc_layer_params = (512,)
conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))
q_net = AtariQNetwork(
train_env.observation_spec(),
train_env.action_spec(),
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params)
###Output
_____no_output_____
###Markdown
Convolutional neural networks usually are made up of several alternating pairs of convolution and max-pooling layers, ultimately culminating in one or more dense layers. These layers are the same types as previously seen in this course. The QNetwork accepts two parameters that define the convolutional neural network structure. The simpler of the two parameters is **fc_layer_params**. This parameter specifies the size of each of the dense layers: the tuple gives the size of each layer, in order. The second parameter, named **conv_layer_params**, is a list of convolution layer parameters, where each item is a length-three tuple indicating (filters, kernel_size, stride). This implementation of QNetwork supports only convolution layers. If you desire a more complex convolutional neural network, you must define your variant of the QNetwork.The QNetwork defined here is not the agent; instead, the QNetwork is used by the DQN agent to implement the actual neural network. This allows flexibility, as you can set your own class if needed.Next, we define the optimizer. For this example, I used RMSPropOptimizer. However, AdamOptimizer is another popular choice. We also create the DQN agent and reference the Q-network we just created.
###Code
optimizer = tf.compat.v1.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.95,
momentum=0.0,
epsilon=0.00001,
centered=True)
train_step_counter = tf.Variable(0)
observation_spec = tensor_spec.from_spec(train_env.observation_spec())
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.from_spec(train_env.action_spec())
target_update_period=32000 # ALE frames
update_period=16 # ALE frames
_update_period = update_period / ATARI_FRAME_SKIP
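# Net effect (an interpretation of the values above): 32000 / 4 / 4 = 2000
# training updates between target-network refreshes.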
_global_step = tf.compat.v1.train.get_or_create_global_step()
agent = dqn_agent.DqnAgent(
time_step_spec,
action_spec,
q_network=q_net,
optimizer=optimizer,
epsilon_greedy=0.01,
n_step_update=1.0,
target_update_tau=1.0,
target_update_period=(
target_update_period / ATARI_FRAME_SKIP / _update_period),
td_errors_loss_fn=common.element_wise_huber_loss,
gamma=0.99,
reward_scale_factor=1.0,
gradient_clipping=None,
debug_summaries=False,
summarize_grads_and_vars=False,
train_step_counter=_global_step)
agent.initialize()
###Output
_____no_output_____
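###Markdown
To build intuition for the (filters, kernel_size, stride) tuples used above, the following Keras-only sketch spells out the layer stack they describe. It is an illustration of the network's shape, not the object TF-Agents constructs internally (the real QNetwork also appends a final dense layer with one output per action).
###Code
from tensorflow import keras
# Equivalent of conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))
conv_sketch = keras.Sequential([
    keras.layers.Conv2D(32, (8, 8), strides=4, activation='relu'),
    keras.layers.Conv2D(64, (4, 4), strides=2, activation='relu'),
    keras.layers.Conv2D(64, (3, 3), strides=1, activation='relu'),
    keras.layers.Flatten(),
    keras.layers.Dense(512, activation='relu'),  # fc_layer_params=(512,)
])
###Output
_____no_output_____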
###Markdown
Metrics and EvaluationThere are many different ways to measure the effectiveness of a model trained with reinforcement learning. The loss function of the internal Q-network is not a good measure of the entire DQN algorithm's overall fitness. The network loss function measures how closely the Q-network fits the collected data and does not indicate how effective the DQN is at maximizing rewards. The method used for this example tracks the average reward received over several episodes.
###Code
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# See also the metrics module for standard implementations of different metrics.
# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics
###Output
_____no_output_____
###Markdown
Replay BufferDQN works by training a neural network to predict the Q-values for every possible environment-state. A neural network needs training data, so the algorithm accumulates this training data as it runs episodes. The replay buffer is where this data is stored. Only the most recent episodes are stored; older episode data rolls off the queue as new data accumulates.
###Code
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_max_length)
# Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
###Output
_____no_output_____
###Markdown
Random CollectionThe algorithm must prime the pump. Training cannot begin on an empty replay buffer. The following code performs a predefined number of steps to generate initial training data.
###Code
random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
train_env.action_spec())
def collect_step(environment, policy, buffer):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
buffer.add_batch(traj)
def collect_data(env, policy, buffer, steps):
for _ in range(steps):
collect_step(env, policy, buffer)
collect_data(train_env, random_policy, replay_buffer, steps=initial_collect_steps)
###Output
_____no_output_____
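###Markdown
After priming, we can sanity-check that the buffer actually holds the initial experience. A small sketch: `num_frames()` reports how many items are currently stored, bounded by `replay_buffer_max_length`.
###Code
# Should be roughly initial_collect_steps items after the random collection above
print(replay_buffer.num_frames())
###Output
_____no_output_____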
###Markdown
Training the agentWe are now ready to train the DQN. This process can take many hours, depending on how many episodes you wish to run through. As training occurs, this code will report both the loss and the average return. As training becomes more successful, the average return should increase. The losses reported reflect the average loss for individual training batches.
###Code
iterator = iter(dataset)
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
for _ in range(collect_steps_per_iteration):
collect_step(train_env, agent.collect_policy, replay_buffer)
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
###Output
step = 5000: loss = 0.005372311919927597
step = 10000: loss = 0.029342571273446083
step = 15000: loss = 0.023372460156679153
step = 20000: loss = 0.012967261485755444
step = 25000: loss = 0.03114483878016472
step = 25000: Average Return = -20.0
step = 30000: loss = 0.015883663669228554
step = 35000: loss = 0.022952664643526077
step = 40000: loss = 0.024018988013267517
step = 45000: loss = 0.015258202329277992
step = 50000: loss = 0.01642722450196743
step = 50000: Average Return = -18.399999618530273
step = 55000: loss = 0.024171829223632812
step = 60000: loss = 0.010190263390541077
step = 65000: loss = 0.005736709106713533
step = 70000: loss = 0.01117132231593132
step = 75000: loss = 0.005509796552360058
step = 75000: Average Return = -12.800000190734863
step = 80000: loss = 0.009709298610687256
step = 85000: loss = 0.009705539792776108
step = 90000: loss = 0.006236877758055925
step = 95000: loss = 0.017611663788557053
step = 100000: loss = 0.00873786024749279
step = 100000: Average Return = -10.800000190734863
step = 105000: loss = 0.019388657063245773
step = 110000: loss = 0.0040118759498000145
step = 115000: loss = 0.006819932255893946
step = 120000: loss = 0.028965750709176064
step = 125000: loss = 0.015978489071130753
step = 125000: Average Return = -9.600000381469727
step = 130000: loss = 0.023571692407131195
step = 135000: loss = 0.006761073134839535
step = 140000: loss = 0.005080501548945904
step = 145000: loss = 0.013759403489530087
step = 150000: loss = 0.02108653262257576
step = 150000: Average Return = -5.599999904632568
step = 155000: loss = 0.01754268817603588
step = 160000: loss = 0.008789192885160446
step = 165000: loss = 0.012145541608333588
step = 170000: loss = 0.00911545380949974
step = 175000: loss = 0.008846037089824677
step = 175000: Average Return = -5.199999809265137
step = 180000: loss = 0.020279696211218834
step = 185000: loss = 0.012781327590346336
step = 190000: loss = 0.01562594249844551
step = 195000: loss = 0.015836259350180626
step = 200000: loss = 0.017415495589375496
step = 200000: Average Return = 3.5999999046325684
step = 205000: loss = 0.007518010213971138
step = 210000: loss = 0.028996415436267853
step = 215000: loss = 0.01371004804968834
step = 220000: loss = 0.007023532874882221
step = 225000: loss = 0.004790903069078922
step = 225000: Average Return = -4.400000095367432
step = 230000: loss = 0.006244136951863766
step = 235000: loss = 0.025019707158207893
step = 240000: loss = 0.02555653266608715
step = 245000: loss = 0.012253865599632263
step = 250000: loss = 0.004736536182463169
step = 250000: Average Return = 2.4000000953674316
###Markdown
VisualizationThe notebook can plot the average return over training iterations. The average return should increase as the program performs more training iterations.
###Code
iterations = range(0, num_iterations + 1, eval_interval)
plt.plot(iterations, returns)
plt.ylabel('Average Return')
plt.xlabel('Iterations')
plt.ylim(top=10)
###Output
_____no_output_____
###Markdown
VideosWe now have a trained model and observed its training progress on a graph. Perhaps the most compelling way to view an Atari game's results is a video that allows us to see the agent play the game. The following functions are defined so that we can watch the agent play the game in the notebook.
###Code
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):
filename = filename + ".mp4"
with imageio.get_writer(filename, fps=fps) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
return embed_mp4(filename)
###Output
_____no_output_____
###Markdown
First, we will observe the trained agent play the game.
###Code
create_policy_eval_video(agent.policy, "trained-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
For comparison, we observe a random agent play. While the trained agent is far from perfect, it does outperform the random agent by a considerable amount.
###Code
create_policy_eval_video(random_policy, "random-agent")
###Output
WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (210, 160) to (224, 160) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned.
###Markdown
T81-558: Applications of Deep Neural Networks**Module 12: Reinforcement Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 12 Video Material* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_01_ai_gym.ipynb)* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=A3sYFcJY3lA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_02_qlearningreinforcement.ipynb)* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=qy1SJmsRhvM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_03_keras_reinforce.ipynb)* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=co0SwPWoZh0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_04_atari.ipynb)* Part 12.5: Application of Reinforcement Learning [[Video]](https://www.youtube.com/watch?v=1jQPP3RfwMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_05_apply_rl.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow, and has the necessary Python libraries installed.
###Code
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
if COLAB:
!sudo apt-get install -y xvfb ffmpeg
!pip install -q 'gym==0.10.11'
!pip install -q 'imageio==2.4.0'
!pip install -q PILLOW
!pip install -q 'pyglet==1.3.2'
!pip install -q pyvirtualdisplay
!pip install -q --upgrade tensorflow-probability
!pip install -q tf-agents
###Output
_____no_output_____
###Markdown
Part 12.4: Atari Games with Keras Neural NetworksThe Atari 2600 is a home video game console from Atari, Inc. Released on September 11, 1977. It is credited with popularizing the use of microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games physically built into the unit. The 2600 was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).Atari emulators are popular and allow many of the old Atari video games to be played on modern computers. They are even available as JavaScript.* [Virtual Atari](http://www.virtualatari.org/listP.html)Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/). The Atari 2600 is shown in Figure 12.ATARI.**Figure 12.ATARI: The Atari 2600** Actual Atari 2600 Specs* CPU: 1.19 MHz MOS Technology 6507* Audio + Video processor: Television Interface Adapter (TIA)* Playfield resolution: 40 x 192 pixels (NTSC). Uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels that are 1/4 the width of playfield pixels (unless stretched).* Ball and missile sprites: 1 x 192 pixels (NTSC).* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is only somewhat achievable with programming tricks that combine sprite pixels with playfield pixels.* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid line is possible but not common due to resource limitations.* 2 channels of 1-bit monaural sound with 4-bit volume control. OpenAI Lab Atari PongOpenAI Gym can be used with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30). This chapter demonstrates playing [Atari Pong](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py). Pong is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically across the left or right side of the screen. They can compete against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth. The goal is for each player to reach eleven points before the opponent; a player earns a point when the opponent fails to return the ball. For the Atari 2600 version of Pong, a computer player (controlled by the 2600) is the opposing player.This section shows how to adapt TF-Agents to an Atari game. Some changes are necessary when compared to the cart-pole game presented earlier in this chapter. You can quickly adapt this example to any Atari game by simply changing the environment name. However, I tuned the code presented here for Pong, and it may not perform as well for other games. Some tuning will likely be necessary to produce a good agent for other games.We begin by importing the needed Python packages.
###Code
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym, suite_atari
from tf_agents.environments import tf_py_environment, batched_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network, network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
from tf_agents.agents.categorical_dqn import categorical_dqn_agent
from tf_agents.networks import categorical_q_network
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
###Output
_____no_output_____
###Markdown
HyperparametersThe hyperparameter names are the same as the previous DQN example; however, I tuned the numeric values for the more complex Atari game.
###Code
num_iterations = 250000
initial_collect_steps = 200
collect_steps_per_iteration = 10
replay_buffer_max_length = 100000
batch_size = 32
learning_rate = 2.5e-3
log_interval = 5000
num_eval_episodes = 5
eval_interval = 25000
###Output
_____no_output_____
###Markdown
The algorithm needs more iterations for an Atari game. I also found that increasing the number of collection steps helped the algorithm to train. Atari EnvironmentsYou must handle Atari environments differently than games like cart-pole. Atari games typically use their 2D displays as the environment state. AI Gym represents Atari games as either a 3D (height by width by color) state space based on their screens, or a vector representing the state of the game's computer RAM. To preprocess Atari games for greater computational efficiency, we generally skip several frames, decrease the resolution, and discard color information. The following code shows how we can set up an Atari environment.
###Code
! wget http://www.atarimania.com/roms/Roms.rar
! mkdir /content/ROM/
! unrar e /content/Roms.rar /content/ROM/
! python -m atari_py.import_roms /content/ROM/
#env_name = 'Breakout-v4'
env_name = 'Pong-v0'
#env_name = 'BreakoutDeterministic-v4'
#env = suite_gym.load(env_name)
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
max_episode_frames=108000 # ALE frames
env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
#env = batched_py_environment.BatchedPyEnvironment([env])
###Output
_____no_output_____
###Markdown
We can now reset the environment and display one step. The following image shows how the Pong game environment appears to a user.
###Code
env.reset()
PIL.Image.fromarray(env.render())
###Output
_____no_output_____
###Markdown
We are now ready to load and wrap the two environments for TF-Agents. The algorithm uses the first environment to train, and the second for evaluation.
###Code
train_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
eval_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
###Output
_____no_output_____
###Markdown
AgentI used the following class, from TF-Agents examples, to wrap the categorical Q-network class. The AtariCategoricalQNetwork class ensures that the pixel values from the Atari screen are divided by 255. This division assists the neural network by normalizing the pixel values to between 0 and 1.
###Code
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
class AtariCategoricalQNetwork(network.Network):
"""CategoricalQNetwork subclass that divides observations by 255."""
def __init__(self, input_tensor_spec, action_spec, **kwargs):
super(AtariCategoricalQNetwork, self).__init__(
input_tensor_spec, state_spec=())
input_tensor_spec = tf.TensorSpec(
dtype=tf.float32, shape=input_tensor_spec.shape)
self._categorical_q_network = categorical_q_network.CategoricalQNetwork(
input_tensor_spec, action_spec, **kwargs)
@property
def num_atoms(self):
return self._categorical_q_network.num_atoms
def call(self, observation, step_type=None, network_state=()):
state = tf.cast(observation, tf.float32)
# We divide the grayscale pixel values by 255 here rather than storing
# normalized values because uint8s are 4x cheaper to store than float32s.
# TODO(b/129805821): handle the division by 255 for train_eval_atari.py in
# a preprocessing layer instead.
state = state / 255
return self._categorical_q_network(
state, step_type=step_type, network_state=network_state)
###Output
_____no_output_____
###Markdown
Next, we introduce two hyperparameters that are specific to the neural network we are about to define.
###Code
fc_layer_params = (512,)
conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))
q_net = AtariCategoricalQNetwork(
train_env.observation_spec(),
train_env.action_spec(),
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params)
###Output
_____no_output_____
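###Markdown
Unlike the plain Q-network used earlier, the categorical network predicts a distribution over returns rather than a single Q-value per action. A quick probe (a sketch) of how many support atoms that distribution uses — TF-Agents' CategoricalQNetwork defaults to 51, the value from the C51 algorithm:
###Code
print(q_net.num_atoms)
###Output
_____no_output_____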
###Markdown
Convolutional neural networks usually are made up of several alternating pairs of convolution and max-pooling layers, ultimately culminating in one or more dense layers. These layers are the same types as previously seen in this course. The QNetwork accepts two parameters that define the convolutional neural network structure. The simpler of the two parameters is **fc_layer_params**. This parameter specifies the size of each of the dense layers: the tuple gives the size of each layer, in order. The second parameter, named **conv_layer_params**, is a list of convolution layer parameters, where each item is a length-three tuple indicating (filters, kernel_size, stride). This implementation of QNetwork supports only convolution layers. If you desire a more complex convolutional neural network, you must define your variant of the QNetwork.The QNetwork defined here is not the agent; instead, the QNetwork is used by the DQN agent to implement the actual neural network. This allows flexibility, as you can set your own class if needed.Next, we define the optimizer. For this example, I used RMSPropOptimizer. However, AdamOptimizer is another popular choice. We also create the DQN agent and reference the Q-network we just created.
###Code
optimizer = tf.compat.v1.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.95,
momentum=0.0,
epsilon=0.00001,
centered=True)
train_step_counter = tf.Variable(0)
observation_spec = tensor_spec.from_spec(train_env.observation_spec())
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.from_spec(train_env.action_spec())
target_update_period=32000 # ALE frames
update_period=16 # ALE frames
_update_period = update_period / ATARI_FRAME_SKIP
agent = categorical_dqn_agent.CategoricalDqnAgent(
time_step_spec,
action_spec,
categorical_q_network=q_net,
optimizer=optimizer,
#epsilon_greedy=epsilon,
n_step_update=1.0,
target_update_tau=1.0,
target_update_period=(
target_update_period / ATARI_FRAME_SKIP / _update_period),
gamma=0.99,
reward_scale_factor=1.0,
gradient_clipping=None,
debug_summaries=False,
summarize_grads_and_vars=False)
"""
agent = dqn_agent.DqnAgent(
time_step_spec,
action_spec,
q_network=q_net,
optimizer=optimizer,
epsilon_greedy=0.01,
n_step_update=1.0,
target_update_tau=1.0,
target_update_period=(
target_update_period / ATARI_FRAME_SKIP / _update_period),
td_errors_loss_fn=common.element_wise_huber_loss,
gamma=0.99,
reward_scale_factor=1.0,
gradient_clipping=None,
debug_summaries=False,
summarize_grads_and_vars=False,
train_step_counter=_global_step)
"""
agent.initialize()
q_net.input_tensor_spec
train_env.observation_spec()
train_py_env.observation_spec()
train_py_env
###Output
_____no_output_____
###Markdown
Metrics and EvaluationThere are many different ways to measure the effectiveness of a model trained with reinforcement learning. The loss function of the internal Q-network is not a good measure of the entire DQN algorithm's overall fitness. The network loss function measures how closely the Q-network fits the collected data and does not indicate how effective the DQN is at maximizing rewards. The method used for this example tracks the average reward received over several episodes.
###Code
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# See also the metrics module for standard implementations of
# different metrics.
# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics
###Output
_____no_output_____
###Markdown
Replay BufferDQN works by training a neural network to predict the Q-values for every possible environment-state. A neural network needs training data, so the algorithm accumulates this training data as it runs episodes. The replay buffer is where this data is stored. Only the most recent episodes are stored; older episode data rolls off the queue as new data accumulates.
###Code
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_max_length)
# Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
###Output
_____no_output_____
###Markdown
Random CollectionThe algorithm must prime the pump. Training cannot begin on an empty replay buffer. The following code performs a predefined number of steps to generate initial training data.
###Code
random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
train_env.action_spec())
def collect_step(environment, policy, buffer):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
buffer.add_batch(traj)
def collect_data(env, policy, buffer, steps):
for _ in range(steps):
collect_step(env, policy, buffer)
collect_data(train_env, random_policy, replay_buffer, \
steps=initial_collect_steps)
###Output
_____no_output_____
###Markdown
Training the agentWe are now ready to train the DQN. This process can take many hours, depending on how many episodes you wish to run through. As training occurs, this code will report both the loss and the average return. As training becomes more successful, the average return should increase. The losses reported reflect the average loss for individual training batches.
###Code
iterator = iter(dataset)
# (Optional) Optimize by wrapping some of the code in a graph
# using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, \
num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
for _ in range(collect_steps_per_iteration):
collect_step(train_env, agent.collect_policy, replay_buffer)
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, agent.policy, \
num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
###Output
_____no_output_____
###Markdown
VisualizationThe notebook can plot the average return over training iterations. The average return should increase as the program performs more training iterations.
###Code
iterations = range(0, num_iterations + 1, eval_interval)
plt.plot(iterations, returns)
plt.ylabel('Average Return')
plt.xlabel('Iterations')
plt.ylim(top=10)
###Output
_____no_output_____
###Markdown
VideosWe now have a trained model and observed its training progress on a graph. Perhaps the most compelling way to view an Atari game's results is a video that allows us to see the agent play the game. The following functions are defined so that we can watch the agent play the game in the notebook.
###Code
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):
filename = filename + ".mp4"
with imageio.get_writer(filename, fps=fps) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
return embed_mp4(filename)
###Output
_____no_output_____
###Markdown
First, we will observe the trained agent play the game.
###Code
create_policy_eval_video(agent.policy, "trained-agent")
###Output
_____no_output_____
###Markdown
For comparison, we observe a random agent play. While the trained agent is far from perfect, it does outperform the random agent by a considerable amount.
###Code
create_policy_eval_video(random_policy, "random-agent")
###Output
_____no_output_____ |
examples/example_03_preparing_for_LSTM.ipynb | ###Markdown
`LSTMPacker`A simple example of using `LSTMPacker`
###Code
# if you cloned the repository you can do:
import sys
sys.path.append('../')
import logging
import pandas
logging.getLogger().setLevel(logging.DEBUG)
###Output
_____no_output_____
###Markdown
Load some dataMore datasets are available here: https://archive.ics.uci.edu/ml/datasets.html_Lichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science._
###Code
df = pandas.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/00374/energydata_complete.csv')
df.head()
df.columns
###Output
_____no_output_____
###Markdown
Data RawI'm going to define my target as the total energy used in `Wh`, summing `Appliances` and `lights`. The data set will be composed of all columns except `date` (the dataset is already sorted), `rv1`, and `rv2` (see here for more details: https://archive.ics.uci.edu/ml/datasets/Appliances+energy+prediction)
###Code
x = df[['T1', 'RH_1', 'T2', 'RH_2', 'T3',
'RH_3', 'T4', 'RH_4', 'T5', 'RH_5', 'T6', 'RH_6', 'T7', 'RH_7', 'T8',
'RH_8', 'T9', 'RH_9', 'T_out', 'Press_mm_hg', 'RH_out', 'Windspeed',
'Visibility', 'Tdewpoint']]
y = (df['Appliances'] + df['lights']).to_frame()
###Output
_____no_output_____
###Markdown
Create a preprocessing pipeline using `pipesnake` for LSTM
###Code
from pipesnake.pipe import SeriesPipe
from pipesnake.transformers.dropper import DropDuplicates
from pipesnake.transformers.imputer import KnnImputer
from pipesnake.transformers.misc import ColumnRenamer
from pipesnake.transformers.scaler import MadScaler
from pipesnake.transformers.scaler import UnitLenghtScaler
my_pipe = SeriesPipe(transformers=[
ColumnRenamer(), # nomalize columns names
DropDuplicates(), # drop duplicated rows and cols
KnnImputer(x_cols='all'), # impute missing values
MadScaler(x_cols='all', y_cols='all'), # scale by feature (cols)
UnitLenghtScaler(x_cols='all'), # scale by feature vector (rows)
])
x_new, y_new = my_pipe.fit_transform(x, y)
x_new.head()
y_new.head()
_, y_org = my_pipe.inverse_transform(x=None, y=y_new)
y_org.head()
from pipesnake.transformers.misc import ToNumpy
from pipesnake.transformers.deeplearning import LSTMPacker
my_pipe.extend([
ToNumpy(), # returns x and y as numpy matrix
LSTMPacker(sequence_len=5),
])
x_new, y_new = my_pipe.fit_transform(x, y)
x_new
y_new
###Output
_____no_output_____ |
Exploring Hacker News Posts/Basics.ipynb | ###Markdown
Exploring Hacker News PostsIn this project we will explore posts that were submitted to Hacker News. Hacker News is a site started by the startup incubator Y Combinator, where user-submitted stories (known as "posts") are voted and commented upon, similar to reddit. Hacker News is extremely popular in technology and startup circles, and posts that make it to the top of Hacker News' listings can get hundreds of thousands of visitors as a result. DataThe data can be found [here](https://www.kaggle.com/hacker-news/hacker-news-posts). It contains almost 300,000 rows, each row representing a post. However, we use a version that has been reduced to approximately 20,000 rows by removing all submissions that did not receive any comments, and then randomly sampling from the remaining submissions. Descriptions of the columns:- `id`: The unique identifier from Hacker News for the post- `title`: The title of the post- `url`: The URL that the post links to, if the post has a URL- `num_points`: The number of points the post acquired, calculated as the total number of upvotes minus the total number of downvotes- `num_comments`: The number of comments that were made on the post- `author`: The username of the person who submitted the post- `created_at`: The date and time at which the post was submitted In this project, we are more interested in posts whose titles begin with either Ask HN or Show HN. Users submit Ask HN posts to ask the Hacker News community a question. Below are some examples: Ask HN: How to improve my personal website? Ask HN: Am I the only one outraged by Twitter shutting down share counts? Ask HN: Aby recent changes to CSS that broke mobile?Users submit Show HN posts to show the community a project, product, or something interesting. Below are some examples: Show HN: Wio Link ESP8266 Based Web of Things Hardware Development Platform' Show HN: Something pointless I made Show HN: Shanhu.io, a programming playground powered by e8vmOur goal is to compare the two types of posts to determine: Do Ask HN or Show HN posts receive more comments on average? Do posts created at a certain time receive more comments on average?
###Code
import pprint
pp = pprint.PrettyPrinter()
from csv import reader
with open('hacker_news.csv') as f:
read_file = reader(f)
hn = list(read_file)
pp.pprint(hn[:5])
###Output
[['id', 'title', 'url', 'num_points', 'num_comments', 'author', 'created_at'],
['12224879',
'Interactive Dynamic Video',
'http://www.interactivedynamicvideo.com/',
'386',
'52',
'ne0phyte',
'8/4/2016 11:52'],
['10975351',
'How to Use Open Source and Shut the Fuck Up at the Same Time',
'http://hueniverse.com/2016/01/26/how-to-use-open-source-and-shut-the-fuck-up-at-the-same-time/',
'39',
'10',
'josep2',
'1/26/2016 19:30'],
['11964716',
"Florida DJs May Face Felony for April Fools' Water Joke",
'http://www.thewire.com/entertainment/2013/04/florida-djs-april-fools-water-joke/63798/',
'2',
'1',
'vezycash',
'6/23/2016 22:20'],
['11919867',
'Technology ventures: From Idea to Enterprise',
'https://www.amazon.com/Technology-Ventures-Enterprise-Thomas-Byers/dp/0073523429',
'3',
'1',
'hswarna',
'6/17/2016 0:01']]
###Markdown
Removing Headers from a List of Lists
###Code
headers = hn[0]
hn = hn[1:]
pp.pprint(headers)
pp.pprint(hn[:5])
###Output
['id', 'title', 'url', 'num_points', 'num_comments', 'author', 'created_at']
[['12224879',
'Interactive Dynamic Video',
'http://www.interactivedynamicvideo.com/',
'386',
'52',
'ne0phyte',
'8/4/2016 11:52'],
['10975351',
'How to Use Open Source and Shut the Fuck Up at the Same Time',
'http://hueniverse.com/2016/01/26/how-to-use-open-source-and-shut-the-fuck-up-at-the-same-time/',
'39',
'10',
'josep2',
'1/26/2016 19:30'],
['11964716',
"Florida DJs May Face Felony for April Fools' Water Joke",
'http://www.thewire.com/entertainment/2013/04/florida-djs-april-fools-water-joke/63798/',
'2',
'1',
'vezycash',
'6/23/2016 22:20'],
['11919867',
'Technology ventures: From Idea to Enterprise',
'https://www.amazon.com/Technology-Ventures-Enterprise-Thomas-Byers/dp/0073523429',
'3',
'1',
'hswarna',
'6/17/2016 0:01'],
['10301696',
'Note by Note: The Making of Steinway L1037 (2007)',
'http://www.nytimes.com/2007/11/07/movies/07stein.html?_r=0',
'8',
'2',
'walterbell',
'9/30/2015 4:12']]
###Markdown
Extracting Ask HN and Show HN Posts
###Code
ask_posts = []
show_posts = []
other_posts = []
for post in hn:
title = post[1].lower()
if title.startswith('ask hn'):
ask_posts.append(post)
elif title.startswith('show hn'):
show_posts.append(post)
else:
other_posts.append(post)
print("Number of ask hn post {}".format(len(ask_posts)))
print("Number of show hn post {}".format(len(show_posts)))
print("Number of other post {}".format(len(other_posts)))
###Output
Number of ask hn post 1744
Number of show hn post 1162
Number of other post 17194
###Markdown
We separated the `ask posts`, `show posts` and `other posts` into 3 lists of lists. You can see that we have 1744 ask posts, 1162 show posts, and 17194 other posts. Below are the first five rows of each post type
###Code
print('ASK POSTS\n=====================')
pp.pprint(ask_posts[:5])
print('SHOW POSTS\n=====================')
pp.pprint(show_posts[:5])
print('OTHER POSTS\n=====================')
pp.pprint(other_posts[:5])
###Output
ASK POSTS
=====================
[['12296411',
'Ask HN: How to improve my personal website?',
'',
'2',
'6',
'ahmedbaracat',
'8/16/2016 9:55'],
['10610020',
'Ask HN: Am I the only one outraged by Twitter shutting down share counts?',
'',
'28',
'29',
'tkfx',
'11/22/2015 13:43'],
['11610310',
'Ask HN: Aby recent changes to CSS that broke mobile?',
'',
'1',
'1',
'polskibus',
'5/2/2016 10:14'],
['12210105',
'Ask HN: Looking for Employee #3 How do I do it?',
'',
'1',
'3',
'sph130',
'8/2/2016 14:20'],
['10394168',
'Ask HN: Someone offered to buy my browser extension from me. What now?',
'',
'28',
'17',
'roykolak',
'10/15/2015 16:38']]
SHOW POSTS
=====================
[['10627194',
'Show HN: Wio Link ESP8266 Based Web of Things Hardware Development '
'Platform',
'https://iot.seeed.cc',
'26',
'22',
'kfihihc',
'11/25/2015 14:03'],
['10646440',
'Show HN: Something pointless I made',
'http://dn.ht/picklecat/',
'747',
'102',
'dhotson',
'11/29/2015 22:46'],
['11590768',
'Show HN: Shanhu.io, a programming playground powered by e8vm',
'https://shanhu.io',
'1',
'1',
'h8liu',
'4/28/2016 18:05'],
['12178806',
'Show HN: Webscope Easy way for web developers to communicate with '
'Clients',
'http://webscopeapp.com',
'3',
'3',
'fastbrick',
'7/28/2016 7:11'],
['10872799',
'Show HN: GeoScreenshot Easily test Geo-IP based web pages',
'https://www.geoscreenshot.com/',
'1',
'9',
'kpsychwave',
'1/9/2016 20:45']]
OTHER POSTS
=====================
[['12224879',
'Interactive Dynamic Video',
'http://www.interactivedynamicvideo.com/',
'386',
'52',
'ne0phyte',
'8/4/2016 11:52'],
['10975351',
'How to Use Open Source and Shut the Fuck Up at the Same Time',
'http://hueniverse.com/2016/01/26/how-to-use-open-source-and-shut-the-fuck-up-at-the-same-time/',
'39',
'10',
'josep2',
'1/26/2016 19:30'],
['11964716',
"Florida DJs May Face Felony for April Fools' Water Joke",
'http://www.thewire.com/entertainment/2013/04/florida-djs-april-fools-water-joke/63798/',
'2',
'1',
'vezycash',
'6/23/2016 22:20'],
['11919867',
'Technology ventures: From Idea to Enterprise',
'https://www.amazon.com/Technology-Ventures-Enterprise-Thomas-Byers/dp/0073523429',
'3',
'1',
'hswarna',
'6/17/2016 0:01'],
['10301696',
'Note by Note: The Making of Steinway L1037 (2007)',
'http://www.nytimes.com/2007/11/07/movies/07stein.html?_r=0',
'8',
'2',
'walterbell',
'9/30/2015 4:12']]
###Markdown
Calculating the Average Number of Comments for Ask HN and Show HN Posts
###Code
total_ask_comments = 0
for post in ask_posts:
total_ask_comments += int(post[4])
avg_ask_comments = total_ask_comments/len(ask_posts)
print ('Average number of comments for ask posts: {:.2f}'.format(avg_ask_comments))
total_show_comments = 0
for post in show_posts:
total_show_comments += int(post[4])
avg_show_comments = total_show_comments/len(show_posts)
print ('Average number of comments for show posts: {:.2f}'.format(avg_show_comments))
###Output
Average number of comments for ask posts: 14.04
Average number of comments for show posts: 10.32
###Markdown
On average, ask posts receive more comments than show posts.Ask posts receive about 14 comments on average, compared with about 10 for show posts. People are more likely to answer a question than to comment on a show post, which is why ask posts are more likely to receive comments. Finding the Amount of Ask Posts and Comments by Hour Created
###Code
import datetime as dt
result_list = []
for post in ask_posts:
created_at = post[6]
num_comments = int(post[4])
result_list.append([created_at, num_comments])
counts_by_hour = {}
comments_by_hour = {}
date_format = '%m/%d/%Y %H:%M'
for row in result_list:
created_at = dt.datetime.strptime(row[0], date_format)
hour = created_at.strftime('%H')
if hour not in counts_by_hour:
counts_by_hour[hour] = 1
comments_by_hour[hour] = row[1]
else:
counts_by_hour[hour] += 1
comments_by_hour[hour] += row[1]
print('Posts created by hour:')
pp.pprint(counts_by_hour)
print('======================================')
print('Comments posted by hour:')
pp.pprint(comments_by_hour)
###Output
Posts created by hour:
{'00': 55,
'01': 60,
'02': 58,
'03': 54,
'04': 47,
'05': 46,
'06': 44,
'07': 34,
'08': 48,
'09': 45,
'10': 59,
'11': 58,
'12': 73,
'13': 85,
'14': 107,
'15': 116,
'16': 108,
'17': 100,
'18': 109,
'19': 110,
'20': 80,
'21': 109,
'22': 71,
'23': 68}
======================================
Comments posted by hour:
{'00': 447,
'01': 683,
'02': 1381,
'03': 421,
'04': 337,
'05': 464,
'06': 397,
'07': 267,
'08': 492,
'09': 251,
'10': 793,
'11': 641,
'12': 687,
'13': 1253,
'14': 1416,
'15': 4477,
'16': 1814,
'17': 1146,
'18': 1439,
'19': 1188,
'20': 1722,
'21': 1745,
'22': 479,
'23': 543}
###Markdown
Above, we created 2 dictionaries: `counts_by_hour` for the posts created per hour and `comments_by_hour` for the comments created by hour. The hours are in 24h format. For example, you can see that at `17` (5 pm) there were `100` posts and `1146` comments created. Calculating the Average Number of Comments for Ask HN Posts by HourNow let's calculate the average number of comments for posts created during each hour of the day. We'll use the counts_by_hour and comments_by_hour dictionaries.
###Code
avg_by_hour = []
for comment in comments_by_hour:
avg_by_hour.append([comment, comments_by_hour[comment]/counts_by_hour[comment]])
print("Average no's of comments per post:")
pp.pprint(avg_by_hour)
###Output
Average no's of comments per post:
[['00', 8.127272727272727],
['11', 11.051724137931034],
['22', 6.746478873239437],
['06', 9.022727272727273],
['18', 13.20183486238532],
['14', 13.233644859813085],
['05', 10.08695652173913],
['07', 7.852941176470588],
['15', 38.5948275862069],
['23', 7.985294117647059],
['04', 7.170212765957447],
['20', 21.525],
['19', 10.8],
['16', 16.796296296296298],
['01', 11.383333333333333],
['12', 9.41095890410959],
['10', 13.440677966101696],
['02', 23.810344827586206],
['21', 16.009174311926607],
['03', 7.796296296296297],
['17', 11.46],
['08', 10.25],
['13', 14.741176470588234],
['09', 5.5777777777777775]]
###Markdown
Sorting and Printing Values from a List of Lists
###Code
swap_avg_by_hour = []
for h, c in avg_by_hour:
swap_avg_by_hour.append([c,h])
pp.pprint(swap_avg_by_hour)
# sort by the average number of comments
sorted_swap = sorted(swap_avg_by_hour, reverse = True)
pp.pprint(sorted_swap[:5])
###Output
[[38.5948275862069, '15'],
[23.810344827586206, '02'],
[21.525, '20'],
[16.796296296296298, '16'],
[16.009174311926607, '21']]
###Markdown
As you can see above, we sorted the swapped list and printed the top 5 hours for Ask post comments. 15:00 (3 pm) has the highest average with about 38.59 comments per post, followed by 02:00 (2 am) with about 23.81
###Code
print ('Top 5 Hours for Ask Posts Comments', '\n')
for comment, hour in sorted_swap[:5]:
each_hour = dt.datetime.strptime(hour, '%H').strftime('%H:%M')
comment_per_hour = '{h}: {c:.2f} average comments per post'.format(h = each_hour, c = comment)
print(comment_per_hour)
###Output
Top 5 Hours for Ask Posts Comments
15:00: 38.59 average comments per post
02:00: 23.81 average comments per post
20:00: 21.52 average comments per post
16:00: 16.80 average comments per post
21:00: 16.01 average comments per post
|
Jupyter Notebooks/Tables.ipynb | ###Markdown
@author: Marcos Tulio Fermin Lopez
###Code
import pygame
import Data_Manager
###Output
_____no_output_____
###Markdown
This module generates the tables and displays them in a Pygame window.
###Code
def show_AWT_Table():
pygame.init()
data = Data_Manager.get_data()
# window properties
WIDTH = 1400
HEIGHT = 922
WHITE_COLOR = (255, 255, 255)
screen = pygame.display.set_mode((WIDTH, HEIGHT))
# frame rate
Clock = pygame.time.Clock()
# convert table to desired size and remove bg
tab1 = pygame.image.load('tables/table_1.PNG')
white = (255, 255, 255)
tab1.set_colorkey(white)
tab1.convert_alpha()
tab1 = pygame.transform.smoothscale(tab1, (600, 150))
tab2 = pygame.image.load('tables/table_2.PNG')
white = (255, 255, 255)
tab2.set_colorkey(white)
tab2.convert_alpha()
tab2 = pygame.transform.smoothscale(tab2, (600, 150))
title_font = pygame.font.SysFont('timesnewroman', 35)
table_font = pygame.font.SysFont('timesnewroman', 25)
table_font2 = pygame.font.SysFont('timesnewroman', 20)
Title_surf1 = title_font.render("Average Waiting Time", True, (0, 0, 0))
Title_surf2 = title_font.render("Cars Serviced", True, (0, 0, 0))
Title_surf3 = title_font.render("Average Waiting Time (%)", True, (0, 0, 0))
Title_surf4 = title_font.render("Cars Serviced (%)", True, (0, 0, 0))
# texts
time = table_font.render("Time (Sec)", True, (0, 0, 0))
carsServed = table_font.render("Cars Served", True, (0, 0, 0))
antenna = table_font.render("Antenna", True, (0, 0, 0))
camera = table_font.render("Camera", True, (0, 0, 0))
pir = table_font.render("PIR", True, (0, 0, 0))
antennaT = table_font.render(f"{str(data['Antenna']['AWT'])}", True, (0, 0, 0))
cameraT = table_font.render(f"{str(data['Camera']['AWT'])}", True, (0, 0, 0))
pirT = table_font.render(f"{str(data['PIR']['AWT'])}", True, (0, 0, 0))
antennaC = table_font.render(f"{str(data['Antenna']['carsServiced'])}", True, (0, 0, 0))
cameraC = table_font.render(f"{str(data['Camera']['carsServiced'])}", True, (0, 0, 0))
pirC = table_font.render(f"{str(data['PIR']['carsServiced'])}", True, (0, 0, 0))
antVsCam = table_font2.render("Antenna Vs Camera", True, (0, 0, 0))
antVsPir = table_font2.render("Antenna Vs PIR", True, (0, 0, 0))
Efficiency = table_font.render("Efficiency", True, (0, 0, 0))
# ****AWT Eff****
# camera and pir
awtAnt = (int(data['Antenna']['AWT']))
awtCam = (int(data['Camera']['AWT']))
awtPir = (int(data['PIR']['AWT']))
efficiencyVsCamAWTstr = str(round(((abs(awtAnt - awtCam))/(awtCam) * 100), 2))
efficiencyVsPirAWTstr = str(round(((abs(awtPir - awtAnt)/(awtPir)) * 100), 2))
effCamAntAwt = table_font.render(efficiencyVsCamAWTstr, True, (0, 0, 0))
effPirAntAwt = table_font.render(efficiencyVsPirAWTstr, True, (0, 0, 0))
# ****Cars Serviced Eff****
# camera and pir
carsAnt = (int(data['Antenna']['carsServiced']))
carsCam = (int(data['Camera']['carsServiced']))
carsPir = (int(data['PIR']['carsServiced']))
efficiencyVsCamCARSstr = str(round(((abs(carsAnt - carsCam)/(carsCam)) * 100), 2))
efficiencyVsPirCARSstr = str(round(((abs(carsAnt - carsPir)/(carsPir)) * 100), 2))
effCamAntCars = table_font.render(efficiencyVsCamCARSstr, True, (0, 0, 0))
effPirAntCars = table_font.render(efficiencyVsPirCARSstr, True, (0, 0, 0))
run = True
# game starts
while run:
# Display screen
screen.fill((WHITE_COLOR))
# Display table
screen.blit(tab1, (70, 200))
screen.blit(tab1, (730, 200))
screen.blit(Title_surf1, (200, 150))
screen.blit(Title_surf2, (930, 150))
screen.blit(tab2, (70, 650))
screen.blit(tab2, (730, 650))
screen.blit(Title_surf3, (200, 600))
screen.blit(Title_surf4, (930, 600))
for event in pygame.event.get():
if event.type == pygame.QUIT:
run = False
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_ESCAPE:
run = False
# print(pygame.mouse.get_pos())
'''*******Display table texts*****'''
# ------------AWT----------------
screen.blit(antenna, (252, 230))
screen.blit(camera, (405, 230))
screen.blit(pir, (570, 230))
# -------Cars Serviced------------
screen.blit(antenna, (910, 230))
screen.blit(camera, (1067, 230))
screen.blit(pir, (1230, 230))
'''************Data*************'''
# ----------AWT-------------------
screen.blit(antennaT, (270, 294))
screen.blit(cameraT, (410, 294))
screen.blit(pirT, (570, 294))
screen.blit(time, (93, 294))
# -------Cars Serviced-------------
screen.blit(antennaC, (950, 294))
screen.blit(cameraC, (1090, 294))
screen.blit(pirC, (1240, 294))
screen.blit(carsServed, (750, 294))
# -------Efficiency-----------------
screen.blit(antVsCam, (284, 687))
screen.blit(antVsPir, (495, 687))
screen.blit(effCamAntAwt, (346, 748))
screen.blit(effPirAntAwt, (555, 748))
screen.blit(Efficiency, (120, 748))
screen.blit(antVsCam, (950, 687))
screen.blit(antVsPir, (1155, 687))
screen.blit(effCamAntCars, (1000, 748))
screen.blit(effPirAntCars, (1210, 748))
screen.blit(Efficiency, (782, 748))
pygame.display.flip()
Clock.tick(10)
pygame.display.set_caption(
"Marcos Fermin's Dynamic Traffic Lights Simulator - EE Capstone Project - Fall 2021")
if __name__ == '__main__':
show_AWT_Table()
show_AWT_Table()
###Output
_____no_output_____ |
EDA_and_model.ipynb | ###Markdown
Exploratory Data Analysis US Bank Wages dataset
###Code
# Import relevant libraries
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Read in data and convert column names to lowercase
us_bank_wages = pd.read_csv("us_bank_wages/us_bank_wages.txt", sep="\t", index_col=0)
us_bank_wages.columns = us_bank_wages.columns.str.lower()
###Output
_____no_output_____
###Markdown
Dataset Characteristics ***Column Names and descriptions**** **SALARY**: current yearly salary in dollars [\$]* **EDUC**: education (number of finished years)* **SALBEGIN**: yearly salary at employee's first position at same bank in dollars [\$]* **GENDER**: gender variable (0 for females, 1 for males)* **MINORITY**: minority variable (0 for non-minorities, 1 for minorities)* **JOBCAT**: job category (1 for administrative jobs, 2 for custodial jobs, 3 for management jobs) **JOBCAT*** *administrative*: broad range of different duties, including answering phones, speaking with clients, clerical work (including maintaining records and entering data)* *custodial*: e.g. asset/investment manager* *management*: e.g. branch manager
###Code
# Top 5 rows of dataset
us_bank_wages.head()
# Bottom 5 rows of dataset
us_bank_wages.tail()
# Index and rows/columns, data type and non-null values
us_bank_wages.info()
# Overview of key statistical data
us_bank_wages.describe()
# No. of unique values for rows educ, gender, minority, jobcat
print(f'Unique values for educ column: {us_bank_wages.educ.nunique()}')
print(f'Unique values for gender column: {us_bank_wages.gender.nunique()}')
print(f'Unique values for minority column: {us_bank_wages.minority.nunique()}')
print(f'Unique values for jobcat column: {us_bank_wages.jobcat.nunique()}')
# Actual values for rows educ, gender, minority, jobcat
print(f'Values for educ: {us_bank_wages.educ.unique()}')
print(f'Values for gender: {us_bank_wages.gender.unique()}')
print(f'Values for minority: {us_bank_wages.minority.unique()}')
print(f'Vlues for jobcat: {us_bank_wages.jobcat.unique()}')
###Output
Values for educ: [15 16 12 8 19 17 18 14 20 21]
Values for gender: [1 0]
Values for minority: [0 1]
Vlues for jobcat: [3 1 2]
###Markdown
***NOTE:**** educ = category* educ and jobcat need to be converted to dummy variables
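A minimal sketch of that conversion (hypothetical cell; the `drop_first` choice is an assumption to avoid collinear dummies, not something done elsewhere in this notebook):
###Code
# sketch: one-hot encode the categorical columns before any regression
dummies = pd.get_dummies(us_bank_wages, columns=['educ', 'jobcat'], drop_first=True)
dummies.head()
###Output
_____no_output_____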
###Code
# Quick look at histogrammes to confirm our findings and double check if something was missed
us_bank_wages.hist(bins=15, figsize=(15, 10))
# Outlier check - compute absolute Z-score of each value in the column, relative to column mean and SD
us_bank_wages[(np.abs(stats.zscore(us_bank_wages)) < 3).all(axis=1)]
us_bank_wages.shape
###Output
_____no_output_____
###Markdown
Summary (a) * 6 columns, 474 instances * 4 categorical columns * all integers * no null values * salary: A) min 15750 Dollars, B) max 135000 Dollars, C) mean 34419 * salbegin similar pattern to salary * high salaries -> right skewed -> convert salary and salbegin to log scale for regression Correlations
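First, a quick aside: a minimal sketch of the log transform suggested in Summary (a) (hypothetical cell; it reuses the `numpy` import above):
###Code
# sketch: log-transform the right-skewed salary columns to make them more symmetric
log_salaries = us_bank_wages[['salary', 'salbegin']].apply(np.log)
log_salaries.hist(bins=15, figsize=(10, 4))
###Output
_____no_output_____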
###Code
# Heatmap to give a quick overview
corr = us_bank_wages.corr()
sns.heatmap(corr, cmap = 'coolwarm', annot= True);
###Output
_____no_output_____
###Markdown
Summary (b)FOR STARTING SALARY* STRONG * Starting salary (salbegin) appears to have the strongest positive relationship with salary. * The job description (jobcategory) also shows a strong relationship with the starting salary.* MEDIUM * Education also appears to correlate well with the starting salary.* LOW * There appears to be little to no correlation between gender/minority and starting salary.-> investigate furtherFOR OTHER FEATURES* Minority does not appear to correlate well with any parameter -> check how many minorities* education appears to correlate somewhat with jobcat
###Code
# Count number of minorities in dataset
us_bank_wages['minority'].value_counts()
###Output
_____no_output_____
###Markdown
**Note**: Number of minorities: 104 out of a total of 474 -> not too low
###Code
# Scatter plot for the continuous variables
us_bank_wages.plot.scatter(x='salary', y='salbegin')
###Output
_____no_output_____
###Markdown
Summary (c)- Strong relationship between current salary and starting salary for low wages.- More variability for medium wages.- Not enough data for high wages.- CAUTION: As there is no information regarding the time-lapse between starting salary and current salary, this relationship can only be used qualitatively (not quantitatively) i.e. ***high earners stay high earners & salaries increase over time***.
###Code
# Take a closer look at the categorical features side-by-side
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=(25,3))
for xcol, ax in zip(['educ', 'gender', 'minority', 'jobcat'], axes):
us_bank_wages.plot(kind='scatter', x=xcol, y='salary', ax=ax, alpha=0.4)
# Further investigate education
sns.catplot(x='educ', y="salary", data=us_bank_wages)
###Output
_____no_output_____
###Markdown
Summary (d)A relationship between salary and the categorical variables is evident, but requires further investigation to check for cross interactions. Explore interaction effects
###Code
# Prepare dataframe, i.e. insert labels for categorical values
us_bank_wages_2 = us_bank_wages.copy()
us_bank_wages_2['gender'].replace(0, 'Female',inplace=True)
us_bank_wages_2['gender'].replace(1, 'Male',inplace=True)
us_bank_wages_2['minority'].replace(0, 'Non-Minority',inplace=True)
us_bank_wages_2['minority'].replace(1, 'Minority',inplace=True)
us_bank_wages_2['jobcat'].replace(1, 'Adim',inplace=True)
us_bank_wages_2['jobcat'].replace(2, 'Custodial',inplace=True)
us_bank_wages_2['jobcat'].replace(3, 'Management',inplace=True)
# Group/bin education years according to education level, i.e. high school/training, BSc, MSc, PhD
us_bank_wages_3 = us_bank_wages_2.copy()
us_bank_wages_3['education'] = pd.cut(us_bank_wages_3['educ'], [0,8,16,17,19],
labels=['school/training', 'BSc', 'MSc', 'PhD'])
us_bank_wages_3.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 474 entries, 0 to 473
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 salary 474 non-null int64
1 educ 474 non-null int64
2 salbegin 474 non-null int64
3 gender 474 non-null object
4 minority 474 non-null object
5 jobcat 474 non-null object
6 education 471 non-null category
dtypes: category(1), int64(3), object(3)
memory usage: 42.7+ KB
###Markdown
Binning generated a couple of NaNs -> needs to be fixed Part 1: Effect of Education and Jobtype
###Code
# Boxplot for salary versus jobtype by education
plt.figure(figsize=(10,7))
sns.boxplot(
x='jobcat',
y='salary',
hue='education',
data=us_bank_wages_3,palette='icefire'
).set(
xlabel='Jobtype',
ylabel='Salary'
)
plt.title('Annual Salary ($) versus Jobtype by Education');
###Output
_____no_output_____
###Markdown
Part 2: Gender Differences
###Code
# Salary male versus female
plt.figure(figsize=(4,4))
sns.boxplot(
x='gender',
y='salary',
data=us_bank_wages_3,palette='Blues'
).set(
xlabel='Gender',
ylabel='Salary'
)
plt.title('Annual Salary ($) by Gender');
# Boxplot for salary versus jobtype by gender
plt.figure(figsize=(10,7))
sns.boxplot(
x='jobcat',
y='salary',
hue='gender',
data=us_bank_wages_3,palette='icefire'
).set(
xlabel='Jobtype',
ylabel='Salary'
)
plt.title('Annual Salary ($) for Jobtype by Gender');
# Boxplot for salary versus education by gender
plt.figure(figsize=(10,7))
sns.boxplot(
x='education',
y='salary',
hue='gender',
data=us_bank_wages_3,palette='icefire'
).set(
xlabel='Education',
ylabel='Salary'
)
plt.title('Annual Salary ($) for Education by Gender');
###Output
_____no_output_____
###Markdown
Part 3: (Non)-Minorities
###Code
# Salary for (non)-minorities
plt.figure(figsize=(4,4))
sns.boxplot(
x='minority',
y='salary',
data=us_bank_wages_3,palette='Blues'
).set(
xlabel='Ethnicity',
ylabel='Salary'
)
plt.title('Annual Salary ($) by (Non)-Minorities');
# Boxplot for salary versus minority by gender
plt.figure(figsize=(10,7))
sns.boxplot(
x='minority',
y='salary',
hue='gender',
data=us_bank_wages_3,palette='icefire'
).set(
xlabel='Ethnicity',
ylabel='Salary'
)
plt.title('Annual Salary ($) for (Non)-Minority by Gender');
# Boxplot for salary versus minority by jobcat
plt.figure(figsize=(10,7))
sns.boxplot(
x='minority',
y='salary',
hue='jobcat',
data=us_bank_wages_3,palette='icefire'
).set(
xlabel='Ethnicity',
ylabel='Salary'
)
plt.title('Annual Salary ($) for (Non)-Minority by Jobtype');
# Boxplot for salary versus minority by education
plt.figure(figsize=(10,7))
sns.boxplot(
x='minority',
y='salary',
hue='education',
data=us_bank_wages_3,palette='icefire'
).set(
xlabel='Ethnicity',
ylabel='Salary'
)
plt.title('Annual Salary ($) for (Non)-Minority by Education');
###Output
_____no_output_____
###Markdown
Conclusion: Part 1:* A clear positive relationship between education, jobtype, and salary Part 2:* A clear positive relationship between gender and salary, influenced by the positive relationships between gender and education and between gender and jobtype, resulting in a lower net salary for women Part 3:* A divergent relationship between minority status and salary * Trend towards lower net salary for women seen for minorities and non-minorities * Difference between salaries of non-minorities and minorities increases with increasing education * Admin positions: minorities earn less * Management positions: the reverse, i.e. they earn more than non-minorities NoteCustodians appear to be a special type of employee, as their salary shows less variation and they are restricted to male employees of a specific education level Addendum Salary mean, min/max & count values per feature
###Code
# Drop NaN rows for education evaluations
us_bank_wages_4 = us_bank_wages_3.copy()
us_bank_wages_4.dropna(subset = ['education'], inplace=True)
# Salary mean, min/max & count per education
for i in us_bank_wages_4.education.unique():
print(i, us_bank_wages_4[(us_bank_wages_4['education'] == i)].salary.mean())
print(i, us_bank_wages_4[(us_bank_wages_4['education'] == i)].salary.min())
print(i, us_bank_wages_4[(us_bank_wages_4['education'] == i)].salary.max())
print(i, us_bank_wages_4[(us_bank_wages_4['education'] == i)].education.count())
# Salary mean, min/max & count per jobcat
for i in us_bank_wages_3.jobcat.unique():
print(i, us_bank_wages_3[(us_bank_wages_3['jobcat'] == i)].salary.mean())
print(i, us_bank_wages_3[(us_bank_wages_3['jobcat'] == i)].salary.min())
print(i, us_bank_wages_3[(us_bank_wages_3['jobcat'] == i)].salary.max())
print(i, us_bank_wages_3[(us_bank_wages_3['jobcat'] == i)].education.count())
# Salary mean, min/max & count per gender
for i in us_bank_wages_3.gender.unique():
print(i, us_bank_wages_3[(us_bank_wages_3['gender'] == i)].salary.mean())
print(i, us_bank_wages_3[(us_bank_wages_3['gender'] == i)].salary.min())
print(i, us_bank_wages_3[(us_bank_wages_3['gender'] == i)].salary.max())
print(i, us_bank_wages_3[(us_bank_wages_3['gender'] == i)].education.count())
# Salary mean, min/max & count per minority
for i in us_bank_wages_3.minority.unique():
print(i, us_bank_wages_3[(us_bank_wages_3['minority'] == i)].salary.mean())
print(i, us_bank_wages_3[(us_bank_wages_3['minority'] == i)].salary.min())
print(i, us_bank_wages_3[(us_bank_wages_3['minority'] == i)].salary.max())
print(i, us_bank_wages_3[(us_bank_wages_3['minority'] == i)].education.count())
###Output
Non-Minority 36023.31081081081
Non-Minority 15750
Non-Minority 135000
Non-Minority 367
Minority 28713.94230769231
Minority 16350
Minority 100000
Minority 104
|
Week06/PandasExample.ipynb | ###Markdown
Pandas Example These tutorials are also available through an email course; please visit http://www.hedaro.com/pandas-tutorial to sign up today. **Create Data** - We begin by creating our own data set for analysis. This prevents the end user reading this tutorial from having to download any files to replicate the results below. We will export this data set to a text file so that you can get some experience pulling data from a text file. **Get Data** - We will learn how to read in the text file. The data consist of baby names and the number of babies born with each name in the year 1880. **Prepare Data** - Here we will simply take a look at the data and make sure it is clean. By clean I mean we will take a look inside the contents of the text file and look for any anomalies. These can include missing data, inconsistencies in the data, or any other data that seems out of place. If any are found we will then have to make decisions on what to do with these records. **Analyze Data** - We will simply find the most popular name in a specific year. **Present Data** - Through tabular data and a graph, clearly show the end user the most popular name in a specific year. > The ***pandas*** library is used for all the data analysis excluding a small piece of the data presentation section. The ***matplotlib*** library will only be needed for the data presentation section. Importing the libraries is the first step we will take in the lesson.
###Code
# Import all libraries needed for the tutorial
# General syntax to import specific functions in a library:
##from (library) import (specific library function)
from pandas import DataFrame, read_csv
# General syntax to import a library but no functions:
##import (library) as (give the library a nickname/alias)
import matplotlib.pyplot as plt
import pandas as pd #this is how I usually import pandas
import sys #only needed to determine Python version number
import matplotlib #only needed to determine Matplotlib version number
# Enable inline plotting
%matplotlib inline
print('Python version ' + sys.version)
print('Pandas version ' + pd.__version__)
print('Matplotlib version ' + matplotlib.__version__)
###Output
_____no_output_____
###Markdown
Create Data The data set will consist of 5 baby names and the number of births recorded for that year (1880).
###Code
# The initial set of baby names and birth rates
names = ['Bob','Jessica','Mary','John','Mel']
births = [968, 155, 77, 578, 973]
print (births[3])
###Output
_____no_output_____
###Markdown
To merge these two lists together we will use the ***zip*** function.
###Code
zip?
BabyDataSet = list(zip(names,births))
BabyDataSet
###Output
_____no_output_____
###Markdown
We are basically done creating the data set. We will now use the ***pandas*** library to export this data set to a csv file. ***df*** will be a ***DataFrame*** object. You can think of this object as holding the contents of the BabyDataSet in a format similar to a sql table or an excel spreadsheet. Let's take a look below at the contents inside ***df***.
###Code
df = pd.DataFrame(data = BabyDataSet, columns=['Names', 'Births'])
df
###Output
_____no_output_____
###Markdown
Export the dataframe to a ***csv*** file. We can name the file ***births1880.csv***. The function ***to_csv*** will be used to export the file. The file will be saved in the same location as the notebook unless specified otherwise.
###Code
df.to_csv?
###Output
_____no_output_____
###Markdown
The only parameters we will use are ***index*** and ***header***. Setting these parameters to False will prevent the index and header names from being exported. Change the values of these parameters to get a better understanding of their use.
###Code
df.to_csv('births1880.csv',index=True,header=True)
###Output
_____no_output_____
###Markdown
Get Data To pull in the csv file, we will use the pandas function *read_csv*. Let us take a look at this function and what inputs it takes.
###Code
read_csv?
###Output
_____no_output_____
###Markdown
Even though this function has many parameters, we will simply pass it the location of the text file. Location = births1880.csv ***Note:*** Depending on where you save your notebooks, you may need to modify the location above.
###Code
Location = r'births1880.csv'
df2 = pd.read_csv(Location,index_col=0)
###Output
_____no_output_____
###Markdown
Notice the ***r*** before the string. Since backslashes are special characters in Python strings but are used to separate folders in a Windows path, prefixing the string with an ***r*** (a raw string) will ensure that the file name is read correctly on any OS
###Code
df2
###Output
_____no_output_____
###Markdown
The ***read_csv*** function treated the first record in the csv file as the header names. If the file has no header, we can pass the ***header*** parameter to the *read_csv* function and set it to ***None*** (which means null in Python). You can think of the numbers [0,1,2,3,4] as the row numbers in an Excel file. In pandas these are part of the ***index*** of the dataframe. You can think of the index as the primary key of a sql table with the exception that an index is allowed to have duplicates. ***[Names, Births]*** can be thought of as column headers similar to the ones found in an Excel spreadsheet or sql database. You can delete the CSV file now
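Before deleting it, here is a quick hypothetical illustration of the ***header=None*** behavior described above; note how the original header row shows up as an ordinary data row:
###Code
# sketch: re-read the same file without header inference
pd.read_csv(Location, header=None).head()
###Output
_____no_output_____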
###Code
import os
os.remove(Location)
###Output
_____no_output_____
###Markdown
Prepare Data The data we have consists of baby names and the number of births in the year 1880. We already know that we have 5 records and none of the records are missing (non-null values). The ***Names*** column at this point is of no concern since it most likely is just composed of alpha numeric strings (baby names). There is a chance of bad data in this column but we will not worry about that at this point of the analysis. The ***Births*** column should just contain integers representing the number of babies born in a specific year with a specific name. We can check if all the data is of the data type integer. It would not make sense to have this column have a data type of float. I would not worry about any possible outliers at this point of the analysis. Realize that aside from the check we did on the "Names" column, briefly looking at the data inside the dataframe should be as far as we need to go at this stage of the game. As we continue in the data analysis life cycle we will have plenty of opportunities to find any issues with the data set.
###Code
# Check data type of the columns
df.dtypes
# Check data type of Births column
df.Births.dtype
###Output
_____no_output_____
###Markdown
As you can see the *Births* column is of type ***int64***, thus no floats (decimal numbers) or alpha numeric characters will be present in this column. Analyze Data To find the most popular name or the baby name with the highest birth rate, we can do one of the following. * Sort the dataframe and select the top row* Use the ***max()*** attribute to find the maximum value
###Code
# Method 1:
Sorted = df.sort_values(['Births'], ascending=False)
Sorted.head(2)
# Method 2:
print (df['Births'].max(),df['Births'].min())
###Output
_____no_output_____
###Markdown
Present Data Here we can plot the ***Births*** column and label the graph to show the end user the highest point on the graph. In conjunction with the table, the end user has a clear picture that **Mel** is the most popular baby name in the data set. ***plot()*** is a convenient attribute where pandas lets you painlessly plot the data in your dataframe. We learned how to find the maximum value of the Births column in the previous section. Now finding the actual baby name behind the 973 value looks a bit tricky, so let's go over it. **Explain the pieces:** *df['Names']* - This is the entire list of baby names, the entire Names column *df['Births']* - This is the entire list of Births in the year 1880, the entire Births column *df['Births'].max()* - This is the maximum value found in the Births column [df['Births'] == df['Births'].max()] **IS EQUAL TO** [Find all of the records in the Births column where it is equal to 973] df['Names'][df['Births'] == df['Births'].max()] **IS EQUAL TO** Select all of the records in the Names column **WHERE** [The Births column is equal to 973] An alternative way could have been to use the ***Sorted*** dataframe: Sorted['Names'].head(1).value The ***str()*** function simply converts an object into a string.
###Code
# Create graph
df['Births'].plot()
# Maximum value in the data set
MaxValue = df['Births'].max()
# Name associated with the maximum value
MaxName = df['Names'][df['Births'] == df['Births'].max()].values
# Text to display on graph
Text = str(MaxValue) + " - " + MaxName
# Add text to graph
plt.annotate(Text, xy=(1, MaxValue), xytext=(8, 0),
xycoords=('axes fraction', 'data'), textcoords='offset points')
print("The most popular name")
df[df['Births'] == df['Births'].max()]
#Sorted.head(1) can also be used
###Output
_____no_output_____ |
Notebook-Class-exercises/Hello_world.ipynb | ###Markdown
Hello World
###Code
import os
your_name = "Arvind Sathi"
print("Hello ", your_name)
current_dir = os.getcwd()
current_dir
###Output
_____no_output_____ |
src/exp_hyperparams/visualization.ipynb | ###Markdown
Load and parse data
###Code
# imports for this notebook (restored here; the original import cell appears to be missing)
import pickle
from pathlib import Path

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams
from matplotlib.colors import LinearSegmentedColormap
from matplotlib.ticker import StrMethodFormatter

def load_results():
results = {}
results_dir = Path('results')
for dataset_results_dir in results_dir.iterdir():
dataset_results = {}
for dataset_results_path in dataset_results_dir.iterdir():
with open(dataset_results_path, 'rb') as handle:
dataset_results = {**dataset_results, **pickle.load(handle)}
results[dataset_results_dir.stem] = dataset_results
return results
ALPHAS = np.array([0.9, 0.99, 0.999, 0.9999, 0.99999])
BETAS = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
ETAS = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
RESULTS = load_results()
def parse_results(dataset_name, beta, metric):
values = np.zeros((ALPHAS.size, ETAS.size), dtype=float)
data = {(alpha, eta): d for (alpha, b, eta), d in RESULTS[dataset_name].items() if b == beta}
for (alpha, eta), res in data.items():
j = np.argwhere(ALPHAS==alpha)[0,0]
i = np.argwhere(ETAS==eta)[0,0]
if metric == 'min':
min_losses = [min(d['losses']) for d in res.values()]
loss = np.log(np.mean(min_losses))
elif metric == 'last':
last_losses = [d['losses'][-1] for d in res.values()]
loss = np.log(np.mean(last_losses))
values[i, j] = loss
return values
###Output
_____no_output_____
###Markdown
Find optimal configuration for each dataset
###Code
for dataset_name, data in RESULTS.items():
min_loss = np.inf
min_loss_std = None
last_loss = None
last_loss_std = None
opt_conf = None
for c, res in data.items():
c_min_losses = [min(d['losses']) for d in res.values()]
c_min_loss = np.mean(c_min_losses)
if c_min_loss < min_loss:
opt_conf = c
min_loss = c_min_loss
min_loss_std = np.std(c_min_losses)
c_last_losses = [d['losses'][-1] for d in res.values()]
last_loss = np.mean(c_last_losses)
last_loss_std = np.std(c_last_losses)
print(f'\n{dataset_name}')
print(f'Opt conf: {opt_conf}')
print(f'Min loss: {round(min_loss, 3)} +- {round(min_loss_std, 3)}')
print(f'Last loss: {round(last_loss, 3)} +- {round(last_loss_std, 3)}')
###Output
_____no_output_____
###Markdown
Plot data
###Code
# Matplotlib font configuration
rcParams['font.size'] = 16
rcParams['font.weight'] = 'normal'
rcParams['font.family'] = 'serif'
rcParams['text.usetex'] = True
# Matplotlib color map
cmap = LinearSegmentedColormap.from_list('mycmap', ['#00429d', '#4771b2', '#73a2c6', '#a5d5d8', '#ffffe0',
'#ffbcaf', '#f4777f', '#cf3759', '#93003a'])
def plot(dataset_name, beta, metric):
"""Plot heatmap"""
values = parse_results(dataset_name, beta, metric)
fig, ax = plt.subplots(figsize=(4.5, 3.5))
mappable = ax.imshow(values, cmap=cmap, aspect='auto', interpolation='none', origin='lower')
# Format color bar
cbar = fig.colorbar(mappable=mappable, ax=ax, pad=0.02)
cbar.ax.yaxis.set_label_text(r'$\ln{(L_{I})}$')
cbar.ax.yaxis.set_label_coords(5, 0.51)
cbar.ax.yaxis.set_major_formatter(StrMethodFormatter('{x:.1f}'))
# Format x-axis
ax.xaxis.set_ticks(np.arange(0, ALPHAS.shape[0]))
x_labels = ALPHAS
ax.xaxis.set_ticklabels(x_labels, rotation=30)
ax.xaxis.set_label_text(r'$\alpha$')
ax.xaxis.set_label_coords(0.5, -0.22)
# Format y-axis
ax.yaxis.set_ticks(np.arange(0, ETAS.shape[0]))
y_labels = ETAS
ax.yaxis.set_ticklabels(y_labels)
ax.yaxis.set_label_text(r'$\eta$', rotation=0)
ax.yaxis.set_label_coords(-0.2, 0.47)
# Annotate heatmap
kw = {'horizontalalignment': 'center', 'verticalalignment': 'center'}
intvl = values.max() - values.min()
max_threshold = values.min() + 0.85 * intvl
min_threshold = values.min() + 0.15 * intvl
for i in range(values.shape[0]):
for j in range(values.shape[1]):
white = values[i, j] > max_threshold or values[i, j] < min_threshold
textcolor = 'w' if white else 'k'
kw.update(color=textcolor)
value = '{0:.2f}'.format(values[i, j])
ax.text(j, i, value, **kw)
# Save figure as pdf
title = f'{dataset_name}-{metric}-{beta}'
fig.savefig(f"figs/{title}.pdf".lower(), bbox_inches='tight')
ax.set_title(title)
plt.show()
for dataset_name in RESULTS.keys():
for beta in BETAS:
plot(dataset_name, beta, 'min')
plot(dataset_name, beta, 'last')
###Output
_____no_output_____ |
cso_v_snodas.ipynb | ###Markdown
Create flags for the CSO points that fall outside the climatological min and max values of the corresponding SNODAS data
###Code
flag = []
for i in range(len(csodf)):
    # True when the CSO depth (cm converted to mm) lies within the SNODAS
    # climatological bounds with a 50 mm buffer; mirrors the vectorized qcflag below
    within = (csodf.depth.iloc[i] * 10 > csodf.snodas_min.iloc[i] - 50) and \
             (csodf.depth.iloc[i] * 10 < csodf.snodas_max.iloc[i] + 50)
    flag.append(within)
csodf['qcflag']=(csodf.depth*10>csodf.snodas_min-50) & (csodf.depth*10<csodf.snodas_max+50)
csodf['cso_hs']=csodf.depth*10
csodf[csodf.qcflag==False]
import numpy as np
import matplotlib.pyplot as plt
# create data
x = csodf.depth[csodf.qcflag==True]*10
y = csodf.snodas_hs[csodf.qcflag==True]
x_flag = csodf.depth[csodf.qcflag==False]*10
y_flag = csodf.snodas_hs[csodf.qcflag==False]
plt.figure(1, figsize=[7.5,6])
plt.plot(np.arange(0,5000), np.arange(0,5000),'r')
plt.scatter(x, y,c='turquoise',label='within climatological min/max')
plt.scatter(x_flag, y_flag,c='orange',label='outlier')
plt.ylabel('SNODAS Hs [mm]')
plt.ylim([0,5000])
plt.xlabel('CSO Hs [mm]')
plt.xlim([0,5000])
plt.legend()
csodf
import geopandas as gpd
csodf = gpd.read_file('CSO_SNODAS.geojson')
csodf
from rasterstats import zonal_stats, point_query
stats = zonal_stats(cso_data_path, output_tif)
pts = point_query(cso_data_path, output_tif)
###Output
_____no_output_____ |
4th_100/problem313.ipynb | ###Markdown
Given $n \geq m$, we have$$S(m,n) = 6n+2m-13$$Why? The following are a few observations for choosing the optimal moves,- Assume that $n \geq m$, i.e. the width $n$ is not smaller than the height $m$.- For each move, we want to move the colored block either rightward or downward.- If the blank block is on the left of the colored block, then moving the colored block rightward takes 5 moves, while moving it downward takes 3 moves.- Similarly, if the blank block is on top of the colored block, then moving the colored block rightward takes 3 moves, while moving it downward takes 5 moves.- So, the optimal sequence of moves is one of (rightward, downward, rightward, downward, rightward, ...) and (downward, rightward, downward, rightward, downward, ...), where rightward means moving the colored block to the right and downward means moving the colored block to the block below it. If the colored block is in the bottom row, then the only option is moving to the right (this is always possible since we assume width $\geq$ height).- Note that, since width $\geq$ height, we prefer to move the colored block to the right first (moving the block downward first would result in one more 5-move step). For example, in the $5\times4$ board example, moving downward first would take 27 moves to reach the target, while moving rightward first would take 25 moves.So, the optimal sequence of moves is,- The initial step is to move the blank block to the top left corner, which takes $n+m-2$ moves.- After that, we alternate rightward and downward steps, $2\times(m-1)$ steps in total, for the colored block to reach the bottom row; each step takes $3$ moves.- Finally, we move rightward $n-m-1$ times; each step takes $5$ moves.So,$$\begin{array}{ll}S(m,n) &= n + m - 2 + 3 \times \left[2\times(m-1)\right] + 5 \times(n-m-1) \\ &= 6n + 2m - 13\end{array}$$  We want $S(m,n) = 6n+2m-13 = p^2$, where $p$ is a prime number and $n \geq m$. Obviously, $p \neq 2$ since $6n+2m-13$ is odd.Let $\alpha = \frac{p^2+13}{2}$. We have that $p$ is odd, which means $p^2 + 13$ is an even number, so $\alpha$ is an integer.Also, $6n+2m = p^2+13$ gives $\alpha = \frac{p^2+13}{2} = 3n + m$, which means,$$n = \frac{\alpha - m}{3}$$Also, we have that $n$ is an integer, so $\alpha \equiv m \pmod 3$.And also, $n \geq m \Rightarrow 3n \geq 3m \Rightarrow 3n+m \geq 4m \Rightarrow \alpha \geq 4m \Rightarrow m \leq \frac{\alpha}{4}$.So, for a fixed $\alpha$, we can count the number of pairs $(n,m)$ by counting the number of possible values of $m$, which can be done by counting how many values of $m$ satisfy $\alpha \equiv m \pmod 3$ and $m \leq \frac{\alpha}{4}$. Well, if $x \equiv k \pmod 3$, then so is $x+3$ and $x+6$ and $x+9$ and so on.
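A quick sanity check of the closed form (hypothetical cell): the $5\times4$ board should give $25$ moves.
###Code
# sketch: verify S(m, n) = 6n + 2m - 13 against the worked 5x4 example
S = lambda m, n: 6 * n + 2 * m - 13
print(S(4, 5))  # expected: 25
###Output
_____no_output_____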
###Code
N = 10**6
primes = []
with open("../inputs/primes_1e6.txt", "r") as f:
for p in f.readlines():
next_prime = int(p.strip())
if next_prime > N:
break
primes.append(next_prime)
def count_n_mod_3_equal_m(n, m):
while not (n % 3) == m:
n -= 1
return (n+(3-m))//3
ans = 0
for prime in primes:
if prime == 2:
continue
alpha = (prime**2 + 13) // 2
count = count_n_mod_3_equal_m(alpha // 4, alpha % 3) * 2
if alpha % 3 == 1:
count -= 2
ans += count
print(ans)
###Output
2057774861813004
|
_notebooks/2020-12-23- Descriptive-and-Inferential-Statistics.ipynb | ###Markdown
"Tutorial 03: Descriptive and Inferential Statistics"> "Introduction to descriptive and inferential statistics"- toc: true - badges: true- comments: true- categories: [basic-stats]- sticky_rank: 3
###Code
#collapse-hide
## required packages/modules
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.path import Path
from matplotlib import rcParams
from IPython.display import display, HTML
## default fontstyle
rcParams["font.family"] = "Ubuntu"
def get_patch(verts):
codes = [Path.MOVETO] + [Path.CURVE4] * (len(verts) - 1)
path = Path(verts, codes)
patch = patches.PathPatch(path, facecolor='none', lw=1.5, edgecolor="#F2F2F2", alpha=0.7)
return patch
## create subplot
fig, ax = plt.subplots(facecolor="#121212", figsize=(12,8))
ax.set_facecolor("#121212")
## props for text
props = dict(facecolor="none", edgecolor="#D3D3D3", boxstyle="round,pad=0.6", zorder=3)
## coordinates to plot line
line_coords = [
[(5, 9.5), (5, 8.5), (4.5, 8.5), (1, 8)],
[(5, 9.5), (5, 8.5), (5.5, 8.5), (9, 8)],
[(1, 6.5), (1, 5.9), (0.5, 5.9), (-2, 5.4)],
[(1, 6.5), (1, 5.9), (1.5, 5.9), (4, 5.4)],
[(-2, 4), (-2, 3.5), (-2.5, 3.5), (-4, 3)],
[(-2, 4), (-2, 3.5), (-2, 3.4), (-2, 3)],
[(-2, 4), (-2, 3.5), (-1.5, 3.5), (0, 3)],
[(4, 4), (4, 3.5), (3.5, 3.5), (2, 3)],
[(4, 4), (4, 3.5), (4, 3.4), (4, 3)],
[(4, 4), (4, 3.5), (4.5, 3.5), (6, 3)],
]
## add lines
for verts in line_coords:
ax.add_patch(get_patch(verts))
## text coordinates
text_coord = [
(5, 10.1), (1, 7.23), (9, 7.23),
(-2, 4.7), (4, 4.7),
(-4, 2.55), (-2, 2.55), (0, 2.55),
(2, 2.55), (4, 2.55), (6, 2.55)
]
## text label and size
text_label = [
("Types of Statistics", 18), ("Descriptive\nStatistics", 16.5), ("Inferential\nStatistics", 16.5),
("Measure of\nCentral Tendency", 14.5), ("Measure of\nVariability", 14.5),
("Mean", 13.5), ("Median", 13.5), ("Mode", 13.5),
("Variance", 13.5), ("Range", 13.5), ("Dispersion", 13.5)
]
## add text
for i in range(len(text_coord)):
text = ax.text(
text_coord[i][0], text_coord[i][1], text_label[i][0], color="#F2F2F2", size=text_label[i][1],
bbox=dict(facecolor="none", edgecolor="#D3D3D3", boxstyle="round,pad=1"), zorder=2,
ha="center", va="center"
)
## credit
ax.text(
10.3, 1.7, "graphic: @slothfulwave612", fontstyle="italic",
color="#F2F2F2", size=10, ha="right", va="center", alpha=0.8
)
## set axis
ax.set(xlim=(-4.75,10.35), ylim=(1.5,11))
## tidy axis
ax.axis("off")
plt.show()
###Output
_____no_output_____ |
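###Markdown
As a companion to the diagram above, a minimal sketch (hypothetical cell; the sample data is made up) computing the listed measures with Python's `statistics` module:
###Code
import statistics
sample = [2, 4, 4, 4, 5, 5, 7, 9]
# measures of central tendency
print("mean:", statistics.mean(sample))
print("median:", statistics.median(sample))
print("mode:", statistics.mode(sample))
# measures of variability
print("variance:", statistics.pvariance(sample))
print("range:", max(sample) - min(sample))
###Output
_____no_output_____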
qiskit/chemistry/dissociation_profile_of_molecule.ipynb | ###Markdown
_*Qiskit Chemistry: Computing a Molecule's Dissociation Profile Using the Variational Quantum Eigensolver (VQE) Algorithm*_ The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorials.*** ContributorsAntonio Mezzacapo[1], Richard Chen[1], Marco Pistoia[1], Shaohan Hu[1], Peng Liu[1], Stephen Wood[1], Jay Gambetta[1] Affiliation- [1]IBMQ IntroductionOne of the most compelling possibilities of quantum computation is the simulation of other quantum systems. Quantum simulation of quantum systems encompasses a wide range of tasks, including most significantly: 1. Simulation of the time evolution of quantum systems.2. Computation of ground state properties. These applications are especially useful when considering systems of interacting fermions, such as molecules and strongly correlated materials. The computation of ground state properties of fermionic systems is the starting point for mapping out the phase diagram of condensed matter Hamiltonians. It also gives access to the key question of electronic structure problems in quantum chemistry - namely, reaction rates. The focus of this notebook is on molecular systems, which are considered to be the ideal benchmark for early-stage quantum computers, due to their relevance in chemical applications despite relatively modest sizes. Formally, the ground state problem asks the following:For some physical Hamiltonian *H*, find the smallest eigenvalue $E_G$, such that $H|\psi_G\rangle=E_G|\psi_G\rangle$, where $|\Psi_G\rangle$ is the eigenvector corresponding to $E_G$. It is known that in general this problem is intractable, even on a quantum computer. This means that we cannot expect an efficient quantum algorithm that prepares the ground state of general local Hamiltonians. Despite this limitation, for specific Hamiltonians of interest it might be possible, given physical constraints on the interactions, to solve the above problem efficiently. Currently, at least four different methods exist to approach this problem:1. Quantum phase estimation: Assuming that we can approximately prepare the state $|\psi_G\rangle$, this routine uses controlled implementations of the Hamiltonian to find its smallest eigenvalue. 2. Adiabatic theorem of quantum mechanics: The quantum system is adiabatically dragged from being the ground state of a trivial Hamiltonian to the one of the target problem, via slow modulation of the Hamiltonian terms. 3. Dissipative (non-unitary) quantum operation: The ground state of the target system is a fixed point. The non-trivial assumption here is the implementation of the dissipation map on quantum hardware. 4. Variational quantum eigensolvers: Here we assume that the ground state can be represented by a parameterization containing a relatively small number of parameters.In this notebook we focus on the last method, as this is most likely the simplest to be realized on near-term devices. The general idea is to define a parameterization $|\psi(\boldsymbol\theta)\rangle$ of quantum states, and minimize the energy $$E(\boldsymbol\theta) = \langle \psi(\boldsymbol\theta)| H |\psi(\boldsymbol\theta)\rangle,$$ The key ansatz is that the number of parameters $|\boldsymbol\theta^*|$ that minimizes the energy function scales polynomially with the size (e.g., number of qubits) of the target problem.
Then, any local fermionic Hamiltonian can be mapped into a sum over Pauli operators $P_i$, $$H\rightarrow H_P = \sum_i^M w_i P_i,$$ and the energy corresponding to the state $|\psi(\boldsymbol\theta)\rangle$, $E(\boldsymbol\theta)$, can be estimated by sampling the individual Pauli terms $P_i$ (or sets of them that can be measured at the same time) on a quantum computer: $$E(\boldsymbol\theta) = \sum_i^M w_i \langle \psi(\boldsymbol\theta)| P_i |\psi(\boldsymbol\theta)\rangle.$$ Last, some optimization technique must be devised in order to find the optimal value of parameters $\boldsymbol\theta^*$, such that $|\psi(\boldsymbol\theta^*)\rangle\equiv|\psi_G\rangle$. Fermionic HamiltoniansThe Hamiltonians describing systems of interacting fermions can be expressed in second quantization language, considering fermionic creation (annihilation) operators $a^\dagger_\alpha(a_\alpha)$, relative to the $\alpha$-th fermionic mode. In the case of molecules, the $\alpha$ labels stand for the different atomic or molecular orbitals. Within the second-quantization framework, a generic molecular Hamiltonian with $M$ orbitals can be written as $$H =H_1+H_2=\sum_{\alpha, \beta=0}^{M-1} t_{\alpha \beta} \, a^\dagger_{\alpha} a_{\beta} +\frac{1}{2} \sum_{\alpha, \beta, \gamma, \delta = 0}^{M-1} u_{\alpha \beta \gamma \delta}\, a^\dagger_{\alpha} a^\dagger_{\gamma} a_{\delta} a_{\beta},$$with the one-body terms representing the kinetic energy of the electrons and the potential energy that they experience in the presence of the nuclei, $$ t_{\alpha\beta}=\int d\boldsymbol x_1\Psi_\alpha(\boldsymbol{x}_1) \left(-\frac{\boldsymbol\nabla_1^2}{2}+\sum_{i} \frac{Z_i}{|\boldsymbol{r}_{1i}|}\right)\Psi_\beta (\boldsymbol{x}_1),$$and their interactions via Coulomb forces $$ u_{\alpha\beta\gamma\delta}=\int\int d \boldsymbol{x}_1 d \boldsymbol{x}_2 \Psi_\alpha^*(\boldsymbol{x}_1)\Psi_\beta(\boldsymbol{x}_1)\frac{1}{|\boldsymbol{r}_{12}|}\Psi_\gamma^*(\boldsymbol{x}_2)\Psi_\delta(\boldsymbol{x}_2),$$where we have defined the nuclei charges $Z_i$, the nuclei-electron and electron-electron separations $\boldsymbol{r}_{1i}$ and $\boldsymbol{r}_{12}$, the $\alpha$-th orbital wavefunction $\Psi_\alpha(\boldsymbol{x}_1)$, and we have assumed that the spin is conserved in the spin-orbital indices $\alpha,\beta$ and $\alpha,\beta,\gamma,\delta$. Molecules considered in this notebook and mapping to qubitsWe consider in this notebook the optimization of two potential energy surfaces, for the hydrogen and lithium hydride molecules, obtained using the STO-3G basis. The molecular Hamiltonians are computed as a function of their interatomic distance, then mapped to two-(H$_2$) and four-(LiH) qubit problems, via elimination of core and high-energy orbitals and removal of $Z_2$ symmetries. Approximate universal quantum computing for quantum chemistry problemsIn order to find the optimal parameters $\boldsymbol\theta^*$, we set up a closed optimization loop with a quantum computer, based on some stochastic optimization routine. Our choice for the variational ansatz is a deformation of the one used for the optimization of classical combinatorial problems, with the inclusion of $Z$ rotations together with the $Y$ ones. The optimization algorithm for fermionic Hamiltonians is similar to the one for combinatorial problems, and can be summarized as follows: 1. Map the fermionic Hamiltonian $H$ to a qubit Hamiltonian $H_P$.2. Choose the maximum depth of the quantum circuit (this could be done adaptively).3. 
Choose a set of controls $\boldsymbol\theta$ and make a trial function $|\psi(\boldsymbol\theta)\rangle$. The difference with the combinatorial problems is the insertion of additional parameterized $Z$ single-qubit rotations.4. Evaluate the energy $E(\boldsymbol\theta) = \langle\psi(\boldsymbol\theta)~|H_P|~\psi(\boldsymbol\theta)\rangle$ by sampling each Pauli term individually, or sets of Pauli terms that can be measured in the same tensor product basis.5. Use a classical optimizer to choose a new set of controls.6. Continue until the energy has converged, hopefully close to the real solution $\boldsymbol\theta^*$, and return the last value of $E(\boldsymbol\theta)$. Note that, as opposed to the classical case, in the case of a quantum chemistry Hamiltonian one has to sample over non-computational states that are superpositions, and therefore take advantage of using a quantum computer in the sampling part of the algorithm. Motivated by the quantum nature of the answer, we also define a variational trial ansatz in this way: $$|\psi(\boldsymbol\theta)\rangle = [U_\mathrm{single}(\boldsymbol\theta) U_\mathrm{entangler}]^m |+\rangle$$where $U_\mathrm{entangler}$ is a collection of cPhase gates (fully entangling gates), $U_\mathrm{single}(\boldsymbol\theta) = \prod_{i=1}^n Y(\theta_{i})Z(\theta_{n+i})$ are single-qubit $Y$ and $Z$ rotations, $n$ is the number of qubits and $m$ is the depth of the quantum circuit. References and additional details:[1] A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, *Hardware-efficient Variational Quantum Eigensolver for Small Molecules and Quantum Magnets*, Nature 549, 242 (2017), and references therein.
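###Markdown
To make the structure of this trial wavefunction concrete, the next cell sketches one possible construction of such a layered circuit. This is an illustrative sketch only: the helper `ryrz_ansatz_sketch` and its parameter layout are our own choices, we use CZ gates as the fully entangling layer, and the experiments below rely on the ready-made `RYRZ` variational form provided by Qiskit Aqua rather than this code.
###Code
# A minimal sketch of the layered hardware-efficient ansatz described above:
# |psi(theta)> = [U_single(theta) U_entangler]^m |+>
# (illustrative only; the experiments below use Aqua's RYRZ variational form)
import numpy as np
from qiskit import QuantumRegister, QuantumCircuit

def ryrz_ansatz_sketch(num_qubits, depth, thetas):
    """thetas is assumed to have shape (depth, num_qubits, 2): Y and Z angles."""
    q = QuantumRegister(num_qubits, 'q')
    circuit = QuantumCircuit(q)
    for i in range(num_qubits):  # prepare the |+> state on every qubit
        circuit.h(q[i])
    for layer in range(depth):
        # U_entangler: fully connected two-qubit entangling gates (CZ here)
        for i in range(num_qubits):
            for j in range(i + 1, num_qubits):
                circuit.cz(q[i], q[j])
        # U_single(theta): parameterized single-qubit Y and Z rotations
        for i in range(num_qubits):
            circuit.ry(thetas[layer, i, 0], q[i])
            circuit.rz(thetas[layer, i, 1], q[i])
    return circuit

example_angles = np.random.uniform(0, 2 * np.pi, size=(3, 2, 2))
ansatz_circuit = ryrz_ansatz_sketch(num_qubits=2, depth=3, thetas=example_angles)
###Output
_____no_output_____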
###Code
# useful additional packages
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from qiskit import Aer
from qiskit.chemistry import QiskitChemistry
import warnings
warnings.filterwarnings('ignore')
# setup qiskit.chemistry logging
import logging
from qiskit.chemistry import set_qiskit_chemistry_logging
set_qiskit_chemistry_logging(logging.ERROR) # choose among DEBUG, INFO, WARNING, ERROR, CRITICAL and NOTSET
###Output
_____no_output_____
###Markdown
[Optional] Setup token to run the experiment on a real deviceIf you would like to run the experiment on a real device, you need to set up your account first. Note: If you have not stored your token yet, use `IBMQ.save_accounts()` to store it first.
###Code
# from qiskit import IBMQ
# IBMQ.load_accounts()
###Output
_____no_output_____
###Markdown
Optimization of H$_2$ at bond lengthIn this first part of the notebook, we show the optimization of the H$_2$ Hamiltonian in the `STO-3G` basis at the bond length of 0.735 Angstrom. After mapping it to a four-qubit system with a parity transformation, two spin-parity symmetries are modded out, leading to a two-qubit Hamiltonian. The energy of the mapped Hamiltonian obtained is then minimized using the variational ansatz described in the introduction, and a simultaneous perturbation stochastic approximation (SPSA) gradient descent method. We stored the precomputed one- and two-body integrals and other molecular information in the `hdf5` file. Here we use the [*declarative approach*](https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/chemistry/declarative_approach.ipynb) to run our experiment, but the same is doable in a [fully programmatic way](https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/chemistry/programmatic_approach.ipynb), especially for those users who are interested in learning the Qiskit Aqua and Qiskit Chemistry APIs as well as contributing new algorithmic components.
###Code
# First, we use classical eigendecomposition to get ground state energy (including nuclear repulsion energy) as reference.
qiskit_chemistry_dict = {
'driver': {'name': 'HDF5'},
'HDF5': {'hdf5_input': 'H2/H2_equilibrium_0.735_sto-3g.hdf5'},
'operator': {'name':'hamiltonian',
'qubit_mapping': 'parity',
'two_qubit_reduction': True},
'algorithm': {'name': 'ExactEigensolver'}
}
solver = QiskitChemistry()
result = solver.run(qiskit_chemistry_dict)
print('Ground state energy (classical): {:.12f}'.format(result['energy']))
# Second, we use variational quantum eigensolver (VQE)
qiskit_chemistry_dict['algorithm']['name'] = 'VQE'
qiskit_chemistry_dict['optimizer'] = {'name': 'SPSA', 'max_trials': 350}
qiskit_chemistry_dict['variational_form'] = {'name': 'RYRZ', 'depth': 3, 'entanglement':'full'}
backend = Aer.get_backend('statevector_simulator')
solver = QiskitChemistry()
result = solver.run(qiskit_chemistry_dict, backend=backend)
print('Ground state energy (quantum) : {:.12f}'.format(result['energy']))
print("====================================================")
# You can also print out other info in the field 'printable'
for line in result['printable']:
print(line)
###Output
Ground state energy (classical): -1.137306035753
Ground state energy (quantum) : -1.137287121511
====================================================
=== GROUND STATE ENERGY ===
* Electronic ground state energy (Hartree): -1.85725611279
- computed part: -1.85725611279
- frozen energy part: 0.0
- particle hole part: 0.0
~ Nuclear repulsion energy (Hartree): 0.719968991279
> Total ground state energy (Hartree): -1.137287121511
Measured:: Num particles: 2.000, S: 0.000, M: 0.00000
=== DIPOLE MOMENT ===
* Electronic dipole moment (a.u.): [0.0 0.0 -0.00514828]
- computed part: [0.0 0.0 -0.00514828]
- frozen energy part: [0.0 0.0 0.0]
- particle hole part: [0.0 0.0 0.0]
~ Nuclear dipole moment (a.u.): [0.0 0.0 0.0]
> Dipole moment (a.u.): [0.0 0.0 -0.00514828] Total: 0.00514828
(debye): [0.0 0.0 -0.01308562] Total: 0.01308562
###Markdown
Optimizing the potential energy surface The optimization considered previously is now performed for two molecules, H$_2$ and LiH, for different interatomic distances, and the corresponding nuclear Coulomb repulsion is added in order to obtain a potential energy surface.
###Code
# select H2 or LiH to experiment with
molecule='H2'
qiskit_chemistry_dict = {
'driver': {'name': 'HDF5'},
'HDF5': {'hdf5_input': ''},
'operator': {'name':'hamiltonian',
'qubit_mapping': 'parity',
'two_qubit_reduction': True},
'algorithm': {'name': ''},
'optimizer': {'name': 'SPSA', 'max_trials': 350},
'variational_form': {'name': 'RYRZ', 'depth': 3, 'entanglement':'full'}
}
# choose which backend to use
# backend = Aer.get_backend('statevector_simulator')
backend = Aer.get_backend('qasm_simulator')
backend_cfg = {'shots': 1024}
algos = ['ExactEigensolver', 'VQE']
if molecule == 'LiH':
mol_distances = np.arange(0.6, 5.1, 0.1)
qiskit_chemistry_dict['operator']['freeze_core'] = True
qiskit_chemistry_dict['operator']['orbital_reduction'] = [-3, -2]
qiskit_chemistry_dict['optimizer']['max_trials'] = 2500
qiskit_chemistry_dict['variational_form']['depth'] = 5
else:
mol_distances = np.arange(0.2, 4.1, 0.1)
energy = np.zeros((len(algos), len(mol_distances)))
for j, algo in enumerate(algos):
qiskit_chemistry_dict['algorithm']['name'] = algo
if algo == 'ExactEigensolver':
qiskit_chemistry_dict.pop('backend', None)
elif algo == 'VQE':
qiskit_chemistry_dict['backend'] = backend_cfg
print("Using {}".format(algo))
for i, dis in enumerate(mol_distances):
print("Processing atomic distance: {:1.1f} Angstrom".format(dis), end='\r')
qiskit_chemistry_dict['HDF5']['hdf5_input'] = "{}/{:1.1f}_sto-3g.hdf5".format(molecule, dis)
result = solver.run(qiskit_chemistry_dict, backend=backend if algo == 'VQE' else None)
energy[j][i] = result['energy']
print("\n")
for i, algo in enumerate(algos):
plt.plot(mol_distances, energy[i], label=algo)
plt.xlabel('Atomic distance (Angstrom)')
plt.ylabel('Energy')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Trusted Notebook" width="500 px" align="left"> _*Qiskit Chemistry: Computing a Molecule's Dissociation Profile Using the Variational Quantum Eigensolver (VQE) Algorithm*_ The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorials.*** ContributorsAntonio Mezzacapo[1], Richard Chen[1], Marco Pistoia[1], Shaohan Hu[1], Peng Liu[1], Stephen Wood[1], Jay Gambetta[1] Affiliation- [1]IBMQ IntroductionOne of the most compelling possibilities of quantum computation is the the simulation of other quantum systems. Quantum simulation of quantum systems encompasses a wide range of tasks, including most significantly: 1. Simulation of the time evolution of quantum systems.2. Computation of ground state properties. These applications are especially useful when considering systems of interacting fermions, such as molecules and strongly correlated materials. The computation of ground state properties of fermionic systems is the starting point for mapping out the phase diagram of condensed matter Hamiltonians. It also gives access to the key question of electronic structure problems in quantum chemistry - namely, reaction rates. The focus of this notebook is on molecular systems, which are considered to be the ideal bench test for early-stage quantum computers, due to their relevance in chemical applications despite relatively modest sizes. Formally, the ground state problem asks the following:For some physical Hamiltonian *H*, find the smallest eigenvalue $E_G$, such that $H|\psi_G\rangle=E_G|\psi_G\rangle$, where $|\Psi_G\rangle$ is the eigenvector corresponding to $E_G$. It is known that in general this problem is intractable, even on a quantum computer. This means that we cannot expect an efficient quantum algorithm that prepares the ground state of general local Hamiltonians. Despite this limitation, for specific Hamiltonians of interest it might be possible, given physical constraints on the interactions, to solve the above problem efficiently. Currently, at least four different methods exist to approach this problem:1. Quantum phase estimation: Assuming that we can approximately prepare the state $|\psi_G\rangle$, this routine uses controlled implementations of the Hamiltonian to find its smallest eigenvalue. 2. Adiabatic theorem of quantum mechanics: The quantum system is adiabatically dragged from being the ground state of a trivial Hamiltonian to the one of the target problem, via slow modulation of the Hamiltonian terms. 3. Dissipative (non-unitary) quantum operation: The ground state of the target system is a fixed point. The non-trivial assumption here is the implementation of the dissipation map on quantum hardware. 4. Variational quantum eigensolvers: Here we assume that the ground state can be represented by a parameterization containing a relatively small number of parameters.In this notebook we focus on the last method, as this is most likely the simplest to be realized on near-term devices. The general idea is to define a parameterization $|\psi(\boldsymbol\theta)\rangle$ of quantum states, and minimize the energy $$E(\boldsymbol\theta) = \langle \psi(\boldsymbol\theta)| H |\psi(\boldsymbol\theta)\rangle,$$ The key ansatz is that the number of parameters $|\boldsymbol\theta^*|$ that minimizes the energy function scales polynomially with the size (e.g., number of qubits) of the target problem. 
Then, any local fermionic Hamiltonian can be mapped into a sum over Pauli operators $P_i$, $$H\rightarrow H_P = \sum_i^M w_i P_i,$$ and the energy corresponding to the state $|\psi(\boldsymbol\theta)\rangle$, $E(\boldsymbol\theta)$, can be estimated by sampling the individual Pauli terms $P_i$ (or sets of them that can be measured at the same time) on a quantum computer: $$E(\boldsymbol\theta) = \sum_i^M w_i \langle \psi(\boldsymbol\theta)| P_i |\psi(\boldsymbol\theta)\rangle.$$ Lastly, some optimization technique must be devised in order to find the optimal value of parameters $\boldsymbol\theta^*$, such that $|\psi(\boldsymbol\theta^*)\rangle\equiv|\psi_G\rangle$. Fermionic HamiltoniansThe Hamiltonians describing systems of interacting fermions can be expressed in second quantization language, considering fermionic creation (annihilation) operators $a^\dagger_\alpha(a_\alpha)$, relative to the $\alpha$-th fermionic mode. In the case of molecules, the $\alpha$ labels stand for the different atomic or molecular orbitals. Within the second-quantization framework, a generic molecular Hamiltonian with $M$ orbitals can be written as $$H =H_1+H_2=\sum_{\alpha, \beta=0}^{M-1} t_{\alpha \beta} \, a^\dagger_{\alpha} a_{\beta} +\frac{1}{2} \sum_{\alpha, \beta, \gamma, \delta = 0}^{M-1} u_{\alpha \beta \gamma \delta}\, a^\dagger_{\alpha} a^\dagger_{\gamma} a_{\delta} a_{\beta},$$with the one-body terms representing the kinetic energy of the electrons and the potential energy that they experience in the presence of the nuclei, $$ t_{\alpha\beta}=\int d\boldsymbol x_1\Psi_\alpha^*(\boldsymbol{x}_1) \left(-\frac{\boldsymbol\nabla_1^2}{2}+\sum_{i} \frac{Z_i}{|\boldsymbol{r}_{1i}|}\right)\Psi_\beta (\boldsymbol{x}_1),$$and their interactions via Coulomb forces $$ u_{\alpha\beta\gamma\delta}=\int\int d \boldsymbol{x}_1 d \boldsymbol{x}_2 \Psi_\alpha^*(\boldsymbol{x}_1)\Psi_\beta(\boldsymbol{x}_1)\frac{1}{|\boldsymbol{r}_{12}|}\Psi_\gamma^*(\boldsymbol{x}_2)\Psi_\delta(\boldsymbol{x}_2),$$where we have defined the nuclei charges $Z_i$, the nuclei-electron and electron-electron separations $\boldsymbol{r}_{1i}$ and $\boldsymbol{r}_{12}$, the $\alpha$-th orbital wavefunction $\Psi_\alpha(\boldsymbol{x}_1)$, and we have assumed that the spin is conserved in the spin-orbital indices $\alpha,\beta$ and $\alpha,\beta,\gamma,\delta$. Molecules considered in this notebook and mapping to qubitsWe consider in this notebook the optimization of two potential energy surfaces, for the hydrogen and lithium hydride molecules, obtained using the STO-3G basis. The molecular Hamiltonians are computed as a function of their interatomic distance, then mapped to two-qubit (H$_2$) and four-qubit (LiH) problems, via elimination of core and high-energy orbitals and removal of $Z_2$ symmetries. Approximate universal quantum computing for quantum chemistry problemsIn order to find the optimal parameters $\boldsymbol\theta^*$, we set up a closed optimization loop with a quantum computer, based on some stochastic optimization routine. Our choice for the variational ansatz is a deformation of the one used for the optimization of classical combinatorial problems, with the inclusion of $Z$ rotations together with the $Y$ ones. The optimization algorithm for fermionic Hamiltonians is similar to the one for combinatorial problems, and can be summarized as follows: 1. Map the fermionic Hamiltonian $H$ to a qubit Hamiltonian $H_P$.2. Choose the maximum depth of the quantum circuit (this could be done adaptively).3. 
Choose a set of controls $\boldsymbol\theta$ and make a trial function $|\psi(\boldsymbol\theta)\rangle$. The difference with the combinatorial problems is the insertion of additional parameterized $Z$ single-qubit rotations.4. Evaluate the energy $E(\boldsymbol\theta) = \langle\psi(\boldsymbol\theta)~|H_P|~\psi(\boldsymbol\theta)\rangle$ by sampling each Pauli term individually, or sets of Pauli terms that can be measured in the same tensor product basis.5. Use a classical optimizer to choose a new set of controls.6. Continue until the energy has converged, hopefully close to the real solution $\boldsymbol\theta^*$, and return the last value of $E(\boldsymbol\theta)$. Note that, as opposed to the classical case, in the case of a quantum chemistry Hamiltonian one has to sample over non-computational states that are superpositions, and therefore take advantage of using a quantum computer in the sampling part of the algorithm. Motivated by the quantum nature of the answer, we also define a variational trial ansatz in this way: $$|\psi(\boldsymbol\theta)\rangle = [U_\mathrm{single}(\boldsymbol\theta) U_\mathrm{entangler}]^m |+\rangle$$where $U_\mathrm{entangler}$ is a collection of cPhase gates (fully entangling gates), $U_\mathrm{single}(\boldsymbol\theta) = \prod_{i=1}^n Y(\theta_{i})Z(\theta_{n+i})$ are single-qubit $Y$ and $Z$ rotations, $n$ is the number of qubits and $m$ is the depth of the quantum circuit. References and additional details:[1] A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, *Hardware-efficient Variational Quantum Eigensolver for Small Molecules and Quantum Magnets*, Nature 549, 242 (2017), and references therein.
###Code
# useful additional packages
import matplotlib.pyplot as plt
import copy
%matplotlib inline
import numpy as np
from qiskit import Aer
from qiskit.chemistry import QiskitChemistry
import warnings
warnings.filterwarnings('ignore')
# setup qiskit.chemistry logging
import logging
from qiskit.chemistry import set_qiskit_chemistry_logging
set_qiskit_chemistry_logging(logging.ERROR) # choose among DEBUG, INFO, WARNING, ERROR, CRITICAL and NOTSET
###Output
_____no_output_____
###Markdown
[Optional] Setup token to run the experiment on a real deviceIf you would like to run the experiment on a real device, you need to set up your account first. Note: If you have not stored your token yet, use `IBMQ.save_accounts()` to store it first.
###Code
# from qiskit import IBMQ
# IBMQ.load_accounts()
###Output
_____no_output_____
###Markdown
Optimization of H$_2$ at bond lengthIn this first part of the notebook, we show the optimization of the H$_2$ Hamiltonian in the `STO-3G` basis at the bond length of 0.735 Angstrom. After mapping it to a four-qubit system with a parity transformation, two spin-parity symmetries are modded out, leading to a two-qubit Hamiltonian. The energy of the mapped Hamiltonian obtained is then minimized using the variational ansatz described in the introduction, and a simultaneous perturbation stochastic approximation (SPSA) gradient descent method. We stored the precomputed one- and two-body integrals and other molecular information in the `hdf5` file. Here we use the [*declarative approach*](https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/chemistry/declarative_approach.ipynb) to run our experiment, but the same is doable in a [fully programmatic way](https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/chemistry/programmatic_approach.ipynb), especially for those users who are interested in learning the Qiskit Aqua and Qiskit Chemistry APIs as well as contributing new algorithmic components.
###Code
# First, we use classical eigendecomposition to get ground state energy (including nuclear repulsion energy) as reference.
qiskit_chemistry_dict = {
'driver': {'name': 'HDF5'},
'HDF5': {'hdf5_input': 'H2/H2_equilibrium_0.735_sto-3g.hdf5'},
'operator': {'name':'hamiltonian',
'qubit_mapping': 'parity',
'two_qubit_reduction': True},
'algorithm': {'name': 'ExactEigensolver'}
}
solver = QiskitChemistry()
result = solver.run(qiskit_chemistry_dict)
print('Ground state energy (classical): {:.12f}'.format(result['energy']))
# Second, we use variational quantum eigensolver (VQE)
qiskit_chemistry_dict['algorithm']['name'] = 'VQE'
qiskit_chemistry_dict['optimizer'] = {'name': 'SPSA', 'max_trials': 350}
qiskit_chemistry_dict['variational_form'] = {'name': 'RYRZ', 'depth': 3, 'entanglement':'full'}
backend = Aer.get_backend('statevector_simulator')
solver = QiskitChemistry()
result = solver.run(qiskit_chemistry_dict, backend=backend)
print('Ground state energy (quantum) : {:.12f}'.format(result['energy']))
print("====================================================")
# You can also print out other info in the field 'printable'
for line in result['printable']:
print(line)
###Output
Ground state energy (classical): -1.137306035753
Ground state energy (quantum) : -1.137304610765
====================================================
=== GROUND STATE ENERGY ===
* Electronic ground state energy (Hartree): -1.857273602044
- computed part: -1.857273602044
- frozen energy part: 0.0
- particle hole part: 0.0
~ Nuclear repulsion energy (Hartree): 0.719968991279
> Total ground state energy (Hartree): -1.137304610765
Measured:: Num particles: 2.000, S: 0.000, M: 0.00000
=== DIPOLE MOMENT ===
* Electronic dipole moment (a.u.): [0.0 0.0 0.00070479]
- computed part: [0.0 0.0 0.00070479]
- frozen energy part: [0.0 0.0 0.0]
- particle hole part: [0.0 0.0 0.0]
~ Nuclear dipole moment (a.u.): [0.0 0.0 0.0]
> Dipole moment (a.u.): [0.0 0.0 0.00070479] Total: 0.00070479
(debye): [0.0 0.0 0.0017914] Total: 0.0017914
###Markdown
Optimizing the potential energy surface The optimization considered previously is now performed for two molecules, H$_2$ and LiH, for different interatomic distances, and the corresponding nuclear Coulomb repulsion is added in order to obtain a potential energy surface.
###Code
# select H2 or LiH to experiment with
molecule='H2'
qiskit_chemistry_dict_ee = {
'driver': {'name': 'HDF5'},
'HDF5': {'hdf5_input': ''},
'operator': {'name':'hamiltonian',
'qubit_mapping': 'parity',
'two_qubit_reduction': True},
'algorithm': {'name': 'ExactEigensolver'}
}
# choose which backend to use
# backend = Aer.get_backend('statevector_simulator')
backend = Aer.get_backend('qasm_simulator')
qiskit_chemistry_dict_vqe = {
'driver': {'name': 'HDF5'},
'HDF5': {'hdf5_input': ''},
'operator': {'name':'hamiltonian',
'qubit_mapping': 'parity',
'two_qubit_reduction': True},
'algorithm': {'name': 'VQE'},
'optimizer': {'name': 'SPSA', 'max_trials': 350},
'variational_form': {'name': 'RYRZ', 'depth': 3, 'entanglement':'full'},
'backend': {'shots': 1024}
}
if molecule == 'LiH':
mol_distances = np.arange(0.6, 5.1, 0.1)
qiskit_chemistry_dict_vqe['operator']['freeze_core'] = True
qiskit_chemistry_dict_vqe['operator']['orbital_reduction'] = [-3, -2]
qiskit_chemistry_dict_vqe['optimizer']['max_trials'] = 2500
qiskit_chemistry_dict_vqe['variational_form']['depth'] = 5
else:
mol_distances = np.arange(0.2, 4.1, 0.1)
algos = ['ExactEigensolver', 'VQE']
energy = np.zeros((len(algos), len(mol_distances)))
for j, algo in enumerate([qiskit_chemistry_dict_ee, qiskit_chemistry_dict_vqe]):
algo_name = algo['algorithm']['name']
print("Using {}".format(algo_name))
for i, dis in enumerate(mol_distances):
print("Processing atomic distance: {:1.1f} Angstrom".format(dis), end='\r')
algo['HDF5']['hdf5_input'] = "{}/{:1.1f}_sto-3g.hdf5".format(molecule, dis)
result = solver.run(algo, backend=backend if algo_name == 'VQE' else None)
energy[j][i] = result['energy']
print("\n")
for i, algo in enumerate(algos):
plt.plot(mol_distances, energy[i], label=algo)
plt.xlabel('Atomic distance (Angstrom)')
plt.ylabel('Energy')
plt.legend()
plt.show()
###Output
_____no_output_____
python prep/5. Importing a Package - Complete.ipynb | ###Markdown
Importing a PackageSometimes we'll want to do something with Python that isn't built into base Python. Now we could code it up ourselves, but it is usually the case that there is a package you can import that does what you'd like to do. Python packages are collections of pre-written code that you can use. Let's see how we can use a package with the `math` package. `math` PackageThe `math` package contains all the mathematical functions and objects you may be interested in, like $\sin$, $\cos$, $e$, $\pi$ and $\log$.
###Code
## importing a package happens with import package_name
import math
## When this code is run, you will import the math package
## packages have a number of built-in objects, and methods we can use
## let's start with pi
## First write the package name then a period then the object/function you want
## for example math.pi
math.pi
## You code
## Try finding the cosine of 3pi/4
## cosine is stored in math.cos
math.cos(3*math.pi/4)
###Output
-0.7071067811865476 -0.7071067811865475
###Markdown
You can find out more about a package in the package's documentation. When you're learning how a new package you've found works, going there is usually your first step. Here we can see the documentation for the `math` package: https://docs.python.org/3/library/math.html. If you're new to Python, reading documentation can seem difficult; that's normal! It can take a little bit to get comfortable reading documentation (often called the docs).
###Code
## Practice reading documentation and find out
## how to compute the base 10 logarithm of a number using the
## math package
math.log(.001, 10)
###Output
_____no_output_____
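###Markdown
A related detail from the documentation: `math.log` accepts the base as an optional second argument (as used above), and there is also a dedicated `math.log10` function, which is usually slightly more accurate for base 10. Both calls below compute the base 10 logarithm of 0.001.
###Code
## two equivalent ways to take a base 10 logarithm
print(math.log(0.001, 10))
print(math.log10(0.001))
###Output
_____no_output_____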
###Markdown
Import a Package With a Shorter NameSometimes a package's name can be cumbersome; we'll see an example of that soon. Similar to assigning a variable name, we can import a package under a shorter name. Let's practice with the `random` package.
###Code
## The syntax is import package_name as shortened_name
import random as r
## Now we can get a (pseudo)random integer between 1 and 10
## using randint
r.randint(1,10)
###Output
_____no_output_____
###Markdown
Import a Specific Object or FunctionSometimes, packages can be quite large but you only want a small part of the package (like a single function), and thus importing the entire package can waste your computer's memory. It is possible to import just a small part of the package as well!
###Code
## say from package_name import what you want
from math import exp
## Now we can use Euler's number without the entire math package
exp(1)
###Output
_____no_output_____ |
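###Markdown
You can also import several names from a package in a single statement. Here is a small sketch (the imported names are just examples):
###Code
## import multiple objects at once
from math import pi, sqrt

print(2 * pi * 5)  ## circumference of a circle with radius 5
print(sqrt(16))    ## 4.0
###Output
_____no_output_____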
basepairmodels/reports/model_performance.ipynb | ###Markdown
Direct links to results[Profile metrics](profile)[Count metrics](count)
###Code
import h5py
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt
import matplotlib.font_manager as font_manager
import os
import vdom.helpers as vdomh
from IPython.display import display
# Plotting defaults
plot_params = {
"figure.titlesize": 22,
"axes.titlesize": 22,
"axes.labelsize": 20,
"legend.fontsize": 18,
"xtick.labelsize": 16,
"ytick.labelsize": 16,
"font.weight": "bold"
}
plt.rcParams.update(plot_params)
###Output
_____no_output_____
###Markdown
Define constants and paths
###Code
# Define parameters/fetch arguments
metrics_dir = os.environ["TFM_METRICS_DIR"]
preds_path = os.environ["TFM_PRED_PATH"]
print("Performance metrics directory: %s" % metrics_dir)
print("Predictions path: %s" % preds_path)
# Constants
test_chroms = ["chr1"] # Set to None or empty list for all chromosomes
metric_keys = [
"mnll", "jsd", "cross_entropy", "pearson", "spearman", "mse",
"counts_pearson", "counts_spearman"
]
metric_names = {
"mnll": "normalized MNLL",
"jsd": "normalized JSD",
"cross_entropy": "normalized cross entropy",
"pearson": "Pearson correlation",
"spearman": "Spearman correlation",
"mse": "MSE",
"counts_pearson": "Pearson correlation",
"counts_spearman": "Spearman correlation"
}
strands = ["minus", "plus"]
###Output
_____no_output_____
###Markdown
Helper functionsFor extracting metrics values, plotting, etc.
###Code
def extract_performance_metrics(metrics_dir):
"""
Extracts the set of performance metrics from the directory of saved metrics.
    Strands are pooled.
    Returns a dictionary of the following form:
        `mnll`: <MNLL vector over peaks/strands>
        `counts_pearson`: [
            <Pearson correlation scalar for minus strand>
            <Pearson correlation scalar for plus strand>
]
...
"""
result = {}
for key in metric_keys:
vecs = []
for strand in strands:
path = os.path.join(metrics_dir, "task0_" + strand, key + ".npz")
reader = np.load(path)
vecs.append(reader[key])
if key.startswith("counts_"):
result[key] = np.array(vecs)
else:
result[key] = np.concatenate(vecs)
return result
def import_true_pred_log_counts(preds_path, chrom_set=None):
"""
Imports the true and predicted log counts as two N x 2 arrays.
"""
with h5py.File(preds_path, "r") as f:
chroms = f["coords"]["coords_chrom"][:].astype(str)
if chrom_set:
subset_inds = np.sort(np.where(np.isin(chroms, chrom_set))[0])
else:
subset_inds = np.arange(len(chroms))
true_log_counts = np.log(f["predictions"]["true_counts"][subset_inds] + 1)
pred_log_counts = f["predictions"]["log_pred_counts"][subset_inds]
return true_log_counts[:, 0], pred_log_counts[:, 0]
def gradient_image(ax, extent, direction=0, cmap_range=(0, 1), **kwargs):
"""
Adapted from
https://matplotlib.org/3.2.1/gallery/lines_bars_and_markers/gradient_bar.html
"""
phi = np.abs(direction) * np.pi / 2
v = np.array([np.cos(phi), np.sin(phi)])
X = np.array([[v @ [1, 0], v @ [1, 1]],
[v @ [0, 0], v @ [0, 1]]])
a, b = cmap_range
X = a + (b - a) / X.max() * X
if direction < 0:
X = np.flip(X)
im = ax.imshow(X, extent=extent, interpolation='bicubic',
vmin=0, vmax=1, **kwargs)
return im
###Output
_____no_output_____
###Markdown
Import performance metrics/bounds
###Code
# Import metrics
metrics = extract_performance_metrics(metrics_dir)
# Import true/predicted log counts
true_log_counts, pred_log_counts = import_true_pred_log_counts(preds_path, chrom_set=test_chroms)
# Check that all the sizes are the same
num_peaks = len(true_log_counts)
for key in metric_keys:
if key.startswith("counts_"):
assert np.shape(metrics[key]) == (len(strands),)
else:
assert np.shape(metrics[key]) == (num_peaks * len(strands),)
###Output
_____no_output_____
###Markdown
Profile metricsShown as CDFs of min-max-normalized values. Strands are pooled. Note that MNLL, cross entropy, JSD, and MSE are best when minimized. Pearson and Spearman correlation are best when maximized.
###Code
for key in metric_keys:
if key.startswith("counts_"):
continue
fig, ax = plt.subplots(figsize=(8, 8))
if key in ("pearson", "spearman"):
gradient_image(
ax, direction=1, extent=(0, 1, 0, 1), transform=ax.transAxes,
cmap="RdYlGn", cmap_range=(0, 1), alpha=0.2
)
bins = np.concatenate([[-np.inf], np.linspace(0, 1, num=100)])
ax.hist(metrics[key], bins=bins, density=True, histtype="step", cumulative=-1)
ax.set_title("Inverse CDF of %s" % metric_names[key])
else:
gradient_image(
ax, direction=-1, extent=(0, 1, 0, 1), transform=ax.transAxes,
cmap="RdYlGn", cmap_range=(0, 1), alpha=0.2
)
bins = np.concatenate([np.linspace(0, 1, num=100), [np.inf]])
ax.hist(metrics[key], bins=bins, density=True, histtype="step", cumulative=True)
ax.set_title("CDF of %s" % metric_names[key])
plt.show()
###Output
_____no_output_____
###Markdown
Count metricsShown as scatter plots (strands pooled).
###Code
fig, ax = plt.subplots(figsize=(8, 8))
ax.scatter(np.ravel(true_log_counts), np.ravel(pred_log_counts), alpha=0.03)
ax.set_xlabel("True log counts")
ax.set_ylabel("Predicted log counts")
(min_x, max_x), (min_y, max_y) = ax.get_xlim(), ax.get_ylim()
min_both, max_both = min(min_x, min_y), max(max_x, max_y)
ax.set_xlim(min_both, max_both)
ax.set_ylim(min_both, max_both)
ax.plot(
[min_both, max_both], [min_both, max_both],
color="black", linestyle="--", alpha=0.3, zorder=0
)
plt.show()
def get_correlations(vec_1, vec_2):
finite_mask = np.isfinite(vec_1) & np.isfinite(vec_2)
vec_1 = vec_1[finite_mask]
vec_2 = vec_2[finite_mask]
return scipy.stats.pearsonr(vec_1, vec_2)[0], scipy.stats.spearmanr(vec_1, vec_2)[0]
pool_p, pool_s = get_correlations(np.ravel(true_log_counts), np.ravel(pred_log_counts))
pos_p, pos_s = get_correlations(true_log_counts[:, 0], pred_log_counts[:, 0])
neg_p, neg_s = get_correlations(true_log_counts[:, 1], pred_log_counts[:, 1])
avg_p, avg_s = np.mean([pos_p, neg_p]), np.mean([pos_s, neg_s])
header = vdomh.thead(
vdomh.tr(
vdomh.th(),
vdomh.th("Pearson correlation", style={"text-align": "center"}),
vdomh.th("Spearman correlation", style={"text-align": "center"})
)
)
body = vdomh.tbody(
vdomh.tr(
vdomh.td("Strands pooled"), vdomh.td(str(pool_p)), vdomh.td(str(pool_s))
),
vdomh.tr(
vdomh.td("Positive strand"), vdomh.td(str(pos_p)), vdomh.td(str(pos_s))
),
vdomh.tr(
vdomh.td("Negative strand"), vdomh.td(str(neg_p)), vdomh.td(str(neg_s))
),
vdomh.tr(
vdomh.td("Strands averaged"), vdomh.td(str(avg_p)), vdomh.td(str(avg_s))
)
)
vdomh.table(header, body)
###Output
_____no_output_____ |
examples/0. Embeddings Generation/Pipelines/ML20M/3. Feature Engineering .ipynb | ###Markdown
MCA: Multiple Correspondence Analysis
###Code
import json
import pandas as pd
# ! pip install prince
import prince
from sklearn.preprocessing import OneHotEncoder
import itertools
omdb = json.load(open("../../../../data/parsed/omdb.json", "r") )
tmdb = json.load(open("../../../../data/parsed/tmdb.json", "r") )
categorical = {
'omdb': ['Rated', 'Director', 'Genre', 'Language', 'Country', 'Type', 'Production'],
}
def apply_categorical(records, type, take):
res = {i: {} for i in records.keys()}
for row in records.keys():
for col in records[row][type].keys():
if col in take:
res[row][col] = records[row][type][col]
return res
def apply_split(records, split, limit):
    for row in records.keys():
        for col in split:
            # split the comma-separated string and keep at most `limit` values
            records[row][col] = tuple(records[row][col].split(', ')[:limit])
    return records
cat = apply_categorical(omdb, 'omdb', categorical['omdb'])
cat = apply_split(cat, ['Country', 'Language', 'Genre'], 3)
catdf = pd.DataFrame.from_dict(cat).T
###Output
_____no_output_____
###Markdown
TopK One Hot
###Code
def one_hot(arr, name, categories):
return dict((name+i, i in arr) for i in categories)
def apply_one_hot(records, type, name, categories):
for row in records.keys():
records[row] = {**records[row], **one_hot(records[row][type], name, categories)}
del records[row][type]
return records
genres_cat = list(set(itertools.chain(*tuple(catdf.Genre.unique()))))
language_cat = pd.Series(list(itertools.chain(*catdf.Language))).value_counts()[:30].index
countries_cat = pd.Series(list(itertools.chain(*catdf.Country))).value_counts()[:30].index
cat = apply_one_hot(cat, 'Genre', 'g_', genres_cat)
cat = apply_one_hot(cat, 'Country', 'c_', countries_cat)
cat = apply_one_hot(cat, 'Language', 'l_', language_cat)
catdf = pd.DataFrame.from_dict(cat).T
catdf.Rated = catdf.Rated.fillna('Not rated')
catdf.loc[catdf.Rated == 'N/A', 'Rated'] = 'Not rated'
# fillna returns a new Series, so the result must be assigned back
catdf.Production = catdf.Production.fillna('-')
catdf.loc[catdf.Production == 'N/A', 'Production'] = '-'
catdf.loc[catdf.Production == 'NaN', 'Production'] = '-'
catdf.Director = catdf.Director.fillna('-')
catdf.loc[catdf.Director == 'N/A', 'Director'] = '-'
catdf
mca = prince.MCA(
n_components=16,
n_iter=20,
copy=True,
check_input=True,
engine='auto',
)
mca = mca.fit(catdf)
%matplotlib inline
ax = mca.plot_coordinates(
X=catdf,
ax=None,
figsize=(6, 6),
show_row_points=True,
row_points_size=10,
show_row_labels=False,
show_column_points=True,
column_points_size=30,
show_column_labels=False,
legend_n_cols=1
)
mca.total_inertia_
mca.explained_inertia_
transformed = mca.transform(catdf)
transformed.head()
transformed.to_csv('../../../../data/engineering/mca.csv', index=True, index_label='idx')
###Output
_____no_output_____
###Markdown
Probabilistic PCA with missing data
###Code
import json
import pandas as pd
# ! pip install ppca
import ppca
import numpy as np
omdb = json.load(open("../../../../data/parsed/omdb.json", "r") )
tmdb = json.load(open("../../../../data/parsed/tmdb.json", "r") )
numerical = {
'omdb': ['Year', 'Ratings', 'Metascore', 'imdbRating', 'imdbVotes'],
'tmdb': ['budget', 'popularity', 'revenue', 'runtime', 'vote_average', 'vote_count']
}
def apply_numerical(records, type, take):
res = {i: {} for i in records.keys()}
for row in records.keys():
for col in records[row][type].keys():
if col in take:
res[row][col] = records[row][type][col]
return res
def apply_ratings(records):
res = records.copy()
for i in res.keys():
for rating in res[i]['Ratings']:
res[i]['r ' + rating['Source']] = rating['Value']
del res[i]['Ratings']
return res
numo = apply_numerical(omdb, 'omdb', numerical['omdb'])
numt = apply_numerical(tmdb, 'tmdb', numerical['tmdb'])
num = dict([(i, {**numo[i], **numt[i]}) for i in numo.keys()])
num = apply_ratings(num)
numdf = pd.DataFrame.from_dict(num).T
for col in numdf.columns:
    # use df.loc to avoid chained assignment
    numdf.loc[numdf[col] == 'N/A', col] = np.nan
numdf['budget'] = numdf['budget'].replace(to_replace=0, value=np.nan)
numdf['r Internet Movie Database'].loc[numdf['r Internet Movie Database'].notnull()] = \
numdf['r Internet Movie Database'].loc[numdf['r Internet Movie Database'].notnull()].apply(lambda x: x.split('/')[0])
numdf['r Metacritic'].loc[numdf['r Metacritic'].notnull()] = \
numdf['r Metacritic'].loc[numdf['r Metacritic'].notnull()].apply(lambda x: int(x.split('/')[0]))
numdf['r Rotten Tomatoes'].loc[numdf['r Rotten Tomatoes'].notnull()] = \
numdf['r Rotten Tomatoes'].loc[numdf['r Rotten Tomatoes'].notnull()].apply(lambda x: float(x.replace('%', '')))
numdf['revenue'] = numdf['revenue'].replace(to_replace=0, value=np.nan)
numdf['Year'].loc[numdf['Year'].notnull()] = numdf['Year'].loc[numdf['Year'].notnull()].apply(lambda x: int(str(x).split('–')[0]))  # a range like '2014–2016' keeps its start year
numdf['imdbVotes'].loc[numdf['imdbVotes'].notnull()] = numdf['imdbVotes'].loc[numdf['imdbVotes'].notnull()].apply(lambda x: int(x.replace(',', '')))
numdf.head()
from ppca import PPCA
ppca = PPCA()
ppca.fit(data=numdf.values.astype(float), d=16, verbose=True)
ppca.var_exp
transformed = ppca.transform()
transformed = pd.DataFrame(transformed)
transformed['idx'] = pd.Series(list(omdb.keys()))
transformed = transformed.set_index('idx')
transformed.head()
transformed.to_csv('../../../../data/engineering/pca.csv', index=True, index_label='idx')
# Bonus: Prince's PCA visualization
# Unfortunately, Prince does not work with missing data
# So you need to use PCA Magic instead
###Output
_____no_output_____
###Markdown
UMAP: Uniform Manifold Approximation and ProjectionP.S. As for now, I am using this as a proof of concept for numerical feature visualizationIt does not support NaN values, hence my main focus for numerics is Probabilistic PCA
###Code
# Get Rid of NaNs
from sklearn.impute import SimpleImputer
from sklearn import preprocessing
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
numdf = numdf.astype(float)
imputer = imputer.fit(numdf)
numdf = imputer.transform(numdf)
# pretty colors
color = []
for row in tmdb.keys():
genres = tmdb[row]['tmdb'].get('genres', False)
if genres:
color.append(genres[0]['id'])
else:
color.append('0')
le = preprocessing.LabelEncoder()
color = le.fit_transform(color)
import warnings
warnings.filterwarnings('ignore')
import umap
embedding = umap.UMAP(n_neighbors=100, min_dist=0.3).fit_transform(numdf)
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.cm as cm
%matplotlib inline
sns.set(context="paper", style="white")
# color = cm.rainbow(color)
fig, ax = plt.subplots(figsize=(12, 10))
plt.scatter(
embedding[:, 0], embedding[:, 1], c=color, cmap="Spectral", s=0.1
)
plt.setp(ax, xticks=[], yticks=[])
plt.title("2d umap Embeddings", fontsize=18)
plt.show()
###Output
_____no_output_____ |
week_2/week_2_unit_6_sequences_notebook.ipynb | ###Markdown
Sequences Standard sequence types: `list`, `tuple`, `range`You already learned about two of the three basic sequence types built into Python: `list` & `range`. All sequence types share some properties:- They consist of multiple items which are ordered. (That does not mean they are sorted e.g. by size! It simply means that there is a first item, a second item, etc.)- To access a specific item of the sequence, an index can be used.There is another commonly used sequence type called `tuple`. It will be discussed further next week. Text sequence type: `string`You already know strings, too. A string also is a sequence data type. It is used exclusively for text and can contain any unicode character. There are more sequence data types in Python, which will not be discussed in this course, but you can read about them in the [Python documentation on sequence types](https://docs.python.org/3/library/stdtypes.html#typesseq), if you are interested. Common operations on sequence typesAll sequence types share some common operations, and some of them were already covered in this course. The following table is an excerpt taken from the [Python documentation on common operations](https://docs.python.org/3/library/stdtypes.html#common-sequence-operations). It shows some of the possible operations common to the sequence types (Note: in the table, `s` & `t` are sequence types and `i` is an integer). There is another important operation called slicing, which will be introduced in the next unit.| Operation | Return value || --------- | --------------------------------------------------------------------- || `len(s)` | Length of the sequence || `min(s)` | Smallest element of the sequence || `max(s)` | Largest element of the sequence || `s + t` | Concatenation of `s` and `t` || `i * s` | A new sequence consisting of the sequence `s` `i` times consecutively || `s[i]` | The element at index `i` of sequence `s` || `i in s` | Checks if the sequence `s` contains the element `i` |Not all of these operations work on all sequences. For example, a `range` cannot be multiplied with an integer to create a longer range.
###Code
list_l = [1, 2, 3, 4, 5, 6]
string_s = "This is a string, right?"
range_100 = range(0, 100, 2)
print("Number 4 in list?", 4 in list_l)
print("Number 10 in list?", 10 in list_l)
print("String contains s!", "s" in string_s)
print("The range contains the number 27:", 27 in range_100)
print("Those str" + "ings were concatenated")
## a range cannot be multiplied by an integer (TypeError), see the demonstration cell below
# print(5 * range_100)
print(min(list_l))
###Output
_____no_output_____ |
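###Markdown
As noted above, multiplication only works for some sequence types: multiplying a list or a string repeats it, while multiplying a `range` raises a `TypeError`. A short demonstration:
###Code
## i * s repeats lists and strings ...
print(3 * [1, 2])
print(2 * "ab")
## ... but a range cannot be multiplied
try:
    5 * range_100
except TypeError as error:
    print("Multiplying a range fails:", error)
###Output
_____no_output_____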
2021Q1_DSF/5.- Spark/notebooks/spark_sql/respuestas/05_dw_missing_values_con_respuestas.ipynb | ###Markdown
Missing ValuesMissing values in _pyspark_ are identified as _null_. The `isNull` method identifies the null records, and `isNotNull` the non-null ones.
###Code
from pyspark.sql import functions as F
vancouver_df = spark.read.csv(DATA_PATH + 'crime_in_vancouver.csv', sep=',', header=True, inferSchema=True)
vancouver_df.filter(F.col('NEIGHBOURHOOD').isNull()).show(4)
vancouver_df.filter(F.col('NEIGHBOURHOOD').isNotNull()).show(4)
###Output
_____no_output_____
###Markdown
Counting null values
###Code
vancouver_df.filter(F.col('NEIGHBOURHOOD').isNull()).count()
vancouver_df.filter(F.col('TYPE').isNull()).count()
###Output
_____no_output_____
###Markdown
Percentage of missing values per columnThe first method is less efficient than the second one, since it requires executing one action per column. As a general rule in Spark, you should try to perform the minimum number of actions.
###Code
n_rows_vancouver = vancouver_df.count()
###Output
_____no_output_____
###Markdown
__Method 1:__
###Code
%%time
for col in vancouver_df.columns:
n_missing = vancouver_df.filter(F.col(col).isNull()).count()
perc_missing = 100 * n_missing / n_rows_vancouver
print(col, round(perc_missing, 2))
###Output
_____no_output_____
###Markdown
__Method 2:__ For a single column
###Code
vancouver_df.select(F.round(F.sum(F.col('NEIGHBOURHOOD').isNull().cast('int')) * 100 / n_rows_vancouver, 2)\
.alias('NEIGHBOURHOOD')).show()
###Output
_____no_output_____
###Markdown
All columns
###Code
%%time
missing_ops = [F.round(F.sum(F.col(c).isNull().cast('int')) * 100 / n_rows_vancouver, 2).alias(c)
for c in vancouver_df.columns]
vancouver_df.select(missing_ops).show()
###Output
_____no_output_____
###Markdown
Dropping null records
The `dropna` method is used to remove records with nulls. The `subset` parameter selects which columns to scan for nulls, and the `how` parameter chooses the condition under which a record is dropped. By default, `how` is set to 'any'.
###Code
vancouver_df.dropna(how='all').count()
vancouver_df.dropna(how='any').count()
vancouver_no_missing_df = vancouver_df.dropna(subset=['HOUR', 'MINUTE'])
vancouver_no_missing_df.select(missing_ops).show()
###Output
_____no_output_____
###Markdown
Imputing null values
`fillna` fills the null values of the columns with a chosen fixed value.
###Code
vancouver_df.show(3)
###Output
_____no_output_____
###Markdown
Fill the null values of the `HOUR` and `MINUTE` columns with the value 0, and those of the `NEIGHBOURHOOD` column with 'Unknown'.
###Code
vancouver_df.fillna(0, subset=['HOUR', 'MINUTE']).show(3)
vancouver_df.fillna('Unknown', subset=['NEIGHBOURHOOD']).show(3)
###Output
_____no_output_____
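###Markdown
`fillna` also accepts a dictionary mapping column names to fill values, so several columns with different replacement values can be imputed in a single pass (a small sketch on the same dataframe):
###Code
vancouver_df.fillna({'HOUR': 0, 'MINUTE': 0, 'NEIGHBOURHOOD': 'Unknown'}).show(3)
###Output
_____no_output_____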
###Markdown
Exercise 1
Using the following dataframe
###Code
vancouver_df = spark.read.csv(DATA_PATH + 'crime_in_vancouver.csv', sep=',', header=True, inferSchema=True)
###Output
_____no_output_____
###Markdown
- a. Determine which column(s) have the largest number of nulls
- b. Fill the categorical variables that contain nulls with the most frequent value
- c. Drop the records with the largest number of nulls
- d. Fill the quantitative variables that contain nulls with the corresponding column means
###Code
# Answer
n_rows_vancouver = vancouver_df.count()
missing_ops = [F.round(F.sum(F.col(c).isNull().cast('int')) * 100 / n_rows_vancouver, 2).alias(c+"_PORCENTAJE_NULOS")
for c in vancouver_df.columns]
vancouver_df.select(missing_ops).show()
vancouver_df.printSchema()
# Answer
most_frequent_neighbourhood = vancouver_df.groupBy('NEIGHBOURHOOD').count().sort('count', ascending=False).first()['NEIGHBOURHOOD']
vancouver_df.fillna(most_frequent_neighbourhood, subset=['NEIGHBOURHOOD']).show(3)
# Answer
vancouver_df.withColumn('num_nulls', sum(vancouver_df[col].isNull().cast('int') for col in vancouver_df.columns)).show()
# Find the highest number of missing values in the registries
max_nulls = vancouver_df.withColumn('num_nulls', sum(vancouver_df[col].isNull().cast('int') for col in vancouver_df.columns)).select('num_nulls').sort(F.desc('num_nulls')).first()[0]
# Total number of columns
num_col = len(vancouver_df.columns)
# Set the limit for null removals per row
limit = num_col - max_nulls + 1
# Delete those registries with the most missing values
print("Number of rows after dropna: " + str(vancouver_df.dropna(thresh=limit).count()))
print("Number of initial rows: " + str(vancouver_df.count()))
# Answer
mean_hour = vancouver_df.agg(F.mean('HOUR')).first()[0]
mean_minute = vancouver_df.agg(F.mean('MINUTE')).first()[0]
vancouver_df.fillna(mean_hour, subset=['HOUR']).show(3)
vancouver_df.fillna(mean_minute, subset=['MINUTE']).show(3)
###Output
_____no_output_____
###Markdown
Exercise 2
Data source: https://www.kaggle.com/abhinav89/telecom-customer
1) Build a dictionary of the variables with the percentage of nulls each one contains. Sort it, in some form even if the output is no longer a dictionary, from the highest to the lowest percentage of nulls.
2) Treat the null data as you see fit, based on the business meaning you assign to each case and on how many nulls the column contains. Impute at least five columns as an example, justifying the substituted values from a business standpoint.
Hint: we will consider that a column adds no value if more than 40% of its values are null.
###Code
df = spark.read.csv(DATA_PATH + 'telecom_customer_churn.csv', sep=',', header=True, inferSchema=True)
df.count()
###Output
_____no_output_____
###Markdown
1) Build a dictionary of the variables with the percentage of nulls each one contains. Sort it, in some form even if the output is no longer a dictionary, from the highest to the lowest percentage of nulls.
###Code
# Answer
import pyspark.sql.functions as F
missing_ops = [F.round(F.sum(F.col(c).isNull().cast('int')) * 100 / df.count(), 2).alias(c)
for c in df.columns]
# Answer
null_values = df.select(missing_ops).first()
# Answer
with_null_values={}
for i, value in enumerate(null_values):
if value!=0:
with_null_values[df.columns[i]]=value
# Answer
sorted(with_null_values.items(), key=lambda x: x[1], reverse=True)
###Output
_____no_output_____
###Markdown
2) Treat the null data as you see fit, based on the business meaning you assign to each case and on how many nulls the column contains. Impute at least five columns as an example, justifying the substituted values from a business standpoint.
Hint: we will consider that a column adds no value if more than 40% of its values are null.
###Code
# Answer
# First we drop those variables that contain more than 40% nulls
for x in list(with_null_values.items()):  # iterate over a copy: the dict is mutated inside the loop
    if x[1] > 40:
        print("Dropping column:", x[0])
        df = df.drop(F.col(x[0]))
        with_null_values.pop(x[0])
# Answer
# Values that can't be imputed from business knowledge but whose columns have fewer than 40% nulls are filled with defaults
fill_cols_vals = {
"rev_Mean": 0,
"mou_Mean": 0,
"totmrc_Mean": 0,
"da_Mean": 0,
"ovrmou_Mean": 0,
"ovrrev_Mean": 0,
"vceovr_Mean": 0,
"datovr_Mean": 0,
"roam_Mean": 0,
"change_mou": 0,
"change_rev": 0,
"eqpdays": 0,
"forgntvl": 0,
"avg6qty": 0,
"avg6rev": 0,
"avg6mou": 0,
"kid16_17": "U",
"kid11_15": "U",
"kid6_10": "U",
"kid3_5": "U",
"kid0_2": "U",
}
df = df.na.fill(fill_cols_vals)
df = df.na.drop("any", subset=["HHstatin", "dwllsize", "creditcd", "ownrent", "marital",
"rv", "truck", "hnd_webcap", "models",
"phones", "hnd_price", "refurb_new", "dualband",
"area", "prizm_social_one", "dwlltype", "lor",
"income", "adults", "infobase"])
###Output
_____no_output_____ |
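###Markdown
Finally, a quick check (a small sketch) that the cleaning above left no nulls behind in the remaining columns:
###Code
remaining_nulls = [F.sum(F.col(c).isNull().cast('int')).alias(c) for c in df.columns]
df.select(remaining_nulls).show()
###Output
_____no_output_____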
samples/secret-santa/secret-santa.ipynb | ###Markdown
###Markdown
Azure Quantum Optimization Sample: Secret Santa
This sample walks through how to solve the Secret Santa problem using Azure Quantum. The scenario is defined as follows:
- Vincent, Tess, and Uma each write their name on a slip of paper and place the paper in a jar.
- Everybody then draws a slip of paper from the jar at random.
- Each person buys a small gift and writes a poem for the person whose name they have drawn.
- If they draw their own name, they return the slip of paper and re-draw.
> **Note:**
> The inspiration for this scenario came from Vincent's blog post ([found here](https://vincent.frl/quantum-secret-santa/)), which demonstrates how to use [Q# and the QDK](https://docs.microsoft.com/azure/quantum/overview-what-is-qsharp-and-qdk) to solve this scenario. In this sample, we will make use of the [Azure Quantum QIO service](https://docs.microsoft.com/azure/quantum/optimization-what-is-quantum-optimization) to solve the same problem.

Introduction: binary optimization
Binary optimization problems take the general form:
$\text{Minimize: } \sum_{k} term_k = \sum_k c_k \prod_{i} x_i \text{ where } x_i \in \{ 0,1 \} \text{ or } \{ -1 , 1 \} \text{ and } c_k \in \mathbb{R} $
Our job is to define a mathematical representation of our problem in this binary format and then use Azure Quantum to solve it.
For example, the problem shown below:
$13 + 17x_0 + 23x_1x_3x_{77},$
would be represented by the following Terms in the Azure Quantum Python SDK:
```python
terms = [Term(c = 13.0, indices = []), Term(c = 17.0, indices = [0]), Term(c = 23.0, indices = [1, 3, 77])]
```
> **Note:** See [this documentation page](https://docs.microsoft.com/azure/quantum/quickstart-microsoft-qio?pivots=platform-microsoftexpress-a-simple-problem) for further information.

Binary formulation for the Secret Santa problem
To represent the Secret Santa problem, we can use six binary variables, as outlined in the Scenario Table below:
|- buys ->|**Vincent**|**Tess**|**Uma**|
|--|--|--|--|
|**Vincent**|--|$x_0$|$x_1$|
|**Tess**|$x_2$|--|$x_3$|
|**Uma**|$x_3$|$x_4$|--|
The constraints for the problem can be expressed as doing logical ANDs ($ \land $) of variables that are EXCLUSIVE-ORd ($ \oplus $) together, like this:
$
( x_0 \oplus x_1 ) \land ( x_2 \oplus x_3 ) \land ( x_4 \oplus x_5 ) \land ( x_2 \oplus x_4 ) \land ( x_0 \oplus x_5 ) \land ( x_1 \oplus x_3 )
$
$
\text{ where } x_i \in \{ 0,1 \}
$
The truth table for exclusive or ($ \oplus $) is shown below (one variable or the other is **one**, but not both):
|$x_0$|$x_1$|$x_0 \oplus x_1$|
|--|--|--|
|0|0|0|
|0|1|1|
|1|0|1|
|1|1|0|
Using this truth table, we can see how the constraints are derived. Looking at the Scenario Table defined previously:
- Reading the first **row** of the table, Vincent may buy a gift and write a poem for Tess or for Uma, but not both.
- Reading the first **column** of the table, Vincent may receive a gift and poem from Tess or from Uma, but not both.
More generally:
- Each person should give and receive **exactly one** gift from one other person in the group.
- If a person gives more or less than one gift, or receives more or less than one gift, this constraint has been violated and the solution will not be valid.

So, how do we represent $ ( x_0 \oplus x_1 ) $ in a binary format that Azure Quantum will understand?
Keeping in mind that we want to **minimize** our cost function, let's try to use the following representation:
$ ( x_0 + x_1 - 1 )^2 $
Let's check the truth table for this formulation:
|$x_0$|$x_1$|$(x_0 + x_1 - 1)^2$|
|--|--|--|
|0|0|1|
|0|1|0|
|1|0|0|
|1|1|1|
As you can see, in rows where there is exactly one $1$, the result is $0$. This means the penalty applied in those situations will be $0$. Since we want to minimize the cost function, getting $0$ for the answers we want is the correct result.
We are almost there! The next step is to do a [quadratic expansion of this formula](https://en.wikipedia.org/wiki/Polynomial). This leaves us with the following expanded formula:
$ x_0^2 + x_1^2 + 2x_0x_1 - 2x_0 - 2x_1 + 1 $
We build up the Terms in the helper function `build_terms` shown below, but instead of using $x_0$ and $x_1$, we use the indices for our variables instead ($i$ and $j$).
So for example, $x_0 \oplus x_1$ (where $i = 0$ and $j = 1$) would translate to `build_terms(0, 1)`.
###Code
# Import required modules
from typing import List
from azure.quantum import Workspace
from azure.quantum.optimization import Problem, ProblemType, Term, SimulatedAnnealing
def build_terms(i: int, j: int):
"""
Construct Terms for a row or a column (two variables) of the Secret Santa matrix
Arguments:
i (int): index of first variable
j (int): index of second variable
"""
terms = [] # Initialize empty terms list
terms.append(Term(c = 1.0, indices = [i, i])) # x(i)^2
terms.append(Term(c = 1.0, indices = [j, j])) # x(j)^2
terms.append(Term(c = 2.0, indices = [i, j])) # 2x(i)x(j)
terms.append(Term(c = -2.0, indices = [i])) # -2x(i)
terms.append(Term(c = -2.0, indices = [j])) # -2x(j)
terms.append(Term(c = 1.0, indices = [])) # +1
return terms
###Output
_____no_output_____
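###Markdown
Before moving on, a quick sanity check (a sketch, independent of the solver) confirms that the quadratic penalty $(x_0 + x_1 - 1)^2$ reproduces the XOR truth table above:
###Code
for x0 in (0, 1):
    for x1 in (0, 1):
        print(x0, x1, (x0 + x1 - 1) ** 2)  # penalty is 0 exactly when one of the two variables is 1
###Output
_____no_output_____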
###Markdown
We have one more helper function, which takes the answer returned from the service and interprets it in a human-readable way based on the Scenario Table, above.
###Code
def print_results(config: dict
):
"""
print results of run
    Arguments:
config (dictionary): config returned from solver
"""
result = {
'0': 'Vincent buys Tess a gift and writes her a poem.',
'1': 'Vincent buys Uma a gift and writes her a poem.',
'2': 'Tess buys Vincent a gift and writes him a poem.',
'3': 'Tess buys Uma a gift and writes her a poem.',
'4': 'Uma buys Vincent a gift and writes him a poem.',
'5': 'Uma buys Tess a gift and writes her a poem.'}
for key, val in config.items():
if val == 1:
print(result[key])
###Output
_____no_output_____
###Markdown
Bringing it all together:
###Code
# Sign into your Azure Quantum workspace
# Copy the settings for your workspace below
workspace = Workspace(
subscription_id = "",
resource_group = "",
name = "",
location = ""
)
"""
build secret santa matrix
Vincent Tess Uma
Vincent - x(0) x(1)
Tess x(2) - x(3)
Uma x(4) x(5) -
"""
# row 0 + row 1 + row 2
terms = build_terms(0, 1) + build_terms(2, 3) + build_terms(4, 5)
# + col 0 + col 1 + col 2
terms = terms + build_terms(2, 4) + build_terms(0, 5) + build_terms(1, 3)
print(f'Terms: {terms}\n')
problem = Problem(name = 'secret santa', problem_type = ProblemType.pubo, terms = terms)
solver = SimulatedAnnealing(workspace, timeout = 2)
print('Submitting problem to Azure Quantum')
result = solver.optimize(problem)
print(f'\n\nResult: {result}\n')
print('Human-readable solution:')
print_results(result['configuration'])
###Output
_____no_output_____ |
20180908_AdvancedAlgorithmicTrading_ch10/Autoregressive.ipynb | ###Markdown
AR(1)
___
Let's set ${ \alpha }_{ 1 }=0.6$ and generate a time series: $$x_{ t }={ 0.6x }_{ t-1 }+{ w }_{ t }$$
###Code
# Imports used throughout this notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm

np.random.seed(1)
proc = sm.tsa.ArmaProcess([1, -0.6], [1])
samples = proc.generate_sample(100)
plt.plot(samples, 'o-')
plt.title("Realisation of AR(1) Model, with α1 = 0.6")
plt.xlabel("index")
plt.ylabel("x")
plt.show()
sm.graphics.tsa.plot_acf(samples, lags=20)
plt.title("Correlogram")
plt.xlabel("Lag")
plt.ylabel("ACF")
plt.show()
###Output
_____no_output_____
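###Markdown
Note: `sm.tsa.ArmaProcess` takes the coefficients of the AR lag polynomial $1 - \alpha_1 L$, which is why $\alpha_1 = 0.6$ is passed as `[1, -0.6]` above.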
###Markdown
Let's estimate α1 from the time series generated above.
###Code
arma = sm.tsa.ARMA(samples, (1, 0))
ret = arma.fit(disp=False)
print(ret.summary())
print("\n데이터에서 α1의 추정: {0}".format(ret.arparams[0]))
print("\n추정된 계수의 신뢰구간(95%): {0}".format(ret.conf_int(alpha=0.05)[1]))
###Output
ARMA Model Results
==============================================================================
Dep. Variable: y No. Observations: 100
Model: ARMA(1, 0) Log Likelihood -128.899
Method: css-mle S.D. of innovations 0.877
Date: Fri, 07 Sep 2018 AIC 263.797
Time: 23:00:54 BIC 271.613
Sample: 0 HQIC 266.960
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const 0.1650 0.185 0.893 0.374 -0.197 0.527
ar.L1.y 0.5304 0.085 6.245 0.000 0.364 0.697
Roots
=============================================================================
Real Imaginary Modulus Frequency
-----------------------------------------------------------------------------
AR.1 1.8855 +0.0000j 1.8855 0.0000
-----------------------------------------------------------------------------
Estimated α1 from the data: 0.530351737665234
95% confidence interval of the estimated coefficient: [0.3639079 0.69679557]
###Markdown
Now let's set ${ \alpha }_{ 1 }=-0.6$ and repeat the same steps: $$x_{ t }={ -0.6x }_{ t-1 }+{ w }_{ t }$$
###Code
np.random.seed(1)
proc = sm.tsa.ArmaProcess([1, 0.6], [1])
samples = proc.generate_sample(100)
plt.plot(samples, 'o-')
plt.title("Realisation of AR(1) Model, with α1 = -0.6")
plt.xlabel("index")
plt.ylabel("x")
plt.show()
sm.graphics.tsa.plot_acf(samples, lags=20)
plt.title("Correlogram")
plt.xlabel("Lag")
plt.ylabel("ACF")
plt.show()
arma = sm.tsa.ARMA(samples, (1, 0))
ret = arma.fit(disp=False)
print(ret.summary())
print("\n데이터에서 α1의 추정: {0}".format(ret.arparams[0]))
print("\n추정된 계수의 신뢰구간(95%): {0}".format(ret.conf_int(alpha=0.05)[1]))
###Output
_____no_output_____
###Markdown
AR(2)
___
Let's set ${ \alpha }_{ 1 }=0.666$ and ${ \alpha }_{ 2 }=-0.333$ and generate a time series: $$x_{ t }={ 0.666x }_{ t-1 }-0.333{ x }_{ t-2 }+{ w }_{ t }$$
###Code
np.random.seed(1)
proc = sm.tsa.ArmaProcess([1, -0.666, 0.333], [1])
samples = proc.generate_sample(100)
plt.plot(samples, 'o-')
plt.title("Realisation of AR(2) Model, with α1 = 0.666, α2 = -0.333")
plt.xlabel("index")
plt.ylabel("x")
plt.show()
sm.graphics.tsa.plot_acf(samples, lags=20)
plt.title("Correlogram")
plt.xlabel("Lag")
plt.ylabel("ACF")
plt.show()
arma = sm.tsa.ARMA(samples, (2, 0))
ret = arma.fit(disp=False)
print(ret.summary())
print("\n데이터에서 α1의 추정: {0}".format(ret.arparams[0]))
print("데이터에서 α2의 추정: {0}".format(ret.arparams[1]))
print("\n추정된 α1의 신뢰구간(95%): {0}".format(ret.conf_int(alpha=0.05)[1]))
print("추정된 α2의 신뢰구간(95%): {0}".format(ret.conf_int(alpha=0.05)[2]))
###Output
_____no_output_____
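###Markdown
As a side check (a small sketch), the stationarity of this AR(2) can be verified directly from its lag polynomial: the process is stationary when the polynomial's roots lie outside the unit circle.
###Code
# isstationary inspects the roots of the AR lag polynomial
print(sm.tsa.ArmaProcess([1, -0.666, 0.333], [1]).isstationary)
###Output
_____no_output_____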
###Markdown
Financial Data - Amazon
___
Let's analyze the Amazon data with an AR(2) model.
###Code
day_data = pd.read_csv("AMZN.csv")
day_data["Date"] = pd.to_datetime(day_data["Date"], format='%Y-%m-%d')
day_data = day_data.set_index("Date", inplace=False)
day_data["Close"].plot()
plt.title("Close of Amazon")
plt.xlabel("date")
plt.ylabel("price")
plt.show()
day_data['log_return'] = np.log(day_data['Close']).diff()
day_data = day_data.dropna()
day_data['log_return'].plot()
plt.title("First Order Differenced Daily Logarithmic Returns of AMZN Closing Prices")
plt.xlabel("date")
plt.ylabel("price")
plt.show()
sm.graphics.tsa.plot_acf(day_data['log_return'], lags=30)
plt.title("Correlogram")
plt.xlabel("Lag")
plt.ylabel("ACF")
plt.show()
arma = sm.tsa.ARMA(day_data['log_return'], (2, 0))
ret = arma.fit(disp=False)
print(ret.summary())
print("\n데이터에서 α 추정")
print(ret.arparams)
print("\n추정된 α의 신뢰구간(95%)")
print(ret.conf_int(alpha=0.05))
###Output
_____no_output_____
###Markdown
Financial Data - S&P 500___S&P 500의 데이터를 AR(2)로 분석해보자.
###Code
day_data = pd.read_csv("GSPC.csv")
day_data["Date"] = pd.to_datetime(day_data["Date"], format='%Y-%m-%d')
day_data = day_data.set_index("Date", inplace=False)
day_data["Close"].plot()
plt.title("Close of S&P 500")
plt.xlabel("date")
plt.ylabel("price")
plt.show()
day_data['log_return'] = np.log(day_data['Close']).diff()
day_data = day_data.dropna()
day_data['log_return'].plot()
plt.title("First Order Differenced Daily Logarithmic Returns of S&500 Closing Prices")
plt.xlabel("date")
plt.ylabel("price")
plt.show()
sm.graphics.tsa.plot_acf(day_data['log_return'], lags=30)
plt.title("Correlogram")
plt.xlabel("Lag")
plt.ylabel("ACF")
plt.show()
arma = sm.tsa.ARMA(day_data['log_return'], (22, 0))
ret = arma.fit(disp=False)
print(ret.summary())
print("\n데이터에서 α 추정")
print(ret.arparams)
print("\n추정된 α의 신뢰구간(95%)")
print(ret.conf_int(alpha=0.05))
###Output
_____no_output_____ |
embeddinglayer.ipynb | ###Markdown
###Code
import tensorflow as tf
import numpy as np
x_data = np.array([[4], [20]])
x_data, x_data.shape
model = tf.keras.Sequential()
model.add(tf.keras.layers.Embedding(50,2, input_length=1)) # input Layer
# model.add() # hidden layer
# model.add() # output layer
model.compile(optimizer='adam', loss='mse')
model.predict(x_data)
###Output
_____no_output_____
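###Markdown
The `Embedding` layer maps each integer id in `[0, 50)` to a learned 2-dimensional vector, so `predict` on input of shape `(2, 1)` returns an array of shape `(2, 1, 2)`. A quick check:
###Code
# Each id becomes one 2-dimensional embedding vector
print(model.predict(x_data).shape)  # (2, 1, 2)
###Output
_____no_output_____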
OPeNDAP-GDAS.ipynb | ###Markdown
Draw a global analysis by NOAA GDAS using OPeNDAP
31-Dec-2020
At the very end of 2020, an extremely strong cold air mass formed, and it was [widely discussed](https://news.yahoo.co.jp/articles/c1dc42cc2bb7cebfe640f43eff655272c41e8f48?fbclid=IwAR1fJRafEohgT5kUtMFppF5sQl0obk0wsIJiYRKsrCjeI6rvnu3KWpuzLb0) that [the sea-level-corrected pressure in Mongolia may have reached a record(?) 1084 hPa](https://twitter.com/FPappenberger/status/1344318861659824129) at 21 UTC on 29 December. So let's visualize the near-surface air temperature with NOAA's numerical weather prediction analysis data.
※ This is a slightly updated version of material used in 2019 for an intensive course at the Arid Land Research Center, Tottori University; it originally imitated the figure of 29 January 2019 on the NASA Earth Observatory website (https://earthobservatory.nasa.gov/images/144489/arctic-weather-plunges-into-north-america).
In this notebook we fetch analysis data (Global Data Assimilation System, GDAS) from the NOAA server that publishes near-real-time numerical forecasts via [OPeNDAP](https://www.opendap.org) and plot it. We use `cartopy` for drawing the map and the multidimensional data library `xarray` for data access (※ the same can be done with `netCDF4` instead of `xarray`).
###Code
%matplotlib inline
import datetime as dt
import pytz
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
###Output
_____no_output_____
###Markdown
The NOAA website https://nomads.ncep.noaa.gov provides the latest numerical forecast data, some of it accessible in OPeNDAP form. Inspecting the site shows that the 0.25°-resolution GDAS analysis can be accessed through OPeNDAP at a URL of the form http://nomads.ncep.noaa.gov:80/dods/gdas_0p25/gdas20201230/gdas_0p25_18z so we first build the URL string for a given date.
###Code
# For near-real-time data acquisition you could do something like this:
# d = dt.datetime.now(pytz.utc) - dt.timedelta(hours=9) # 9-hour for a latency
# hr = int(d.hour / 6) * 6
d = dt.datetime(2020, 12, 29, 18, 0) # hour must be one of 0, 6, 12, 18
hr = d.hour
d
url = 'http://nomads.ncep.noaa.gov:80/dods/gdas_0p25/gdas{YMD}/gdas_0p25_{H:02}z'.format(YMD=d.strftime('%Y%m%d'),H=d.hour)
url
###Output
_____no_output_____
###Markdown
Fetch the data from this URL using `xarray`. (The second line is commented out because printing the dataset makes the GitHub rendering verbose.)
###Code
gdas_xr = xr.open_dataset(url)
# gdas_xr
###Output
C:\Users\taichu\miniconda3\envs\atmos_anaconda\lib\site-packages\xarray\coding\times.py:83: SerializationWarning: Ambiguous reference date string: 1-1-1 00:00:0.0. The first value is assumed to be the year hence will be padded with zeros to remove the ambiguity (the padded reference date string is: 0001-1-1 00:00:0.0). To remove this message, remove the ambiguity by padding your reference date strings with zeros.
warnings.warn(warning_msg, SerializationWarning)
###Markdown
The near-surface air temperature variable is named `tmp2m`, so let's visualize it with `cartopy` and `matplotlib`.
###Code
fig = plt.figure(figsize=(10.0, 10.0))
proj = ccrs.Orthographic(135.0, 50.0)
ax = fig.add_subplot(1, 1, 1, projection=proj)
ax.set_global()
ax.coastlines(resolution='50m', linewidth=0.3)
t = 3
tmp2m = gdas_xr.tmp2m.isel(time=t) - 273.15
tmp2m.plot(ax=ax, cmap='RdBu_r', vmin=-40.0, vmax=40.0,
transform=ccrs.PlateCarree(),
cbar_kwargs={'shrink': 0.6})
ax.set_title(f'GDAS 2-m temperature : {gdas_xr.time[t].dt.strftime("%Y-%m-%d %H:%Mz").data}',
fontsize='x-large')
###Output
_____no_output_____
###Markdown
Save the figure as an image.
###Code
fig.savefig('GDAS_tmp2m_{}.png'.format(gdas_xr.time[t].dt.strftime("%Y%m%d%H%M").data), dpi=300, bbox_inches='tight', pad_inches=0)
###Output
_____no_output_____
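###Markdown
To see what else the GDAS dataset exposes, you can list a few of its variables and the time axis (a quick check, assuming the OPeNDAP connection above succeeded):
###Code
print(list(gdas_xr.data_vars)[:10])  # first few variable names, e.g. 'tmp2m'
print(gdas_xr.time.values[:4])       # available analysis/forecast times
###Output
_____no_output_____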
###Markdown
Making an animation
Putting the code together, let's create an animation over the range of available data. (Generating this animation takes a long time, so after some trial and error the time step is set to every 2 hours.)
###Code
import matplotlib.animation as animation
d_start = dt.datetime(2020, 12, 26, 0, 0)
d_end = dt.datetime(2021, 1, 1, 0, 0)
# d_end = d_start + dt.timedelta(hours=3) # Test
def nomads_gdas_url(d):
hr = int(d.hour / 6) * 6
url = 'http://nomads.ncep.noaa.gov:80/dods/gdas_0p25/gdas{YMD}/gdas_0p25_{H:02}z'.format(YMD=d.strftime('%Y%m%d'),H=hr)
return url
def plot_tmp2m_globe(d: dt.datetime, ims: list, gdas_xr: xr.core.dataset.Dataset) -> list:
# plt.cla()
proj = ccrs.Orthographic(135.0, 50.0)
ax = fig.add_subplot(1, 1, 1, projection=proj)
if d == d_start:
ax.set_global()
ax.coastlines(resolution='50m', linewidth=0.2)
t = d.hour % 6
tmp2m = gdas_xr.tmp2m.isel(time=t) - 273.15
if d == d_start:
im = tmp2m.plot(ax=ax, cmap='RdBu_r', vmin=-40.0, vmax=40.0,
transform=ccrs.PlateCarree(), animated=True,
cbar_kwargs={'shrink': 0.6, 'extend': 'both'})
else:
im = tmp2m.plot(ax=ax, cmap='RdBu_r', vmin=-40.0, vmax=40.0,
transform=ccrs.PlateCarree(), animated=True,
add_colorbar=False)
ax.set_title('')
title_string = 'GDAS 2-m temperature : ' + gdas_xr.time[t].dt.strftime("%Y-%m-%d %H:%Mz").data
ttl = plt.text(0.5, 1.01, title_string, fontsize='x-large', horizontalalignment='center',
verticalalignment='bottom', transform=ax.transAxes)
ims.append([ax, im, ttl])
return ims
fig = plt.figure(figsize=(9.6, 5.4), tight_layout=True)
d = d_start
delta_t = 2
ims = []
while d <= d_end:
print(d) # Print the datetime for checking the progress.
t = d.hour % 6
if t == 0 or d == d_start:
gdas_xr = xr.open_dataset(nomads_gdas_url(d))
ims = plot_tmp2m_globe(d, ims, gdas_xr)
d += dt.timedelta(hours=delta_t)
print('Drawing loop Finished.')
ani = animation.ArtistAnimation(fig, ims, interval=100)
print('Saving the animation to MP4.')
writer = animation.FFMpegWriter(fps=15, metadata=dict(artist='Me'), bitrate=3600)
ani.save(f"GDAS_tmp2m_{d_start.strftime('%Y%m%d%H%M')}-{d_end.strftime('%Y%m%d%H%M')}.mp4", writer=writer, dpi=200)
###Output
2020-12-26 00:00:00
###Markdown
Render the animation inline as JavaScript.
###Code
# from IPython.display import HTML
# HTML(ani.to_jshtml())
###Output
_____no_output_____ |
Welcome_To_Colaboratory_Francesca_Lingo.ipynb | ###Markdown
What is Colaboratory?
Colaboratory, or "Colab" for short, allows you to write and execute Python in your browser, with
- Zero configuration required
- Free access to GPUs
- Easy sharing
Whether you're a **student**, a **data scientist** or an **AI researcher**, Colab can make your work easier. Watch [Introduction to Colab](https://www.youtube.com/watch?v=inN8seMm7UI) to learn more, or just get started below!
**Getting started**
The document you are reading is not a static web page, but an interactive environment called a **Colab notebook** that lets you write and execute code.
For example, here is a **code cell** with a short Python script that computes a value, stores it in a variable, and prints the result:
###Code
seconds_in_a_day = 24 * 60 * 60
seconds_in_a_day
###Output
_____no_output_____
###Markdown
To execute the code in the above cell, select it with a click and then either press the play button to the left of the code, or use the keyboard shortcut "Command/Ctrl+Enter". To edit the code, just click the cell and start editing.Variables that you define in one cell can later be used in other cells:
###Code
seconds_in_a_week = 7 * seconds_in_a_day
seconds_in_a_week
###Output
_____no_output_____
###Markdown
Colab notebooks allow you to combine **executable code** and **rich text** in a single document, along with **images**, **HTML**, **LaTeX** and more. When you create your own Colab notebooks, they are stored in your Google Drive account. You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them. To learn more, see [Overview of Colab](/notebooks/basic_features_overview.ipynb). To create a new Colab notebook you can use the File menu above, or use the following link: [create a new Colab notebook](http://colab.research.google.com#create=true).
Colab notebooks are Jupyter notebooks that are hosted by Colab. To learn more about the Jupyter project, see [jupyter.org](https://www.jupyter.org).
Data science
With Colab you can harness the full power of popular Python libraries to analyze and visualize data. The code cell below uses **numpy** to generate some random data, and uses **matplotlib** to visualize it. To edit the code, just click the cell and start editing.
###Code
import numpy as np
from matplotlib import pyplot as plt
ys = 200 + np.random.randn(100)
x = [x for x in range(len(ys))]
plt.plot(x, ys, '-')
plt.fill_between(x, ys, 195, where=(ys > 195), facecolor='g', alpha=0.6)
plt.title("Sample Visualization")
plt.show()
###Output
_____no_output_____ |
Quantitative Finance Lectures/time_series_momentum.ipynb | ###Markdown
Trend-following (or Time Series Momentum) Signals by Gustavo Soares
In this notebook you will apply a few things you learned in the [FinanceHub's Python lectures](https://github.com/Finance-Hub/FinanceHubMaterials/tree/master/Python%20Lectures):
* You will use and manipulate different kinds of variables in Python such as text variables, booleans, floats, dictionaries, lists, etc.;
* We will also use `Pandas.DataFrame` objects and methods, which are very useful in manipulating financial time series;
* You will use if statements, loops, and list comprehensions; and
* You will use [FinanceHub's Bloomberg tools](https://github.com/Finance-Hub/FinanceHub/tree/master/bloomberg) for fetching data from a [Bloomberg terminal](https://data.bloomberglp.com/professional/sites/10/LUISS_2018Primer.pdf). If you are using this notebook within BQuant (or BQNT), you may want to use BQL for getting the data.

Introduction
Trend-following is one of the most prevalent quantitative strategies, and it applies to several markets such as equity indices, currencies, and futures markets such as commodities and bond futures. Trend-following (TF) or Time Series Momentum (TSM) refers to the predictability of past returns on future returns and is the focus of several influential studies. The best overall academic summary of trend-following can be found in the paper [Moskowitz, Ooi, and Pedersen (2012)](https://doi.org/10.1016/j.jfineco.2011.11.003), where the authors document significant momentum in monthly returns. There they find the strongest results for relatively short holding periods (1 or 3 months) and mid-range lookback periods such as 9 and 12 months.
There are several ways of identifying trends. The two most common ones are time series momentum (past excess returns) and moving average crossovers. [Levine and Pedersen (2015)](https://ssrn.com/abstract=2603731) show both measures are closely connected. In fact, they are equivalent representations in their most general forms, and they also capture many other types of filters such as the HP filter, the Kalman filter, and all other linear filters. If we think of the log-price of asset $i$ in period $t$ as $p_{i,t}$, they show that trend indicators can be generally understood as:
$$TSMOM_{i,t} = \sum_{s=1}^{\infty} c_{s}(p_{i,t-s+1} - p_{i,t-s})$$
where $c_{s}$ is a set of weights. For instance, one might use $c_{s}$ to weigh more recent price changes more heavily in assessing the current price trend, and even allow for reversals in some frequencies with negative weights.
In the section [Momentum signals](#signals) below we discuss commonly used momentum signals.
Before we start: *import Python packages and upload tracker time series data*
Before we get started, let's import some Python packages we are going to need. Also, let's upload time series data that has been properly constructed to reflect the cumulative excess returns of the underlying assets. If you are not quite sure what that means, check out our materials on constructing **tracker** time series for trading [currencies](https://github.com/Finance-Hub/FinanceHubMaterials/blob/master/Quantitative%20Finance%20Lectures/creating_fx_time_series_fh.ipynb), [futures](https://github.com/Finance-Hub/FinanceHubMaterials/blob/master/Quantitative%20Finance%20Lectures/rolling_futures_time_series.ipynb), and [interest rates swaps](https://github.com/Finance-Hub/FinanceHubMaterials/blob/master/Quantitative%20Finance%20Lectures/swap_historical_returns.ipynb)!
Our tracker data consists of 112 financial time series.
Each one of them starts at a different date. The longest time series has 5350 data points and the shortest one has only 1736 data points. The first column of the file is the date column, and we will use that as the dataframe index:
###Code
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
%matplotlib inline
tracker_data = pd.read_csv('set_of_trackers.csv',index_col=0)
###Output
_____no_output_____
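###Markdown
Before turning to specific signals, here is a minimal sketch of the general weighted form above, $TSMOM_{i,t} = \sum_{s} c_{s}(p_{i,t-s+1} - p_{i,t-s})$, using (purely as an illustrative assumption) exponentially decaying weights over the last 252 daily log-price changes:
###Code
def weighted_tsmom(ts, lookback=252, decay=0.99):
    """Weighted sum of past log-price changes; more recent changes get larger weights."""
    dp = np.log(ts.astype(float)).diff()    # daily log-price changes
    w = decay ** np.arange(lookback)[::-1]  # oldest change -> smallest weight, newest -> 1
    return dp.rolling(lookback).apply(lambda x: float(np.dot(x, w)), raw=True)

weighted_tsmom(tracker_data.iloc[:, 0]).plot(title='Weighted TSMOM sketch', figsize=(20, 6))
plt.show()
###Output
_____no_output_____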
###Markdown
Momentum signals
Classic momentum
The classic definition of momentum is simply the percentage change in the financial time series over $h$ time periods. Sometimes this classic definition is calculated with log-percentage changes, i.e. the difference between the log of today's price and the log of the price $h$ periods ago.
The function below calculates these two types of classic momentum:
###Code
def momentum(ts, h=252, signal_type='classic'):
df = ts.astype(float).shift(1) # note the day lag!!
df.index = pd.to_datetime(df.index)
if signal_type=='classic_log':
df_mom = np.log(df).diff(h)
else:
if signal_type != 'classic':
print('Momentum signal type was not recognized! assuming default')
df_mom = df.pct_change(h)
return df_mom
ts = tracker_data.iloc[:,0]
mom = momentum(ts, h=252, signal_type='classic')
mom.plot(title='Classic momentum signal',figsize=(20,10))
plt.show()
###Output
_____no_output_____
###Markdown
Momentum in equities: twelve-minus-one momentum
In equities, the classic momentum measure is typically a bit different. There is empirical evidence (e.g., Jegadeesh and Titman (1993)) that short-term momentum actually has a reversal effect, whereby the previous winners (measured over the past period) do poorly the next period, while the previous losers (measured over the past period) do well the next month. Most academic papers in equities ignore the previous period's return in the momentum calculation, even though it only has a marginal effect on the performance of momentum strategies. Here, we do not apply the last-period "drop" by default, but you should know this is how momentum is typically defined in equities, such as in [Kenneth French's Data Library](https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html).
Just for completion, let's add that definition to our momentum function as signal type *classic_minus_1*:
###Code
def momentum(ts, h=252, signal_type='classic'):
df = ts.astype(float).shift(1) # note the day lag!!
df.index = pd.to_datetime(df.index)
if signal_type=='classic_log':
df_mom = np.log(df).diff(h)
elif signal_type=='classic_minus_1':
        df_mom = np.log(df).diff(h).shift(21)  # 12-month log return ending one month ago (drops the most recent month)
else:
if signal_type != 'classic':
print('Momentum signal type was not recognized! assuming default')
df_mom = df.pct_change(h)
return df_mom
###Output
_____no_output_____
###Markdown
Volatility adjustment
Since volatility varies across assets, it is common for momentum signals to be *volatility adjusted*, i.e., for momentum signals to be scaled at each point in time by some measure of *ex-ante* volatility. For example, [Moskowitz, Ooi, and Pedersen (2012)](https://doi.org/10.1016/j.jfineco.2011.11.003) use the lagged 60-day half-life EWMA of squared daily returns to run their regression analysis. As discussed in **Designing robust trend-following systems** by the Global Quantitative & Derivatives Strategy team at JP Morgan, this also allows us to interpret the signal as a statistical test of whether the mean return of an asset is positive or negative, i.e., a *t-test*.
Of course, the volatility adjustment depends crucially on the volatility measure used.
###Code
def get_lagged_vol(ts, h=21, vol_type='ewma', min_vol_window=756, halflife=60):
# clean up
lagged_data = ts.astype(float).shift(1) # note the day lag!!
lagged_data.index = pd.DatetimeIndex(pd.to_datetime(lagged_data.index))
if vol_type == 'ewma': # From Moskowitz, Ooi, and Pedersen (2012) and roughly similar to a GARCH(1, 1) model
vols = np.sqrt(((np.log(lagged_data).diff(1)**2).ewm(halflife=halflife).mean())*252)
else:
if vol_type != 'rolling':
print('vol_type not recognized, assuming rolling window of %s bdays over %s bday returns' % (min_vol_window,h))
vols = np.log(lagged_data).diff(h).rolling(window=min_vol_window).std()*np.sqrt(252/h)
return vols
ts = tracker_data.iloc[:,0]
ewma_vol = get_lagged_vol(ts)
rolling_vol = get_lagged_vol(ts,h=1,vol_type='rolling',min_vol_window=(3*21))
pd.concat([ewma_vol.to_frame('ewma'),rolling_vol.to_frame('rolling_3M_on_daily_returns')]
,axis=1,sort=True).plot(title='Ex-ante volatility measures',figsize=(20,10))
plt.show()
###Output
_____no_output_____
###Markdown
Let's add the option of volatility adjusting our momentum signal to our momentum function:
###Code
def momentum(ts, h=252, signal_type='classic', vol_adjust=False):
df = ts.astype(float).shift(1) # note the day lag!!
df.index = pd.to_datetime(df.index)
if signal_type=='classic_log':
df_mom = np.log(df).diff(h)
elif signal_type=='classic_minus_1':
df_mom = np.log(df).diff(h).shift(21)
else:
if signal_type != 'classic':
print('Momentum signal type was not recognized! assuming default')
df_mom = df.pct_change(h)
if vol_adjust: # this will be true whethere the parameter vol_adjust is a boolean equal to True or if it's a string
if isinstance(vol_adjust,bool): # check if boolean
vols = get_lagged_vol(ts)
else:
vols = get_lagged_vol(ts,vol_type=vol_adjust)
return df_mom/vols
else:
return df_mom
ts = tracker_data.iloc[:,0]
vol_adjusted_mom = momentum(ts, h=252, signal_type='classic',vol_adjust=True)
vol_adjusted_mom.plot(title='Vol-adjusted by ewma momentum signal',figsize=(20,10))
plt.show()
###Output
_____no_output_____
###Markdown
Momentum signal based on the t-stat's p-value
If we can interpret the momentum signal adjusted by volatility as a *t-test*, then, following **Designing robust trend-following systems** by the Global Quantitative & Derivatives Strategy team at JP Morgan, we can map the final signal to the strength of the *p-value* of the t-test. The authors suggest using the cdf of the standard normal distribution $\Phi$:
$$S_{i,t}(W) = 2 \times \Phi (MOM_{t}) - 1$$
to map the signal into the interval [-1, 1], which makes signals directly comparable across assets.
In addition to the p-value strength interpretation of the signal, we can also interpret it as the delta of a straddle with specific input parameters. See also [Fung and Hsieh (2001)](https://faculty.fuqua.duke.edu/~dah7/RFS2001.pdf) for a classic reference on the connection between the PnL drivers of a 'delta-hedged straddle' and trend-following strategies.
Let's add the *p-value* case to our momentum function:
###Code
def momentum(ts, h=252, signal_type='classic', vol_adjust=False):
df = ts.astype(float).shift(1) # note the day lag!!
df.index = pd.to_datetime(df.index)
if signal_type=='classic_log':
df_mom = np.log(df).diff(h)
elif signal_type=='classic_minus_1':
df_mom = np.log(df).diff(h).shift(21)
elif signal_type=='p_value':
rets = np.log(df).diff(h)
vols = get_lagged_vol(ts)
ttest = (((rets/h) * np.sqrt(252))/vols).dropna()
df_mom = pd.Series(index=ttest.index,data=(2*norm.cdf(ttest)- 1)) # this normalizes the signal to be in between [-1,1]
else:
if signal_type != 'classic':
print('Momentum signal type was not recognized! assuming default')
df_mom = df.pct_change(h)
if vol_adjust: # this will be true whethere the parameter vol_adjust is a boolean equal to True or if it's a string
if isinstance(vol_adjust,bool): # check if boolean
vols = get_lagged_vol(ts)
else:
vols = get_lagged_vol(ts,vol_type=vol_adjust)
return df_mom/vols
else:
return df_mom
ts = tracker_data.iloc[:,0]
mom = momentum(ts, h=252, signal_type='p_value')
mom.plot(title='P-value based momentum signal',figsize=(20,10))
plt.show()
###Output
_____no_output_____
###Markdown
Moving average convergence/divergence (MACD) indicator
Moving average convergence/divergence (MACD) is a trading indicator used in technical analysis, designed to reveal changes in the strength, direction, momentum, and duration of a trend. The MACD signal is the difference between a "fast" (short-period, typically 10 or 12 days) exponential moving average (EMA) and a "slow" (longer-period, typically 21 or 26 days) EMA of the price series.
###Code
def momentum(ts, h=252, signal_type='classic', vol_adjust=False):
df = ts.astype(float).shift(1) # note the day lag!!
df.index = pd.to_datetime(df.index)
if signal_type=='classic_log':
df_mom = np.log(df).diff(h)
elif signal_type=='classic_minus_1':
df_mom = np.log(df).diff(h).shift(21)
elif signal_type=='macd':
df_rap = df.ewm(halflife=12).mean()
df_len = df.ewm(halflife=26).mean()
df_mom = df_rap - df_len
elif signal_type=='p_value':
rets = np.log(df).diff(h)
vols = get_lagged_vol(ts)
ttest = (((rets/h) * np.sqrt(252))/vols).dropna()
df_mom = pd.Series(index=ttest.index,data=(2*norm.cdf(ttest)- 1)) # this normalizes the signal to be in between [-1,1]
else:
if signal_type != 'classic':
print('Momentum signal type was not recognized! assuming default')
df_mom = df.pct_change(h)
if vol_adjust: # this will be true whethere the parameter vol_adjust is a boolean equal to True or if it's a string
if isinstance(vol_adjust,bool): # check if boolean
vols = get_lagged_vol(ts)
else:
vols = get_lagged_vol(ts,vol_type=vol_adjust)
return df_mom/vols
else:
return df_mom
ts = tracker_data.iloc[:,0]
mom = momentum(ts, h=252, signal_type='macd')
mom.plot(title='MACD based momentum signal',figsize=(20,10))
plt.show()
###Output
_____no_output_____
###Markdown
Relative Strength Index (RSI)
The relative strength index (RSI) is a technical indicator of momentum as well. Up periods are characterized by the close being higher than the previous close. Conversely, a down period is characterized by the close being lower than the previous period's close. Hence, for each date $t$, we calculate the change $\Delta y_{t}$ in the series and calculate:
$$U_{t} = \Delta y_{t} \text{ if } \Delta y_{t} >0 \text{ and } 0 \text{ otherwise }$$
$$D_{t} = -\Delta y_{t} \text{ if } \Delta y_{t} <0 \text{ and } 0 \text{ otherwise }$$
and therefore $\overline{U}_{n,t} \equiv n^{-1} \times \sum_{s=1}^{n} U_{t-s}$ is the average of upward moves and $\overline{D}_{n,t} \equiv n^{-1} \times \sum_{s=1}^{n} D_{t-s}$ is the average of downward changes (these averages are sometimes replaced by EWMAs).
The ratio of these averages is the relative strength or relative strength factor $RS_{n,t} \equiv \overline{U}_{n,t} / \overline{D}_{n,t}$, and the Relative Strength Index (RSI) is calculated as:
$$RSI_{n,t} \equiv 100-{100 \over {1+RS_{n,t}}}$$
If the average of *down* values declines/increases vs. the average of *up* values, then $RSI_{n,t}$ will increase/decrease, indicating positive/negative momentum. In the limits, if the average of *down* values approaches zero, $RSI_{n,t}$ will converge to 100, and as the average of *up* values approaches zero, $RSI_{n,t}$ will converge to zero.
###Code
def momentum(ts, h=252, signal_type='classic', vol_adjust=False):
df = ts.astype(float).shift(1) # note the day lag!!
df.index = pd.to_datetime(df.index)
if signal_type=='classic_log':
df_mom = np.log(df).diff(h)
elif signal_type=='classic_minus_1':
df_mom = np.log(df).diff(h).shift(21)
elif signal_type=='macd':
df_rap = df.ewm(halflife=12).mean()
df_len = df.ewm(halflife=26).mean()
df_mom = df_rap - df_len
elif signal_type=='p_value':
rets = np.log(df).diff(h)
vols = get_lagged_vol(ts)
ttest = (((rets/h) * np.sqrt(252))/vols).dropna()
df_mom = pd.Series(index=ttest.index,data=(2*norm.cdf(ttest)- 1)) # this normalizes the signal to be in between [-1,1]
elif signal_type=='rsi':
df_delta = df.diff().dropna()
up, down = df_delta.copy(), df_delta.copy()
up[up < 0] = 0
down[down > 0] = 0
roll_up1 = up.rolling(h).sum()
roll_down1 = down.rolling(h).sum().abs()
df_rs = roll_up1 / roll_down1
df_mom = 100 - 100 / (1 + df_rs)
else:
if signal_type != 'classic':
print('Momentum signal type was not recognized! assuming default')
df_mom = df.pct_change(h)
if vol_adjust: # this will be true whethere the parameter vol_adjust is a boolean equal to True or if it's a string
if isinstance(vol_adjust,bool): # check if boolean
vols = get_lagged_vol(ts)
else:
vols = get_lagged_vol(ts,vol_type=vol_adjust)
return df_mom/vols
else:
return df_mom
ts = tracker_data.iloc[:,0]
mom = momentum(ts, h=252, signal_type='rsi')
mom.plot(title='RSI based momentum signal',figsize=(20,10))
plt.show()
###Output
_____no_output_____ |
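###Markdown
As a quick usage example (a sketch, not a full backtest), the sign of any of these signals can be mapped to a daily position and multiplied by the daily return to gauge whether the trend signal carried information; recall that `momentum` already lags the data by one day:
###Code
ts = tracker_data.iloc[:, 0]
sig = momentum(ts, h=252, signal_type='classic')
rets = ts.astype(float).pct_change()
rets.index = pd.to_datetime(rets.index)
pnl = (np.sign(sig) * rets).dropna()  # long when the signal is positive, short when negative
pnl.cumsum().plot(title='Cumulative return of a naive sign-based rule', figsize=(20, 6))
plt.show()
###Output
_____no_output_____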
twitter lab.ipynb | ###Markdown
###Code
import twitter
import pymongo
from pprint import pprint

# Assumed to be defined in an earlier (hidden) cell: the four Twitter API credentials used
# below, and `tweet_collection`, a pymongo collection, e.g. (hypothetical connection):
# client = pymongo.MongoClient()
# tweet_collection = client.twitter_db.tweet_collection
rest_auth = twitter.oauth.OAuth(OAUTH_TOKEN,OATH_TOKEN_SECRET,CONSUMER_KEY,CONSUMER_SECRET)
rest_api = twitter.Twitter(auth=rest_auth)
count = 10 #number of returned tweets, default and max is 100
#geocode = "38.4392897,-78.9412224,50mi" # define the location, in Harrisonburg, VA
q = "ia job"
search_results = rest_api.search.tweets( count=count,q=q)
statuses = search_results["statuses"]
since_id_new = statuses[-1]['id']
for statuse in statuses:
try:
tweet_collection.insert_one(statuse)
pprint(statuse['created_at'])# print the date of the collected tweets
except:
pass
since_id_old = 0
while(since_id_new != since_id_old):
since_id_old = since_id_new
search_results = rest_api.search.tweets( count=count,q=q,
max_id= since_id_new)
statuses = search_results["statuses"]
since_id_new = statuses[-1]['id']
for statuse in statuses:
try:
tweet_collection.insert_one(statuse)
pprint(statuse['created_at']) # print the date of the collected tweets
except:
pass
print(tweet_collection.estimated_document_count())# number of tweets collected
user_cursor = tweet_collection.distinct("user.id")
print (len(user_cursor)) # number of unique Twitter users
tweet_collection.create_index([("text", pymongo.TEXT)], name='text_index', default_language='english') # create a text index
tweet_cursor = tweet_collection.find({"$text": {"$search": "data mining"}}) # return tweets containing "data mining"
for document in tweet_cursor:
try:
print ('----')
# pprint (document) # use pprint to print the entire tweet document
print ('name:', document["user"]["name"]) # user name
print ('text:', document["text"]) # tweets
except:
print ("***error in encoding")
pass
###Output
----
name: TMJ-IAD Jobs
text: Cognizant is looking for teammates like you. See our latest #BusinessMgmt job openings, including "Data Entry Clerk… https://t.co/5WHRWYM6j0
|
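###Markdown
As a follow-up sketch (reusing the same `tweet_collection`), a simple MongoDB aggregation shows the most active users in the collection:
###Code
pipeline = [
    {"$group": {"_id": "$user.screen_name", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
    {"$limit": 5},
]
for doc in tweet_collection.aggregate(pipeline):
    print(doc["_id"], doc["count"])
###Output
_____no_output_____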
2_Analyse_ventes_appartements_IDF_premier_sem_2020.ipynb | ###Markdown
Analyse de ventes en IDF au premier semestre 2020 Projet notebook (groupe Michelle, Amine, Vixra) A faire- choisir un thème en groupe (medecine, sport, musique, …)- trouver un dataset au format csv dans kaggle sur le thème choisi- trouver une colonne à prédire- indiquer une colonne qui pourrait d’après vos connaissances sur le dataset choisi être un bon prédicteur pour la colonne à prédire- plotter un catplot de ces 2 colonnes (ou tout autre visu qui pourrait confirmer ou infirmer le choix précédent)- utiliser le TP intro stats pour trouver le meilleur predicteur (ici une fonction lineaire y = ax + b), selon la methode des moindres carrés- evaluer la qualité du predicteur en calculant son RMSE (Root Mean Squared Error)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Load the cleaned dataset "ventes_appartements_idf_final.csv"
df = pd.read_csv('ventes_appartements_idf_final.csv')
df.head(2)
###Output
_____no_output_____
###Markdown
1 - Distribution of apartment sales in IDF in the first half of 2020
###Code
# Split the 'date_vente' column into 3 columns: 'Annee', 'Mois' and 'jour'
df[['Annee','Mois','jour']] = df['date_vente'].str.split("-",expand=True)
df.head(1)
# Count the number of apartments sold per month across all of Ile-de-France
df.groupby('Mois')[['local']].count()
# Histogram of apartment sales in Ile-de-France in the first half of 2020 (January to June 2020)
sns.catplot(x='Mois', data=df, kind='count')
###Output
_____no_output_____
###Markdown
Description:
- January has the highest number of sales.
- April has the lowest number of sales.
- Overall, apartment sales in IDF decrease between January and April, then trend upward between May and June.

Interpretation: France was under lockdown between mid-March and early May because of COVID. People could not go out to visit properties or finalize transactions, which may explain the drop in sales in April.
###Markdown
## 2 - Sales by département
###Code
# Count the number of apartments sold per département
df1 = df.groupby("code_dprtmt").local.count().reset_index()
df1
# Bar chart of apartment sales per département
sns.catplot(x="code_dprtmt", y='local', data=df1, kind='bar')
###Output
_____no_output_____
###Markdown
Description: far more apartments were sold in 75, Paris intra-muros.
###Markdown
Another view: horizontal bars
###Code
plt.figure(figsize = (9, 6))
plt.barh(y = (df1["code_dprtmt"]).astype(str), #this is an inverted bar chart, so the first argument is the y axis
width = df1["local"]) #the second argument shows the x-axis or is the width
plt.xticks( fontsize = 13)
plt.yticks( fontsize = 13)
plt.title("Ventes par département", fontsize = 16, fontweight = "bold")
# plt.xlabel("vente", fontsize = 13 )
#plt.tight_layout()
# plt.savefig("Ice Cream Shop h.png")
plt.show()
###Output
_____no_output_____
###Markdown
Another visualization: a "donut" chart
###Code
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['font.size'] = 16
departements = ['75', '78', '91', '92','93', '94', '95']
nb_ventes =[15064, 6709, 4583, 5594, 4608, 6861, 4492]
# explode = (0, 0.1, 0)
fig1, ax1 = plt.subplots(figsize=(7,7))
plt.title('Sales per département', fontsize=15, weight='bold')
ax1.pie(nb_ventes, labels=departements, autopct='%1.1f%%',
startangle=10)
#draw circle
centre_circle = plt.Circle((0,0),0.70,fc='white')
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
plt.show()
###Output
_____no_output_____
###Markdown
3 - Study of sales by arrondissement within Paris
###Code
# Look at which values the nb_lots column contains
df.groupby('nb_lots')['nb_lots'].count()
###Output
_____no_output_____
###Markdown
Since we do not know exactly what nb_lots represents, we will keep the rows where nb_lots equals "0" or "1", assuming these values correspond to lots of a single apartment. That gives 25612 rows in total (8276 rows with the value "0" + 17336 rows with the value "1").
###Code
### 3.1: Extract a dataframe with the Paris data
###Output
_____no_output_____
###Markdown
# Select the rows for Paris with nb_lots values "0" and "1"
df_paris = df[(df['code_dprtmt']==75) & (df['Prix']) & (df['nb_lots']<=1)]
df_paris.head(2)
# Drop the 'Unnamed: 0' column, which does not correspond to anything
df_paris.drop(columns=['Unnamed: 0'], inplace=True)
# Check whether there are null values
df_paris.isnull().sum()
# Drop the null values
df_paris.dropna(inplace=True)
###Code
### 3.2: Study the distribution of apartments sold by arrondissement
###Output
_____no_output_____
###Markdown
# Look at the distribution of sales by arrondissement
df_paris.groupby('code_postal')['code_postal'].count()
# Sum of the surfaces (double brackets around surface to create a dataframe)
sum_surface = df_paris.groupby('code_postal')[['surface']].sum().reset_index()
# Sum of the prices (double brackets around Prix to create a dataframe)
sum_prix = df_paris.groupby('code_postal')[['Prix']].sum().reset_index()
# Join to build a new dataframe gathering the summed surfaces and prices per arrondissement
left = sum_surface.set_index(["code_postal"])
right = sum_prix.set_index(["code_postal"])
Prix_m2 = left.join(right)
Prix_m2
# Add a new column with the price per m² in each arrondissement
Prix_m2['prix_m2_reel'] = Prix_m2['Prix'] / Prix_m2['surface']
Prix_m2.head(2)
###Code
Reference prices per m² by arrondissement: https://www.journaldunet.com/patrimoine/prix-immobilier/paris/ville-75056
###Output
_____no_output_____
###Markdown
# Reference prices found online, copied here as a list and added to the "Prix_m2" dataframe
Prix_m2['prix_m2_ref'] = [13312, 12268, 13351, 13158, 13059, 15617, 13442, 12809, 11702, 10368, 10595, 9703,
                          9246, 9940, 10816, 11322, 11420, 9904, 9402, 9282]
Prix_m2
###Code
### 3.3: Average sale price by arrondissement
###Output
_____no_output_____
###Markdown
# Average price of a sale in each arrondissement
mean_prix_ar = df_paris.groupby('code_postal')[['Prix']].mean().reset_index()
mean_prix_ar
###Code
### 3.4: Histogram of the average sale price by arrondissement
###Output
_____no_output_____
###Markdown
plt.figure(figsize=(15, 7))
plt.barh(y=(mean_prix_ar["code_postal"]).astype(str),  # inverted bar chart, so the first argument is the y axis
         width=mean_prix_ar["Prix"])  # the second argument is the x axis, i.e. the bar width
plt.xticks(fontsize=13)
plt.yticks(fontsize=13)
plt.title("Prix moyen d'un appartement vendu dans chaque arrondissement\n(toutes surfaces confondus)", fontsize=16, fontweight="bold")
plt.xlabel(" Prix de vente moyen", fontsize=13)
plt.tight_layout()
plt.savefig("Prix_moyen_vente_paris.png")
plt.show()

# Distribution of the number of apartments sold by number of rooms in Paris
df_paris.groupby('nb_pieces')['nb_pieces'].count()
###Code
### 3.5: Correlation study
Correlation coefficients lie in the interval [-1, 1]:
- a coefficient close to 1 indicates a strong positive correlation
- a coefficient close to -1 indicates a strong negative correlation
- a coefficient close to 0 in absolute value indicates a weak correlation
###Output
_____no_output_____
###Markdown
matrice_corr = df_paris.corr().round(1)
sns.heatmap(data=matrice_corr, annot=True)
###Code
Interpretation:
- We focus on the sale price of the apartments and want to know whether it correlates with one of the explanatory variables in the "df_paris" dataframe.
- In short, we want to know which column correlates most strongly with the price.
- "surface" shows the strongest correlation with the price (0.2 in the correlation matrix above) compared with the other candidate explanatory variables.
### 3.6: Manual search for the linear regression line between surface and price
We focus on the apartments sold in Paris for between 100000 and 500000 with a surface under 100 m²
###Output
_____no_output_____
###Markdown
# Group the apartments that sold for between 100000 and 500000 and are under 100 m²
groupe = df_paris[(df_paris['Prix']>=100000) & (df_paris['Prix']<=500000) & (df_paris['surface']<=100)]
groupe.head(2)

# Plot overlaying the (manually searched) regression line and the surface/price scatter
fig = plt.figure(figsize=(10,6))
plt.plot(groupe['surface'], groupe['Prix'], '+', c='green', label='Prix en fonction de la surface')
plt.plot([0, 50], [0, 500000])  # play with these parameters to move the blue line
plt.xlabel('Surface', fontsize=14)
plt.ylabel('Prix', fontsize=14)
plt.legend()
plt.show()
###Code
## 4 - Estimators
#### Course refresher: computing the a and b of the equation y = ax + b
<img src="img/reglin4.png" width="600">
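For reference, the least-squares formulas pictured above are the standard closed-form solution (here $n$ is the number of points):

$$a = \frac{n\sum x_i y_i - \sum x_i \sum y_i}{n\sum x_i^2 - \left(\sum x_i\right)^2} \qquad b = \frac{\sum y_i \sum x_i^2 - \sum x_i \sum x_i y_i}{n\sum x_i^2 - \left(\sum x_i\right)^2}$$

These correspond to the intermediate terms (`un`, `deux`, `trois`, `quatre` ... `sept`) computed in the cell below.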
### 4.1: Price estimator, David's method
###Output
_____no_output_____
###Markdown
x = df_paris['surface']
y = df_paris['Prix']
# Compute the main terms of the solution a and b of the equation y = ax + b
# First compute a
un = len(x)*(df_paris['surface']*df_paris['Prix']).sum()
sumx = df_paris['surface'].sum()
sumy = df_paris['Prix'].sum()
deux = len(df_paris['surface'])*((df_paris['surface']*df_paris['surface']).sum())
trois = sumx*sumx
a = (un-(sumx*sumy))/(deux-trois)
a
# Then compute b
quatre = sumy*((df_paris['surface']*df_paris['surface']).sum())
cinq = sumx*(df_paris['surface']*df_paris['Prix']).sum()
six = len(df_paris['surface'])*((df_paris['surface']*df_paris['surface']).sum())
sept = sumx*sumx
b = (quatre-cinq)/(six-sept)
b

# Values obtained from the run above
a = 35780.33209187344
b = 563244.1054112952

def pred(x, a, b):
    return a*x + b

pred(20, a, b)
###Code
Interpretation: according to this model a 20 m² apartment would cost 1278850. That seems a bit high, probably because of outliers. We would need to filter those outliers from the dataset to refine the prediction.
### 4.2: Prediction with scikit-learn, Josselin's method
###Output
_____no_output_____
###Markdown
import numpy as np
from sklearn.linear_model import LinearRegression
reg = LinearRegression().fit(df_paris[['surface']], df_paris.Prix)
reg.predict(np.array([[20]]))
###Code
Interpretation: we find the same value as with the previous model (20 m² = 1278850).
This shows that the hand calculation and the scikit-learn computation come to the same thing, but scikit-learn is faster.
=> given the high value, the dataset probably still needs further cleaning to get a more accurate prediction.
### 4.3: Estimator - another method seen with David - (we did not manage to reproduce it with our dataset)
<img src="img/Estimateur_des_moindres_carres.png" width="490">
###Output
_____no_output_____
###Markdown
def pred_p(coef, hp, b):
    return (coef*hp) + b

# correlation coefficient
coef = 0.3
taille_pere = 180
biais = 140
pred_p(coef, taille_pere, biais)

x = groupe.groupby(["surface"])['Prix'].median().index
est_p = [pred_p(coef, elem, biais) for elem in groupe.groupby(["surface"])['Prix'].median().index]

fig = plt.figure(figsize=(12, 6))
plt.plot(groupe['surface'], groupe['Prix'], '+', c='blue', label='Observation des prix au m²')
plt.plot(x, est_p, c='black', label='Notre estimateur')
plt.xlabel('Surface', fontsize=14)
plt.ylabel('Prix', fontsize=14)
plt.legend(loc="upper right")
plt.show()
###Code
## 5 - Evaluating the estimator with the RMSE
- To evaluate the gap between our estimator and the data, we build a tool that quantifies it.
- The RMSE (Root Mean Squared Error) <=> the square root of the mean of the squared errors ...
<img src="img/rmse.jpg" width="400">
To do this we will:
- Quantify the precision of our predictors by computing the RMSE
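For reference, the formula in the image above is the standard definition:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$$

where $y_i$ are the observed values and $\hat{y}_i$ the predicted values.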
###Output
_____no_output_____
###Markdown
from sklearn.metrics import mean_squared_error
from math import sqrt
rmse = sqrt(mean_squared_error(Prix_m2['prix_m2_reel'], Prix_m2['prix_m2_ref']))
print(rmse)
###Code
Interpretation: ???
###Output
_____no_output_____ |
Notebook sin nulos y comenzando el analisis de correlacion.ipynb | ###Markdown
# Import the test dataset (a one-month version, used to define the data-cleaning processes)
from google.colab import drive
drive.mount('/content/gdrive')
import pandas as pd
df = pd.ExcelFile('TFM DATABASE.xlsx')
###Code
df=pd.ExcelFile('TFM DATABASE.xlsx')
df1 = pd.read_excel(df,'GPS - Enero 2020', na_values='?')
df1.head()
# Look at how many null values there are in the data
df1.isnull().sum()
# Dummy function
# Define the function that will help us create the dummies to feed the model
# Data transformation with functions: 1) dummies for plant 2) dummies for structure 3) dummies for modality 4) apply the distance function 5) dummies for the day of the week
def dummy_convert (df,ID_ESTRUC):
estruturas = pd.get_dummies(df[ID_ESTRUC], prefix = ID_ESTRUC)
df = pd.concat([df, estruturas], axis = 1)
return df
# Get summary statistics of the data with the created dummies
# df1 = pd.read_excel(df,'GPS - Enero 2020')
df1.describe()
# Standardize the column format and remove spaces to work more smoothly
df1.columns
# Create a dictionary with the new column names
df1.rename(columns={'Fec. Prod' : 'FEC_PROD',
'Doc. Transporte' : 'DOC_TRANS',
'WERKS' : 'WERKS',
'FECHA_DESP' : 'FECHA_DESP',
'KUNNR ' : 'KUNNR',
'IDEOBRA ' : 'IDEOBRA',
'MATNR ' : 'MATNR',
'FORMULA ' : 'FORMULA',
'ID_ESTRUC' : 'ID_ESTRUC',
'ID_MODALIDAD' : 'ID_MODALIDAD',
'VBELN_PED' : 'VBELN_PED',
'VBELN_ENTREGA' : 'VBELN_ENTREGA',
'WERK LON' : 'WERK_LON',
'WERK LAT' : 'WERK_LAT' ,
'OBRA LAT': 'OBRA_LAT',
'OBRA LON' : 'OBRA_LON',
'DES_ESTRUCTURA' : 'DES_ESTRUCTURA',
'Estado': 'ESTADO',
'Placa': 'PLACA',
'Cliente': 'CLIENTE',
'Descripción de Obra': 'DESC_OBRA',
'Pto. Exped': 'PTO_EXPED',
'V. Entregado' : 'V_ENTREGADO',
'H. Program' : 'H_PROGRAM',
'Tiempo de Proceso 1' : 'TPROCESO1',
'Tiempo de Proceso 2' : 'TPROCESO2',
'Tiempo Proceso' : 'TPROCESO',
'Traslado a Obra 1' : 'TRASLADO1',
'Traslado a Obra 2' : 'TRASLADO 2',
'Tiempo Translado Minutos' : 'TRASLADO',
'Espera en Obra 1': 'ESPERA1',
'Espera en Obra 2': 'ESPERA2',
'Tiempo Espera Minutos': 'TESPERA',
'Descarga en Obra 1' : 'DESCARGA1',
'Descarga en Obra 2' : 'DESCARGA2',
'Tiempo Descarga Minutos': 'TDESCARGA',
'Retorno as Planta 1': 'RETORNO1',
'Retorno as Planta 2': 'RETORNO2',
'Tiempo Retorno Planta Minutos': 'TRETORNO',
'Dif. Total': 'DIFTOTAL',
'Reconstruido': 'RECONSTRUIDO',
'T.Proceso' : 'TPROCESOTOTAL',
'T.Traslado' : 'TTRASLADOTOTAL',
'T.Espera' : 'TESPERATOTAL',
'T.Descarga': 'TDESCARGATOTAL',
'T.Retorno' : 'TRETORNOTOTAL',
'Dia' : 'DIA',
'Fecha_DiaSem' : 'DIASEMANA',
'FinSemana' : 'FINSEMANA',
'FinMes': 'FINMES'},
inplace=True)
# Check that the rename was applied
df1.columns
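# NOTE: the DistanciaGPS helper used below is defined elsewhere in the project and is
# not shown in this excerpt. A minimal haversine sketch (an assumed implementation,
# with the argument order matching the call below) could look like this:
import numpy as np

def DistanciaGPS(lon1, lat1, lat2, lon2):
    # great-circle distance in km between two (lat, lon) points
    lon1, lat1, lat2, lon2 = map(np.radians, [lon1, lat1, lat2, lon2])
    dlat = lat2 - lat1
    dlon = lon2 - lon1
    h = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * 6371 * np.arcsin(np.sqrt(h))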
# Create the distance column using the function defined for it; the rows are filled with the computed values
df1["distancia"]=df1.apply(lambda x: DistanciaGPS(x['WERK_LON'],x["WERK_LAT"],x["OBRA_LAT"],x["OBRA_LON"]),axis = 1)
# Round to get integer values
df1["distancia"] = df1["distancia"].apply(np.round)
df2 =df1.copy()
# Create dummy variables for each descriptive variable we think could affect the unloading time
df2=dummy_convert(df1,"ID_ESTRUC")
#apply the function for ID_ESTRUC
df2=dummy_convert(df2,"WERKS")
#apply the function for WERKS
df2=dummy_convert(df2, "ID_MODALIDAD")
#apply the function for ID_MODALIDAD
#the database transformation ends here
#Check the number of observations
len(df2)
#Print all the columns we have created and renamed
print(df2.columns.tolist())
# Create a new dataset to check for nulls and handle them appropriately
dfclean = df2[[ 'DOC_TRANS', 'WERKS', 'IDEOBRA', 'ID_ESTRUC','ID_MODALIDAD','VBELN_PED','VBELN_ENTREGA',
'WERK_LON','WERK_LAT','OBRA_LAT','OBRA_LON','V_ENTREGADO','TPROCESOTOTAL', 'TTRASLADOTOTAL', 'TESPERATOTAL', 'TDESCARGATOTAL',
'TRETORNOTOTAL', 'TTOTAL', 'DIASEMANA', 'FINSEMANA', 'FINMES', 'distancia', 'ID_ESTRUC_1', 'ID_ESTRUC_3', 'ID_ESTRUC_4',
'ID_ESTRUC_5', 'ID_ESTRUC_6', 'ID_ESTRUC_7', 'ID_ESTRUC_8', 'ID_ESTRUC_10', 'ID_ESTRUC_11', 'ID_ESTRUC_12',
'ID_ESTRUC_14', 'ID_ESTRUC_15', 'ID_ESTRUC_16', 'ID_ESTRUC_17', 'ID_ESTRUC_18', 'ID_ESTRUC_19',
'ID_ESTRUC_20', 'ID_ESTRUC_22', 'ID_ESTRUC_23', 'ID_ESTRUC_24', 'ID_ESTRUC_25', 'ID_ESTRUC_26',
'ID_ESTRUC_27', 'ID_ESTRUC_28', 'ID_ESTRUC_30', 'ID_ESTRUC_32', 'ID_ESTRUC_33', 'ID_ESTRUC_34',
'ID_ESTRUC_36', 'ID_ESTRUC_40', 'ID_ESTRUC_41', 'ID_ESTRUC_42', 'ID_ESTRUC_43', 'ID_ESTRUC_44',
'ID_ESTRUC_45', 'ID_ESTRUC_46', 'ID_ESTRUC_48', 'ID_ESTRUC_50', 'WERKS_1203', 'WERKS_1207',
'WERKS_1209', 'WERKS_1213', 'WERKS_1217', 'WERKS_1219', 'WERKS_1253', 'WERKS_5202', 'WERKS_5212',
'ID_MODALIDAD_1', 'ID_MODALIDAD_2', 'ID_MODALIDAD_6', 'ID_MODALIDAD_15', 'ID_MODALIDAD_16',
'ID_MODALIDAD_17', 'ID_MODALIDAD_18', 'ID_MODALIDAD_19', 'ID_MODALIDAD_20', 'ID_MODALIDAD_21']]
dfclean.head(100)
###Output
_____no_output_____
###Markdown
# Drop nulls
df2.dropna(subset=["TDESCARGATOTAL"], inplace=True)
df2.isnull()
df2.dtypes
###Code
# Count all the null values in the new dataset
dfclean.info(verbose=True)
# Impute all the null values using KNN to avoid biasing the data, following the explanation at
# https://medium.com/@kyawsawhtoon/a-guide-to-knn-imputation-95e2dc496e
from sklearn.impute import KNNImputer
imputer = KNNImputer(n_neighbors=5)
dfclean = pd.DataFrame(imputer.fit_transform(dfclean),columns = dfclean.columns)
dfclean.isna().any()
# Verify that no null values remain
dfclean.info(verbose=True)
# Save the result to an Excel file for a quick preview
dfclean.to_excel("databasetransformed.xlsx")
# Run a preliminary correlation analysis of the new data to explore insights
dfclean.corr()
# SELECT THE VARIABLES THAT COULD INFLUENCE THE UNLOADING TIME AND ANALYZE THEM GRAPHICALLY
color=["#f94144","#f3722c","#f8961e","#f9c74f","#90be6d","#43aa8b","#577590"]
sns.palplot(color)
df2=dfclean[['V_ENTREGADO', 'TDESCARGATOTAL', 'TTOTAL', 'DIASEMANA','FINSEMANA', 'FINMES', 'ID_MODALIDAD']]
cols=df2.corr()['TDESCARGATOTAL'].sort_values(ascending=False)
fig=plt.figure(figsize=(15,10))
plt.suptitle("Comparativa de variables que influyen en el tiempo de descarga",family='Serif', weight='bold', size=20)
j=0
for i in cols.index[1:]:
ax=plt.subplot(421+j)
ax=sns.regplot(data=df2, x='TDESCARGATOTAL',y=i, color=color[-j])
ax.legend('')
j=j+1
plt.legend('')
# up to here the data looks very fragmented and it is hard to see a correlation; we will group the data to see whether it becomes more evident
###Output
_____no_output_____
###Markdown
sns.distplot(dfclean['TTOTAL'])

f, ax = plt.subplots(figsize=(10,10))
sns.scatterplot(x='TDESCARGATOTAL', y='distancia', hue='IDEOBRA', data=dfclean, ax=ax)

ax = sns.swarmplot(x="DIASEMANA", y="TDESCARGATOTAL", data=dfclean)

fig, ax = plt.subplots(figsize=(10,10))
sns.boxplot(y='TDESCARGATOTAL', x='distancia', data=dfclean, orient="h", ax=ax)

import pandas as pd
import numpy as np
import seaborn as sns
sns.set_style('whitegrid')
raw_df = dfclean
data = raw_df.groupby('distancia')['TDESCARGATOTAL'].sum().reset_index()
data['TDESCARGATOTAL'].plot(kind='hist')

dfclean['TDESCARGATOTAL'].describe()
pd.qcut(dfclean['TDESCARGATOTAL'], q=30)

# Chart to check the linearity between the variables (distance does not matter)
plt.scatter(dfclean['distancia'], dfclean['TDESCARGATOTAL'], color='red')
plt.title('Tiempo Descarga Vs Distancia', fontsize=14)
plt.xlabel('Distancia', fontsize=14)
plt.ylabel('Tiempo Descarga', fontsize=14)
plt.grid(True)
plt.show()

g = sns.scatterplot(x="distancia", y="TDESCARGATOTAL", hue="TTOTAL", data=dfclean)
g.set(xscale="log")

# Checking the linearity against the weekend flag
plt.scatter(dfclean['FINSEMANA'], dfclean['TDESCARGATOTAL'], color='red')
plt.title('Tiempo Descarga Vs Fin Semana', fontsize=14)
plt.xlabel('FinSemana', fontsize=14)
plt.ylabel('TiempoDescarga', fontsize=14)
plt.grid(True)
plt.show()

plt.scatter(dfclean['WERKS'], dfclean['TDESCARGATOTAL'], color='red')
plt.title('Tiempo Descarga Vs Plantas', fontsize=14)
plt.xlabel('Plantas', fontsize=14)
plt.ylabel('Tiempo Descarga', fontsize=14)
plt.grid(True)
plt.show()
###Code
#Start the model
#1) Import all the libraries
#2) Plot the linearity between the independent variables and the dependent variable
#3) Build a multiple-regression analysis over all the variables (we will stop here)
#4) With a larger dataset, set up a training pipeline for the model
#5) Analyze the results
###Output
_____no_output_____ |
2015/ferran/day15.ipynb | ###Markdown
Day 15: Science for Hungry People

Day 15.1
###Code
import csv
import numpy as np
def parse(input_path):
with open(input_path, 'rt') as f_input:
csv_reader = csv.reader(f_input, delimiter=' ')
l = next(csv_reader)
a = np.array([int(l[2*(j + 1)].rstrip(',')) for j in range(5)])
count = 1
for l in csv_reader:
a = np.concatenate((a, np.array([int(l[2*(j + 1)].rstrip(',')) for j in range(5)])))
count += 1
return a.reshape((count, 5))
###Output
_____no_output_____
###Markdown
Generator of tuples of integers that add up to a given constant.
###Code
from itertools import combinations
def intervals(cuts, n):
last = -1
for cut in cuts:
yield cut - last - 1
last = cut
yield n - 1 - last
def partitions(n, k):
"""
Generator of seqs of k integers that sum up to n.
"""
assert(1 <= k <= n)
for cut in combinations(range(n + k - 1), k - 1):
yield np.array(tuple(intervals(cut, n + k - 1)))
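# Quick illustrative check (not from the original notebook): the compositions
# of 3 into 2 non-negative parts, in generation order, are
#   [tuple(p) for p in partitions(3, 2)]  ->  [(0, 3), (1, 2), (2, 1), (3, 0)]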
###Output
_____no_output_____
###Markdown
Test all recipe candidates.
###Code
import operator
from functools import reduce
def max_recipy_score(values, n=100):
max_score = 0
k = values.shape[0]
for recipy in partitions(n, k):
y = np.dot(recipy, values[:,:-1])
mask = y > 0
if reduce(operator.and_, mask, True):
score = np.prod(y)
if max_score < score: max_score = score
return max_score
###Output
_____no_output_____
###Markdown
Test
###Code
values = parse('inputs/input15.test1.txt')
assert(max_recipy_score(values) == 62842880)
###Output
_____no_output_____
###Markdown
Solution
###Code
values = parse('inputs/input15.txt')
max_recipy_score(values)
###Output
_____no_output_____
###Markdown
Day 15.2
###Code
def max_recipy_score_calorie(values, n=100, c=500):
max_score = 0
k = values.shape[0]
for recipy in partitions(n, k):
y = np.dot(recipy, values[:,:-1])
positive_totals = y > 0
calorie_restriction = (np.dot(recipy, values[:,-1]) == c)
if reduce(operator.and_, positive_totals, True) and calorie_restriction:
score = np.prod(y)
if max_score < score: max_score = score
return max_score
###Output
_____no_output_____
###Markdown
Test
###Code
values = parse('inputs/input15.test1.txt')
assert(max_recipy_score_calorie(values) == 57600000)
###Output
_____no_output_____
###Markdown
Solution
###Code
values = parse('inputs/input15.txt')
max_recipy_score_calorie(values)
###Output
_____no_output_____ |
Train_Basline_With_Gradient_Clipping_And_LR=0_01.ipynb | ###Markdown
🧰 Setups, Installations and Imports
###Code
%%capture
!pip install wandb --upgrade
!pip install albumentations
!git clone https://github.com/ayulockin/Explore-NFNet
import tensorflow as tf
print(tf.__version__)
import tensorflow_datasets as tfds
import sys
sys.path.append("Explore-NFNet")
import os
import cv2
import numpy as np
from functools import partial
import matplotlib.pyplot as plt
# Imports from the cloned repository
from models.resnet import resnet_v1
from models.mini_vgg import get_mini_vgg
# Augmentation related imports
import albumentations as A
# Seed everything for reproducibility
def seed_everything():
# Set the random seeds
os.environ['TF_CUDNN_DETERMINISTIC'] = '1'
np.random.seed(hash("improves reproducibility") % 2**32 - 1)
tf.random.set_seed(hash("by removing stochasticity") % 2**32 - 1)
seed_everything()
# Keep TensorFlow from allocating all of the GPU memory at once.
# Ref: https://www.tensorflow.org/guide/gpu
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
try:
# Currently, memory growth needs to be the same across GPUs
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Memory growth must be set before GPUs have been initialized
print(e)
import wandb
from wandb.keras import WandbCallback
wandb.login()
DATASET_NAME = 'cifar10'
IMG_HEIGHT = 32
IMG_WIDTH = 32
NUM_CLASSES = 10
SHUFFLE_BUFFER = 1024
BATCH_SIZE = 256
EPOCHS = 100
AUTOTUNE = tf.data.experimental.AUTOTUNE
print(f'Global batch size is: {BATCH_SIZE}')
###Output
Global batch size is: 256
###Markdown
⛄ Download and Prepare Dataset
###Code
(train_ds, val_ds, test_ds), info = tfds.load(name=DATASET_NAME,
split=["train[:85%]", "train[85%:]", "test"],
with_info=True,
as_supervised=True)
@tf.function
def preprocess(image, label):
# preprocess image
image = tf.cast(image, tf.float32)
image = image/255.0
return image, label
# Define the augmentation policies. Note that they are applied sequentially with some probability p.
transforms = A.Compose([
A.HorizontalFlip(p=0.7),
A.Rotate(limit=30, p=0.7)
])
# Apply augmentation policies.
def aug_fn(image):
data = {"image":image}
aug_data = transforms(**data)
aug_img = aug_data["image"]
return aug_img
@tf.function
def apply_augmentation(image, label):
aug_img = tf.numpy_function(func=aug_fn, inp=[image], Tout=tf.float32)
aug_img.set_shape((IMG_HEIGHT, IMG_WIDTH, 3))
return aug_img, label
train_ds = (
train_ds
.shuffle(SHUFFLE_BUFFER)
.map(preprocess, num_parallel_calls=AUTOTUNE)
.map(apply_augmentation, num_parallel_calls=AUTOTUNE)
.batch(BATCH_SIZE)
.prefetch(AUTOTUNE)
)
val_ds = (
val_ds
.map(preprocess, num_parallel_calls=AUTOTUNE)
.batch(BATCH_SIZE)
.prefetch(AUTOTUNE)
)
test_ds = (
test_ds
.map(preprocess, num_parallel_calls=AUTOTUNE)
.batch(BATCH_SIZE)
.prefetch(AUTOTUNE)
)
def show_batch(image_batch, label_batch):
plt.figure(figsize=(10,10))
for n in range(25):
ax = plt.subplot(5,5,n+1)
plt.imshow(image_batch[n])
# plt.title(f'{np.argmax(label_batch[n].numpy())}')
plt.title(f'{label_batch[n].numpy()}')
plt.axis('off')
image_batch, label_batch = next(iter(train_ds))
show_batch(image_batch, label_batch)
print(image_batch.shape, label_batch.shape)
###Output
(256, 32, 32, 3) (256,)
###Markdown
🐤 Model
###Code
class ResNetModel(tf.keras.Model):
def __init__(self, resnet):
super(ResNetModel, self).__init__()
self.resnet = resnet
def train_step(self, data):
images, labels = data
with tf.GradientTape() as tape:
predictions = self.resnet(images)
loss = self.compiled_loss(labels, predictions)
trainable_params = self.resnet.trainable_variables
gradients = tape.gradient(loss, trainable_params)
gradients_clipped = [tf.clip_by_norm(g, 0.01) for g in gradients] # clipping threshold = 0.01, applied to each gradient tensor's norm
self.optimizer.apply_gradients(zip(gradients_clipped, trainable_params))
self.compiled_metrics.update_state(labels, predictions)
return {m.name: m.result() for m in self.metrics}
def test_step(self, data):
images, labels = data
predictions = self.resnet(images, training=False)
loss = self.compiled_loss(labels, predictions)
self.compiled_metrics.update_state(labels, predictions)
return {m.name: m.result() for m in self.metrics}
def save_weights(self, filepath):
self.resnet.save_weights(filepath=filepath, save_format="tf")
def call(self, inputs, *args, **kwargs):
return self.resnet(inputs)
tf.keras.backend.clear_session()
test_model = ResNetModel(resnet_v1((IMG_HEIGHT, IMG_WIDTH, 3), 20, num_classes=NUM_CLASSES, use_bn=False))
test_model.build((1, IMG_HEIGHT, IMG_WIDTH, 3))
test_model.summary()
print(f"Total learnable parameters: {test_model.count_params()/1e6} M")
###Output
Model: "res_net_model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
model (Functional) (None, 10) 271754
=================================================================
Total params: 271,754
Trainable params: 271,722
Non-trainable params: 32
_________________________________________________________________
Total learnable parameters: 0.271754 M
###Markdown
📲 Callbacks
###Code
earlystopper = tf.keras.callbacks.EarlyStopping(
monitor='val_loss', patience=10, verbose=0, mode='auto',
restore_best_weights=True
)
reducelronplateau = tf.keras.callbacks.ReduceLROnPlateau(
monitor="val_loss", factor=0.5,
patience=3, verbose=1
)
###Output
_____no_output_____
###Markdown
🚋 Train with W&B
###Code
tf.keras.backend.clear_session()
# Initialize model
model = ResNetModel(resnet_v1((IMG_HEIGHT, IMG_WIDTH, 3), 20, num_classes=NUM_CLASSES, use_bn=False))
opt = tf.keras.optimizers.Adam(learning_rate=0.01)
model.compile(opt, 'sparse_categorical_crossentropy', metrics=['acc'])
# Initialize W&B run
run = wandb.init(project='nfnet', job_type='train-grad-clip')
# Train model
model.fit(train_ds,
epochs=EPOCHS,
validation_data=val_ds,
callbacks=[WandbCallback(),
reducelronplateau,
earlystopper])
# Evaluate model on test set
loss, acc = model.evaluate(test_ds)
wandb.log({'Test Accuracy': round(acc, 3)})
# Close W&B run
run.finish()
###Output
_____no_output_____ |
unit-2/sprint-1/Unit_2_Sprint_1_Linear_Models_Study_Guide.ipynb | ###Markdown
This study guide should reinforce and provide practice for all of the concepts you have seen in the past week. There is a mix of written questions and coding exercises; both are equally important to prepare you for the sprint challenge as well as to be able to speak on these topics comfortably in interviews and on the job.

If you get stuck or are unsure of something remember the 20 minute rule. If that doesn't help, then research a solution with google and stackoverflow. Only once you have exhausted these methods should you turn to your Team Lead - they won't be there on your SC or during an interview. That being said, don't hesitate to ask for help if you truly are stuck.

Have fun studying!

Resources

- [SKLearn Linear Regression Documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html)
- [SKLearn Train Test Split Documentation](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html)
- [SKLearn Logistic Regression Documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
- [SKLearn Scoring Metrics](https://scikit-learn.org/stable/modules/model_evaluation.html)
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Linear Regression Basics and Data Preparation

Define the following terms in your own words, do not simply copy and paste a definition found elsewhere but reword it to be understandable and memorable to you. *Double click the markdown to add your definitions.*

**Linear Regression:** `Your Answer Here`

**Polynomial Regression:** `Your Answer Here`

**Overfitting:** `Your Answer Here`

**Underfitting:** `Your Answer Here`

**Outlier:** `Your Answer Here`

**Categorical Encoding:** `Your Answer Here`

Use `auto_df` to complete the following.
###Code
columns = ['symboling','norm_loss','make','fuel','aspiration','doors',
'bod_style','drv_wheels','eng_loc','wheel_base','length','width',
'height','curb_weight','engine','cylinders','engine_size',
'fuel_system','bore','stroke','compression','hp','peak_rpm',
'city_mpg','hgwy_mpg','price']
auto_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data'
auto_df = pd.read_csv(auto_url, names=columns, header=None)
auto_df.head()
###Output
_____no_output_____
###Markdown
Perform a train test split on `auto_df`, your target feature is `price`
###Code
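# One possible sketch (hedged): a simple 80/20 split. Note that `price` still
# contains '?' placeholders at this point and will need cleaning before modeling.
from sklearn.model_selection import train_test_split

X = auto_df.drop(columns='price')
y = auto_df['price']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)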
###Output
_____no_output_____
###Markdown
It's always good to practice EDA, so explore the dataset with both descriptive statistics and visualizations.
###Code
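# A minimal sketch: summary statistics plus a quick visualization
auto_df.describe(include='all')
auto_df['city_mpg'].hist()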
###Output
_____no_output_____
###Markdown
Check for nulls and then write a function to fill in null values. As you can see with `norm_loss`, some of the nulls have a placeholder value of `?` that will need to be addressed.
###Code
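# A hedged sketch of one possible answer; the numeric column list is an assumption
# based on where '?' placeholders appear in the UCI autos dataset.
import numpy as np

def fill_nulls(df):
    # replace '?' placeholders with NaN, convert the affected columns to numeric,
    # then fill numeric nulls with each column's median
    df = df.replace('?', np.nan)
    num_cols = ['norm_loss', 'bore', 'stroke', 'hp', 'peak_rpm', 'price']
    df[num_cols] = df[num_cols].apply(pd.to_numeric)
    return df.fillna(df.median(numeric_only=True))

auto_df.isnull().sum()
auto_df = fill_nulls(auto_df)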
###Output
_____no_output_____
###Markdown
How does train test split address underfitting/overfitting?

`Your Answer Here`

What are three synonyms for the Y Variable?
- `Your Answer Here`
- `Your Answer Here`
- `Your Answer Here`

What are three synonyms for the X Variable(s)?
- `Your Answer Here`
- `Your Answer Here`
- `Your Answer Here`

One hot encode a categorical feature
###Code
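# Sketch: one-hot encode a single categorical column with pandas ('fuel' is just
# an illustrative choice; any categorical column works). In practice you would
# also align the resulting columns between train and test afterwards.
X_train = pd.get_dummies(X_train, columns=['fuel'])
X_test = pd.get_dummies(X_test, columns=['fuel'])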
###Output
_____no_output_____
###Markdown
Define the 5 versions of **Baseline**:
1. `Your Answer Here`
2. `Your Answer Here`
3. `Your Answer Here`
4. `Your Answer Here`
5. `Your Answer Here`

What is the purpose of getting a baseline that tells you what you would get with a guess? (Mean or Majority Classifier Baseline)

`Your Answer Here`

Get the mean baseline for the target feature. If you log transformed the target feature, get the mean baseline of the log transformed target feature.
###Code
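# Sketch: the mean baseline simply predicts the training-set mean for every row
# (assumes `price` has already been cleaned to numeric).
baseline = y_train.mean()
print('Mean baseline prediction:', baseline)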
###Output
_____no_output_____
###Markdown
Modeling

What is the 5 step process for using the Scikit-learn's estimator API?
1. `Your Answer Here`
2. `Your Answer Here`
3. `Your Answer Here`
4. `Your Answer Here`
5. `Your Answer Here`

Follow the 5 steps to make a prediction on your test set. The functions and changes you made to `X_train` may need to be applied to `X_test` if you have not done so already.
###Code
# Step 1 - Use Linear Regression
# Step 2
# Step 3
# Step 4
# Step 5
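# A hedged sketch following the five steps above (assumes X_train/X_test are fully numeric):
from sklearn.linear_model import LinearRegression   # 1. import the estimator class
model = LinearRegression()                          # 2. instantiate it
# 3. arrange X (feature matrix) and y (target vector), done above in the split
model.fit(X_train, y_train)                         # 4. fit the model to the training data
y_pred = model.predict(X_test)                      # 5. apply the model to new data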
###Output
_____no_output_____
###Markdown
Scoring

Define the following terms in your own words, do not simply copy and paste a definition found elsewhere but reword it to be understandable and memorable to you. *Double click the markdown to add your definitions.*

**Mean Absolute Error (MAE):** `Your Answer Here`

**Mean Squared Error (MSE):** `Your Answer Here`

**Root Mean Squared Error (RMSE):** `Your Answer Here`

**Coefficient of Determination ($R^2$):** `Your Answer Here`

**Residual Error:** `Your Answer Here`

**Bias:** `Your Answer Here`

**Variance:** `Your Answer Here`

**Validation Curve:** `Your Answer Here`

**Ordinary Least Squares:** `Your Answer Here`

**Ridge Regression:** `Your Answer Here`

In a short paragraph, explain the Bias-Variance Tradeoff

```Your Answer Here```

Use each of the regression metrics (MAE, MSE, RMSE, and $R^2$) on both the mean baseline and your predictions.
###Code
# MAE
# MSE
# RMSE
# R^2
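# Sketch (assumes y_test and y_pred from above; the baseline repeats the training mean):
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
import numpy as np

y_base = np.full(len(y_test), y_train.mean())
for name, pred in [('baseline', y_base), ('model', y_pred)]:
    mae = mean_absolute_error(y_test, pred)
    mse = mean_squared_error(y_test, pred)
    print(name, 'MAE:', mae, 'MSE:', mse, 'RMSE:', np.sqrt(mse), 'R^2:', r2_score(y_test, pred))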
###Output
_____no_output_____
###Markdown
Print and plot the coefficients of your model.
###Code
# Print the coefficients
# Plot the coefficients
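# Sketch: inspect and plot the fitted coefficients (assumes the model fit above)
coefficients = pd.Series(model.coef_, index=X_train.columns)
print(coefficients)
coefficients.sort_values().plot.barh()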
###Output
_____no_output_____
###Markdown
Interpret your results with a short paragraph. How well did your model perform? How do you read a single prediction? Did you beat the baseline?

```Your Answer Here```

Use Ridge Regression and get the $R^2$ score
###Code
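# Sketch: Ridge regression with the default alpha, scored on the test set
from sklearn.linear_model import Ridge

ridge = Ridge(alpha=1.0).fit(X_train, y_train)
print('Ridge R^2:', ridge.score(X_test, y_test))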
###Output
_____no_output_____
###Markdown
How does the ridge regression score compare to your linear regression and baseline scores?

```Your Answer Here```

Logistic Regression

Define the following terms in your own words, do not simply copy and paste a definition found elsewhere but reword it to be understandable and memorable to you. *Double click the markdown to add your definitions.*

**Logistic Regression:** `Your Answer Here`

**Majority Classifier:** `Your Answer Here`

**Validation Set:** `Your Answer Here`

**Accuracy:** `Your Answer Here`

**Feature Selection:** `Your Answer Here`

Answer each of the following questions with no more than a short paragraph.

What is the difference between linear regression and logistic regression?
```Your Answer Here```

What is the purpose of having a validation set?
```Your Answer Here```

Can we use MAE, MSE, RMSE, and $R^2$ to score a Logistic Regression model? Why or why not? If not, how do we score Logistic Regression models?
```Your Answer Here```

Use the Titanic dataset below to predict whether passengers survived or not. Try to avoid looking at the work you did during the lecture.

Make sure to do the following but feel free to do more:
- Train/Test/Validation Split
- Majority Classifier Baseline
- Include at least 2 features in X (Stretch, try K-Best)
- Use Logistic Regression
- Score your model's accuracy against the Majority Classifier Baseline
  - If you did not beat the baseline, tweak your model until it exceeds the baseline
- Score your model on the validation set
###Code
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
train = pd.read_csv(DATA_PATH+'titanic/train.csv')
test = pd.read_csv(DATA_PATH+'titanic/test.csv')
train.head()
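# One possible sketch (hedged): the target is assumed to be 'Survived' and two
# simple numeric features are used purely for illustration.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X = train[['Pclass', 'Fare']]
y = train['Survived']
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=42)

print('Majority classifier baseline:', y_tr.value_counts(normalize=True).max())
log_reg = LogisticRegression().fit(X_tr, y_tr)
print('Validation accuracy:', log_reg.score(X_val, y_val))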
###Output
_____no_output_____ |
strings.ipynb | ###Markdown
Strings

A structure made up of one or more characters is called a character sequence, i.e. a string. Strings can be written with '' (single quotes), "" (double quotes) or """ """ (triple quotes). In Python, any expression written between quotation marks is treated as a string.
###Code
print("Elma")
print("282&kn=")
print("dlanfn")
print("") #içi boş karakter dizisi
print(" ") # içinde bir adet boşluk olan karakter dizisi
###Output
Elma
282&kn=
dlanfn
###Markdown
A string containing a space represents the space character; it lets us put spaces between strings.
###Code
print("Murat"+" "+"Can")
###Output
Murat Can
###Markdown
If we had not used the space string:
###Code
print("Murat"+"Can")
###Output
MuratCan
###Markdown
The type() Function: the type() function shows which data type an object has.
###Code
print(type("aşf30&"))
###Output
<class 'str'>
###Markdown
The values given inside the parentheses of functions and methods are technically called parameters.
###Code
print("tnbc1"+".com")
###Output
tnbc1.com
###Markdown
Leaving a space with a separate " " string
###Code
print("Mustafa"+" "+"Kemal"+" "+"Atatürk")
###Output
Mustafa Kemal Atatürk
###Markdown
"blabla " tırnak içinde boşluk bırakma
###Code
print("Mustafa "+"Kemal "+"Atatürk")
###Output
Mustafa Kemal Atatürk
###Markdown
For strings: the (+) operator concatenates, the (*) operator repeats.
###Code
print("w"*3)
print("yavaş "*2+"yaşamak lazım")
print("-"*10)
###Output
www
yavaş yavaş yaşamak lazım
----------
###Markdown
When we need double quotes inside a string we write the string with single quotes, and when we need single quotes inside we write it with double quotes. Otherwise the interpreter gets confused and we get an error.

print('Murat'ın kalemi var') raised an error because Python could not detect where the string ends.
###Code
print("Murat'ın kalemi var")
###Output
Murat'ın kalemi var
###Markdown
All of these python notebooks are available at [https://github.com/caxqueiroz/coding-with-python]

Working with strings

The Print Statement

As seen previously, the **print()** function prints all of its arguments as strings, separated by spaces and followed by a linebreak:
- print("Hello World")
- print("Hello",'World')
- print("Hello", )

Note that **print** is different in old versions of Python (2.7) where it was a statement and did not need parentheses around its arguments.
###Code
print("Hello","World")
###Output
Hello World
###Markdown
The print function has some optional arguments to control where and how to print. These include `sep`, the separator (default space), `end`, the end character (default newline), and `file` to write to a file.
###Code
print("Hello","World",sep='...',end='!!')
###Output
Hello...World!!
###Markdown
String Formatting

There are lots of methods for formatting and manipulating strings built into python. Some of these are illustrated here.

String concatenation is the "addition" of two strings. Observe that while concatenating there will be no space between the strings.
###Code
string1='World'
string2='!'
print('Hello' + string1 + string2)
###Output
HelloWorld!
###Markdown
The **%** operator is used to format a string, inserting the value that comes after. It relies on the string containing a format specifier that identifies where to insert the value. The most common types of format specifiers are:
- %s -> string
- %d -> Integer
- %f -> Float
- %o -> Octal
- %x -> Hexadecimal
- %e -> exponential
###Code
print("Hello %s" % string1)
print("Actual Number = %d" %18)
print("Float of the number = %f" %18)
print("Octal equivalent of the number = %o" %18)
print("Hexadecimal equivalent of the number = %x" %18)
print("Exponential equivalent of the number = %e" %18)
###Output
Hello World
Actual Number = 18
Float of the number = 18.000000
Octal equivalent of the number = 22
Hexadecimal equivalent of the number = 12
Exponential equivalent of the number = 1.800000e+01
###Markdown
When referring to multiple variables, parentheses are used. Values are inserted in the order they appear in the parentheses (more on tuples in the next lecture).
###Code
print("Hello %s %s. This meaning of life is %d" %(string1,string2,42))
###Output
Hello World !. This meaning of life is 42
###Markdown
We can also specify the width of the field and the number of decimal places to be used. For example:
###Code
print('Print width 10: |%10s|'%'x')
print('Print width 10: |%-10s|'%'x') # left justified
print("The number pi = %.2f to 2 decimal places"%3.1415)
print("More space pi = %10.2f"%3.1415)
print("Pad pi with 0 = %010.2f"%3.1415) # pad with zeros
###Output
Print width 10: | x|
Print width 10: |x |
The number pi = 3.14 to 2 decimal places
More space pi = 3.14
Pad pi with 0 = 0000003.14
###Markdown
Other String Methods

Multiplying a string by an integer simply repeats it:
###Code
print("Hello World! "*5)
###Output
Hello World! Hello World! Hello World! Hello World! Hello World!
###Markdown
Strings can be transformed by a variety of functions:
###Code
s="hello wOrld"
print(s.capitalize())
print(s.upper())
print(s.lower())
print('|%s|' % "Hello World".center(30)) # center in 30 characters
print('|%s|'% " lots of space ".strip()) # remove leading and trailing whitespace
print("Hello World".replace("World","Class"))
###Output
Hello world
HELLO WORLD
hello world
| Hello World |
|lots of space|
Hello Class
###Markdown
There are also lots of ways to inspect or check strings. Examples of a few of these are given here:
###Code
s="Hello World"
print("The length of '%s' is"%s,len(s),"characters") # len() gives length
s.startswith("Hello") and s.endswith("World") # check start/end
# count strings
print("There are %d 'l's but only %d World in %s" % (s.count('l'),s.count('World'),s))
print('"el" is at index',s.find('el'),"in",s) #index from 0 or -1
###Output
The length of 'Hello World' is 11 characters
There are 3 'l's but only 1 World in Hello World
"el" is at index 1 in Hello World
###Markdown
String comparison operations

Strings can be compared in lexicographical order with the usual comparisons. In addition the `in` operator checks for substrings:
###Code
'abc' < 'bbc' <= 'bbc'
"ABC" in "This is the ABC of Python"
###Output
_____no_output_____
###Markdown
Accessing parts of strings

Strings can be indexed with square brackets. Indexing starts from zero in Python.
###Code
s = '123456789'
print('First character of',s,'is',s[0])
print('Last character of',s,'is',s[len(s)-1])
###Output
First character of 123456789 is 1
Last character of 123456789 is 9
###Markdown
Negative indices can be used to start counting from the back
###Code
print('First character of',s,'is',s[-len(s)])
print('Last character of',s,'is',s[-1])
###Output
First character of 123456789 is 1
Last character of 123456789 is 9
###Markdown
Finally a substring (range of characters) can be specified using $a:b$ to specify the characters at index $a,a+1,\ldots,b-1$. Note that the last character is *not* included.
###Code
print("First three charcters",s[0:3])
print("Next three characters",s[3:6])
###Output
First three characters 123
Next three characters 456
###Markdown
An empty beginning and end of the range denotes the beginning/end of the string:
###Code
print("First three characters", s[:3])
print("Last three characters", s[-3:])
###Output
First three characters 123
Last three characters 789
###Markdown
Strings are immutable

It is important to note that strings are constant, immutable values in Python. While new strings can easily be created, it is not possible to modify a string:
###Code
s='012345'
sX=s[:2]+'X'+s[3:] # this creates a new string with 2 replaced by X
print("creating new string",sX,"OK")
sX=s.replace('2','X') # the same thing
print(sX,"still OK")
s[2] = 'X' # an error!!!
###Output
creating new string 01X345 OK
01X345 still OK
###Markdown
Strings and their methods
###Code
str = ("hello")
str
# Returns a centered string
str.center(10)
# New string
str1 = ("I am Manoj")
str1
# Returns an encoded version of the string
str3 = str1.encode()
str3
# Returns True if all characters in the string are in the alphabet
str3 = str1.isalpha()
str3
# Returns the number of times a specified value occurs in a string
str3 = str1.count("manoj")  # 0 here: count() is case-sensitive and str1 contains 'Manoj'
str3
# Converts the first character to upper case
str3 = str1.capitalize()
str3
###Output
_____no_output_____
###Markdown
Strings

A string is like a list: each word is a list of letters. Strings have several manipulation methods similar to lists, and every string can be converted to a list.
###Code
# every string is a list (of characters)
nome = "Leandro"
print(type(nome))
list(nome)
# slicing a string
nome[:3]
nome[-3:]
len(nome)
nome * 3
nome[:3] * 3
nome.upper()
nome.title()
###Output
_____no_output_____
###Markdown
Split
###Code
# split turns the compound name into an array of words
nome = "Leandro Barbieri"
[nome.title() for nome in nome.split()]
###Output
_____no_output_____
###Markdown
Replace
###Code
# replace n with t, but only for the first two occurrences of n
"bananinha".replace("n", "t", 2)
###Output
_____no_output_____
###Markdown
Find
###Code
# search for a letter
text_buscar = "a abelinha zune que zune durante a manha"
text_buscar.find("a", 0) # first occurrence of a
text_buscar.find("a", 2) # second occurrence of a
# search for the word "zune"
print(text_buscar.find("zune", 8)) # first occurrence of "zune"
print(text_buscar.find("zune", 15)) # second occurrence of "zune"
cpf = ["13212344566", "134.234.456-34"]
for c in cpf:
print(c if c.find("-") != -1 else "Não está formatado")
###Output
Não está formatado
134.234.456-34
###Markdown
Read and Write
###Code
# write and read strings in text files
with open("arquivo.txt", "w") as arquivo:
for i in range(10):
arquivo.write(f"Inserindo um linha nova {i}\n")
with open("arquivo.txt", "r") as arquivo:
linhas = arquivo.readlines()
linhas
###Output
_____no_output_____
###Markdown
String formatting
###Code
faturamento = 1500
custo = 500
lucro = faturamento - custo
# dynamic string: insert variables as part of the output
# use the letter "f" at the start of the string
print(f"O lucro foi {lucro}")
# Control the format mask.
# Use a colon to start the definition
# A comma means a thousands separator will be used
print(f"O lucro foi {lucro: ,}")
# Add decimal places with ".2f" (float with two places) after the comma
print(f"O lucro foi R${lucro: ,.2f}")
# Without percentage formatting
margem = lucro / faturamento
print(margem)
# Percentage formatting with two decimal places
print(f"A margem foi:{margem: .2%}")
# Format to the Brazilian standard in reais
# Use underscore as the thousands separator, because using . would conflict with the default decimal separator
# Step 1: change the thousands separator to _ (underscore)
texto_lucro = f"R${lucro: _.2f}"
# Step 2: change the . (decimal point) to a comma,
# then the _ (underscore thousands separator) to .
texto_lucro = texto_lucro.replace(".", ",").replace("_", ".")
print(f"O lucro foi {texto_lucro}")
###Output
O lucro foi 1000
O lucro foi 1,000
O lucro foi R$ 1,000.00
0.6666666666666666
A margem foi: 66.67%
O lucro foi R$ 1.000,00
###Markdown
Translate
###Code
# translate
# Automatically translate a text using the internet
!pip install textblob
from textblob import TextBlob
texto_outro_idioma = "Hello world. This text are translated by one python library called TextBlob"
# goes online and performs the translation
texto_traduzido = TextBlob(texto_outro_idioma).translate(to="pt-br")
###Output
_____no_output_____
###Markdown
Sequences of Characters using syntax of either '' or ""
- 'Hello'
- "Hello"
###Code
'Hello'
"World"
'This is also a string using a full sentence'
"I'm using double quotes to use the single quote in the string"
print("Hello World")
#Prints the last string if print() is not used
"Hello World one"
"Hello World two"
# use print()
print("Hello World one")
print("Hello World two")
#using escape sequence \
print("Hello \n World")
print("Hello \nWorld")
print("\t Hello \n\t World")
#check the length of the string ( counts spaces too)
len("Hello World")
###Output
_____no_output_____
###Markdown
Python More on Strings

---
###Code
sales_records = {
'price': 3.24,
'num_items': 4,
'person': 'Chris'
}
sales_statements = '{} bought items(s) at a price of {} each for a total of {} '
print( sales_statements.format(
sales_records['person'],
sales_records['price'],
sales_records['num_items']*sales_records['price']
) )
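# Equivalent using an f-string (Python 3.6+), shown here as an illustrative alternative:
print(f"{sales_records['person']} bought items(s) at a price of {sales_records['price']} "
      f"each for a total of {sales_records['num_items'] * sales_records['price']}")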
###Output
_____no_output_____
###Markdown
Python Strings
###Code
a=6
a
a+2
a='hi'
a
len(a)
a+len(a)
a+str(len(a))
run hello.py Guido
def main():
print(repeat('Yay! ',False))
print(repeat('Woo Hoo ',True))
repeat
main()
def repeat(s,exclaim):
result=s+s+s
if exclaim:
result=result+'!!!'
return result
main()
help(len)
###Output
Help on built-in function len in module __builtin__:
len(...)
len(object) -> integer
Return the number of items of a sequence or collection.
###Markdown
String Methods

If 's' is a string, following are string methods on it:

| String Method          | Returns str unless noted                                |
| ---------------------- | ------------------------------------------------------- |
| s.lower()              | copy of 's' in lowercase                                |
| s.upper()              | copy of 's' in uppercase                                |
| s.center(w)            | copy of 's' centered in 'w' characters wide             |
| s.capitalize()         | copy of 's' with first char uppercase                   |
| s.islower()            | bool, check case                                        |
| s.isupper()            | bool, check case                                        |
| s.isalpha()            | bool                                                    |
| s.isdigit()            | bool                                                    |
| s.isspace()            | bool                                                    |
| s.startswith('other')  | bool                                                    |
| s.endswith('other')    | bool                                                    |
| s.count(sub)           | int, count of 'sub' in s                                |
| s.find('other')        | int, index of first occurrence or -1                    |
| s.replace('old','new') | copy of 's' with 'old' replaced by 'new'                |
| s.split('delim')       | list of substrings separated by delimiter               |
| s.split()              | list of substrings separated by whitespace              |
| s.join(list)           | joins elements of 'list' using string 's' as delimiter  |
| s.lstrip()             | copy of 's' without leading whitespace                  |
| s.rstrip()             | copy of 's' without trailing whitespace                 |

String Slices
###Code
# Slicing: s[start:end]
# Suppose if
# H e l l o
# 0 1 2 3 4
#-5-4-3-2-1
s="Hello"
s[1:4]# end not inclusive
s[1:]
s[:]
s[1:100]# out of bound ranges default to string length
###Output
_____no_output_____
###Markdown
Negative Indexing
###Code
s[-1]
s[-4]
s[:-3]# Until last three characters i.e., -3 == 2
s[-3:]# Starting from -3 == 2
###Output
_____no_output_____
###Markdown
**Truism: s[ :n ] + s[ n: ] == s**

String % Operator

printf()-like functionality: takes a printf-type format string on the left ( %d, %s, %f, %g ) and matches the tuple on the right.
###Code
text = "%d little pigs come out or I'll %s and %s and %s" % (3,'huff','puff','blow down')
text
# code-across-lines technique (works with (),[],{})
text = ("%d little pigs come out or I'll %s and %s and %s" %
(3,'huff','puff','blow down'))
text
###Output
_____no_output_____
###Markdown
If Statement

'if', 'else', 'elif'. Boolean operators are 'and', 'or', 'not', unlike '&&' or '||'.
###Code
if speed >= 80:
print('License and registration please')
if mood == 'terrible' or speed >= 100:
print('You have the right to remain silent.')
elif mood == 'bad' or speed >= 90:
print("I'm going to have to write you a ticket.")
write_ticket()
else:
print("Let's try to keep it under 80 ok?")
###Output
_____no_output_____ |
data/years-preprocessing.ipynb | ###Markdown
Currently the data is in the form $[b,A]$ but we want it as $[A,b]$.
###Code
b.shape
A.shape
new_data = np.column_stack((A,b))
new_data.shape
np.savetxt("YearPredictionsMSD.csv", new_data, delimiter=",")  # np.savetxt defaults to a space delimiter; pass ',' for a real CSV
new_data[0,90]
new_data[0,:]
###Output
_____no_output_____ |
check_data_across_input_files.ipynb | ###Markdown
Checking data matches

In this notebook we will check that the geographical data used as inputs for the Save The Turtles project is complete and that the data fields are consistent across the files.

The model requires four input files:
1. travel matrix
1. activity data
1. scenario data
1. clinic data

The *travel matrix* contains the distance from each patient postcode sector to the clinic.
The *activity data* contains the number of admissions from each patient postcode sector for each treatment function, defined as either new admission or followup admission.
The *scenario data* contains the proportion of admissions for each treatment function that are seen virtually, defined as either new admission or followup admission.
The *clinic data* defines which clinic location is open for each treatment function.

As the same fields appear across the four input files, this notebook checks that the fields are complete and consistent across the files. It also future-proofs the model by making sure that all of the possible postcode sectors are included.

Part 1: Summary of the fields in each input file

Import libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Open data files
###Code
input_folder = "dummy_data"
# read in three data files
travel_matrix = pd.read_csv(f'{input_folder}/210528_travelMatrix.csv')
activity_data = pd.read_csv(f'{input_folder}/dummy_activity_data.csv')
scenario_data = pd.read_csv(f'{input_folder}/dummy_scenarios_data.csv')
#clinic_data = pd.read_csv(f'{input_folder}/clinics.csv')
###Output
_____no_output_____
###Markdown
The content of the travel matrix (travel_matrix has pc sector and clinic location)
###Code
print (f'Travel matrix has {travel_matrix.shape[0]} postcode sectors')
print (f'Travel matrix has {travel_matrix.shape[1] - 1} clinic locations')
###Output
Travel matrix has 107 postcode sectors
Travel matrix has 3 clinic locations
###Markdown
The content of the activity data (activity_data has pc sector and treatment function, for followup and new)
###Code
activity_column_names = activity_data.columns
print (f'Activity data has {activity_data["pc_sector"].nunique()} postcode sectors')
print (f'Activity data has {activity_data["treatment_function"].nunique()} treatment functions')
print (f'Activity data calls the two admission splits: {activity_column_names[-2]} and {activity_column_names[-1]}')
###Output
Activity data has 107 postcode sectors
Activity data has 80 treatment functions
Activity data calls the two admission splits: total_new_adms and total_followup_adms
###Markdown
The content of the scenarios data (scenarios_data has treatment function, for followup and new)
###Code
scenario_column_names = scenario_data.columns
print (f'Scenario data has {scenario_data["treatment_function"].nunique()} treatment functions')
print (f'Scenario data has {scenario_data["scenario_title"].nunique()} scenarios')
print (f'Scenario data calls the two admission splits: {scenario_column_names[-2]} and {scenario_column_names[-1]}')
###Output
Scenario data has 67 treatment functions
Scenario data has 3 scenarios
Scenario data calls the two admission splits: pc_followup_adms_virtual and pc_new_adms_virtual
###Markdown
The content of the clinic data
###Code
print("yet to do")
###Output
yet to do
###Markdown
Part 2: Comparing fields across files

Comparing treatment function content across files
###Code
scenario_tf = list(scenario_data["treatment_function"].unique())
activity_tf = list(activity_data["treatment_function"].unique())
only_in_scen = list(set(scenario_tf) - set(activity_tf))
only_in_activity = list(set(activity_tf) - set(scenario_tf))
print (f'Here are the {len(only_in_activity)} treatment functions included in the Activity data and not in the Scenario data:')
print(only_in_activity)
print ()
print (f'Here are the {len(only_in_scen)} treatment functions included in the Scenario data and not in the Activity data:')
print(only_in_scen)
###Output
Here are the 15 treatment functions included in the Activity data and not in the Scenario data:
['109_bariatric_surgery_service', '370_medical_oncology', '677_gastrointestinal_physiology_service', '311_clinical_genetics', '170_cardiothoracic_surgery', '145_oral_and_maxillofacial_surgery_service', '505_fetal_medicine_service_', '670_urological_physiology_service', '007_non_consultant', '000_dummy_treatment_function', '110_trauma_and_orthopaedic_service', '347_sleep_medicine_service', '461_ophthalmic_and_vision_science_service', '675_cardiac_physiology_service', '950_nursing_episode']
Here are the 2 treatment functions included in the Scenario data and not in the Activity data:
['110_trauma_and_orthopaedics', '140_oral_surgery']
###Markdown
Comparing postcode sector content across files
###Code
travel_pc = list(travel_matrix["pc_sector"].unique())
activity_pc = list(activity_data["pc_sector"].unique())
only_in_travel = list(set(travel_pc) - set(activity_pc))
only_in_activity = list(set(activity_pc) - set(travel_pc))
print (f'Here are the {len(only_in_activity)} postcode sectors included in the Activity data and not in the Travel data:')
print(only_in_activity)
print ()
print (f'Here are the {len(only_in_travel)} postcode sectors included in the Travel data and not in the Activity data:')
print(only_in_travel)
###Output
Here are the 2 postcode sectors included in the Activity data and not in the Travel data:
['TR27 9', 'TR11 9']
Here are the 2 postcode sectors included in the Travel data and not in the Activity data:
['PL14 9', 'TR14 4']
###Markdown
Comparing clinic content across files
###Code
# yet to do
###Output
_____no_output_____
###Markdown
Part 3: Complete postcode sector list for Cornwall

We have seen already that the travel matrix does not contain all of the postcode sectors that are in the activity data. To future-proof the model we need to have all of the Cornish postcode sectors in the travel matrix.

Plot a map of the county, and the postcode sectors, to see what we are wanting to include. We will use geopandas.

Import libraries
###Code
import geopandas as gpd
pc_sector_shp = gpd.read_file("shapefiles/GB_Postcodes/PostalSector_cornwall.shp")
pc_sector_shp.head()
pc_sector_shp.plot()
filename = ("shapefiles/county_boundaries/devon_county.shp")
county_devon_shp = gpd.read_file(filename,
crs='EPSG:4326')
county_devon_shp = county_devon_shp.to_crs(epsg=27700)
county_devon_shp.head()
county_devon_shp.plot()
filename = ("shapefiles/county_boundaries/cornwall_county.shp")
county_cornwall_shp = gpd.read_file(filename,
crs='EPSG:4326')
county_cornwall_shp = county_cornwall_shp.to_crs(epsg=27700)
###Output
_____no_output_____
###Markdown
Plot the shapefiles on the same map
###Code
ax = pc_sector_shp.plot(figsize=(10, 10), zorder=2,
linewidth=0.3, edgecolor='k', facecolor='none')#, #alpha=0.8, areacolor='none')
#ax.set_axis_off()
county_devon_shp.plot(ax=ax, zorder=1, edgecolor='mediumseagreen', facecolor='mediumseagreen')#1 is at bottom
county_cornwall_shp.plot(ax=ax, zorder=1, edgecolor='mediumseagreen', facecolor='mediumseagreen')#1 is at bottom
ax.set(xlim=(130000, 260000), ylim=(0, 120000))
###Output
_____no_output_____
###Markdown
Part 3 (continued): Get the full list of postcode sectors in Cornwall

From https://www.doogal.co.uk/AdministrativeAreas.php?district=E06000052 download the document to get access to the file "Cornwall postcodes.csv". We want to extract the unique postcode sectors that are currently in use: take the Postcode column and extract the postcode sector (the first block of characters, plus the first character following the space).

Import libraries
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Define a function to return the unique instances of a list
###Code
def unique(list1):
# function to get unique values in a list
x = np.array(list1)
return(np.unique(x))
###Output
_____no_output_____
###Markdown
Read in the data and include only those rows that are in use.
###Code
# read in data file
df = pd.read_csv('shapefiles/Cornwall postcodes.csv')
#Only include those that are in use
df = df[df["In Use?"] == "Yes"]
###Output
_____no_output_____
###Markdown
Take the postcode column, and split it into two, based on the whitespace
###Code
pc_split = df['Postcode'].str.split()
###Output
_____no_output_____
###Markdown
For each postcode (split into two), create the postcode sector by taking the first half and adding on the first character of the second half
###Code
cornish_pc_sector = []
cornish_pc_area = []
for i in range(pc_split.shape[0]):
cornish_pc_sector.append(pc_split.iloc[i][0] + " " + pc_split.iloc[i][1][0])
cornish_pc_area.append(pc_split.iloc[i][0])
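# An equivalent vectorized form (illustrative alternative, not from the original):
# cornish_pc_sector = (pc_split.str[0] + " " + pc_split.str[1].str[0]).tolist()
# cornish_pc_area = pc_split.str[0].tolist()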
###Output
_____no_output_____
###Markdown
Take the unique occurrences
###Code
cornish_pc_sector = unique(cornish_pc_sector)
cornish_pc_area = unique(cornish_pc_area)
###Output
_____no_output_____
###Markdown
Create dataframes to output the values as a file
###Code
df1 = pd.DataFrame(cornish_pc_sector, columns=["postcode_sector"])
df2 = pd.DataFrame(cornish_pc_area, columns=["postcode_area"])
df1.to_csv("shapefiles/cornwall_postcode_sectors.csv",index=False)
df2.to_csv("shapefiles/cornwall_postcode_areas.csv",index=False)
###Output
_____no_output_____
###Markdown
Part 4: Compare the list of postcode sectors in the geographical model inputs with this complete list of postcode sectors
###Code
not_in_travel = list(set(cornish_pc_sector) - set(travel_pc))
not_in_activity = list(set(cornish_pc_sector) - set(activity_pc))
not_in_both = list(set(not_in_activity+not_in_travel))
only_in_travel = list(set(travel_pc) - set(cornish_pc_sector))
only_in_activity = list(set(activity_pc) - set(cornish_pc_sector))
print (f'Here are the {len(not_in_activity)} postcode sectors included in the full list and not in the activity data:')
print(not_in_activity)
print ()
print (f'Here are the {len(not_in_travel)} postcode sectors included in the full list and not in the Travel data:')
print(not_in_travel)
print ()
print (f'Here are the {len(not_in_both)} postcode sectors included in the full list and not in both the activity & travel data:')
print(not_in_both)
print ()
print (f'Here are the {len(only_in_activity)} postcode sectors included in the activity data and not in the full list:')
print(only_in_activity)
print ()
print (f'Here are the {len(only_in_travel)} postcode sectors included in the Travel data and not in the full list:')
print(only_in_travel)
###Output
Here are the 14 postcode sectors included in the full list and not in the activity data:
['TR26 9', 'PL12 9', 'PL31 9', 'TR14 4', 'PL27 9', 'TR15 9', 'TR7 9', 'PL15 0', 'TR13 3', 'PL13 9', 'TR18 9', 'EX23 3', 'PL14 9', 'PL17 0']
Here are the 14 postcode sectors included in the full list and not in the Travel data:
['TR26 9', 'PL12 9', 'PL31 9', 'TR27 9', 'TR11 9', 'PL27 9', 'TR15 9', 'TR7 9', 'PL15 0', 'TR13 3', 'PL13 9', 'TR18 9', 'EX23 3', 'PL17 0']
Here are the 16 postcode sectors included in the full list and not in both the activity & travel data:
['TR26 9', 'PL12 9', 'PL31 9', 'TR27 9', 'TR11 9', 'TR14 4', 'PL27 9', 'TR15 9', 'TR7 9', 'PL15 0', 'TR13 3', 'PL13 9', 'TR18 9', 'EX23 3', 'PL14 9', 'PL17 0']
Here are the 4 postcode sectors included in the activity data and not in the full list:
['TR22 0', 'TR25 0', 'TR23 0', 'TR24 0']
Here are the 4 postcode sectors included in the Travel data and not in the full list:
['TR22 0', 'TR25 0', 'TR23 0', 'TR24 0']
|
03_Benchmark_Analyses.ipynb | ###Markdown
Benchmark Analyses

We lean on the `psifr` toolbox to generate three plots corresponding to the contents of Figure 4 in Morton & Polyn, 2016:

1. Recall probability as a function of serial position
2. Probability of starting recall with each serial position
3. Conditional response probability as a function of lag

Input data is presumed to be [formatted for use of the psifr toolbox](https://psifr.readthedocs.io/en/latest/guide/import.html).
###Code
# export
import pandas as pd
import seaborn as sns
from psifr import fr
import matplotlib.pyplot as plt
def visualize_individuals(data, data_query='subject > -1'):
"""
Visualize variation between subjects in dataset wrt key organizational metrics.
"""
# generate data-based spc, pnr, lag_crp
data_spc = fr.spc(data).query(data_query).reset_index()
data_pfr = fr.pnr(data).query('output <= 1').query(data_query).reset_index()
data_lag_crp = fr.lag_crp(data).query(data_query).reset_index()
# spc
g = sns.FacetGrid(dropna=False, data=data_spc)
g.map_dataframe(sns.lineplot, x='input', y='recall', hue='subject')
g.set_xlabels('Serial position')
g.set_ylabels('Recall probability')
#plt.title('Recall Probability by Serial Position')
g.set(ylim=(0, 1))
plt.savefig('spc.pdf', bbox_inches='tight')
# pfr
h = sns.FacetGrid(dropna=False, data=data_pfr)
h.map_dataframe(sns.lineplot, x='input', y='prob', hue='subject')
h.set_xlabels('Serial position')
h.set_ylabels('Probability of First Recall')
#plt.title('P(First Recall) by Serial Position')
h.set(ylim=(0, 1))
plt.savefig('pfr.pdf', bbox_inches='tight')
# lag crp
max_lag = 5
filt_neg = f'{-max_lag} <= lag < 0'
filt_pos = f'0 < lag <= {max_lag}'
i = sns.FacetGrid(dropna=False, data=data_lag_crp)
i.map_dataframe(
lambda data, **kws: sns.lineplot(data=data.query(filt_neg),
x='lag', y='prob', hue='subject', **kws))
i.map_dataframe(
lambda data, **kws: sns.lineplot(data=data.query(filt_pos),
x='lag', y='prob', hue='subject', **kws))
i.set_xlabels('Item Lag')
i.set_ylabels('Conditional Response Probability')
#plt.title('Recall Probability by Item Lag')
i.set(ylim=(0, 1))
plt.savefig('crp.pdf', bbox_inches='tight')
#export
import pandas as pd
import seaborn as sns
from psifr import fr
import matplotlib.pyplot as plt
def visualize_aggregate(data, data_query):
# generate data-based spc, pnr, lag_crp
data_spc = fr.spc(data).query(data_query).reset_index()
data_pfr = fr.pnr(data).query('output <= 1').query(data_query).reset_index()
data_lag_crp = fr.lag_crp(data).query(data_query).reset_index()
# spc
g = sns.FacetGrid(dropna=False, data=data_spc)
g.map_dataframe(sns.lineplot, x='input', y='recall',)
g.set_xlabels('Serial position')
g.set_ylabels('Recall probability')
#plt.title('Recall Probability by Serial Position')
g.set(ylim=(0, 1))
plt.savefig('spc.pdf', bbox_inches='tight')
# pfr
h = sns.FacetGrid(dropna=False, data=data_pfr)
h.map_dataframe(sns.lineplot, x='input', y='prob')
h.set_xlabels('Serial position')
h.set_ylabels('Probability of First Recall')
#plt.title('P(First Recall) by Serial Position')
h.set(ylim=(0, 1))
plt.savefig('pfr.pdf', bbox_inches='tight')
# lag crp
max_lag = 5
filt_neg = f'{-max_lag} <= lag < 0'
filt_pos = f'0 < lag <= {max_lag}'
i = sns.FacetGrid(dropna=False, data=data_lag_crp)
i.map_dataframe(
lambda data, **kws: sns.lineplot(data=data.query(filt_neg),
x='lag', y='prob', **kws))
i.map_dataframe(
lambda data, **kws: sns.lineplot(data=data.query(filt_pos),
x='lag', y='prob', **kws))
i.set_xlabels('Item Lag')
i.set_ylabels('Conditional Response Probability')
#plt.title('Recall Probability by Item Lag')
i.set(ylim=(0, 1))
plt.savefig('crp.pdf', bbox_inches='tight')
# export
from repfr.datasets import simulate_data
def visualize_model(model, experiment_count, first_recall_item=None, data_query='subject > -1'):
    visualize_aggregate(simulate_data(model, experiment_count, first_recall_item), data_query)
# export
import pandas as pd
import seaborn as sns
from psifr import fr
import matplotlib.pyplot as plt
def visualize_fit(
model_class, parameters, data, data_query=None, experiment_count=1000, savefig=False):
"""
Apply organizational analyses to visually compare the behavior of the model
with these parameters against specified dataset.
"""
# generate simulation data from model
model = model_class(**parameters)
sim_data = simulate_data(model, experiment_count)
# generate simulation-based spc, pnr, lag_crp
sim_spc = fr.spc(sim_data).reset_index()
sim_pfr = fr.pnr(sim_data).query('output <= 1') .reset_index()
sim_lag_crp = fr.lag_crp(sim_data).reset_index()
# generate data-based spc, pnr, lag_crp
data_spc = fr.spc(data).query(data_query).reset_index()
data_pfr = fr.pnr(data).query('output <= 1').query(data_query).reset_index()
data_lag_crp = fr.lag_crp(data).query(data_query).reset_index()
# combine representations
data_spc['Source'] = 'Data'
sim_spc['Source'] = model_class.__name__
combined_spc = pd.concat([data_spc, sim_spc], axis=0)
data_pfr['Source'] = 'Data'
sim_pfr['Source'] = model_class.__name__
combined_pfr = pd.concat([data_pfr, sim_pfr], axis=0)
data_lag_crp['Source'] = 'Data'
sim_lag_crp['Source'] = model_class.__name__
combined_lag_crp = pd.concat([data_lag_crp, sim_lag_crp], axis=0)
# generate plots of result
# spc
g = sns.FacetGrid(dropna=False, data=combined_spc)
g.map_dataframe(sns.lineplot, x='input', y='recall', hue='Source')
g.set_xlabels('Serial position')
g.set_ylabels('Recall probability')
#plt.title('Recall Probability by Serial Position')
g.add_legend()
g.set(ylim=(0, 1))
    if savefig:
        plt.savefig('{}_fit_spc.pdf'.format(model_class.__name__), bbox_inches='tight')
    # pfr
    h = sns.FacetGrid(dropna=False, data=combined_pfr)
h.map_dataframe(sns.lineplot, x='input', y='prob', hue='Source')
h.set_xlabels('Serial position')
h.set_ylabels('Probability of First Recall')
#plt.title('P(First Recall) by Serial Position')
h.add_legend()
h.set(ylim=(0, 1))
    if savefig:
        plt.savefig('{}_fit_pfr.pdf'.format(model_class.__name__), bbox_inches='tight')
# lag crp
max_lag = 5
filt_neg = f'{-max_lag} <= lag < 0'
filt_pos = f'0 < lag <= {max_lag}'
    i = sns.FacetGrid(dropna=False, data=combined_lag_crp)
i.map_dataframe(
lambda data, **kws: sns.lineplot(data=data.query(filt_neg),
x='lag', y='prob', hue='Source', **kws))
i.map_dataframe(
lambda data, **kws: sns.lineplot(data=data.query(filt_pos),
x='lag', y='prob', hue='Source', **kws))
i.set_xlabels('Item Lag')
i.set_ylabels('Conditional Response Probability')
#plt.title('Recall Probability by Item Lag')
i.add_legend()
i.set(ylim=(0, 1))
if savefig:
plt.savefig('{}_fit_crp.pdf'.format(model_class.__name__), bbox_inches='tight')
else:
plt.show()
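# Example usage (a sketch; `MyModelClass`, `best_parameters` and `events` are
# hypothetical placeholders for whatever model class, parameter dict and
# psifr-formatted DataFrame you are working with):
#
# visualize_fit(MyModelClass, best_parameters, events,
#               data_query='subject > -1', experiment_count=1000, savefig=False)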
###Output
_____no_output_____ |
ML_Soup_to_Nuts.ipynb | ###Markdown
Scikit-learn Soup to Nuts: Developing a Machine-Learning Workflow

In this lecture we will discuss the tools and steps necessary to build a successful machine-learning model.

Adam A Miller (CIERA/Northwestern & Adler Planetarium), (c) 2017 Nov 2

Machine Learning is fundamentally concerned with the problem of classification, *particularly in the regime of large dimensional data sets* (the methods can be extended to regression). It is (glorified) Pattern Matching, though *that is a slight over-simplification*. In other words, be careful about over-interpreting the "learning"...

*credit*: this image is everywhere on the web, and I cannot track down the original source; [this blog](https://devblogs.nvidia.com/parallelforall/mocha-jl-deep-learning-julia/) uses it without attribution

Terminology

**Features**: measured properties of objects in the data set; can be numerical or categorical (e.g., red vs. blue)

**Labels**: target classification or regression variable (to be predicted)

Standard ([supervised](https://en.wikipedia.org/wiki/Supervised_learning)) ML goal:

1. **Train** — Develop a mapping between *features* and *labels*
2. **Test** — Evaluate model on non-training labeled data
3. **Predict** — Apply model to sources with unknown labels

Today I will not discuss [unsupervised machine learning](https://en.wikipedia.org/wiki/Unsupervised_learning), primarily because we do not have time, but also because I have not seen a single useful application of these techniques in my own science. In brief, unsupervised learning ignores any labels that may be available and instead attempts to cluster sources based on their similarity in the multidimensional feature space. However, once the clusters have been identified there is no mathematical method for measuring the quality of the clusters (and hence my feeling that these methods aren't that useful).

Question 1: Why is the step with the test data necessary? [Take a few min to discuss with your partner]

With this simple picture in mind, let's get started. Our tool for today is `python`'s [scikit-learn](http://scikit-learn.org/stable/). `scikit-learn` is amazing! It includes everything needed to construct the ML workflow, and has excellent documentation. With only 4 (!) lines of code, we can build an ML model with `scikit-learn`.
###Code
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
iris = load_iris()
rf_clf = RandomForestClassifier().fit(iris.data, iris.target)
###Output
_____no_output_____
###Markdown
Bang! Just like that - you're done. Now you can all go home.

As a very important aside - allow me a moment on my soapbox to advise caution regarding the simplicity of `scikit-learn`: the package is so user friendly, and the documentation so good, that it is not just easy to build a model, it is also incredibly easy to become overconfident in the model. Generally speaking, ML models are highly subject to noise and training-set biases, and the simplicity of `scikit-learn` can consistently lead to a few lines of code that appear to produce outstanding results. This is the first (but it will not be the last) time that I will implore you to **worry about the data**.

On to building a full pipeline...

1. Data Preparation

As ML is a data-driven method, the first, and arguably most important, step is to curate the data.

1. Query, observe, simulate, etc. (i.e. collect the observations)
2. Select features to be used in the model
3. Determine training set "ground truth" (i.e. *labels*)

Beyond these initial considerations, additional steps to consider include:

4. Convert categorical features, e.g., male, female, male, male $\rightarrow$ [0, 1, 0, 0]
5. [Impute](https://en.wikipedia.org/wiki/Imputation_(statistics)) (or discard) missing data
6. Feature normalization (typically only necessary for certain ML models)
7. Visualize the data (a critical step for all data-science applications)

and of course, don't forget... Worry About the Data

Today we will work with the famous [iris flower data set](https://en.wikipedia.org/wiki/Iris_flower_data_set), which is small and understandable, but as a result avoids many of the trappings of dealing with real world data. There are 3 iris classes: setosa, virginica, and versicolor. For each flower, 4 features have been measured: petal length, petal width, sepal length, and sepal width. We will use [`seaborn`](https://seaborn.pydata.org) to visualize the data (but all subsequent work will be in `scikit-learn`).
###Code
xkcd_colors = ["windows blue", "amber", "slate"]
def infer_cmap(color):
xkcd_colors = ["windows blue", "amber", "slate"]
hues = sns.xkcd_palette(xkcd_colors)
if color == hues[0]:
return sns.light_palette(hues[0], as_cmap=True)
elif color == hues[1]:
return sns.light_palette(hues[1], as_cmap=True)
elif color == hues[2]:
return sns.light_palette(hues[2], as_cmap=True)
def kde_color_plot(x, y, **kwargs):
cmap = infer_cmap(kwargs['color'])
ax = sns.kdeplot(x, y, shade=True, shade_lowest=False, cmap=cmap, **kwargs)
return ax
iris_df = sns.load_dataset("iris")
g = sns.PairGrid(iris_df, hue='species',
vars=['sepal_length','sepal_width',
'petal_length','petal_width'],
palette=sns.xkcd_palette(xkcd_colors),
diag_sharey=False)
g = g.map_upper(plt.scatter, alpha=0.7)
g = g.map_lower(kde_color_plot)
g = g.map_diag(sns.kdeplot, shade=True)
###Output
_____no_output_____
###Markdown
In brief, these corner plots show that the data are fairly well separated, though there is some overlap between the virginica and versicolor species.

2. Feature Engineering [This step may need to be repeated]

Add new features (if necessary): utilize domain knowledge to create/compute new features, e.g., sepal_length/petal_length may be more informative.

Remove noisy/uninformative features (if necessary): [feature importance can be measured](http://scikit-learn.org/stable/modules/ensemble.html#feature-importance-evaluation) via Random Forest, and [forward/backward feature selection](https://www.cs.cmu.edu/~kdeng/thesis/feature.pdf) can thin the feature set.

In this case we have only 4 features, so we will proceed under the assumption that the feature set need not be thinned.

3. Model Selection [This step may need to be repeated]

Following data organization and feature engineering, the practitioner must then select an ML algorithm. Every problem/data set is different. Best practices often include trying multiple algorithms to determine which is best. After lots of experience it is possible to develop some intuition for which algorithms will work best in which regimes. But remember - ultimately we are working with black boxes. Intuition can easily lead you astray in this case...

We will adopt the [$k$-Nearest Neighbors](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) ($k$NN) algorithm for today's problem. The primary reason is that it is very easy to understand: final classifications are determined by identifying the $k$, a user-selected number, nearest neighbors in the training set to the source being classified. Euclidean distances are typically used to determine the separation between sources, though other metrics are also possible.

$k$NN is an algorithm that may require feature normalization (discussed above). Imagine, for example, a two-feature model where feature $x_1$ is gaussian distributed with mean 0 and standard deviation 10 $[x_1 \sim \mathcal{N}(0, 100)]$, compared to feature $x_2 \sim \mathcal{N}(0,0.01)$. In this case, the classifications will be entirely decided by $x_1$ as the typical $\Delta x_1$ will be orders of magnitude larger than $\Delta x_2$. Of course, if $x_1$ is significantly more important than $x_2$ then maybe this is okay.

`scikit-learn` makes $k$NN easy with the [`KNeighborsClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html) class from the [`neighbors`](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.neighbors) subpackage.
###Code
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_neighbors=11)
###Output
_____no_output_____
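###Markdown
As an aside, if the features had wildly different scales (as in the $x_1$/$x_2$ example above), we could normalize them before training the $k$NN model. A minimal sketch with `StandardScaler` follows; it is not applied below, since the iris features have comparable scales:
###Code
# sketch only - not used in the rest of this notebook
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
iris_scaled = scaler.fit_transform(iris.data)  # each column now has mean 0, std 1
###Output
_____no_output_____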
###Markdown
You may have noticed that I set $k = 11$. This should worry you - why 11 neighbors and not 7? or the default 5? or 121? We will answer that now...

The real answer for why I set $k = 11$ is that today I got a [Slurpee](https://en.wikipedia.org/wiki/Slurpee) and it tasted good.

4. Model Evaluation [This step may need to be repeated]

With model in hand, we now need to evaluate its performance. What is the best metric for evaluating the selected model? There are many metrics we can use to evaluate a model's performance, and we will cover a few of those now.

Before we evaluate the model, we need to split the data into a training and test set (described above). We can easily do this using [`train_test_split`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) from the [`model_selection`](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.model_selection) `scikit-learn` subpackage.
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(iris.data,
iris.target,
test_size = 0.3,
random_state = 23)
###Output
_____no_output_____
###Markdown
Why random_state = 23? Because we are in Chicago, and Michael Jordan was on the Bulls, and Michael Jordan is the best basketball player ever.

At this stage - we set the test set aside, and ignore it completely until we have fully optimized our model. Applying the model to these data before finalizing the model is SNOOPING - don't do it.

Terminology

**True Positive** (TP): + classified as +
**False Positive** (FP): - classified as +
**True Negative** (TN): - classified as -
**False Negative** (FN): + classified as -

Most metrics are defined by [TP, FP, TN, and FN](https://en.wikipedia.org/wiki/Sensitivity_and_specificity):

**Accuracy**: (TP + TN)/(TP + TN + FP + FN)
**True Positive Rate** (aka sensitivity, recall, etc.): TP/(TP + FN)
**False Positive Rate**: FP/(TN + FP)
**True Negative Rate** (aka specificity): TN/(TN + FP)
**Precision**: TP/(TP + FP)

and many, many more...

Another extremely useful tool is the [Receiver Operating Characteristic](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) curve. For $k$NN it is not possible to determine the ROC curve, as the ROC curve is determined by measuring the TPR vs. FPR as a function of varying classification decision thresholds. Models that produce probabilistic classifications can be used to create ROC curves. The ROC curve is extremely useful for setting decision thresholds in cases where a desired TPR or FPR is known a priori (e.g., when to trigger human review of credit card transactions in possible cases of fraud). When "follow-up" resources are limited, setting these thresholds helps to optimize performance.

Finally, the [confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix) is exceptionally useful for identifying classes that are being misclassified:

| | predicted + | predicted - |
| --- | --- | --- |
| **true +** | TP | FN |
| **true -** | FP | TN |

As we cannot touch the test set, how do we evaluate the model performance? [Cross validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)). In brief, we will further split the training set, use some of it to define the mapping between features and labels, and then evaluate the quality of that mapping using the sources that were withheld from training.

There are many flavors of CV, but $k$-fold CV is most common. In $k$-fold CV, the training set is split into $k$ partitions. Iteratively, each partition is withheld, the model is trained, and predictions are made on the withheld partition. With predictions for every source in hand, we can compare the predictions to the known labels.

Cross validation is simple with `scikit-learn` using the `model_selection` subpackage. We will focus on [`cross_val_predict`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_predict.html) which returns predictions for every source in the training set.
###Code
from sklearn.model_selection import cross_val_predict
y_train_preds = cross_val_predict(knn_clf, X_train, y_train, cv = 10)
###Output
_____no_output_____
###Markdown
The super useful [`metrics`](http://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.metrics) subpackage allows us to evaluate the model.
###Code
from sklearn.metrics import accuracy_score, confusion_matrix
print("kNN CV acc = {:.4f}".format(accuracy_score(y_train,
y_train_preds)))
###Output
_____no_output_____
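###Markdown
For reference, the terminology metrics defined above are also available directly in `sklearn.metrics`. A short sketch computing two of them from the CV predictions (for a multi-class problem like iris, precision and recall require an averaging scheme):
###Code
from sklearn.metrics import precision_score, recall_score

print("precision    = {:.4f}".format(precision_score(y_train, y_train_preds, average='macro')))
print("recall (TPR) = {:.4f}".format(recall_score(y_train, y_train_preds, average='macro')))
###Output
_____no_output_____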
###Markdown
We can also use `scikit-learn` to make a confusion matrix. A nice looking confusion matrix requires [more code than fits on a slide](http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.htmlsphx-glr-auto-examples-model-selection-plot-confusion-matrix-py). I'll hide that and instead just show the results.
###Code
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
# print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
cmap = sns.cubehelix_palette(8, start=.5, rot=-.75, as_cmap=True)
cm = confusion_matrix(y_train, y_train_preds)
with sns.axes_style("white"):
plot_confusion_matrix(cm, iris.target_names,
normalize = True,
cmap = cmap)
###Output
_____no_output_____
###Markdown
5. Model Optimization [This step may need to be repeated]

Previously, we set $k = 11$ for the $k$NN model, but we (rightly) asked, what is so special about 11? Now we should optimize the model tuning parameters.

The tried and true method in this regard is brute force: an exhaustive grid search across all relevant tuning parameters. In cases with many parameters (or extremely large data sets) a [randomized parameter search may be more pragmatic](http://scikit-learn.org/stable/auto_examples/model_selection/plot_randomized_search.html#sphx-glr-auto-examples-model-selection-plot-randomized-search-py). Unfortunately, there is no substitute for patience. It is virtually never the case that some objective function can be optimized to determine the optimal tuning parameters.

Using the `model_selection` subpackage, we can use the [`GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) class to perform the exhaustive search. In this case, the relevant parameters are $k$ and $p$, the order of the [Minkowski distance](https://en.wikipedia.org/wiki/Minkowski_distance) ($p = 2$ is equivalent to Euclidean distance).
###Code
from sklearn.model_selection import GridSearchCV
opt_tune = GridSearchCV(knn_clf, {'n_neighbors': [1, 3, 5, 10, 30, 50],
'p': [1, 2, 3]}, cv = 10)
opt_tune.fit(X_train, y_train)
opt_k = opt_tune.best_params_['n_neighbors']
opt_p = opt_tune.best_params_['p']
print("Opt model has k = {:d} and p = {:d}".format(opt_k, opt_p))
###Output
_____no_output_____
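###Markdown
As mentioned above, for larger parameter spaces a randomized search may be more pragmatic. A hedged sketch follows (same interface as `GridSearchCV`, but only `n_iter` parameter combinations are sampled); the exhaustive grid above is small enough that this is not needed here:
###Code
# sketch only - not used in the rest of this notebook
from sklearn.model_selection import RandomizedSearchCV

rnd_tune = RandomizedSearchCV(knn_clf,
                              {'n_neighbors': list(range(1, 51)), 'p': [1, 2, 3]},
                              n_iter=10, cv=10, random_state=23)
# rnd_tune.fit(X_train, y_train)
###Output
_____no_output_____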
###Markdown
Furthermore, it is useful to understand how the performance changes as a function of the parameters in the search.
###Code
k_grid = np.unique(opt_tune.cv_results_['param_n_neighbors'])
p_grid = np.unique(opt_tune.cv_results_['param_p'])
K, P = np.meshgrid(k_grid, p_grid)
score_grid = np.empty(np.shape(K))
for params, acc in zip(opt_tune.cv_results_['params'],
opt_tune.cv_results_['mean_test_score']):
this_source = np.where((K == params['n_neighbors']) & (P == params['p']))
score_grid[this_source] = acc
with sns.axes_style('white'):
fig, ax = plt.subplots()
    im = ax.imshow(score_grid, origin = 'lower',
cmap = cmap)
thresh = 0.92
for i, j in itertools.product(range(score_grid.shape[0]),
range(score_grid.shape[1])):
ax.text(j, i, format(score_grid[i, j], '.4f'),
horizontalalignment="center",
color="w" if score_grid[i, j] > thresh else "k")
ax.set_xticks(np.arange(len(k_grid)))
ax.set_xticklabels(k_grid)
ax.set_yticks(np.arange(len(p_grid)))
ax.set_yticklabels(p_grid)
ax.set_xlabel('k', fontsize = 14)
ax.set_ylabel('p', fontsize = 14)
cb = plt.colorbar(im)
###Output
_____no_output_____
###Markdown
In this case we see that a range of different parameter choices provide the optimal results. Moving forward we will make predictions using a model with $k = 3$ and $p = 2$ (matching the model built below).

6. Prediction

The final step, now that we have fully specified and trained our model, is to make predictions. Apply the model to the test set $\rightarrow$ estimate the generalization error (i.e. how does the model perform on new data?).

The test-set generalization error typically overestimates the model performance. There are several reasons why this might be the case, but the easiest to understand is training set bias. Every source that has a label is labeled "for a reason" (typically because someone, somewhere decided to devote resources towards labeling). This selection process is rarely random, meaning the training set is biased relative to the population. These biases, even if small, will be propagated through the model, but not identified via the test set, which (in most cases) comes from the same super set as the training set. As always - **worry about the data**.

Pulling back the curtain on the test set is a point of no return. At this stage, it's possible (and encouraged if necessary) to go back and adjust the work in section 2 (feature engineering), 3 (model selection), 4 (model evaluation) and 5 (model optimization). Cycling through these procedures multiple times is typically needed prior to evaluating the model with the test set. We, however, will proceed with our simple model:
###Code
knn_clf_final = KNeighborsClassifier(n_neighbors=3, p = 2)
knn_clf_final.fit(X_train, y_train)
test_preds = knn_clf_final.predict(X_test)
gen_error = 1 - accuracy_score(y_test, test_preds)
print("The kNN test-set acc = {:.4f}".format(1 - gen_error))
###Output
_____no_output_____ |
LSST/SuperNovaLightCurves/SN_Photometry_PlotAndModel.ipynb | ###Markdown
K-fold cross validation analysis and root mean squared error calculation
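Here the cross validation is leave-one-out: each data point is held out in turn, the model is refit on the remaining points, and the held-out point is predicted. The resulting error metric is $$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{f}_i - f_i\right)^2}$$ where $\hat{f}_i$ is the prediction for the held-out flux $f_i$.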
###Code
def calc_RMSE(flux, times, flux_errors, band, fit):
    """Leave-one-out CV: refit the model with each point held out,
    predict the held-out point, and return the root mean squared error.
    (flux_errors is kept in the signature for compatibility with existing calls.)"""
    flux_predictions = []
    # loop to run 'leave one out' CV
    for ind, f in enumerate(flux):
        flux_del = np.delete(flux, ind)
        times_del = np.delete(times, ind)
        Coeffs, Covar = curve_fit(fit, times_del, flux_del, priors[band], bounds=param_bounds)
        # predict the held-out point with the best-fit coefficients
        ypred = fit(times[ind], *Coeffs)
        flux_predictions.append(ypred)
    flux_predictions = np.array(flux_predictions)
    # Root Mean Square Error calculation
    RMSE = np.sqrt(np.sum((flux_predictions - flux)**2) / len(flux))
    return RMSE
print(calc_RMSE(I_data.flux.values, I_data.time.values, I_data.flux_error.values, "I", Kapernka))
print(calc_RMSE(g_data.flux.values, g_data.time.values, g_data.flux_error.values, 'g', Kapernka))
print(calc_RMSE(R_data.flux.values, R_data.time.values, R_data.flux_error.values, 'R', Kapernka))
###Output
3.80203633369e-05
2.45527400617e-05
4.181858384e-05
|
data/Autoencoders_outliers_detection/NASAbearingDataset-PyODautoencoders_SetNo2.ipynb | ###Markdown
Dataset preprocessing
###Code
# Read the CSV file and set first column as the dataframe index
dataset = pd.read_csv("../input/NASA-bearing-dataset/merged_dataset_BearingTest_2.csv", index_col=0)
dataset.head()
###Output
_____no_output_____
###Markdown
Normalize data
###Code
from sklearn import preprocessing
# Decide on what normalizer function to use
## https://www.geeksforgeeks.org/standardscaler-minmaxscaler-and-robustscaler-techniques-ml
scaler = preprocessing.MinMaxScaler() # scales all the data features in the range [0, 1] or if there are negative values to [-1, 1]
#scaler = preprocessing.StandardScaler() # It follows Standard Normal Distribution (SND). Therefore, it makes mean = 0 and scales the data to unit variance
# If you needed to operate in the whole dataset, you could apply normalization to the full time series
#X_all = scaler.fit_transform(dataset)
#X_all = pd.DataFrame(dataset)
#X_all.columns = dataset.columns
# Dataset is scaled so that maximum for every column is 1
dataset_scaled = pd.DataFrame(scaler.fit_transform(dataset),
columns=dataset.columns,
index=dataset.index)
dataset_scaled.describe()
###Output
_____no_output_____
###Markdown
Split into training and test datasets

- We want the training set to contain only "normal" data
- The rest of the points will be in the test set, which will contain both "normal" and anomalous data
###Code
print("dataset_scaled shape is",dataset_scaled.shape,"\n\n", dataset_scaled.index)
###Output
dataset_scaled shape is (984, 4)
Index(['2004-02-12 10:32:39', '2004-02-12 10:42:39', '2004-02-12 10:52:39',
'2004-02-12 11:02:39', '2004-02-12 11:12:39', '2004-02-12 11:22:39',
'2004-02-12 11:32:39', '2004-02-12 11:42:39', '2004-02-12 11:52:39',
'2004-02-12 12:02:39',
...
'2004-02-19 04:52:39', '2004-02-19 05:02:39', '2004-02-19 05:12:39',
'2004-02-19 05:22:39', '2004-02-19 05:32:39', '2004-02-19 05:42:39',
'2004-02-19 05:52:39', '2004-02-19 06:02:39', '2004-02-19 06:12:39',
'2004-02-19 06:22:39'],
dtype='object', length=984)
###Markdown
We will split into training and test sets:

- The **training set** corresponds to the first part of the time series (25% approximately), where the bearing status is healthy
  - It will train the **Autoencoder model**
  - So the training step will provide the **baseline** that we will use to flag anomalies later
- The **test set** covers the remaining 75% of the series (right part)
  - We will apply to it the threshold value provided by the autoencoder model (baseline)
  - Then we will flag as anomalous every point whose score is above the threshold
###Code
dataset_train = dataset_scaled[:'2004-02-13 23:52:39']
dataset_test = dataset_scaled['2004-02-14 00:02:39':]
# Random shuffle training data (note: sample() returns a copy, so re-assign it)
dataset_train = dataset_train.sample(frac=1)
print("Train dataset has lenght", dataset_train.shape[0], "while test dataset is", dataset_test.shape[0],
"TOTAL=", dataset_train.shape[0]+dataset_test.shape[0])
x_ticks_span = 50
dataset_train.plot(figsize = (6,6), title ='Left time series with "normal" data (normalized signals)')
plt.xticks(np.arange(0, dataset_train.shape[0], x_ticks_span), fontsize=10, rotation = 30)
plt.ylim(0,1)
plt.legend(loc="upper left")
plt.show()
dataset_test.plot(figsize = (18,6), title='Right time series with "normal" & "anomalous" data (normalized signals)')
plt.xticks(np.arange(0, dataset_test.shape[0], x_ticks_span), fontsize=10, rotation = 30)
plt.ylim(0,1)
plt.legend(loc="upper left")
plt.show()
###Output
_____no_output_____
###Markdown
Scatter plot with two components (PCA) for visualization purposes

Training of the model will use the 4 bearings' data, not the PCA. In fact, the Autoencoder model will have a central (hidden) layer with two nodes that is similar to the PCA concept, with the improvement that it is able to deal with non-linear models.
###Code
from sklearn.decomposition import PCA
pca = PCA(2)
x_pca = pca.fit_transform(dataset_train)
x_pca = pd.DataFrame(x_pca)
x_pca.columns=['PC1','PC2']
# Plot
plt.scatter(x_pca['PC1'], x_pca['PC2'])
plt.title('Training dataset projected onto 2 Principal Components')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.show()
###Output
_____no_output_____
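###Markdown
(Optional sanity check, a small sketch: how much of the total variance the two principal components capture.)
###Code
# fraction of the total variance explained by each principal component
print(pca.explained_variance_ratio_)
###Output
_____no_output_____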
###Markdown
Build autoencoder model

https://pyod.readthedocs.io/en/latest/pyod.models.html#module-pyod.models.auto_encoder

We don't need to apply dimensionality reduction; it is done by the Autoencoder model (the central layer of two nodes in the neural network is the equivalent of the 2 Principal Components).
###Code
clf1 = AutoEncoder(hidden_neurons =[10, 2, 10], contamination=0.05, epochs=150, batch_size=30)
clf1.fit(dataset_train)
# These are the default parameters used:
'''
AutoEncoder(batch_size=32, contamination=0.1, dropout_rate=0.2, epochs=100,
hidden_activation='relu', hidden_neurons=[10, 2, 10],
l2_regularizer=0.1,
loss=<function mean_squared_error at 0x7ffacc1150e0>,
optimizer='adam', output_activation='sigmoid', preprocessing=True,
random_state=None, validation_size=0.1, verbose=1)
'''
from keras.utils.vis_utils import model_to_dot
from IPython.display import SVG
SVG(model_to_dot(clf1.model_, dpi=60, show_shapes=True, show_layer_names=True, rankdir='TB').create(prog='dot', format='svg'))
###Output
_____no_output_____
###Markdown
Evaluate the model: validation vs. training loss
###Code
plt.plot(clf1.history_['loss'], 'b', label='Training loss')
plt.plot(clf1.history_['val_loss'], 'r', label='Validation loss')
plt.legend(loc='upper right')
# plt.xlabel('Epochs')
plt.ylabel('Loss, [mse]')
plt.show()
###Output
_____no_output_____
###Markdown
Inferring the anomaly decision logic from the Autoencoder model

The PyOD autoencoder model provides us directly with:

- Prediction scores
- Anomaly threshold
- Anomaly point labels ("1" if score > threshold, "0" otherwise)

This applies to the baseline signal (training set).
###Code
# Get the outliers' scores for the training data
y_training_scores = clf1.decision_scores_ # = clf1.decision_function(dataset_train)
# Threshold value is based on the `contamination=0.05` parameter set when building the model
threshold = clf1.threshold_
# Outliers are labeled with "1", the rest with "0"
y_training_pred = clf1.labels_
print("Points whose score is greater than", "{:.2f}".format(threshold), "would be labeled as anomalies")
###Output
Points whose score is greater than 3.10 would be labeled as anomalies
###Markdown
Look at the scores' histogram to visually check the threshold.
###Code
figure(figsize=(10, 3), dpi=80)
plt.hist(y_training_scores, bins='auto')
plt.title("Histogram for Model Clf1 Anomaly Scores")
plt.xticks(np.arange(0, 10, 0.5), fontsize=10, rotation = 30)
plt.xlim(0,10)
plt.ylabel("frequency")
plt.xlabel("score")
plt.show()
###Output
_____no_output_____
###Markdown
In the following section we will inspect the results to decide on the threshold value we will apply to the test set:

- In doing this, one can make sure that this threshold is set above the "noise level" of the baseline signal, and that any flagged anomalies should be statistically significant above the noise background (**see the figure below**).
- Then, by applying the logic `score > threshold` to the remaining 75% of the time series (test set), we will be able to flag outliers, because we have implicitly subtracted the noise background present in the *healthy* signal (training set) with that selected threshold.

**NOTE:** In the baseline signal (training set) the inequality `score > threshold` has detected outliers (as background noise), i.e. because higher scores correspond to low-frequency occurrences.
###Code
figure(figsize=(10, 4), dpi=80)
plt.plot(y_training_scores)
plt.axhline(y=threshold, c='r', ls='dotted', label='threshoold')
plt.title('Anomaly Scores with Autoencoder model calculated threshold')
plt.show()
###Output
_____no_output_____
###Markdown
An outlier is a point that is distant from others, so the **score** value can be understood *as a distance*. Let's add a column in the training set to flag the anomalies.

Decision on the threshold value

Here you need to decide on the precise cut point for your actual problem by taking into account its context:

* **Increase** the threshold if you want to be *more risky* (be as close as possible to the bearing break point).
  - This way you delay the bearing replacement based on your prediction
* Keep it as it is, or slightly **decrease** the threshold value.
  - This way you will be more conservative and can anticipate the replacement for safe operations' sake, avoiding unexpected breakage (it might happen before what the model predicts)

We decide to **set it to 3.66** so that results are comparable to the calculation based on the [PCA method](https://www.kaggle.com/brjapon/nasabearingdataset-pca-outliers-detection)
###Code
threshold = 3.66 # Increased to avoid false positive at the initial part of the test set
###Output
_____no_output_____
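###Markdown
(Aside: instead of hand-picking the cut point, a hedged alternative is to set it at a chosen quantile of the baseline scores, e.g. the 99th percentile of the training-set scores. Sketched below, commented out so the hand-picked value of 3.66 is kept.)
###Code
# sketch only - not used below
# threshold = np.quantile(y_training_scores, 0.99)
###Output
_____no_output_____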
###Markdown
Get the scores of both datasets
###Code
y_training_scores = clf1.decision_function(dataset_train)
y_testing_scores = clf1.decision_function(dataset_test)
###Output
_____no_output_____
###Markdown
Add anomaly flags to dataset_train & dataset_test
###Code
# Do it on the non-normalized dataframes, so that there is no jump between training and test (remember that they were scaled independently)
# - With a trailing _ : the original dataset (non-scaled)
# - Without _ : the scaled dataset (range 0-1)
dataset_train['score'] = y_training_scores
dataset_train['threshold'] = threshold
# We have to re-calculate anomaly flags (y_training_pred) since we changed the threshold
dataset_train_anomalies = dataset_train['score'] > threshold
dataset_train['anomaly'] = dataset_train_anomalies
dataset_train.tail()
dataset_test['score'] = y_testing_scores
dataset_test['threshold'] = threshold
dataset_test_anomalies = dataset_test['score'] > threshold
dataset_test['anomaly'] = dataset_test_anomalies
dataset_test.tail()
###Output
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""Entry point for launching an IPython kernel.
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
after removing the cwd from sys.path.
###Markdown
Count the anomalies flagged in `dataset_train` and `dataset_test` (used for the plots below)
###Code
print("There are", dataset_train[ dataset_train['score'] > threshold ].shape[0], "anomalies in the training set out of", dataset_train.shape[0], "points")
print("There are", dataset_test[ dataset_test['score'] > threshold ].shape[0], "anomalies in the training set out of", dataset_test.shape[0], "points")
###Output
There are 6 anomalies in the training set out of 225 points
There are 519 anomalies in the test set out of 759 points
###Markdown
Predict the degradation point
###Code
anomaly_train = dataset_train[['score', 'threshold', 'anomaly']]
anomaly_train.plot(logy=True, figsize = (15,6), ylim = [1e-1,1e3], color = ['green','red'])
plt.xticks(np.arange(0, anomaly_train.shape[0], 50), fontsize=10, rotation = 30)
plt.title('Baseline plot against anomaly threshold')
plt.show()
anomaly = dataset_test[['score', 'threshold', 'anomaly']]
anomaly_alldata = pd.concat([anomaly_train, anomaly])
anomaly_alldata.plot(logy=True, figsize = (15,6), ylim = [1e-1,1e3], color = ['green','red'])
plt.xticks(np.arange(0, anomaly_alldata.shape[0], 50), fontsize=10, rotation = 30)
plt.title('Full series (train + test) against anomaly threshold')
plt.show()
###Output
_____no_output_____ |
section1_handson.ipynb | ###Markdown
1. Markov Decision Process

---

1-1. Environment

Unlike supervised learning, reinforcement learning does not use data. Instead, we assume that an **Environment** is given. Here we will use the maze environment prepared in advance at

* https://github.com/AkinoriTanaka-phys/HPC-Phys_tutorial_and_hands-on/blob/master/maze.py

and try to explain the concepts alongside the implementation. First, let's load the maze environment:
###Code
"""書いてください"""
###Output
_____no_output_____
###Markdown
**`Env.render()`** is a function that displays the environment. The environment's function names follow OpenAI Gym as much as possible.

> **[Note]** OpenAI Gym is a free library for calling, from python, a number of games (starting with Atari's Breakout) and other reinforcement learning environments. It can be downloaded with the pip command.

Here,

* ◾ is a wall that cannot be passed through
* ◆ is the goal of the maze.

Now let's place a "player" at the start position of this maze:
###Code
"""書いてください"""
###Output
_____no_output_____
###Markdown
● has been added. It represents the player's position. Its coordinates (called the **state**) can be checked as follows:
###Code
"""書いてください"""
###Output
_____no_output_____
###Markdown
At each coordinate, the player chooses one of [↑, ↓, ←, →]. This is called an **action**. The list of actions is:
###Code
"""書いてください"""
###Output
_____no_output_____
###Markdown
For later convenience, [↑, ↓, ←, →] are represented by `[0, 1, 2, 3]`:
###Code
"""書いてください"""
###Output
_____no_output_____
###Markdown
Let's try moving ●. Respectively,

* `Env.step0(s, a)`: returns the next **state** when taking **action `a`** while in **state `s`**
* `Env.step1(s, a, next_s)`: returns the value of the "**reward**" obtained when taking **action `a`** in **state `s`** and moving to **state `next_s`**
###Code
"""書いてください"""
###Output
_____no_output_____
###Markdown
Since writing the middle three lines every time is tedious, we also provide

* `Env.step(a)`: executes the two functions above at once and returns (**state `next_s`**, **reward `next_r`**, whether the maze is solved, extra info):
###Code
"""書いてください"""
###Output
_____no_output_____
###Markdown
● Summary so far

- The sets that appear and their elements
  - **Time** (the index of states, rewards, and actions) $\quad T=\{0,1,2,3, \dots\}=\{t\}$
  - The set of possible **states** (in the maze, all coordinates \{`(x, y)`\} the player can occupy) $\quad S=\{s\}$
  - The set of **rewards** (in the maze, \{`0, 1`\} = \{not solved, solved\}) $\quad R=\{r\}$
  - The set of **actions** (in the maze, \{`0, 1, 2, 3`\} = \{↑, ↓, ←, →\}) $\quad A=\{a\}$
- Properties of the **environment** (in the implementation, computed at once by `Env.step(a)`)
  - $s_{t+1} = \text{step}_0(s_t, a_t)$
  - $r_{t+1} = \text{step}_1(s_t, a_t, s_{t+1})$

● More general environments

The $\text{step}_{0, 1}$ written above are functions, so once the inputs are fixed, the outputs are determined. In general, however, there are cases where they are not determined.

> **[Note]** For example, even if the **state** of a Go board and the position of the stone you just placed (**action**) take some concrete values $(s_t, a_t)$, you do not know how your opponent will respond, so the **state** $s_{t+1}$ at your next turn is not determined.

To take such cases into account, we introduce a probabilistic formulation. If we write sampling an actual value from $P(x)$ as $$ x \sim P(x) $$ then, with $P_s, P_r$ the probabilities that give the state and the reward respectively, we can write

- Properties of the **environment** (general)
  - $s_{t+1} \sim P_s(s_{t+1}|s_t, a_t)$
  - $r_{t+1} \sim P_r(r_{t+1}|s_t, a_t, s_{t+1})$

Deterministic cases like the maze can be expressed with delta functions.

---

1-2. Agent

So far we have operated the maze by hand. In other words,

* agent = you yourself.

When you play the maze game yourself, depending on your mood you may choose ↑ or ↓ even at the same coordinate, so your play strategy can be said to be probabilistic. Such a probability, describing the play strategy an agent has, is called a **policy**. You have been operating the agent based on some **policy** of your own; in reinforcement learning we want to replace that part with a machine. What a machine agent then needs is a **policy** and game play that follows it, i.e. sampling of **actions**, so the implementation will look something like:
###Code
class Agent():
def __init__(self, Policy):
self.Policy = Policy
def play(self):
"""
return a number in [0,1,2,3] corresponding to [up, down, left, right]
"""
return self.Policy.sample()
###Output
_____no_output_____
###Markdown
This is roughly the implementation. Here we assume that the **policy**, too, is given by some conditional probability:

- Properties an agent should have
  - The conditional probability representing the **policy** $\quad \pi(a_t|s_t)$
  - Sampling from it $\quad a_t \sim \pi(a_t|s_t)$

**`Policy`** is the object that describes this probability, and **`Policy.sample()`** describes the sampling. Therefore, **`Policy`** is assumed to be some object with a **`sample()`** function, like the following:
###Code
class Policy():
def __init__(self):
pass
def sample(self):
"""
        This is just a prototype, so `pass` or similar is fine here; later,
        implement it so that it returns one number from [0,1,2,3] = [up, down, left, right]
"""
action = None
return action
###Output
_____no_output_____
###Markdown
のようなものを想定しています。たとえば、完全にランダムな方策$$\pi_\text{random}(a|s)=\frac{1}{|A|},\quadA = \{a\}$$は
###Code
class Random(Policy):
"""書いてください"""
###Output
_____no_output_____
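###Markdown
(For reference, a minimal sketch of one possible solution. The only assumption is that there are four actions numbered 0-3; the `Env` argument is kept only so that the `Random(Env)` call below works.)
###Code
import numpy as np

class Random(Policy):
    def __init__(self, Env):
        self.Env = Env  # kept only for interface compatibility with Random(Env)
    def sample(self):
        # uniform probability 1/|A| over the four actions [0, 1, 2, 3]
        return np.random.choice(4)
###Output
_____no_output_____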
###Markdown
With the policy written as above, let's actually use it to play the game once:
###Code
Agt = Agent(Policy=Random(Env)) # machine agent following the Random policy
Env.reset()
Env.render()
action = Agt.play()
print(a2m[action])
Env.step(action)
Env.render()
###Output
_____no_output_____
###Markdown
---

1-3. Markov Decision Process

So far we have defined

* **Environment**: $\{ P_s(s_{t+1}|s_t, a_t), \ P_r(r_{t+1}|s_t, a_t, s_{t+1})\}$ = \{time evolution of the **state**, time evolution of the **immediate reward**\}
* **Agent**: $\{ \pi(a_t|s_t)\}$ = \{time evolution of the **action**\}

i.e. the time evolution of three kinds of random variables $\{ s, r, a\}$. In reinforcement learning, this time evolution of the three random variables is carried out until the game ends:

$$\left. \begin{array}{ll:ll:ll:ll}s_0 \overset{\pi(\cdot|s_0)}{\to}&a_0 \overset{P_s(\cdot|s_0, a_0)}{\to} &s_1 &\overset{\pi(\cdot|s_1)}{\to}a_1 \overset{P_s(\cdot|s_1, a_1)}{\to} &s_2&\overset{\pi(\cdot|s_2)}{\to}a_2\overset{P_s(\cdot|s_2, a_2)}{\to} & \cdots\\\downarrow_{P_r(\cdot|-, -, s_0)} &&\downarrow_{P_r(\cdot|s_0, a_0, s_1)} &&\downarrow_{P_r(\cdot|s_1, a_1, s_2)} \\r_0&&r_1&&r_2\end{array} \right.\tag{1.3}$$

This is called a **Markov Decision Process (MDP)**. One unit from the start of the game to its end (one sample sequence of the **MDP**) is called an **episode**. The theoretical part of reinforcement learning is mainly described by probability theory based on this **MDP**.
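A minimal sketch of rolling out one episode by hand with the interfaces above (recall from 1-1 that `Env.step(a)` returns (`next_s`, `next_r`, whether the maze is solved, extra info)):
###Code
# sketch: sample one episode of the MDP (1.3) with the random agent
Env.reset()
solved = False
while not solved:
    a = Agt.play()                            # a_t ~ pi(a_t|s_t)
    next_s, next_r, solved, _ = Env.step(a)   # s_{t+1} ~ P_s, r_{t+1} ~ P_r
###Output
_____no_output_____
###Markdown
The cell below replays such an episode interactively: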
###Code
# Replay one sample episode from the maze MDP
Agt = Agent(Policy=Random(Env)) # machine agent following the Random policy
Env.reset()
%matplotlib notebook
Env.play_interactive(Agt)
%matplotlib inline
###Output
_____no_output_____ |
Course 4 - Convolutional Neural Networks/2. Keras Tutorial/Keras - Tutorial - Happy House v2.ipynb | ###Markdown
Keras tutorial - the Happy HouseWelcome to the first assignment of week 2. In this assignment, you will:1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK. 2. See how you can in a couple of hours build a deep learning algorithm.Why are we using Keras? Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. Being able to go from idea to result with the least possible delay is key to finding good models. However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you can implement in TensorFlow but not (without more difficulty) in Keras. That being said, Keras will work fine for many common models. In this exercise, you'll work on the "Happy House" problem, which we'll explain below. Let's load the required packages and solve the problem of the Happy House!
###Code
import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from kt_utils import *
import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
%matplotlib inline
###Output
Using TensorFlow backend.
###Markdown
**Note**: As you can see, we've imported a lot of functions from Keras. You can use them easily just by calling them directly in the notebook. Ex: `X = Input(...)` or `X = ZeroPadding2D(...)`. 1 - The Happy House For your next vacation, you decided to spend a week with five of your friends from school. It is a very convenient house with many things to do nearby. But the most important benefit is that everybody has commited to be happy when they are in the house. So anyone wanting to enter the house must prove their current state of happiness. **Figure 1** : **the Happy House**As a deep learning expert, to make sure the "Happy" rule is strictly applied, you are going to build an algorithm which that uses pictures from the front door camera to check if the person is happy or not. The door should open only if the person is happy. You have gathered pictures of your friends and yourself, taken by the front-door camera. The dataset is labeled. Run the following code to normalize the dataset and learn about its shapes.
###Code
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
###Output
number of training examples = 600
number of test examples = 150
X_train shape: (600, 64, 64, 3)
Y_train shape: (600, 1)
X_test shape: (150, 64, 64, 3)
Y_test shape: (150, 1)
###Markdown
**Details of the "Happy" dataset**:- Images are of shape (64,64,3)- Training: 600 pictures- Test: 150 picturesIt is now time to solve the "Happy" Challenge. 2 - Building a model in KerasKeras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.Here is an example of a model in Keras:```pythondef model(input_shape): Define the input placeholder as a tensor with shape input_shape. Think of this as your input image! X_input = Input(input_shape) Zero-Padding: pads the border of X_input with zeroes X = ZeroPadding2D((3, 3))(X_input) CONV -> BN -> RELU Block applied to X X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X) X = BatchNormalization(axis = 3, name = 'bn0')(X) X = Activation('relu')(X) MAXPOOL X = MaxPooling2D((2, 2), name='max_pool')(X) FLATTEN X (means convert it to a vector) + FULLYCONNECTED X = Flatten()(X) X = Dense(1, activation='sigmoid', name='fc')(X) Create model. This creates your Keras model instance, you'll use this instance to train/test the model. model = Model(inputs = X_input, outputs = X, name='HappyModel') return model```Note that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow. In particular, rather than creating and assigning a new variable on each step of forward propagation such as `X`, `Z1`, `A1`, `Z2`, `A2`, etc. for the computations for the different layers, in Keras code each line above just reassigns `X` to a new value using `X = ...`. In other words, during each step of forward propagation, we are just writing the latest value in the commputation into the same variable `X`. The only exception was `X_input`, which we kept separate and did not overwrite, since we needed it at the end to create the Keras model instance (`model = Model(inputs = X_input, ...)` above). **Exercise**: Implement a `HappyModel()`. This assignment is more open-ended than most. We suggest that you start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. But after that, come back and take initiative to try out other model architectures. For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. You can also use other functions such as `AveragePooling2D()`, `GlobalMaxPooling2D()`, `Dropout()`. **Note**: You have to be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying it to.
###Code
# GRADED FUNCTION: HappyModel
def HappyModel(input_shape):
"""
Implementation of the HappyModel.
Arguments:
input_shape -- shape of the images of the dataset
Returns:
model -- a Model() instance in Keras
"""
### START CODE HERE ###
# Feel free to use the suggested outline in the text above to get started, and run through the whole
# exercise (including the later portions of this notebook) once. Then come back and also try out other
# network architectures as well.
X_input = Input(input_shape, name='in0')
X = ZeroPadding2D((1,1), name='pad0')(X_input)
X = Conv2D(16, (3,3), strides=(1,1), name='conv0')(X)
X = BatchNormalization(axis=3, name='bn0')(X)
X = Activation('relu', name='activ0')(X)
X = Dropout(0.2, noise_shape=None, seed=None, name='drop0')(X)
X = MaxPooling2D((2,2), name='max_pool0')(X) #HWC = 32,32,16
X = ZeroPadding2D((1,1), name='pad1')(X)
X = Conv2D(32, (3,3), strides=(1,1), name='conv1')(X)
X = BatchNormalization(axis=3, name='bn1')(X)
X = Activation('relu', name='activ1')(X)
# X = Dropout(0.1, noise_shape=None, seed=None, name='drop1')(X)
X = MaxPooling2D((2,2), name='max_pool1')(X) #HWC = 16,16,32
# X = ZeroPadding2D((1,1), name='pad2')(X)
# X = Conv2D(64, (3,3), strides=(1,1), name='conv2')(X)
# X = BatchNormalization(axis=3, name='bn2')(X)
# X = Activation('relu', name='activ2')(X)
# X = MaxPooling2D((2,2), name='max_pool2')(X) #HWC = 8,8,128
X = Flatten()(X)
X = Dense(1, activation='sigmoid', name='fc')(X)
    model = Model(inputs=X_input, outputs=X, name='HappyModel')
### END CODE HERE ###
return model
###Output
_____no_output_____
###Markdown
You have now built a function to describe your model. To train and test this model, there are four steps in Keras:

1. Create the model by calling the function above
2. Compile the model by calling `model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])`
3. Train the model on train data by calling `model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)`
4. Test the model on test data by calling `model.evaluate(x = ..., y = ...)`

If you want to know more about `model.compile()`, `model.fit()`, `model.evaluate()` and their arguments, refer to the official [Keras documentation](https://keras.io/models/model/).

**Exercise**: Implement step 1, i.e. create the model.
###Code
### START CODE HERE ### (1 line)
happyModel = HappyModel((64, 64, 3))
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
**Exercise**: Implement step 2, i.e. compile the model to configure the learning process. Choose the 3 arguments of `compile()` wisely. Hint: the Happy Challenge is a binary classification problem.
###Code
### START CODE HERE ### (1 line)
happyModel.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
### END CODE HERE ###
###Output
_____no_output_____
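###Markdown
(Aside: since this is a binary classification problem, `binary_crossentropy` is arguably a more natural loss than mean squared error. A hedged alternative is sketched below, commented out so the recorded results in this notebook still correspond to the compile call above.)
###Code
# alternative compile (sketch): cross-entropy loss for a sigmoid output
# happyModel.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
###Output
_____no_output_____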
###Markdown
**Exercise**: Implement step 3, i.e. train the model. Choose the number of epochs and the batch size.
###Code
### START CODE HERE ### (1 line)
happyModel.fit(x = X_train, y = Y_train, epochs=5, batch_size=16)
### END CODE HERE ###
###Output
Epoch 1/5
600/600 [==============================] - 7s - loss: 0.1852 - acc: 0.7317 - ETA
Epoch 2/5
600/600 [==============================] - 7s - loss: 0.0816 - acc: 0.8917 - ETA: 2s - loss: 0.0897 -
Epoch 3/5
600/600 [==============================] - 7s - loss: 0.0423 - acc: 0.9483
Epoch 4/5
600/600 [==============================] - 7s - loss: 0.0295 - acc: 0.9650 - ETA: 3s - loss: 0.0319 - acc: - ETA: 2s - loss: 0.0344 -
Epoch 5/5
600/600 [==============================] - 7s - loss: 0.0325 - acc: 0.9633
###Markdown
Note that if you run `fit()` again, the `model` will continue to train with the parameters it has already learnt instead of reinitializing them.**Exercise**: Implement step 4, i.e. test/evaluate the model.
###Code
### START CODE HERE ### (1 line)
preds = happyModel.evaluate(x = X_test, y = Y_test)
### END CODE HERE ###
print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
###Output
150/150 [==============================] - 1s
Loss = 0.145772468845
Test Accuracy = 0.933333337307
###Markdown
If your `happyModel()` function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets.To give you a point of comparison, our model gets around **95% test accuracy in 40 epochs** (and 99% train accuracy) with a mini batch size of 16 and "adam" optimizer. But our model gets decent accuracy after just 2-5 epochs, so if you're comparing different models you can also train a variety of models on just a few epochs and see how they compare. If you have not yet achieved a very good accuracy (let's say more than 80%), here're some things you can play around with to try to achieve it:- Try using blocks of CONV->BATCHNORM->RELU such as:```pythonX = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)X = BatchNormalization(axis = 3, name = 'bn0')(X)X = Activation('relu')(X)```until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You are encoding useful information in a volume with a lot of channels. You can then flatten the volume and use a fully-connected layer.- You can use MAXPOOL after such blocks. It will help you lower the dimension in height and width.- Change your optimizer. We find Adam works well. - If the model is struggling to run and you get memory issues, lower your batch_size (12 is usually a good compromise)- Run on more epochs, until you see the train accuracy plateauing. Even if you have achieved a good accuracy, please feel free to keep playing with your model to try to get even better results. **Note**: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. But just for the purpose of this assignment, we won't worry about that here. 3 - ConclusionCongratulations, you have solved the Happy House challenge! Now, you just need to link this model to the front-door camera of your house. We unfortunately won't go into the details of how to do that here. **What we would like you to remember from this assignment:**- Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures. Are there any applications of deep learning to your daily life that you'd like to implement using Keras? - Remember how to code a model in Keras and the four steps leading to the evaluation of your model on the test set. Create->Compile->Fit/Train->Evaluate/Test. 4 - Test with your own image (Optional)Congratulations on finishing this assignment. You can now take a picture of your face and see if you could enter the Happy House. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right (0 is unhappy, 1 is happy)! The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try!
###Code
### START CODE HERE ###
img_path = 'images/Frown.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print(happyModel.predict(x))
###Output
[[ 1.]]
###Markdown
5 - Other useful functions in Keras (Optional)

Two other basic features of Keras that you'll find useful are:
- `model.summary()`: prints the details of your layers in a table with the sizes of their inputs/outputs.
- `plot_model()`: plots your graph in a nice layout. You can save it as a ".png" via its `to_file` argument and render it inline with SVG() if you'd like to share it on social media ;). The file appears under "File" then "Open..." in the upper bar of the notebook.

Run the following code.
###Code
happyModel.summary()
plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))
###Output
_____no_output_____ |
Models/Catboost/Catboost_retweet_classifier.ipynb | ###Markdown
Proof of concept of CatBoost
###Code
%%time
# NOTE (editorial): imports are assumed here; this excerpt of the notebook
# does not show them elsewhere.
import numpy as np
import dask.dataframe as dd
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score
from catboost import CatBoostClassifier

columns = [
'tweet_timestamp',
'creator_follower_count',
'creator_following_count',
'creator_is_verified',
'creator_creation_timestamp',
'engager_follower_count',
'engager_following_count',
'engager_is_verified',
'engager_creation_timestamp',
'engagement_creator_follows_engager',
'number_of_photo',
'number_of_gif',
'number_of_video',
'engagement_retweet_timestamp',
]
dask_df = dd.read_parquet("/Users/arcangelopisa/Downloads/sample_dataset", engine='pyarrow', columns=columns)
dask_df = dask_df.sample(0.8)
dask_df['engagement_retweet_timestamp'] = (dask_df['engagement_retweet_timestamp'] != -1).astype(np.uint8)
pandas_df = dask_df.compute()
del dask_df
pandas_df.info()
train, test = train_test_split(pandas_df, train_size=0.8)
X_train = train.drop(['engagement_retweet_timestamp'], axis=1)
y_train = train['engagement_retweet_timestamp']
X_test = test.drop(['engagement_retweet_timestamp'], axis=1)
y_test = test['engagement_retweet_timestamp']
del pandas_df, train, test
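# NOTE (editorial): getFirstValuePrediction, getBooleanList, and compute_rce
# are used below but defined elsewhere in this repo. The following is a
# plausible reconstruction, assuming compute_rce is the RecSys 2020 relative
# cross-entropy metric and getBooleanList extracts positive-class probabilities.
from sklearn.metrics import log_loss

def getFirstValuePrediction(pred_proba):
    # probability assigned to the negative class (column 0), for inspection
    return [p[0] for p in pred_proba]

def getBooleanList(pred_proba):
    # probability assigned to the positive class (column 1)
    return [p[1] for p in pred_proba]

def compute_rce(pred, gt):
    # relative cross entropy vs. a constant predictor that always outputs
    # the observed positive rate
    cross_entropy = log_loss(gt, pred)
    data_ctr = np.mean(gt)
    strawman_cross_entropy = log_loss(gt, [data_ctr] * len(gt))
    return (1.0 - cross_entropy / strawman_cross_entropy) * 100.0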
%%time
classifier = CatBoostClassifier(iterations=150,
depth=12,
learning_rate=0.25,
loss_function='CrossEntropy',
verbose = True)
classifier.fit(X_train, y_train, verbose = True)
classifier.save_model('retweet_classifier', format = "cbm")
%%time
y_pred = classifier.predict_proba(X_test)
y_pred
getFirstValuePrediction(y_pred)
result = getBooleanList(y_pred)
result
print('RCE is {}'.format(compute_rce(result, y_test)))
print('Average precision is {}'.format(average_precision_score(y_test, result)))
###Output
_____no_output_____ |
app/notebooks/jane-street-neural-network-starter.ipynb | ###Markdown
Jane Street: Neural Network Starter

I try implementing a simple TensorFlow Keras neural network here. Training was run in Version 17.

**Caution:** The GroupKFold CV method applied in this notebook may cause a time leakage problem. Please use [Purged Time-Series CV][1] instead.

[1]: https://www.kaggle.com/marketneutral/purged-time-series-cv-xgboost-optuna
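A minimal sketch of the purged idea (an editorial addition; the walk-forward layout and embargo length are illustrative assumptions): folds move forward in time, and an embargo of several days is dropped between the train and validation blocks so overlapping return windows cannot leak.

```python
import numpy as np

def purged_walk_forward_splits(dates, n_splits=5, embargo=10):
    # dates: one integer day id per row, assumed sorted in time
    unique_days = np.unique(dates)
    blocks = np.array_split(unique_days, n_splits + 1)
    for k in range(n_splits):
        val_days = blocks[k + 1]
        train_days = np.concatenate(blocks[:k + 1])
        # purge the `embargo` days immediately before the validation block
        train_days = train_days[train_days < val_days.min() - embargo]
        yield (np.where(np.isin(dates, train_days))[0],
               np.where(np.isin(dates, val_days))[0])
```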
###Code
import warnings
warnings.filterwarnings('ignore')
import os, gc
# import cudf
import pandas as pd
import numpy as np
# import cupy as cp
import janestreet
import xgboost as xgb
from hyperopt import hp, fmin, tpe, Trials
from hyperopt.pyll.base import scope
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import GroupKFold
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm
from joblib import dump, load
import tensorflow as tf
tf.random.set_seed(42)
import tensorflow.keras.backend as K
import tensorflow.keras.layers as layers
from tensorflow.keras.callbacks import Callback, ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
# print('Loading...')
# train = cudf.read_csv('/kaggle/input/jane-street-market-prediction/train.csv')
train = pd.read_csv('/kaggle/input/jane-street-market-prediction/train.csv', nrows = 3)
features = [c for c in train.columns if 'feature' in c]
# print('Filling...')
# f_mean = train[features[1:]].mean()
# train = train.query('weight > 0').reset_index(drop = True)
# train[features[1:]] = train[features[1:]].fillna(f_mean)
# train['action'] = (train['resp'] > 0).astype('int')
# print('Converting...')
# train = train.to_pandas()
# f_mean = f_mean.values.get()
# np.save('f_mean.npy', f_mean)
# print('Finish.')
###Output
_____no_output_____
###Markdown
Training
###Code
def create_mlp(num_columns, num_labels, hidden_units, dropout_rates, label_smoothing, learning_rate):
inp = tf.keras.layers.Input(shape = (num_columns, ))
x = tf.keras.layers.BatchNormalization()(inp)
x = tf.keras.layers.Dropout(dropout_rates[0])(x)
for i in range(len(hidden_units)):
x = tf.keras.layers.Dense(hidden_units[i])(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation(tf.keras.activations.swish)(x)
x = tf.keras.layers.Dropout(dropout_rates[i+1])(x)
x = tf.keras.layers.Dense(num_labels)(x)
out = tf.keras.layers.Activation('sigmoid')(x)
model = tf.keras.models.Model(inputs = inp, outputs = out)
model.compile(optimizer = tf.keras.optimizers.Adam(learning_rate = learning_rate),
loss = tf.keras.losses.BinaryCrossentropy(label_smoothing = label_smoothing),
metrics = tf.keras.metrics.AUC(name = 'AUC'),
)
return model
batch_size = 4096
hidden_units = [384, 896, 896, 394]
dropout_rates = [0.10143786981358652, 0.19720339053599725, 0.2703017847244654, 0.23148340929571917, 0.2357768967777311]
label_smoothing = 1e-2
learning_rate = 1e-3
# oof = np.zeros(len(train['action']))
# gkf = GroupKFold(n_splits = 5)
# for fold, (tr, te) in enumerate(gkf.split(train['action'].values, train['action'].values, train['date'].values)):
# X_tr, X_val = train.loc[tr, features].values, train.loc[te, features].values
# y_tr, y_val = train.loc[tr, 'action'].values, train.loc[te, 'action'].values
# ckp_path = f'JSModel_{fold}.hdf5'
# model = create_mlp(X_tr.shape[1], 1, hidden_units, dropout_rates, label_smoothing, learning_rate)
# rlr = ReduceLROnPlateau(monitor = 'val_AUC', factor = 0.1, patience = 3, verbose = 0,
# min_delta = 1e-4, mode = 'max')
# ckp = ModelCheckpoint(ckp_path, monitor = 'val_AUC', verbose = 0,
# save_best_only = True, save_weights_only = True, mode = 'max')
# es = EarlyStopping(monitor = 'val_AUC', min_delta = 1e-4, patience = 7, mode = 'max',
# baseline = None, restore_best_weights = True, verbose = 0)
# model.fit(X_tr, y_tr, validation_data = (X_val, y_val), epochs = 1000,
# batch_size = batch_size, callbacks = [rlr, ckp, es], verbose = 0)
# oof[te] += model.predict(X_val, batch_size = batch_size * 4).ravel()
# score = roc_auc_score(y_val, oof[te])
# print(f'Fold {fold} ROC AUC:\t', score)
# # Finetune 3 epochs on validation set with small learning rate
# model = create_mlp(X_tr.shape[1], 1, hidden_units, dropout_rates, label_smoothing, learning_rate / 100)
# model.load_weights(ckp_path)
# model.fit(X_val, y_val, epochs = 3, batch_size = batch_size, verbose = 0)
# model.save_weights(ckp_path)
# K.clear_session()
# del model
# rubbish = gc.collect()
# score_oof = roc_auc_score(train['action'].values, oof)
# print(score_oof)
###Output
_____no_output_____
###Markdown
Load Models
###Code
num_models = 2
models = []
for i in range(num_models):
clf = create_mlp(len(features), 1, hidden_units, dropout_rates, label_smoothing, learning_rate)
clf.load_weights(f'../input/js-nn-models/JSModel_{i}.hdf5')
# clf.load_weights(f'./JSModel_{i}.hdf5')
models.append(clf)
f_mean = np.load('../input/js-nn-models/f_mean.npy')
# f_mean = np.load('./f_mean.npy')
###Output
_____no_output_____
###Markdown
Submitting

Just use two models to reduce running time.
###Code
env = janestreet.make_env()
env_iter = env.iter_test()
opt_th = 0.5
for (test_df, pred_df) in tqdm(env_iter):
if test_df['weight'].item() > 0:
x_tt = test_df.loc[:, features].values
if np.isnan(x_tt[:, 1:].sum()):
x_tt[:, 1:] = np.nan_to_num(x_tt[:, 1:]) + np.isnan(x_tt[:, 1:]) * f_mean
pred = 0.
for clf in models:
pred += clf(x_tt, training = False).numpy().item() / num_models
# pred = models[0](x_tt, training = False).numpy().item()
pred_df.action = np.where(pred >= opt_th, 1, 0).astype(int)
else:
pred_df.action = 0
env.predict(pred_df)
###Output
_____no_output_____ |
analysis_scripts/gtex/statistical_analysis/new_rv_egene_analysis.ipynb | ###Markdown
Tissue-specific RV eGenes
###Code
library(data.table)
library(dplyr)
load.data <- function(tissue) {
filename <- paste("/u/project/eeskin2/k8688933/rare_var/results/tss_20k_v8/result_summary/qvals/", tissue, ".lrt.q", sep="")
return(fread(filename, data.table=F))
}
get.egenes <- function(qvals) {
egenes = qvals$Gene_ID[apply(qvals, 1, function(x) {any(as.numeric(x[-1]) < 0.05)})]
return(egenes)
}
get.tissue.specific.genes <- function(egenes.list) {
res = vector("list", length(egenes.list))
names(res) = names(egenes.list)
for (i in 1:length(egenes.list)) {
res[[i]] = egenes.list[[i]][!egenes.list[[i]] %in% unique(unlist(egenes.list[-i]))]
}
return(res)
}
sample.info = fread("/u/project/eeskin2/k8688933/rare_var/results/tss_20k_v8/result_summary/tissue.name.match.csv")
tissues = sample.info$tissue
q.data = lapply(tissues, load.data)
names(q.data) = tissues
egenes = lapply(q.data, get.egenes)
res = get.tissue.specific.genes(egenes)
fwrite(as.list(res$Lung), "../tissue_specific_egenes_by_tissue/Lung.tissue.specifc.rv.egenes.tsv", sep="\n")
fwrite(as.list(res$Liver), "../tissue_specific_egenes_by_tissue/Liver.tissue.specifc.rv.egenes.tsv", sep="\n")
fwrite(as.list(res$Whole_Blood), "../tissue_specific_egenes_by_tissue/Whole_Blood.tissue.specifc.rv.egenes.tsv", sep="\n")
fwrite(as.list(res$Skin_Sun_Exposed_Lower_leg), "../tissue_specific_egenes_by_tissue/Skin_Sun_Exposed_Lower_leg.tissue.specifc.rv.egenes.tsv", sep="\n")
fwrite(as.list(res$Skin_Not_Sun_Exposed_Suprapubic), "../tissue_specific_egenes_by_tissue/Skin_Not_Sun_Exposed_Suprapubic.tissue.specifc.rv.egenes.tsv", sep="\n")
fwrite(as.list(res$Heart_Atrial_Appendage), "../tissue_specific_egenes_by_tissue/Heart_Atrial_Appendage.tissue.specifc.rv.egenes.tsv", sep="\n")
fwrite(as.list(res$Heart_Left_Ventricle), "../tissue_specific_egenes_by_tissue/Heart_Left_Ventricle.tissue.specifc.rv.egenes.tsv", sep="\n")
###Output
_____no_output_____
###Markdown
Tissue-specific non-RV eGenes
###Code
get.non.egenes <- function(qvals) {
egenes = qvals$Gene_ID[apply(qvals, 1, function(x) {all(as.numeric(x[-1]) >= 0.05)})]
return(egenes)
}
non.egenes = lapply(q.data, get.non.egenes)
res = get.tissue.specific.genes(non.egenes)
fwrite(as.list(res$Lung), "../tissue_specific_egenes_by_tissue/Lung.tissue.specifc.non.rv.egenes.tsv", sep="\n")
fwrite(as.list(res$Liver), "../tissue_specific_egenes_by_tissue/Liver.tissue.specifc.non.rv.egenes.tsv", sep="\n")
fwrite(as.list(res$Whole_Blood), "../tissue_specific_egenes_by_tissue/Whole_Blood.tissue.specifc.non.rv.egenes.tsv", sep="\n")
fwrite(as.list(res$Skin_Sun_Exposed_Lower_leg), "../tissue_specific_egenes_by_tissue/Skin_Sun_Exposed_Lower_leg.tissue.specifc.non.rv.egenes.tsv", sep="\n")
fwrite(as.list(res$Skin_Not_Sun_Exposed_Suprapubic), "../tissue_specific_egenes_by_tissue/Skin_Not_Sun_Exposed_Suprapubic.tissue.specifc.non.rv.egenes.tsv", sep="\n")
fwrite(as.list(res$Heart_Atrial_Appendage), "../tissue_specific_egenes_by_tissue/Heart_Atrial_Appendage.tissue.specifc.non.rv.egenes.tsv", sep="\n")
fwrite(as.list(res$Heart_Left_Ventricle), "../tissue_specific_egenes_by_tissue/Heart_Left_Ventricle.tissue.specifc.non.rv.egenes.tsv", sep="\n")
length(non.egenes$Lung)
###Output
_____no_output_____
###Markdown
RV eGenes example outlier
###Code
library(data.table)
library(dplyr)
target.snp = "chr20_59023753_G_A_b38"
geno = fread("/u/project/eeskin2/k8688933/rare_var/genotypes/v8/all_eur_samples_matrix_maf0.05/chr.20.genotypes.matrix.tsv")
indiv = colnames(geno)[which(geno %>% filter(ID == target.snp) != 0)][-1]
print(indiv)
z.heart.lv = fread("/u/project/eeskin2/k8688933/rare_var/results/tss_20k_v8/result_summary/log2.standardized.corrected.tpm.rv.egenes.only/log2.standardized.corrected.lrt.rv.egenes.tpm.Heart_Left_Ventricle")
z.heart.aa = fread("/u/project/eeskin2/k8688933/rare_var/results/tss_20k_v8/result_summary/log2.standardized.corrected.tpm.rv.egenes.only/log2.standardized.corrected.lrt.rv.egenes.tpm.Heart_Atrial_Appendage")
z.skin.sun = fread("/u/project/eeskin2/k8688933/rare_var/results/tss_20k_v8/result_summary/log2.standardized.corrected.tpm.rv.egenes.only/log2.standardized.corrected.lrt.rv.egenes.tpm.Skin_Sun_Exposed_Lower_leg")
z.skin.not.sun = fread("/u/project/eeskin2/k8688933/rare_var/results/tss_20k_v8/result_summary/log2.standardized.corrected.tpm.rv.egenes.only/log2.standardized.corrected.lrt.rv.egenes.tpm.Skin_Not_Sun_Exposed_Suprapubic")
print(indiv %in% colnames(z.heart.lv)) # this SNP is not in heart left ventricle
print(indiv %in% colnames(z.heart.aa))
print(indiv %in% colnames(z.skin.not.sun))
print(indiv %in% colnames(z.skin.sun))
print("ENSG00000101162.3" %in% z.heart.lv$gene)
print("ENSG00000101162.3" %in% z.heart.aa$gene)
print("ENSG00000101162.3" %in% z.skin.not.sun$gene)
print("ENSG00000101162.3" %in% z.skin.sun$gene)
z.heart.lv %>% filter(gene == "ENSG00000101162.3") %>% select(indiv)
z.heart.aa %>% filter(gene == "ENSG00000101162.3") %>% select(indiv[1])
idx = which(z.skin.sun$gene == "ENSG00000101162.3")
z.skin.sun[idx, -1]
scaled.z.skin.sun = scale(t(as.data.frame(z.skin.sun)[idx, -1]))
colnames(scaled.z.skin.sun) = c("z")
as.data.frame(scaled.z.skin.sun)[indiv, ] #%>% filter(abs(z) > 2)
idx = which(z.skin.sun$gene == "ENSG00000101162.3")
colnames(z.skin.sun)[which(abs(z.skin.sun[idx, -1]) > 2)]
z.skin.not.sun %>% filter(gene == "ENSG00000101162.3") %>% select(indiv[2])
z.skin.sun %>% filter(gene == "ENSG00000101162.3") %>% select(indiv)
###Output
_____no_output_____
###Markdown
RV eGenes example outliers in all tissues
###Code
z.scores = lapply(dir("/u/project/eeskin2/k8688933/rare_var/results/tss_20k_v8/result_summary/log2.standardized.corrected.tpm.rv.egenes.only/",
pattern="log2.standardized.corrected.lrt.rv.egenes.tpm", full.names=T), function(x) {if(file.size(x) > 1) {fread(x, data.table=F)}})
names(z.scores) = fread("../egene.counts.csv")$tissue
z.scores[[17]]
for (i in 1:48) {
z = z.scores[[i]]
if (is.null(z)) {
next
}
if (!any(indiv %in% colnames(z))) {
next
}
if (!"ENSG00000101162.3" %in% z$gene) {
next
}
idx = which(z$gene == "ENSG00000101162.3")
scaled.z = scale(t(as.data.frame(z)[idx, -1]))
colnames(scaled.z) = c("z")
print(names(z.scores)[[i]])
print(as.data.frame(scaled.z)[indiv[which(indiv %in% row.names(scaled.z))], ])
}
###Output
[1] "Artery_Tibial"
[1] 0.3049973
[1] "Breast_Mammary_Tissue"
[1] 0.9585913
[1] "Cells_Cultured_fibroblasts"
[1] -0.9428877
[1] "Muscle_Skeletal"
[1] -0.3236698
[1] "Nerve_Tibial"
[1] -0.9148684
[1] "Pituitary"
[1] 0.9132187 -0.2465423
[1] "Skin_Not_Sun_Exposed_Suprapubic"
[1] 0.2872161
[1] "Skin_Sun_Exposed_Lower_leg"
[1] 1.1471211 -0.7007989
[1] "Thyroid"
[1] -0.06988852
|
analysis/03_numseq_to_aa.ipynb | ###Markdown
Convert result_hogehoge~ to aa_result_hogehoge~. Always run this after executing fastwy.
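`numseq_to_aa` is imported from `protein_utility` below. A rough reconstruction of its behavior, under the assumption that each entry of the `pattern` column is a whitespace-separated sequence of integer indices into `aa_list` (the real helper may differ), might look like this:

```python
def numseq_to_aa_sketch(df, aa_list):
    # map each whitespace-separated index in 'pattern' to its amino-acid letter
    out = df.copy()
    out['pattern'] = out['pattern'].apply(
        lambda p: ''.join(aa_list[int(i)] for i in str(p).split()))
    return out
```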
###Code
%matplotlib inline
# Options
#onlyList = [] # If this is not empty, only datasets for the categories are processed
onlyList = ['all']
skipList = [] # datasets for the categories here are not processed
import pandas as pd
import numpy as np
from IPython.display import display
import sys
from tqdm import tqdm_notebook as tqdm
import matplotlib.pyplot as plt
import seaborn as sns
import re  # regular expressions
import pickle
import warnings
import os
from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, f1_score, roc_curve, roc_auc_score, precision_score, recall_score, accuracy_score
from sklearn import svm
from multiprocessing import Process, Pool
from protein_utility.protein_utility import *
import protein_utility.parameters as parameters
sns.set()
warnings.filterwarnings('ignore')  # caution: warnings are suppressed
SEED=1
pd.set_option("display.max_columns", 100)
aa_list =["A","B","C","D","E","F","G","H","I","K","L","M","N","P","Q","R","S","T","U","V","W","X","Y","Z"]
aa_list =["A","B","C","D","E","F","G","H","I","K","L","M","N","O","P","Q","R","S","T","U","V","W","X","Y","Z"]
# 20 standard amino acids + extra symbols; the second list (which adds "O") overrides the first
categoryList = ["Bovine", "Buckwheat", "Chicken", "Crab", "Kiwi", "Peanut", "Salmon", "Shrimp", "Soybean", "Wheat"]
categoryList = ["Apple", "Bovine", "Buckwheat", "Carrot", "Chicken", "Corn", "Crab", "Kiwi", "Mustard", "Olive", "Oyster", "Peach", "Peanut", "Potato", "Rice", "Salmon", "Shrimp", "Soybean", "Tomato", "Wheat"]
params_list = []
#params_list.append(parameters.Parameters(FOOD_ONLY = True, op_tail = "C1Z1L1800R10k", Jan2021 = True))
#params_list.append(parameters.Parameters(FOOD_ORDER = True, op_tail = "C1GT1L1800R10k", Jan2021 = True))
params_list.append(parameters.Parameters(FOOD_WITH_MTEC_ORDER = True, op_tail = "C1Ga001T1L1800R10k", Jan2021 = True))
params_list.append(parameters.Parameters(FOOD_WITH_MTEC_ORDER = True, op_tail = "C1GT1L1800R10k", Jan2021 = True))
# "C1Z1L500R10k"
# "C1Z1S1L500R10k"
# "C1Z1M6L6R10k"
# "C1Z1M6S1L6R10k"
for params in params_list:
# for category in categoryList:
for category in ["all"]:
# for category in categoryList+["all"]:
if category in skipList:
continue
if len(onlyList) != 0 and category not in onlyList:
continue
print("--{}--".format(category))
df_pattern = pd.read_csv(params.def_result(category, disp=False))
df_aa_pattern = numseq_to_aa(df_pattern, aa_list = aa_list)
df_pattern["pattern"] = df_aa_pattern["pattern"]
df_pattern.to_csv(params.def_aa_result(category))
###Output
_____no_output_____ |
pipelining/exp-csmm/exp-csmm_csmm_1w_ale_plotting.ipynb | ###Markdown
Experiment Description

> This notebook is for experiment `exp-csmm` and data sample `csmm`.

Initialization
###Code
%load_ext autoreload
%autoreload 2
import numpy as np, sys, os
in_colab = 'google.colab' in sys.modules
# fetch code and data (if you are using Colab)
if in_colab:
!rm -rf s2search
!git clone --branch pipelining https://github.com/youyinnn/s2search.git
sys.path.insert(1, './s2search')
%cd s2search/pipelining/exp-csmm/
pic_dir = os.path.join('.', 'plot')
if not os.path.exists(pic_dir):
os.mkdir(pic_dir)
###Output
_____no_output_____
###Markdown
Loading data
###Code
sys.path.insert(1, '../../')
import numpy as np, sys, os, pandas as pd
from getting_data import read_conf
from s2search_score_pdp import pdp_based_importance
sample_name = 'csmm'
f_list = [
'title', 'abstract', 'venue', 'authors',
'year',
'n_citations'
]
ale_xy = {}
ale_metric = pd.DataFrame(columns=['feature_name', 'ale_range', 'ale_importance', 'absolute mean'])
for f in f_list:
file = os.path.join('.', 'scores', f'{sample_name}_1w_ale_{f}.npz')
if os.path.exists(file):
nparr = np.load(file)
quantile = nparr['quantile']
ale_result = nparr['ale_result']
values_for_rug = nparr.get('values_for_rug')
ale_xy[f] = {
'x': quantile,
'y': ale_result,
'rug': values_for_rug,
'weird': ale_result[len(ale_result) - 1] > 20
}
if f != 'year' and f != 'n_citations':
ale_xy[f]['x'] = list(range(len(quantile)))
ale_xy[f]['numerical'] = False
else:
ale_xy[f]['xticks'] = quantile
ale_xy[f]['numerical'] = True
ale_metric.loc[len(ale_metric.index)] = [f, np.max(ale_result) - np.min(ale_result), pdp_based_importance(ale_result, f), np.mean(np.abs(ale_result))]
# print(len(ale_result))
print(ale_metric.sort_values(by=['ale_importance'], ascending=False))
print()
###Output
feature_name ale_range ale_importance absolute mean
2 venue 18.187400 7.668481 5.819968
1 abstract 17.298981 7.293891 5.535674
0 title 13.163444 4.162646 2.369420
4 year 1.557226 0.623030 0.514271
5 n_citations 1.384287 0.429228 0.301042
3 authors 0.000000 0.000000 0.000000
###Markdown
ALE Plots
###Code
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.ticker import MaxNLocator
categorical_plot_conf = [
{
'xlabel': 'Title',
'ylabel': 'ALE',
'ale_xy': ale_xy['title']
},
{
'xlabel': 'Abstract',
'ale_xy': ale_xy['abstract']
},
{
'xlabel': 'Authors',
'ale_xy': ale_xy['authors'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 14],
# }
},
{
'xlabel': 'Venue',
'ale_xy': ale_xy['venue'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 13],
# }
},
]
numerical_plot_conf = [
{
'xlabel': 'Year',
'ylabel': 'ALE',
'ale_xy': ale_xy['year'],
# 'zoom': {
# 'inset_axes': [0.15, 0.4, 0.4, 0.4],
# 'x_limit': [2019, 2023],
# 'y_limit': [1.9, 2.1],
# },
},
{
'xlabel': 'Citations',
'ale_xy': ale_xy['n_citations'],
# 'zoom': {
# 'inset_axes': [0.4, 0.65, 0.47, 0.3],
# 'x_limit': [-1000.0, 12000],
# 'y_limit': [-0.1, 1.2],
# },
},
]
def pdp_plot(confs, title):
fig, axes_list = plt.subplots(nrows=1, ncols=len(confs), figsize=(20, 5), dpi=100)
subplot_idx = 0
plt.suptitle(title, fontsize=20, fontweight='bold')
# plt.autoscale(False)
for conf in confs:
    axes = axes_list if len(confs) == 1 else axes_list[subplot_idx]  # plt.subplots returns a single Axes when ncols == 1
sns.rugplot(conf['ale_xy']['rug'], ax=axes, height=0.02)
axes.axhline(y=0, color='k', linestyle='-', lw=0.8)
axes.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axes.grid(alpha = 0.4)
# axes.set_ylim([-2, 20])
axes.xaxis.set_major_locator(MaxNLocator(integer=True))
axes.yaxis.set_major_locator(MaxNLocator(integer=True))
if ('ylabel' in conf):
axes.set_ylabel(conf.get('ylabel'), fontsize=20, labelpad=10)
# if ('xticks' not in conf['ale_xy'].keys()):
# xAxis.set_ticklabels([])
axes.set_xlabel(conf['xlabel'], fontsize=16, labelpad=10)
if not (conf['ale_xy']['weird']):
if (conf['ale_xy']['numerical']):
axes.set_ylim([-1.5, 1.5])
pass
else:
axes.set_ylim([-7, 20])
pass
if 'zoom' in conf:
axins = axes.inset_axes(conf['zoom']['inset_axes'])
axins.xaxis.set_major_locator(MaxNLocator(integer=True))
axins.yaxis.set_major_locator(MaxNLocator(integer=True))
axins.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axins.set_xlim(conf['zoom']['x_limit'])
axins.set_ylim(conf['zoom']['y_limit'])
axins.grid(alpha=0.3)
rectpatch, connects = axes.indicate_inset_zoom(axins)
connects[0].set_visible(False)
connects[1].set_visible(False)
connects[2].set_visible(True)
connects[3].set_visible(True)
subplot_idx += 1
pdp_plot(categorical_plot_conf, f"ALE for {len(categorical_plot_conf)} categorical features")
# plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-categorical.png'), facecolor='white', transparent=False, bbox_inches='tight')
pdp_plot(numerical_plot_conf, f"ALE for {len(numerical_plot_conf)} numerical features")
# plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-numerical.png'), facecolor='white', transparent=False, bbox_inches='tight')
###Output
_____no_output_____ |
Model Zoo Example.ipynb | ###Markdown
Load a sample image
###Code
# NOTE (editorial): imports are assumed here; this excerpt of the notebook
# does not show them elsewhere.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import mxnet as mx
import gluoncv

filename = './img/classification-demo.png'
img = mpimg.imread(filename)
plt.axis('off')
plt.imshow(img)
###Output
_____no_output_____
###Markdown
Load a model from Model Zoo
###Code
model_name = 'ResNet50_v1d'
# Download and load the pre-trained model
net = gluoncv.model_zoo.get_model(model_name, pretrained=True)
###Output
_____no_output_____
###Markdown
Load and transform image
###Code
img = mx.image.imread(filename)
# apply default data preprocessing
transformed_img = gluoncv.data.transforms.presets.imagenet.transform_eval(img)
###Output
_____no_output_____
###Markdown
Prediction
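For reference, softmax maps the raw logits $z$ to probabilities via $p_k = \exp(z_k)/\sum_j \exp(z_j)$; this is what `mx.nd.softmax` computes in the cell below.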
###Code
pred = net(transformed_img)
# Map predicted values to probability by softmax
prob = mx.nd.softmax(pred)[0].asnumpy()
###Output
_____no_output_____
###Markdown
Display top 5 scores
###Code
ind = mx.nd.topk(pred, k=5)[0].astype('int').asnumpy().tolist()
print('The input picture is classified to be')
for i in range(5):
print('- [%s] with probability %.3f.'%(net.classes[ind[i]], prob[ind[i]]))
###Output
The input picture is classified to be
- [Welsh springer spaniel] with probability 0.899.
- [Irish setter] with probability 0.005.
- [Brittany spaniel] with probability 0.003.
- [cocker spaniel] with probability 0.002.
- [Blenheim spaniel] with probability 0.002.
|
examples/notebooks/coupled_qp.ipynb | ###Markdown
Coupled Quadratic Program

Introduction

A quadratic program (QP) is an optimization problem with a quadratic objective and affine equality and inequality constraints. We consider a QP in which $L$ variable blocks are coupled through a set of $s$ linear constraints, represented as

$$\begin{array}{ll}\text{minimize} & \sum_{l=1}^L z_l^TQ_lz_l+c_l^Tz_l\\\text{subject to} & F_lz_l\leq d_l,\quad l=1,\dots,L,\\& \sum_{l=1}^LG_lz_l=h\end{array}$$

with respect to $z=(z_1,\dots,z_L)$, where $z_l \in \mathbf{R}^{q_l}, Q_l\in \mathbf{S}_{+}^{q_l}$ (the set of positive semidefinite matrices), $c_l\in\mathbf{R}^{q_l}, F_l \in \mathbf{R}^{p_l \times q_l}, d_l \in \mathbf{R}^{p_l}, G_l \in \mathbf{R}^{s \times q_l}$, and $h \in \mathbf{R}^s$ for $l = 1,\ldots,L$.

Reformulate Problem

Given a set $C \subseteq \mathbf{R}^q$, define the set indicator function $I_C: \mathbf{R}^q \rightarrow \mathbf{R} \cup \{\infty\}$ to be

$$I_C(x) = \begin{cases} 0 & x \in C \\ \infty & \text{otherwise}. \end{cases}$$

The coupled QP can be written in standard form with

$$f_i(x_i)=x_i^TQ_ix_i+c_i^Tx_i+I_{\{x\,:\,F_ix\leq d_i\}}(x_i), \quad i = 1,\ldots,L,$$

$$A = [G_1~\ldots~G_L], \quad b = h.$$

Generate Data

We solve an instance of this problem with $L = 4, s = 10, q_l = 30$, and $p_l = 50$ for $l = 1,\ldots,L$. The entries of $c_l \in \mathbf{R}^{q_l}$, $F_l \in \mathbf{R}^{p_l \times q_l}$, $G_l \in \mathbf{R}^{s \times q_l}$, $\tilde z_l \in \mathbf{R}^{q_l}$, and $H_l \in \mathbf{R}^{q_l \times q_l}$ are all drawn IID from $N(0,1)$. We then form $d_l = F_l\tilde z_l + 0.1, Q_l = H_l^TH_l$, and $h = \sum_{l=1}^L G_l\tilde z_l$.
###Code
import numpy as np
np.random.seed(1)
L = 4 # Number of blocks.
s = 10 # Number of coupling constraints.
ql = 30 # Variable dimension of each QP subproblem.
pl = 50 # Constraint dimension of each QP subproblem.
c_list = [np.random.randn(ql) for l in range(L)]
F_list = [np.random.randn(pl,ql) for l in range(L)]
G_list = [np.random.randn(s,ql) for l in range(L)]
z_tld_list = [np.random.randn(ql) for l in range(L)]
H_list = [np.random.randn(ql,ql) for l in range(L)]
d_list = [F_list[l].dot(z_tld_list[l]) + 0.1 for l in range(L)]
Q_list = [H_list[l].T.dot(H_list[l]) for l in range(L)]
G = np.hstack(G_list)
z_tld = np.hstack(z_tld_list)
h = G.dot(z_tld)
###Output
_____no_output_____
###Markdown
Define Proximal Operator

A2DR requires us to provide a proximal oracle for each $f_i$, which computes $\mathbf{prox}_{tf_i}(v_i)$ given any $v_i \in \mathbf{R}^{q_i}$. To do this, we must solve the quadratic program

$$\begin{array}{ll}\text{minimize} & x_i^T\left(Q_i+\frac{1}{2t}I\right)x_i +(c_i-\frac{1}{t}v_i)^Tx_i\\\text{subject to} & F_ix_i\leq d_i\end{array}$$

with respect to $x_i \in \mathbf{R}^{q_i}$. There are many QP solvers available. In this example, we will use [OSQP](https://osqp.org/) called via the [CVXPY](https://www.cvxpy.org/) interface.
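For reference, this QP follows directly from the definition $\mathbf{prox}_{tf_i}(v_i) = \operatorname{argmin}_{x_i}\left\{x_i^TQ_ix_i + c_i^Tx_i + \frac{1}{2t}\|x_i - v_i\|_2^2 : F_ix_i \leq d_i\right\}$: expanding $\frac{1}{2t}\|x_i - v_i\|_2^2 = \frac{1}{2t}x_i^Tx_i - \frac{1}{t}v_i^Tx_i + \text{const}$ and dropping the constant gives the objective above.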
###Code
import cvxpy
from cvxpy import *
def prox_qp(v, t, Q, c, F, d):
q = Q.shape[0]
I = np.eye(q)
# Construct problem.
x = Variable(q)
obj = quad_form(x, Q + I/(2*t)) + (c - v/t)*x
constr = [F*x <= d]
prob = Problem(Minimize(obj), constr)
# Solve with OSQP.
prob.solve(solver = "OSQP")
return x.value
###Output
_____no_output_____
###Markdown
Solve Problem
###Code
from a2dr import a2dr
# Convert problem to standard form.
def prox_qp_wrapper(l, Q_list, c_list, F_list, d_list):
return lambda v, t: prox_qp(v, t, Q_list[l], c_list[l], F_list[l], d_list[l])
# Use "map" method to define list of proximal operators. This addresses the late binding issue:
# https://stackoverflow.com/questions/3431676/creating-functions-in-a-loop
# https://docs.python-guide.org/writing/gotchas/#late-binding-closures
prox_list = list(map(lambda l: prox_qp_wrapper(l, Q_list, c_list, F_list, d_list), range(L)))
# Solve with A2DR.
a2dr_result = a2dr(prox_list, G_list, h)
a2dr_x = a2dr_result["x_vals"]
# Compute objective and constraint violation.
a2dr_obj = np.sum([a2dr_x[l].dot(Q_list[l]).dot(a2dr_x[l]) + c_list[l].dot(a2dr_x[l]) for l in range(L)])
a2dr_constr_vio = [np.linalg.norm(np.maximum(F_list[l].dot(a2dr_x[l]) - d_list[l], 0))**2 for l in range(L)]
a2dr_constr_vio += [np.linalg.norm(G.dot(np.hstack(a2dr_x)) - h)**2]
a2dr_constr_vio_val = np.sqrt(np.sum(a2dr_constr_vio))
# Print solution.
print("Objective value:", a2dr_obj)
print("Constraint violation:", a2dr_constr_vio_val)
###Output
----------------------------------------------------------------------
a2dr v0.2.3.post3 - Prox-Affine Distributed Convex Optimization Solver
(c) Anqi Fu, Junzi Zhang
Stanford University 2019
----------------------------------------------------------------------
### Preconditioning starts ... ###
### Preconditioning finished. ###
max_iter = 1000, t_init (after preconditioning) = 1.73
eps_abs = 1.00e-06, eps_rel = 1.00e-08, precond = True
ada_reg = True, anderson = True, m_accel = 10
lam_accel = 1.00e-08, aa_method = lstsq, D_safe = 1.00e+06
eps_safe = 1.00e-06, M_safe = 10
variables n = 120, constraints m = 10
nnz(A) = 1200
Setup time: 2.39e-02
----------------------------------------------------
iter | total res | primal res | dual res | time (s)
----------------------------------------------------
0| 2.81e+01 7.02e+00 2.72e+01 1.31e-01
100| 1.64e-01 8.35e-02 1.41e-01 3.65e+00
194| 1.13e-06 1.11e-06 1.95e-07 6.99e+00
----------------------------------------------------
Status: Solved
Solve time: 6.99e+00
Total number of iterations: 195
Best total residual: 1.13e-06; reached at iteration 194
======================================================================
Objective value: 4458.948217739922
Constraint violation: 4.525198767295989e-06
|
sales-analysis/sales_analysis.ipynb | ###Markdown
Best month for sales and how much was earned in that month
###Code
# NOTE (editorial): imports and the loading of sales_data are assumed to happen
# in earlier cells not shown in this excerpt; sales_data is a DataFrame of
# merged sales records with (at least) 'Order ID', 'Product', 'Quantity Ordered',
# 'Price Each', 'Order Date', 'Purchase Address', 'Month' and 'Sales' columns.
import pandas as pd
import matplotlib.pyplot as plt

# Group data by month and sum the sales
results = sales_data.groupby("Month").sum()
# Plotting the sales overview of each months
months = range(1,13)
plt.figure(figsize=(18,10))
plt.title("Sales Overview (Months)")
plt.xticks(months)
plt.xlabel("Months")
plt.ylabel("Sales ($) in millions")
plt.bar(months, results["Sales"])
plt.show()
###Output
_____no_output_____
###Markdown
City with highest number of sales
###Code
# New Column with City
def get_city(address):
return address.split(",")[1]
def get_state(address):
return address.split(",")[2].split(" ")[1]
sales_data["City"] = sales_data['Purchase Address'].apply(lambda x: get_city(x) + ", " + get_state(x))
#Grouping data with city
city_results = sales_data.groupby("City").sum()
cities = [city for city,df in sales_data.groupby("City")]
city_results
# Plot the sales of city
plt.figure(figsize=(18,10))
plt.title("Sales Overview (City)")
plt.bar(cities, city_results["Sales"])
plt.xticks(cities, rotation="vertical")
plt.xlabel("City")
plt.ylabel("Sales ($) in millions")
plt.show()
###Output
_____no_output_____
###Markdown
What time should we display advertisements to maximize the likelihood of customers buying products?
###Code
# Replacing order date into proper pandas datetime
sales_data['Order Date'] = pd.to_datetime(sales_data["Order Date"])
# Add custom time colums
sales_data["Hour"] = sales_data['Order Date'].dt.hour
# Group by hours
hours = [hour for hour, df in sales_data.groupby("Hour")]
plt.figure(figsize=(18,10))
plt.title("Peak Hour of Sales")
plt.xticks(hours)
plt.xlabel("Hours", size=18)
plt.ylabel("Count", size=18)
plt.plot(hours, sales_data.groupby("Hour").count())
plt.show()
# Recommended time for advertisements would be around 11 am or 7 pm according to the graph
###Output
_____no_output_____
###Markdown
Products that are often sold together
###Code
# New dataframe keeping only orders with duplicated Order IDs, then joining their products together
df = sales_data[sales_data["Order ID"].duplicated(keep=False)]
df["Grouped"] = df.groupby("Order ID")['Product'].transform(lambda x: ",".join(x))
df = df[['Order ID', "Grouped"]].drop_duplicates()
df.head()
from itertools import combinations
from collections import Counter
count = Counter()
# Getting most items sold together
for row in df["Grouped"]:
row_list = row.split(",")
count.update(Counter(combinations(row_list, 2)))
count.most_common(10)
###Output
_____no_output_____
###Markdown
Which products sold the most?
###Code
sales_data.head()
# Group data by products and quantity ordered
product_group = sales_data.groupby("Product")
quantity_ordered = product_group.sum()["Quantity Ordered"]
products = [product for product,df in product_group]
# Plotting the graph
plt.figure(figsize=(18,10))
plt.title("Most Sold Products")
plt.bar(products, quantity_ordered)
plt.xticks(products, rotation="vertical")
plt.xlabel("Products")
plt.ylabel("Total Sales")
plt.show()
# Most Ordered/Sold product was AAA Batteries
prices = sales_data.groupby("Product").mean()["Price Each"]
fig, ax1 = plt.subplots()
fig.set_figwidth(18)
fig.set_figheight(10)
fig.suptitle("Most Sold Products with Prices")
ax2 = ax1.twinx()
ax1.bar(products, quantity_ordered, color='g')
ax2.plot(products, prices, "b")
ax1.set_xlabel("Products")
ax1.set_ylabel("Quantity Ordered", color="g")
ax2.set_ylabel("Price", color="b")
ax1.set_xticklabels(products, rotation="vertical")
plt.show()
###Output
/tmp/ipykernel_37141/3842263779.py:14: UserWarning: FixedFormatter should only be used together with FixedLocator
ax1.set_xticklabels(products, rotation="vertical")
|
ML/training/blstm/blstm_on_natural_data/train.ipynb | ###Markdown
Load deps
###Code
import numpy as np
import tensorflow as tf
import sys
import os
sys.path.insert(0, f"{os.getcwd()}/../../../../ML/preprocess/encoding/letter")
from encoding import encode
###Output
_____no_output_____
###Markdown
Get data
###Code
x_train, y_train = encode()
print(x_train[100][:][:])
print(np.shape(x_train[0][:][:]))
print(y_train[100][:][:])
print(np.shape(y_train[0][:][:]))
NB_SAMPLES = 100000
# Take a random portion of the training samples
np.random.seed(2022)
rand_indices = np.random.permutation(len(x_train))
rand_train_indices = rand_indices[:NB_SAMPLES]
rand_test_indices = rand_indices[NB_SAMPLES : NB_SAMPLES + 300]
xs = x_train[rand_train_indices]
ys = y_train[rand_train_indices]
x_test = x_train[rand_test_indices]
y_test = y_train[rand_test_indices]
###Output
_____no_output_____
###Markdown
Build model
###Code
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (
Bidirectional,
LSTM,
Flatten,
Dropout,
Dense,
)
TIMESTEPS = 36 # max word length
DATA_DIM = 27 # number of letters + 1 (catch-all char)
DROPOUT = 0.8
input_shape = (TIMESTEPS, DATA_DIM)
forward_layer = LSTM(TIMESTEPS, return_sequences=True)
backward_layer = LSTM(TIMESTEPS, return_sequences=True, go_backwards=True)
model = Sequential()
model.add(
Bidirectional(forward_layer, backward_layer=backward_layer, input_shape=input_shape)
)
model.add(Flatten())
model.add(Dense(570, activation="gelu"))
model.add(Dropout(DROPOUT))
model.add(Dense(15, activation="gelu"))
model.add(Dense(1))
model.compile(
optimizer="Adam", loss="mean_absolute_error", metrics=["mean_absolute_error"],
)
model.summary()
from tensorflow.keras.callbacks import TensorBoard, EarlyStopping
BATCH_SIZE = 30
EPOCHS = 100
# Add early stopping and tensorboard callbacks
early_stopping_callback = EarlyStopping(monitor="loss", patience=10)
tensorboard_callback = TensorBoard(".logdir")
callbacks = [tensorboard_callback, early_stopping_callback]
model.fit(
x=xs,
y=ys,
batch_size=BATCH_SIZE,
validation_split=0.1,
shuffle=True,
epochs=EPOCHS,
callbacks=[callbacks],
)
model.save("../../../../models/blstm_on_natural_data_model.h5")
###Output
Epoch 1/100
3000/3000 [==============================] - 118s 38ms/step - loss: 0.0914 - mean_absolute_error: 0.0914 - val_loss: 0.0651 - val_mean_absolute_error: 0.0651
Epoch 2/100
3000/3000 [==============================] - 110s 37ms/step - loss: 0.0774 - mean_absolute_error: 0.0774 - val_loss: 0.0627 - val_mean_absolute_error: 0.0627
Epoch 3/100
3000/3000 [==============================] - 112s 37ms/step - loss: 0.0696 - mean_absolute_error: 0.0696 - val_loss: 0.0618 - val_mean_absolute_error: 0.0618
Epoch 4/100
3000/3000 [==============================] - 113s 38ms/step - loss: 0.0608 - mean_absolute_error: 0.0608 - val_loss: 0.0448 - val_mean_absolute_error: 0.0448
Epoch 5/100
3000/3000 [==============================] - 120s 40ms/step - loss: 0.0560 - mean_absolute_error: 0.0560 - val_loss: 0.0466 - val_mean_absolute_error: 0.0466
Epoch 6/100
3000/3000 [==============================] - 116s 39ms/step - loss: 0.0517 - mean_absolute_error: 0.0517 - val_loss: 0.0394 - val_mean_absolute_error: 0.0394
Epoch 7/100
3000/3000 [==============================] - 118s 39ms/step - loss: 0.0490 - mean_absolute_error: 0.0490 - val_loss: 0.0390 - val_mean_absolute_error: 0.0390
Epoch 8/100
3000/3000 [==============================] - 118s 39ms/step - loss: 0.0472 - mean_absolute_error: 0.0472 - val_loss: 0.0338 - val_mean_absolute_error: 0.0338
Epoch 9/100
3000/3000 [==============================] - 118s 39ms/step - loss: 0.0454 - mean_absolute_error: 0.0454 - val_loss: 0.0351 - val_mean_absolute_error: 0.0351
Epoch 10/100
3000/3000 [==============================] - 117s 39ms/step - loss: 0.0438 - mean_absolute_error: 0.0438 - val_loss: 0.0335 - val_mean_absolute_error: 0.0335
Epoch 11/100
3000/3000 [==============================] - 117s 39ms/step - loss: 0.0428 - mean_absolute_error: 0.0428 - val_loss: 0.0410 - val_mean_absolute_error: 0.0410
Epoch 12/100
3000/3000 [==============================] - 117s 39ms/step - loss: 0.0413 - mean_absolute_error: 0.0413 - val_loss: 0.0322 - val_mean_absolute_error: 0.0322
Epoch 13/100
3000/3000 [==============================] - 117s 39ms/step - loss: 0.0402 - mean_absolute_error: 0.0402 - val_loss: 0.0313 - val_mean_absolute_error: 0.0313
Epoch 14/100
3000/3000 [==============================] - 116s 39ms/step - loss: 0.0393 - mean_absolute_error: 0.0393 - val_loss: 0.0297 - val_mean_absolute_error: 0.0297
Epoch 15/100
3000/3000 [==============================] - 116s 39ms/step - loss: 0.0382 - mean_absolute_error: 0.0382 - val_loss: 0.0327 - val_mean_absolute_error: 0.0327
Epoch 16/100
3000/3000 [==============================] - 119s 40ms/step - loss: 0.0368 - mean_absolute_error: 0.0368 - val_loss: 0.0321 - val_mean_absolute_error: 0.0321
Epoch 17/100
3000/3000 [==============================] - 120s 40ms/step - loss: 0.0362 - mean_absolute_error: 0.0362 - val_loss: 0.0280 - val_mean_absolute_error: 0.0280
Epoch 18/100
3000/3000 [==============================] - 119s 40ms/step - loss: 0.0355 - mean_absolute_error: 0.0355 - val_loss: 0.0293 - val_mean_absolute_error: 0.0293
Epoch 19/100
3000/3000 [==============================] - 119s 40ms/step - loss: 0.0351 - mean_absolute_error: 0.0351 - val_loss: 0.0277 - val_mean_absolute_error: 0.0277
Epoch 20/100
3000/3000 [==============================] - 118s 39ms/step - loss: 0.0345 - mean_absolute_error: 0.0345 - val_loss: 0.0285 - val_mean_absolute_error: 0.0285
Epoch 21/100
3000/3000 [==============================] - 117s 39ms/step - loss: 0.0340 - mean_absolute_error: 0.0340 - val_loss: 0.0278 - val_mean_absolute_error: 0.0278
Epoch 22/100
3000/3000 [==============================] - 117s 39ms/step - loss: 0.0338 - mean_absolute_error: 0.0338 - val_loss: 0.0314 - val_mean_absolute_error: 0.0314
Epoch 23/100
3000/3000 [==============================] - 117s 39ms/step - loss: 0.0329 - mean_absolute_error: 0.0329 - val_loss: 0.0253 - val_mean_absolute_error: 0.0253lute_error:
Epoch 24/100
3000/3000 [==============================] - 117s 39ms/step - loss: 0.0328 - mean_absolute_error: 0.0328 - val_loss: 0.0260 - val_mean_absolute_error: 0.0260
Epoch 25/100
3000/3000 [==============================] - 117s 39ms/step - loss: 0.0327 - mean_absolute_error: 0.0327 - val_loss: 0.0253 - val_mean_absolute_error: 0.0253
Epoch 26/100
3000/3000 [==============================] - 117s 39ms/step - loss: 0.0320 - mean_absolute_error: 0.0320 - val_loss: 0.0239 - val_mean_absolute_error: 0.0239
Epoch 27/100
3000/3000 [==============================] - 116s 39ms/step - loss: 0.0321 - mean_absolute_error: 0.0321 - val_loss: 0.0293 - val_mean_absolute_error: 0.0293
Epoch 28/100
3000/3000 [==============================] - 117s 39ms/step - loss: 0.0316 - mean_absolute_error: 0.0316 - val_loss: 0.0246 - val_mean_absolute_error: 0.0246
Epoch 29/100
3000/3000 [==============================] - 116s 39ms/step - loss: 0.0311 - mean_absolute_error: 0.0311 - val_loss: 0.0253 - val_mean_absolute_error: 0.0253
Epoch 30/100
3000/3000 [==============================] - 115s 38ms/step - loss: 0.0309 - mean_absolute_error: 0.0309 - val_loss: 0.0295 - val_mean_absolute_error: 0.0295
Epoch 31/100
3000/3000 [==============================] - 114s 38ms/step - loss: 0.0307 - mean_absolute_error: 0.0307 - val_loss: 0.0244 - val_mean_absolute_error: 0.0244
Epoch 32/100
3000/3000 [==============================] - 114s 38ms/step - loss: 0.0304 - mean_absolute_error: 0.0304 - val_loss: 0.0255 - val_mean_absolute_error: 0.0255
Epoch 33/100
3000/3000 [==============================] - 114s 38ms/step - loss: 0.0304 - mean_absolute_error: 0.0304 - val_loss: 0.0242 - val_mean_absolute_error: 0.0242
Epoch 34/100
3000/3000 [==============================] - 114s 38ms/step - loss: 0.0297 - mean_absolute_error: 0.0297 - val_loss: 0.0292 - val_mean_absolute_error: 0.0292
Epoch 35/100
3000/3000 [==============================] - 113s 38ms/step - loss: 0.0297 - mean_absolute_error: 0.0297 - val_loss: 0.0216 - val_mean_absolute_error: 0.0216
Epoch 36/100
3000/3000 [==============================] - 118s 39ms/step - loss: 0.0295 - mean_absolute_error: 0.0295 - val_loss: 0.0231 - val_mean_absolute_error: 0.0231
Epoch 37/100
3000/3000 [==============================] - 113s 38ms/step - loss: 0.0294 - mean_absolute_error: 0.0294 - val_loss: 0.0246 - val_mean_absolute_error: 0.0246
Epoch 38/100
3000/3000 [==============================] - 113s 38ms/step - loss: 0.0291 - mean_absolute_error: 0.0291 - val_loss: 0.0237 - val_mean_absolute_error: 0.0237
Epoch 39/100
3000/3000 [==============================] - 112s 37ms/step - loss: 0.0292 - mean_absolute_error: 0.0292 - val_loss: 0.0234 - val_mean_absolute_error: 0.0234
Epoch 40/100
3000/3000 [==============================] - 112s 37ms/step - loss: 0.0290 - mean_absolute_error: 0.0290 - val_loss: 0.0281 - val_mean_absolute_error: 0.0281
Epoch 41/100
3000/3000 [==============================] - 115s 38ms/step - loss: 0.0287 - mean_absolute_error: 0.0287 - val_loss: 0.0274 - val_mean_absolute_error: 0.0274
Epoch 42/100
3000/3000 [==============================] - 114s 38ms/step - loss: 0.0285 - mean_absolute_error: 0.0285 - val_loss: 0.0219 - val_mean_absolute_error: 0.0219
Epoch 43/100
3000/3000 [==============================] - 114s 38ms/step - loss: 0.0283 - mean_absolute_error: 0.0283 - val_loss: 0.0223 - val_mean_absolute_error: 0.0223
Epoch 44/100
3000/3000 [==============================] - 115s 38ms/step - loss: 0.0282 - mean_absolute_error: 0.0282 - val_loss: 0.0224 - val_mean_absolute_error: 0.0224
Epoch 45/100
3000/3000 [==============================] - 114s 38ms/step - loss: 0.0284 - mean_absolute_error: 0.0284 - val_loss: 0.0226 - val_mean_absolute_error: 0.0226
Epoch 46/100
3000/3000 [==============================] - 112s 37ms/step - loss: 0.0282 - mean_absolute_error: 0.0282 - val_loss: 0.0217 - val_mean_absolute_error: 0.0217
Epoch 47/100
3000/3000 [==============================] - 112s 37ms/step - loss: 0.0281 - mean_absolute_error: 0.0281 - val_loss: 0.0216 - val_mean_absolute_error: 0.0216
Epoch 48/100
3000/3000 [==============================] - 110s 37ms/step - loss: 0.0278 - mean_absolute_error: 0.0278 - val_loss: 0.0220 - val_mean_absolute_error: 0.0220
Epoch 49/100
3000/3000 [==============================] - 105s 35ms/step - loss: 0.0278 - mean_absolute_error: 0.0278 - val_loss: 0.0244 - val_mean_absolute_error: 0.0244
Epoch 50/100
3000/3000 [==============================] - 106s 35ms/step - loss: 0.0277 - mean_absolute_error: 0.0277 - val_loss: 0.0249 - val_mean_absolute_error: 0.0249
Epoch 51/100
3000/3000 [==============================] - 107s 36ms/step - loss: 0.0276 - mean_absolute_error: 0.0276 - val_loss: 0.0220 - val_mean_absolute_error: 0.0220
Epoch 52/100
3000/3000 [==============================] - 105s 35ms/step - loss: 0.0274 - mean_absolute_error: 0.0274 - val_loss: 0.0231 - val_mean_absolute_error: 0.02310.0274 -
Epoch 53/100
3000/3000 [==============================] - 106s 35ms/step - loss: 0.0274 - mean_absolute_error: 0.0274 - val_loss: 0.0210 - val_mean_absolute_error: 0.0210
Epoch 54/100
3000/3000 [==============================] - 102s 34ms/step - loss: 0.0272 - mean_absolute_error: 0.0272 - val_loss: 0.0228 - val_mean_absolute_error: 0.0228
Epoch 55/100
3000/3000 [==============================] - 95s 32ms/step - loss: 0.0272 - mean_absolute_error: 0.0272 - val_loss: 0.0218 - val_mean_absolute_error: 0.0218
Epoch 56/100
3000/3000 [==============================] - 97s 32ms/step - loss: 0.0268 - mean_absolute_error: 0.0268 - val_loss: 0.0245 - val_mean_absolute_error: 0.0245
Epoch 57/100
3000/3000 [==============================] - 108s 36ms/step - loss: 0.0269 - mean_absolute_error: 0.0269 - val_loss: 0.0210 - val_mean_absolute_error: 0.0210
Epoch 58/100
3000/3000 [==============================] - 101s 34ms/step - loss: 0.0269 - mean_absolute_error: 0.0269 - val_loss: 0.0248 - val_mean_absolute_error: 0.0248
Epoch 59/100
3000/3000 [==============================] - 102s 34ms/step - loss: 0.0269 - mean_absolute_error: 0.0269 - val_loss: 0.0208 - val_mean_absolute_error: 0.0208
Epoch 60/100
3000/3000 [==============================] - 109s 36ms/step - loss: 0.0264 - mean_absolute_error: 0.0264 - val_loss: 0.0259 - val_mean_absolute_error: 0.0259
Epoch 61/100
3000/3000 [==============================] - 109s 36ms/step - loss: 0.0266 - mean_absolute_error: 0.0266 - val_loss: 0.0225 - val_mean_absolute_error: 0.0225
Epoch 62/100
3000/3000 [==============================] - 110s 37ms/step - loss: 0.0264 - mean_absolute_error: 0.0264 - val_loss: 0.0274 - val_mean_absolute_error: 0.0274
Epoch 63/100
3000/3000 [==============================] - 101s 34ms/step - loss: 0.0265 - mean_absolute_error: 0.0265 - val_loss: 0.0218 - val_mean_absolute_error: 0.0218
Epoch 64/100
3000/3000 [==============================] - 115s 38ms/step - loss: 0.0264 - mean_absolute_error: 0.0264 - val_loss: 0.0213 - val_mean_absolute_error: 0.0213
Epoch 65/100
3000/3000 [==============================] - 109s 36ms/step - loss: 0.0264 - mean_absolute_error: 0.0264 - val_loss: 0.0206 - val_mean_absolute_error: 0.0206
Epoch 66/100
3000/3000 [==============================] - 105s 35ms/step - loss: 0.0261 - mean_absolute_error: 0.0261 - val_loss: 0.0195 - val_mean_absolute_error: 0.0195
Epoch 67/100
3000/3000 [==============================] - 95s 32ms/step - loss: 0.0260 - mean_absolute_error: 0.0260 - val_loss: 0.0201 - val_mean_absolute_error: 0.0201
Epoch 68/100
3000/3000 [==============================] - 96s 32ms/step - loss: 0.0259 - mean_absolute_error: 0.0259 - val_loss: 0.0231 - val_mean_absolute_error: 0.0231
Epoch 69/100
3000/3000 [==============================] - 103s 34ms/step - loss: 0.0262 - mean_absolute_error: 0.0262 - val_loss: 0.0228 - val_mean_absolute_error: 0.0228
Epoch 70/100
3000/3000 [==============================] - 95s 32ms/step - loss: 0.0258 - mean_absolute_error: 0.0258 - val_loss: 0.0238 - val_mean_absolute_error: 0.0238
Epoch 71/100
3000/3000 [==============================] - 97s 32ms/step - loss: 0.0262 - mean_absolute_error: 0.0262 - val_loss: 0.0205 - val_mean_absolute_error: 0.0205
Epoch 72/100
3000/3000 [==============================] - 105s 35ms/step - loss: 0.0258 - mean_absolute_error: 0.0258 - val_loss: 0.0244 - val_mean_absolute_error: 0.0244
Epoch 73/100
3000/3000 [==============================] - 94s 31ms/step - loss: 0.0257 - mean_absolute_error: 0.0257 - val_loss: 0.0217 - val_mean_absolute_error: 0.0217
Epoch 74/100
3000/3000 [==============================] - 93s 31ms/step - loss: 0.0256 - mean_absolute_error: 0.0256 - val_loss: 0.0204 - val_mean_absolute_error: 0.0204
Epoch 75/100
3000/3000 [==============================] - 94s 31ms/step - loss: 0.0256 - mean_absolute_error: 0.0256 - val_loss: 0.0218 - val_mean_absolute_error: 0.0218
Epoch 76/100
3000/3000 [==============================] - 94s 31ms/step - loss: 0.0256 - mean_absolute_error: 0.0256 - val_loss: 0.0212 - val_mean_absolute_error: 0.0212
Epoch 77/100
3000/3000 [==============================] - 98s 33ms/step - loss: 0.0252 - mean_absolute_error: 0.0252 - val_loss: 0.0202 - val_mean_absolute_error: 0.0202
Epoch 78/100
3000/3000 [==============================] - 93s 31ms/step - loss: 0.0252 - mean_absolute_error: 0.0252 - val_loss: 0.0196 - val_mean_absolute_error: 0.0196
Epoch 79/100
3000/3000 [==============================] - 94s 31ms/step - loss: 0.0254 - mean_absolute_error: 0.0254 - val_loss: 0.0205 - val_mean_absolute_error: 0.0205
Epoch 80/100
3000/3000 [==============================] - 108s 36ms/step - loss: 0.0250 - mean_absolute_error: 0.0250 - val_loss: 0.0235 - val_mean_absolute_error: 0.0235
Epoch 81/100
3000/3000 [==============================] - 115s 38ms/step - loss: 0.0252 - mean_absolute_error: 0.0252 - val_loss: 0.0202 - val_mean_absolute_error: 0.0202
Epoch 82/100
3000/3000 [==============================] - 101s 34ms/step - loss: 0.0248 - mean_absolute_error: 0.0248 - val_loss: 0.0207 - val_mean_absolute_error: 0.0207
Epoch 83/100
3000/3000 [==============================] - 99s 33ms/step - loss: 0.0251 - mean_absolute_error: 0.0251 - val_loss: 0.0213 - val_mean_absolute_error: 0.0213
Epoch 84/100
3000/3000 [==============================] - 107s 36ms/step - loss: 0.0248 - mean_absolute_error: 0.0248 - val_loss: 0.0236 - val_mean_absolute_error: 0.0236
Epoch 85/100
3000/3000 [==============================] - 113s 38ms/step - loss: 0.0246 - mean_absolute_error: 0.0246 - val_loss: 0.0202 - val_mean_absolute_error: 0.0202
Epoch 86/100
3000/3000 [==============================] - 109s 36ms/step - loss: 0.0246 - mean_absolute_error: 0.0246 - val_loss: 0.0212 - val_mean_absolute_error: 0.0212
Epoch 87/100
3000/3000 [==============================] - 110s 37ms/step - loss: 0.0246 - mean_absolute_error: 0.0246 - val_loss: 0.0229 - val_mean_absolute_error: 0.0229loss: 0.0246 - mean_abso
Epoch 88/100
3000/3000 [==============================] - 109s 36ms/step - loss: 0.0246 - mean_absolute_error: 0.0246 - val_loss: 0.0201 - val_mean_absolute_error: 0.0201
Epoch 89/100
3000/3000 [==============================] - 110s 37ms/step - loss: 0.0246 - mean_absolute_error: 0.0246 - val_loss: 0.0219 - val_mean_absolute_error: 0.0219
Epoch 90/100
3000/3000 [==============================] - 109s 36ms/step - loss: 0.0245 - mean_absolute_error: 0.0245 - val_loss: 0.0208 - val_mean_absolute_error: 0.0208
Epoch 91/100
3000/3000 [==============================] - 108s 36ms/step - loss: 0.0245 - mean_absolute_error: 0.0245 - val_loss: 0.0194 - val_mean_absolute_error: 0.0194
Epoch 92/100
3000/3000 [==============================] - 109s 36ms/step - loss: 0.0244 - mean_absolute_error: 0.0244 - val_loss: 0.0198 - val_mean_absolute_error: 0.0198
Epoch 93/100
3000/3000 [==============================] - 104s 35ms/step - loss: 0.0242 - mean_absolute_error: 0.0242 - val_loss: 0.0206 - val_mean_absolute_error: 0.0206
Epoch 94/100
3000/3000 [==============================] - 100s 33ms/step - loss: 0.0244 - mean_absolute_error: 0.0244 - val_loss: 0.0206 - val_mean_absolute_error: 0.0206
Epoch 95/100
3000/3000 [==============================] - 107s 36ms/step - loss: 0.0241 - mean_absolute_error: 0.0241 - val_loss: 0.0196 - val_mean_absolute_error: 0.0196
Epoch 96/100
3000/3000 [==============================] - 109s 36ms/step - loss: 0.0240 - mean_absolute_error: 0.0240 - val_loss: 0.0207 - val_mean_absolute_error: 0.0207
Epoch 97/100
3000/3000 [==============================] - 109s 36ms/step - loss: 0.0240 - mean_absolute_error: 0.0240 - val_loss: 0.0199 - val_mean_absolute_error: 0.0199
Epoch 98/100
3000/3000 [==============================] - 109s 36ms/step - loss: 0.0240 - mean_absolute_error: 0.0240 - val_loss: 0.0222 - val_mean_absolute_error: 0.0222
Epoch 99/100
3000/3000 [==============================] - 109s 36ms/step - loss: 0.0241 - mean_absolute_error: 0.0241 - val_loss: 0.0190 - val_mean_absolute_error: 0.0190
Epoch 100/100
3000/3000 [==============================] - 108s 36ms/step - loss: 0.0240 - mean_absolute_error: 0.0240 - val_loss: 0.0186 - val_mean_absolute_error: 0.0186
###Markdown
Test model
###Code
from tensorflow.keras.models import load_model
print("Evaluate on test data")
loaded_model = load_model("../../../../models/blstm_on_natural_data_model.h5")
results = loaded_model.evaluate(x_test, y_test, batch_size=30)
print("test loss, test acc:", results)
###Output
Evaluate on test data
10/10 [==============================] - 1s 13ms/step - loss: 0.0228 - mean_absolute_error: 0.0228
test loss, test acc: [0.022822052240371704, 0.022822052240371704]
###Markdown
Try this model with a simulated user_test_word. Change user_test_word to any word you like. It's especially revealing if you use English words that are not in the syllableCountDict (such as pogchamp).
###Code
from tensorflow.keras.models import load_model
from encoding import encode_word
def predict(word):
encoded_word = encode_word(word)
loaded_model = load_model("../../../../models/blstm_on_natural_data_model.h5")
out_prediction = loaded_model.predict(np.array([encoded_word]))
return out_prediction
user_test_word = "pogchamp" # Change this to simulate arbitrary user input.
print(predict(user_test_word))
###Output
[[1.9606566]]
|
src/datacleaning/Chapter 5/3_grouped_boxplots.ipynb | ###Markdown
Table of Contents
1. Import the pandas, matplotlib, and seaborn libraries
2. View the median, and first and third quartile values for weeks worked for each degree attainment level
3. Do a boxplot of weeks worked by highest degree earned
4. View the minimum, maximum, median, and first and third quartile values for total cases per million by region
5. Do boxplots of cases per million by region
6. Show the most extreme values for cases per million
7. Redo the boxplots without the extreme values

Import the pandas, matplotlib, and seaborn libraries
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# pd.set_option('display.width', 80)
# pd.set_option('display.max_columns', 7)
# pd.set_option('display.max_rows', 200)
# pd.options.display.float_format = '{:,.0f}'.format
import watermark
%load_ext watermark
%watermark -n -i -iv
nls97 = pd.read_csv('data/nls97.csv')
nls97.set_index('personid', inplace=True)
covidtotals = pd.read_csv('data/covidtotals.csv', parse_dates=['lastdate'])
covidtotals.set_index('iso_code', inplace=True)
###Output
_____no_output_____
###Markdown
View the median, and first and third quartile values for weeks worked for each degree attainment level
###Code
def gettots(x):
out = {}
out['min'] = x.min()
out['qr1'] = x.quantile(0.25)
out['med'] = x.median()
out['qr3'] = x.quantile(0.75)
out['max'] = x.max()
out['count'] = x.count()
return pd.Series(out)
nls97.groupby(['highestdegree'])['weeksworked17'].apply(gettots).unstack()
nls97.groupby(['highestdegree'])['weeksworked17'].apply(gettots)
###Output
_____no_output_____
###Markdown
Do a boxplot of weeks worked by highest degree earned
###Code
myplt = sns.boxplot('highestdegree',
'weeksworked17',
data=nls97,
order=sorted(nls97['highestdegree'].dropna().unique()))
myplt.set_title('Boxplots of Weeks Worked by Highest Degree')
myplt.set_xlabel('Highest Degree Attained')
myplt.set_ylabel('Weeks Worked 2017')
myplt.set_xticklabels(myplt.get_xticklabels(),
rotation=60,
horizontalalignment='right')
plt.tight_layout()
plt.show()
###Output
D:\ProgramData\Anaconda3\envs\dsn\lib\site-packages\seaborn\_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
###Markdown
View the minimum, maximum, median, and first and third quartile values for total cases per million by region
###Code
covidtotals.groupby(['region'])['total_cases_pm'].apply(gettots).unstack()
###Output
_____no_output_____
###Markdown
Do boxplots of cases per million by region
###Code
plt.figure(figsize=(12, 12))
sns.boxplot('total_cases_pm', 'region', data=covidtotals)
sns.swarmplot(y='region',
x='total_cases_pm',
data=covidtotals,
size=2,
color='0.3',
linewidth=0)
plt.title('Boxplots of Total Cases Per Million by Region')
plt.xlabel('Cases Per Million')
plt.ylabel('Region')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Show the most extreme values for cases per million
###Code
covidtotals.loc[covidtotals['total_cases_pm'] >= 14000,
['location', 'total_cases_pm']]
###Output
_____no_output_____
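###Markdown
 The 14,000 cutoff used here and in the next steps is read off the boxplots by eye; an alternative is to derive an upper fence from the interquartile range. A minimal sketch (the 1.5 multiplier is the usual convention, not something dictated by this data):
###Code
# IQR-based upper fence as a data-driven alternative to the eyeballed 14,000
q1, q3 = covidtotals['total_cases_pm'].quantile([0.25, 0.75])
upper_fence = q3 + 1.5 * (q3 - q1)
print(upper_fence)
###Output
_____no_output_____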
###Markdown
Redo the boxplots without the extreme values
###Code
plt.figure(figsize=(12, 12))
sns.boxplot(x='total_cases_pm',
            y='region',
            data=covidtotals.loc[covidtotals['total_cases_pm'] < 14000])
sns.swarmplot(y='region',
x='total_cases_pm',
data=covidtotals.loc[covidtotals['total_cases_pm'] < 14000],
size=3,
color='0.3',
linewidth=0)
plt.title('Total Cases Without Extreme Values')
plt.xlabel('Cases Per Million')
plt.ylabel('Region')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
|
simulator/data_generator.ipynb | ###Markdown
Output Comparision
###Code
import json
with open('simulator_results.json', 'r') as f:
sim_res = json.load(f)
print(len(sim_res))
###Output
_____no_output_____ |
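###Markdown
 To see what each simulator record looks like before comparing outputs, we can pretty-print the first entry; this assumes `sim_res` is a list of JSON records as loaded above.
###Code
# peek at the first record to understand the result schema
print(json.dumps(sim_res[0], indent=2)[:500])
###Output
_____no_output_____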
Section-08-Discretisation/08.04-Discretisation-plus-Encoding.ipynb | ###Markdown
Discretisation plus Encoding What shall we do with the variable after discretisation? Should we use the buckets as a numerical variable, or should we use the intervals as a categorical variable? The answer is, you can do either. If you are building decision-tree-based algorithms and the output of the discretisation are integers (each integer referring to a bin), then you can use those directly, as decision trees will pick up non-linear relationships between the discretised variable and the target. If you are building linear models instead, the bins may not necessarily hold a linear relationship with the target. In this case, it may help improve model performance to treat the bins as categories and apply one-hot encoding, or target-guided encodings like mean encoding, weight of evidence, or target-guided ordinal encoding. We can easily do so by combining Feature-engine's discretisers and encoders. In this demo We will perform equal-frequency discretisation followed by target-guided ordinal encoding using the Titanic dataset. If instead you would like to do weight-of-evidence or mean target encoding, you need only replace the Feature-engine encoder.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from feature_engine.discretisation import EqualFrequencyDiscretiser
from feature_engine.encoding import OrdinalEncoder
# load the Titanic dataset
data = pd.read_csv('../titanic.csv',
usecols=['age', 'fare', 'survived'])
data.head()
# Let's separate into train and test set
X_train, X_test, y_train, y_test = train_test_split(
data[['age', 'fare']],
data['survived'],
test_size=0.3,
random_state=0)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
The variables Age and Fare contain missing data, that I will fill by extracting a random sample of the variable.
###Code
def impute_na(data, variable):
df = data.copy()
# random sampling
df[variable + '_random'] = df[variable]
# extract the random sample to fill the na
random_sample = X_train[variable].dropna().sample(
df[variable].isnull().sum(), random_state=0)
# pandas needs to have the same index in order to merge datasets
random_sample.index = df[df[variable].isnull()].index
df.loc[df[variable].isnull(), variable + '_random'] = random_sample
return df[variable + '_random']
# replace NA in both train and test sets
X_train['age'] = impute_na(X_train, 'age')
X_test['age'] = impute_na(X_test, 'age')
X_train['fare'] = impute_na(X_train, 'fare')
X_test['fare'] = impute_na(X_test, 'fare')
# let's explore the distribution of age
X_train[['age', 'fare']].hist(bins=30, figsize=(8,4))
plt.show()
###Output
_____no_output_____
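###Markdown
 A quick sanity check that the random-sample imputation left no missing values in either set.
###Code
# neither set should contain missing values after imputation
assert X_train[['age', 'fare']].isnull().sum().sum() == 0
assert X_test[['age', 'fare']].isnull().sum().sum() == 0
###Output
_____no_output_____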
###Markdown
Equal frequency discretisation with Feature-Engine
###Code
# set up the equal frequency discretiser
# to encode variables we need them returned as objects for feature-engine
disc = EqualFrequencyDiscretiser(
q=10, variables=['age', 'fare'], return_object=True)
# find the intervals
disc.fit(X_train)
# transform train and test sets
train_t = disc.transform(X_train)
test_t = disc.transform(X_test)
train_t.dtypes
train_t.head()
# let's explore if the bins have a linear relationship
# with the target:
pd.concat([train_t, y_train], axis=1).groupby('age')['survived'].mean().plot()
plt.ylabel('mean of survived')
pd.concat([train_t, y_train], axis=1).groupby('fare')['survived'].mean().plot()
plt.ylabel('mean of survived')
###Output
_____no_output_____
###Markdown
None of the variables show a monotonic relationship between the intervals of the discrete variable and the mean of survival. We can encode the intervals to return a monotonic relationship: Ordinal encoding with Feature-Engine
###Code
enc = OrdinalEncoder(encoding_method = 'ordered')
enc.fit(train_t, y_train)
train_t = enc.transform(train_t)
test_t = enc.transform(test_t)
# in the map, we map bin to position
enc.encoder_dict_
pd.concat([train_t, y_train], axis=1).groupby('age')['survived'].mean().plot()
plt.ylabel('mean of survived')
pd.concat([train_t, y_train], axis=1).groupby('fare')['survived'].mean().plot()
plt.ylabel('mean of survived')
###Output
_____no_output_____
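###Markdown
 As noted in the introduction, weight of evidence is a drop-in alternative to the ordinal encoder; a minimal sketch using Feature-engine's `WoEEncoder` (weight of evidence needs a binary target, which `survived` is). This re-runs the encoding from the discretised bins.
###Code
from feature_engine.encoding import WoEEncoder

# encode the bins with weight of evidence instead of ordered ordinals
train_w = disc.transform(X_train)
test_w = disc.transform(X_test)

woe = WoEEncoder(variables=['age', 'fare'])
woe.fit(train_w, y_train)
train_w = woe.transform(train_w)
test_w = woe.transform(test_w)
woe.encoder_dict_
###Output
_____no_output_____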
code/.ipynb_checkpoints/LASSO_run_FS_comp-checkpoint.ipynb | ###Markdown
Lasso Feature Selection
###Code
# NOTE: assumes `data_all` (features) and `outcomes` (targets) DataFrames
# are defined in earlier cells of this notebook.
import numpy as np
import pandas as pd
from sklearn import linear_model

for R2_TARGET in [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]:
    Alphas = np.logspace(-3, 2, 50).tolist()
    vars_OC = []
    for OUTCOME in outcomes.columns.values:
        y = outcomes[OUTCOME]
        y = y.dropna()
        x_all = data_all.loc[(np.in1d(list(data_all.index), list(y.index))), :]
        # training R^2 for each candidate alpha
        r_2 = []
        for L in Alphas:
            reg = linear_model.Lasso(alpha=L)
            reg.fit(x_all, y)
            r_2.append(reg.score(x_all, y))
        # full coefficient path over the same alphas
        reg = linear_model.Lasso()
        path = reg.path(x_all, y, alphas=Alphas)
        # number of non-zero coefficients at each alpha (for inspection)
        n = [np.sum(path[1][:, k] != 0) for k in range(0, len(Alphas))]
        # reg.path evaluates alphas in descending order, so reverse both
        # lists to keep r_2 aligned with the path columns
        r_2.reverse()
        Alphas.reverse()
        # pick the alpha whose training R^2 is closest to the target
        temp = [abs(i - R2_TARGET) for i in r_2]
        Alpha_O = Alphas[temp.index(min(temp))]
        coeff = pd.DataFrame(path[1][:, temp.index(min(temp))],
                             index=x_all.columns.values)
        # keep the variables with non-zero coefficients at that alpha
        feature_index = coeff != 0
        features = x_all.loc[:, feature_index.iloc[:, 0]]
        x_lars = features.columns.values
        vars_OC.extend(x_lars)
    # union of selected variables across all outcomes
    vars_UNIQUE = data_all.loc[:, np.unique(vars_OC)]
    vars_UNIQUE.to_csv(('../output/LASSO_ALL/Lasso_' + str(R2_TARGET) + '_Vars.csv'))
###Output
_____no_output_____ |
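###Markdown
 For comparison, scikit-learn's `SelectFromModel` wraps the same idea (keep the features with non-zero Lasso coefficients) in a single step; a minimal sketch for one outcome at a fixed alpha, assuming `x_all` and `y` as defined in the loop above:
###Code
from sklearn.feature_selection import SelectFromModel
from sklearn import linear_model

# keep features whose Lasso coefficient is (effectively) non-zero
selector = SelectFromModel(linear_model.Lasso(alpha=0.1), threshold=1e-10)
selector.fit(x_all, y)
selected = x_all.columns[selector.get_support()]
print(len(selected), 'features selected')
###Output
_____no_output_____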
tutorials/tutorial-07-graph-node_classification.ipynb | ###Markdown
Node Classification in Graphs In this notebook, we will use *ktrain* to perform node classification on the PubMed Diabetes citation graph. In the PubMed graph, each node represents a paper pertaining to one of three topics: *Diabetes Mellitus - Experimental*, *Diabetes Mellitus - Type 1*, and *Diabetes Mellitus - Type 2*. Links represent citations between papers. The attributes or features assigned to each node are in the form of a vector of words in each paper and their corresponding TF-IDF scores. The dataset is available [here](https://linqs-data.soe.ucsc.edu/public/Pubmed-Diabetes.tgz).*ktrain* expects two files for node classification problems. The first is a comma- or tab-delimited file listing the edges in the graph, where each row contains the node IDs forming the edge. The second is a comma- or tab-delimited file listing the features or attributes associated with each node in the graph. The first column in this file is the node ID and the last column should be a string representing the target or label of the node. All other columns should be numerical features assumed to be standardized appropriately and non-null. We must prepare the raw data to conform to the above before we begin. Preparing the Data The code below will create two files that can be processed directly by *ktrain*:- `/tmp/pubmed-nodes.tab`- `/tmp/pubmed-edges.tab`
###Code
# set this to the location of the downloaded Pubmed-Diabetes data
DATADIR = 'data/pubmed/Pubmed-Diabetes/data'
import os.path
import pandas as pd
import itertools
# process links
edgelist = pd.read_csv(os.path.join(DATADIR, 'Pubmed-Diabetes.DIRECTED.cites.tab'),
skiprows=2, header=None,delimiter='\t')
edgelist.drop(columns=[0,2], inplace=True)
edgelist.columns = ['source', 'target']
# strip the 'paper:' prefix (str.lstrip strips characters, not a prefix)
edgelist['source'] = edgelist['source'].map(lambda x: x.replace('paper:', ''))
edgelist['target'] = edgelist['target'].map(lambda x: x.replace('paper:', ''))
edgelist.head()
edgelist.to_csv('/tmp/pubmed-edges.tab', sep='\t', header=None, index=False )
# process nodes and their attributes
nodes_as_dict = []
with open(os.path.join(os.path.expanduser(DATADIR), "Pubmed-Diabetes.NODE.paper.tab")) as fp:
for line in itertools.islice(fp, 2, None):
line_res = line.split("\t")
pid = line_res[0]
feat_name = ['pid'] + [l.split("=")[0] for l in line_res[1:]][:-1] # delete summary
feat_value = [l.split("=")[1] for l in line_res[1:]][:-1] # delete summary
feat_value = [pid] + [ float(x) for x in feat_value ] # change to numeric from str
row = dict(zip(feat_name, feat_value))
nodes_as_dict.append(row)
colnames = set()
for row in nodes_as_dict:
colnames.update(list(row.keys()))
colnames = list(colnames)
colnames.sort()
colnames.remove('label')
colnames.append('label')
target_dict = {1:'Diabetes_Mellitus-Experimental', 2: 'Diabetes_Mellitus-Type_1', 3:'Diabetes_Mellitus-Type_2', }
with open('/tmp/pubmed-nodes.tab', 'w') as fp:
#fp.write("\t".join(colnames)+'\n')
for row in nodes_as_dict:
feats = []
for col in colnames:
feats.append(row.get(col, 0.0))
feats = [str(feat) for feat in feats]
feats[-1] = round(float(feats[-1]))
feats[-1] = target_dict[feats[-1]]
fp.write("\t".join(feats) + '\n')
###Output
_____no_output_____
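###Markdown
 A quick check that the two files are in the shape ktrain expects: edges with two columns, and nodes with the ID first and the string label in the last column.
###Code
import pandas as pd

edges_check = pd.read_csv('/tmp/pubmed-edges.tab', sep='\t', header=None)
nodes_check = pd.read_csv('/tmp/pubmed-nodes.tab', sep='\t', header=None)
print(edges_check.shape, nodes_check.shape)
print(nodes_check.iloc[0, -1])  # should print one of the three topic labels
###Output
_____no_output_____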
###Markdown
STEP 1: Load and Preprocess Data We will hold out 20% of the nodes as test nodes by setting `holdout_pct=0.2`. Since we specified `holdout_for_inductive=True`, these heldout nodes are removed from the graph in order to simulate making predictions on new nodes added to the graph later (or *inductive inference*). If `holdout_for_inductive=False`, the features (not labels) of these nodes are accessible to the model during training. Of the remaining nodes, 5% will be used for training and the remaining nodes will be used for validation (or *transductive inference*). More information on transductive and inductive inference and the return values `df_holdout` and `G_complete` is provided below.Note that if there are any unlabeled nodes in the graph, these will be automatically used as heldout nodes for which predictions can be made once the model is trained. See the [twitter example notebook](https://github.com/amaiya/ktrain/blob/master/examples/graphs/hateful_twitter_users-GraphSAGE.ipynb) for an example of this.
###Code
import ktrain
from ktrain import graph as gr  # provides graph_nodes_from_csv and friends

(train_data, val_data, preproc,
df_holdout, G_complete) = gr.graph_nodes_from_csv('/tmp/pubmed-nodes.tab',
'/tmp/pubmed-edges.tab',
sample_size=10, holdout_pct=0.2, holdout_for_inductive=True,
train_pct=0.05, sep='\t')
###Output
Largest subgraph statistics: 19717 nodes, 44327 edges
Size of training graph: 15774 nodes
Training nodes: 788
Validation nodes: 14986
Nodes treated as unlabeled for testing/inference: 3943
Size of graph with added holdout nodes: 19717
Holdout node features are not visible during training (inductive_inference)
###Markdown
The `preproc` object includes a reference to the training graph and a dataframe showing the features and target for each node in the graph (both training and validation nodes).
###Code
preproc.df.target.value_counts()
###Output
_____no_output_____
###Markdown
STEP 2: Build a Model and Wrap in Learner Object
###Code
gr.print_node_classifiers()
learner = ktrain.get_learner(model=gr.graph_node_classifier('graphsage', train_data),
train_data=train_data,
val_data=val_data,
batch_size=64)
###Output
Is Multi-Label? False
done
###Markdown
STEP 3: Estimate LR Given the small number of batches per epoch, a larger number of epochs is required to estimate the learning rate. We will cap it at 100 here.
###Code
learner.lr_find(max_epochs=100)
learner.lr_plot()
###Output
_____no_output_____
###Markdown
STEP 4: Train the ModelWe will train the model using `autofit`, which uses a triangular learning rate policy. The training will automatically stop when the validation loss no longer improves.
###Code
learner.autofit(0.01)
###Output
early_stopping automatically enabled at patience=5
reduce_on_plateau automatically enabled at patience=2
begin training using triangular learning rate policy with max lr of 0.01...
Epoch 1/1024
13/13 [==============================] - 6s 484ms/step - loss: 1.0057 - acc: 0.4807 - val_loss: 0.8324 - val_acc: 0.6990
Epoch 2/1024
13/13 [==============================] - 6s 425ms/step - loss: 0.8001 - acc: 0.7077 - val_loss: 0.6512 - val_acc: 0.7795
Epoch 3/1024
13/13 [==============================] - 6s 438ms/step - loss: 0.6322 - acc: 0.8045 - val_loss: 0.5574 - val_acc: 0.7875
Epoch 4/1024
13/13 [==============================] - 6s 430ms/step - loss: 0.5251 - acc: 0.8237 - val_loss: 0.5077 - val_acc: 0.8106
Epoch 5/1024
13/13 [==============================] - 6s 476ms/step - loss: 0.4407 - acc: 0.8600 - val_loss: 0.5061 - val_acc: 0.8086
Epoch 6/1024
13/13 [==============================] - 6s 454ms/step - loss: 0.3857 - acc: 0.8697 - val_loss: 0.5033 - val_acc: 0.8046
Epoch 7/1024
13/13 [==============================] - 6s 453ms/step - loss: 0.3682 - acc: 0.8528 - val_loss: 0.4966 - val_acc: 0.8058
Epoch 8/1024
13/13 [==============================] - 6s 462ms/step - loss: 0.3110 - acc: 0.8938 - val_loss: 0.4791 - val_acc: 0.8254
Epoch 9/1024
13/13 [==============================] - 6s 444ms/step - loss: 0.2822 - acc: 0.9035 - val_loss: 0.4873 - val_acc: 0.8160
Epoch 10/1024
13/13 [==============================] - 6s 443ms/step - loss: 0.2734 - acc: 0.9035 - val_loss: 0.4955 - val_acc: 0.8101
Epoch 00010: Reducing Max LR on Plateau: new max lr will be 0.005 (if not early_stopping).
Epoch 11/1024
13/13 [==============================] - 6s 435ms/step - loss: 0.2361 - acc: 0.9264 - val_loss: 0.4898 - val_acc: 0.8214
Epoch 12/1024
13/13 [==============================] - 6s 498ms/step - loss: 0.2292 - acc: 0.9155 - val_loss: 0.5074 - val_acc: 0.8174
Epoch 00012: Reducing Max LR on Plateau: new max lr will be 0.0025 (if not early_stopping).
Epoch 13/1024
13/13 [==============================] - 6s 442ms/step - loss: 0.1969 - acc: 0.9421 - val_loss: 0.5203 - val_acc: 0.8132
Restoring model weights from the end of the best epoch
Epoch 00013: early stopping
Weights from best epoch have been loaded into model.
###Markdown
Evaluate Validate
###Code
learner.validate(class_names=preproc.get_classes())
###Output
precision recall f1-score support
Diabetes_Mellitus-Experimental 0.76 0.82 0.79 3113
Diabetes_Mellitus-Type_1 0.84 0.81 0.82 5943
Diabetes_Mellitus-Type_2 0.85 0.84 0.85 5930
accuracy 0.83 14986
macro avg 0.82 0.82 0.82 14986
weighted avg 0.83 0.83 0.83 14986
###Markdown
Create a Predictor Object
###Code
p = ktrain.get_predictor(learner.model, preproc)
###Output
_____no_output_____
###Markdown
Transductive Inference: Making Predictions for Unlabeled Nodes in Original Training GraphIn transductive inference, we make predictions for unlabeled nodes whose features are visible during training. Making predictions on validation nodes in the training graph is transductive inference.Let's see how well our prediction is for the first validation example.
###Code
p.predict_transductive(val_data.ids[0:1], return_proba=True)
val_data[0][1][0]
###Output
_____no_output_____
###Markdown
Let's make predictions for all validation nodes and visually compare some of them with ground truth.
###Code
y_pred = p.predict_transductive(val_data.ids, return_proba=False)
y_true = preproc.df[preproc.df.index.isin(val_data.ids)]['target'].values
import pandas as pd
pd.DataFrame(zip(y_true, y_pred), columns=['Ground Truth', 'Predicted']).head()
###Output
_____no_output_____
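###Markdown
 For a single number to compare against the inductive result later, here is the transductive accuracy over all validation nodes, using the predictions just computed.
###Code
import numpy as np

# fraction of validation nodes predicted correctly
(np.array(y_pred) == y_true).mean()
###Output
_____no_output_____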
###Markdown
Inductive Inference: Making Predictions for New Nodes Not in the Original Training GraphIn inductive inference, we make predictions for entirely new nodes that were not present in the traning graph. The features or attributes of these nodes were **not** visible during training. We consider a graph where the heldout nodes are added back into the training graph, which yields the original graph of 19,717 nodes. This graph, `G_complete` was returned as the last return value of `graph_nodes_from_csv`.
###Code
y_pred = p.predict_inductive(df_holdout, G_complete, return_proba=False)
y_true = df_holdout['target'].values
import numpy as np
(y_true == np.array(y_pred)).mean()
###Output
_____no_output_____ |
autovc_mod/vocals_synthesis_v3.ipynb | ###Markdown
Accompaniment Generator
###Code
g_accom = Generator(160, 0, 512, 20)
g_accom.load_state_dict(torch.load('model_latest_accom.pth'))
###Output
_____no_output_____
###Markdown
Dataset
###Code
dataset = SpecsCombined('~/Data/segments_combined', len_crop=860)
###Output
_____no_output_____
###Markdown
Data Loading
###Code
accom_spec, vocals_spec = dataset[500]
accom_spec = accom_spec.unsqueeze(0)
vocals_spec = vocals_spec.unsqueeze(0)
print(accom_spec.shape, vocals_spec.shape)
_, vocals_spec_2 = dataset[2]
vocals_spec_2 = vocals_spec_2.unsqueeze(0)
###Output
_____no_output_____
###Markdown
Accompaniment Latent Vector Generation
###Code
accom_vec = g_accom(accom_spec, return_encoder_output=True)
accom_vec.shape
###Output
_____no_output_____
###Markdown
Random Input
###Code
x = torch.randn(1, 860, 80)
# x = torch.sin(x)
plt.imshow(x.squeeze(0))
# x_noise = torch.FloatTensor(1, 860, 320).uniform_(-0.06, 0.06)
# plt.imshow(x_noise.squeeze(0))
###Output
_____no_output_____
###Markdown
Real Input
###Code
x = np.load('example_vocals-feats.npy')
x = torch.from_numpy(x)
x = x[:860, :].unsqueeze(0)
x.shape
###Output
_____no_output_____
###Markdown
Vocals Network
###Code
g_vocals = GeneratorV2(160, 0, 512, 20, 860, 128)
g_vocals.load_state_dict(torch.load('model_lowest_val_vae.pth'))
###Output
_____no_output_____
###Markdown
Random Latent Vector Generation
###Code
condition_vec = g_vocals.cond_proj(accom_vec.flatten(start_dim=1))
latent_vec = torch.cat((torch.rand(1, 128), condition_vec), dim=-1)
###Output
_____no_output_____
###Markdown
Seeded Latent Vector Generation
###Code
condition_vec = g_vocals.cond_proj(accom_vec.flatten(start_dim=1))
vocal_vec_1 = g_vocals.vocals_proj(g_vocals(vocals_spec, return_encoder_output=True).flatten(start_dim=1))
vocal_vec_2 = g_vocals.vocals_proj(g_vocals(vocals_spec_2, return_encoder_output=True).flatten(start_dim=1))
# # Take the average of the two
# vocal_vec = (vocal_vec_1 + vocal_vec_2) / 2
# vocal_vec = (vocal_vec_1 * 0.5) + (vocal_vec_2 * 0.5)
# vocal_vec = vocal_vec_1 + (vocal_vec_2 * 0.5)
latent_vec = torch.cat((vocal_vec_1, condition_vec), dim=-1)
###Output
_____no_output_____
###Markdown
Encoding
###Code
# Reparameterization trick
mu = g_vocals.mu_fc(latent_vec)
logvar = g_vocals.logvar_fc(latent_vec)
std = torch.exp(logvar / 2)
q = torch.distributions.Normal(mu, std)
z = q.rsample()
encoder_outputs = g_vocals.latent_proj(z)
encoder_outputs = encoder_outputs.reshape(1, 860, 320)
plt.imshow(vocals_spec.squeeze(0))
###Output
_____no_output_____
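###Markdown
 A note on the cell above: sampling uses the reparameterization trick. Rather than drawing $z \sim \mathcal{N}(\mu, \sigma^2)$ directly, the code computes $\sigma = \exp(\mathrm{logvar}/2)$ and draws $z = \mu + \sigma\epsilon$ with $\epsilon \sim \mathcal{N}(0, 1)$, which is exactly what `Normal(mu, std).rsample()` does; this keeps $z$ differentiable with respect to $\mu$ and $\sigma$. The equivalent manual computation, assuming `mu` and `std` from above:
###Code
# manual reparameterization, equivalent to q.rsample()
eps = torch.randn_like(std)
z_manual = mu + std * eps
###Output
_____no_output_____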
###Markdown
Synthesis
###Code
mel_outputs = g_vocals.decoder(encoder_outputs)
mel_outputs_postnet = g_vocals.postnet(mel_outputs.transpose(2,1))
mel_outputs_postnet = mel_outputs + mel_outputs_postnet.transpose(2,1)
plt.imshow(mel_outputs_postnet.squeeze(0).squeeze(0).detach().numpy())
###Output
_____no_output_____
###Markdown
WaveNet
###Code
def build_model():
if is_mulaw_quantize(hparams.input_type):
if hparams.out_channels != hparams.quantize_channels:
            raise RuntimeError(
                "out_channels must equal quantize_channels if input_type is 'mulaw-quantize'")
if hparams.upsample_conditional_features and hparams.cin_channels < 0:
s = "Upsample conv layers were specified while local conditioning disabled. "
s += "Notice that upsample conv layers will never be used."
print(s)
upsample_params = hparams.upsample_params
upsample_params["cin_channels"] = hparams.cin_channels
upsample_params["cin_pad"] = hparams.cin_pad
model = WaveNet(
out_channels=hparams.out_channels,
layers=hparams.layers,
stacks=hparams.stacks,
residual_channels=hparams.residual_channels,
gate_channels=hparams.gate_channels,
skip_out_channels=hparams.skip_out_channels,
cin_channels=hparams.cin_channels,
gin_channels=hparams.gin_channels,
n_speakers=hparams.n_speakers,
dropout=hparams.dropout,
kernel_size=hparams.kernel_size,
cin_pad=hparams.cin_pad,
upsample_conditional_features=hparams.upsample_conditional_features,
upsample_params=upsample_params,
scalar_input=is_scalar_input(hparams.input_type),
output_distribution=hparams.output_distribution,
)
return model
def batch_wavegen(model, c=None, g=None, fast=True, tqdm=tqdm):
assert c is not None
B = c.shape[0]
model.eval()
if fast:
model.make_generation_fast_()
# Transform data to GPU
g = None if g is None else g.to(device)
c = None if c is None else c.to(device)
if hparams.upsample_conditional_features:
length = (c.shape[-1] - hparams.cin_pad * 2) * audio.get_hop_size()
else:
        # already duplicated
length = c.shape[-1]
with torch.no_grad():
y_hat = model.incremental_forward(
c=c, g=g, T=length, tqdm=tqdm, softmax=True, quantize=True,
log_scale_min=hparams.log_scale_min)
if is_mulaw_quantize(hparams.input_type):
# needs to be float since mulaw_inv returns in range of [-1, 1]
y_hat = y_hat.max(1)[1].view(B, -1).float().cpu().data.numpy()
for i in range(B):
y_hat[i] = P.inv_mulaw_quantize(y_hat[i], hparams.quantize_channels - 1)
elif is_mulaw(hparams.input_type):
y_hat = y_hat.view(B, -1).cpu().data.numpy()
for i in range(B):
y_hat[i] = P.inv_mulaw(y_hat[i], hparams.quantize_channels - 1)
else:
y_hat = y_hat.view(B, -1).cpu().data.numpy()
if hparams.postprocess is not None and hparams.postprocess not in ["", "none"]:
for i in range(B):
y_hat[i] = getattr(audio, hparams.postprocess)(y_hat[i])
if hparams.global_gain_scale > 0:
for i in range(B):
y_hat[i] /= hparams.global_gain_scale
return y_hat
def to_int16(x):
if x.dtype == np.int16:
return x
assert x.dtype == np.float32
assert x.min() >= -1 and x.max() <= 1.0
return (x * 32767).astype(np.int16)
device = torch.device("cuda")
model = build_model().to(device)
checkpoint = torch.load("/wavenet_vocoder/checkpoints/checkpoint_latest_ema.pth")
model.load_state_dict(checkpoint["state_dict"])
# outputs = (mel_outputs_postnet/2) + (accom_spec/2)
# c = outputs.squeeze(0).detach()
num_chunks = 20
# Original vocals
# c = vocals_spec.squeeze(0).detach()
# Vocal output
c = mel_outputs_postnet.squeeze(0).detach()
# Accom output
# c = accom_spec.squeeze(0).detach()
# Split c into chunks across the 0th dimension
length = c.shape[0]
c = c.T
c_chunks = c.reshape(80, length//num_chunks, num_chunks)
c_chunks = c_chunks.permute(1, 0, 2)
c = c_chunks
# # Resize c to 1, 80, 866
# print(c.shape)
# c = TF.resize(c, (80, 866))
# c = c[:, :, :50]
# print(c.shape)
# Generate
y_hats = batch_wavegen(model, c=c, g=None, fast=True, tqdm=tqdm)
y_hats = torch.from_numpy(y_hats).flatten().unsqueeze(0).numpy()
gen = y_hats[0]
gen = np.clip(gen, -1.0, 1.0)
wavfile.write('test.wav', hparams.sample_rate, to_int16(gen))
# Save the vocals models
# torch.save(g_vocals.state_dict(), './model_v3_7k.pth')
###Output
_____no_output_____
###Markdown
t-SNE Visualization of Song Distribution
###Code
device = 'cuda'
g_accom.to(device)
g_vocals.to(device)
vecs = []
for i in tqdm(range(len(dataset))):
accom_spec, vocals_spec = dataset[i]
accom_spec = accom_spec.unsqueeze(0).to(device)
vocals_spec = vocals_spec.unsqueeze(0).to(device)
accom_vec = g_accom(accom_spec, return_encoder_output=True)
condition_vec = g_vocals.cond_proj(accom_vec.flatten(start_dim=1))
vocal_vec = g_vocals.vocals_proj(g_vocals(vocals_spec, return_encoder_output=True).flatten(start_dim=1))
latent_vec = torch.cat((vocal_vec, condition_vec), dim=-1)
vecs.append(latent_vec.detach().cpu().numpy())
# Generate labels from the file list in the dataset
file_list = dataset.files
name_to_label = {}
labels = []
for file in file_list:
name = file.split('/')[-1].split('_')[0]
if name not in name_to_label:
name_to_label[name] = len(name_to_label)
labels.append(name_to_label[name])
# Stack numpy list into a single numpy array
vec_stack = np.vstack(vecs)
# Get a list of numbers from 0 - 178 as a numpy array
# num_list = np.arange(0, 178)
print("Number of songs:", len(np.unique(labels)))
print("Number of labels:", len(labels))
print("Number of vectors:", len(vecs))
# Filter vectors for first 10 songs
num_songs = 80
offset = 70
filtered_vecs = []
filtered_labels = []
for i in range(len(labels)):
if labels[i] < num_songs and labels[i] >= offset:
filtered_vecs.append(vec_stack[i])
filtered_labels.append(labels[i])
filtered_vec_stack = np.vstack(filtered_vecs)
filtered_labels = np.array(filtered_labels)
n_components = 2
tsne = TSNE(n_components, learning_rate='auto', init='pca')
tsne_result = tsne.fit_transform(filtered_vec_stack)
tsne_result.shape
tsne_result_df = pd.DataFrame({'tsne_1': tsne_result[:,0], 'tsne_2': tsne_result[:,1], 'label': filtered_labels})
fig, ax = plt.subplots(1)
sns.scatterplot(x='tsne_1', y='tsne_2', hue='label', data=tsne_result_df, ax=ax,s=120, palette="tab10")
lim = (tsne_result.min()-5, tsne_result.max()+5)
ax.set_xlim(lim)
ax.set_ylim(lim)
ax.set_aspect('equal')
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.0)
###Output
_____no_output_____
###Markdown
Album-based Labelling
###Code
song_meta = pd.read_csv('song_meta.csv')
song, artist, writer, album, year, ref = song_meta.iloc[0]
# Match song title to album
name_to_album = {}
for i in range(len(name_to_label.keys())):
song_name = list(name_to_label.keys())[i].lower()
# Loop through all songs in song_meta and store the album name
# if a song name matches the song name in the dataset
for j in range(len(song_meta)):
song, artist, writer, album, year, ref = song_meta.iloc[j]
# if album not in ('1989', 'Taylor Swift'):
# continue
song = song.lower().replace('"', '')
album = album.replace('(Deluxe edition)', '').split(' ')[0]
if song in song_name:
name_to_album[song_name] = album
album_labels = []
album_vecs = []
for i in range(len(file_list)):
file = file_list[i]
vec = vecs[i]
name = file.split('/')[-1].split('_')[0]
name = name.lower()
if name in name_to_album:
album_labels.append(name_to_album[name])
album_vecs.append(vec)
album_vecs = np.vstack(album_vecs)
n_components = 2
tsne = TSNE(n_components, learning_rate='auto', init='pca')
tsne_result = tsne.fit_transform(album_vecs)
tsne_result.shape
tsne_result_df = pd.DataFrame({'tsne_1': tsne_result[:,0], 'tsne_2': tsne_result[:,1], 'label': album_labels})
fig, ax = plt.subplots(1)
sns.scatterplot(x='tsne_1', y='tsne_2', hue='label', data=tsne_result_df, ax=ax,s=120)
lim = (tsne_result.min()-5, tsne_result.max()+5)
ax.set_xlim(lim)
ax.set_ylim(lim)
ax.set_aspect('equal')
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.0)
###Output
_____no_output_____
###Markdown
Mean-Vector Song Labelling
###Code
name_to_mean_vec = {}
# enumerate so each vector is paired with its own file
for i, file in enumerate(file_list):
    name = file.split('/')[-1].split('_')[0]
    if name not in name_to_mean_vec:
        name_to_mean_vec[name] = []
    name_to_mean_vec[name].append(vecs[i])
mean_vec_labels = []
mean_vecs = []
for name in name_to_mean_vec:
mean_vec_labels.append(name)
mean_vecs.append(np.mean(name_to_mean_vec[name], axis=0))
mean_vecs = np.vstack(mean_vecs)
mean_vecs.shape, len(mean_vec_labels)
# Filter vectors for first 10 songs
num_songs = 170
filtered_mean_vec_labels = mean_vec_labels[:num_songs]
filtered_mean_vecs = mean_vecs[:num_songs]
n_components = 2
tsne = TSNE(n_components, learning_rate=200, init='pca')
tsne_result = tsne.fit_transform(filtered_mean_vecs)
tsne_result.shape
tsne_result_df = pd.DataFrame({'tsne_1': tsne_result[:,0], 'tsne_2': tsne_result[:,1], 'label': filtered_mean_vec_labels})
fig, ax = plt.subplots(1)
sns.scatterplot(x='tsne_1', y='tsne_2', hue='label', data=tsne_result_df, ax=ax,s=120)
lim = (tsne_result.min()-100, tsne_result.max()+100)
ax.set_xlim(lim)
ax.set_ylim(lim)
ax.set_aspect('equal')
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.0)
for i in range(len(tsne_result)):
print(i, tsne_result[i])
start_id = 34
print(filtered_mean_vec_labels[start_id], filtered_mean_vec_labels[start_id + 1])
###Output
_____no_output_____
###Markdown
Mean-Vectors Labelled by Album
###Code
album_mean_vec_labels = []
album_mean_vecs = []
for i in range(len(mean_vec_labels)):
name = mean_vec_labels[i]
name = name.lower()
vec = mean_vecs[i]
if name in name_to_album:
album_mean_vec_labels.append(name_to_album[name])
album_mean_vecs.append(vec)
album_mean_vecs = np.vstack(album_mean_vecs)
n_components = 2
tsne = TSNE(n_components, learning_rate=200, init='pca')
tsne_result = tsne.fit_transform(album_mean_vecs)
print(tsne_result.shape)
tsne_result_df = pd.DataFrame({'tsne_1': tsne_result[:,0], 'tsne_2': tsne_result[:,1], 'label': album_mean_vec_labels})
fig, ax = plt.subplots(1)
sns.scatterplot(x='tsne_1', y='tsne_2', hue='label', data=tsne_result_df, ax=ax,s=120, palette="tab10")
lim = (tsne_result.min()-250, tsne_result.max()+250)
ax.set_xlim(lim)
ax.set_ylim(lim)
ax.set_aspect('equal')
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.0)
album_mean_vecs.shape, len(album_mean_vec_labels)
plt.scatter(tsne_result[:,0], tsne_result[:,1], cmap='tab10')
###Output
_____no_output_____ |
docs/source/user_guide/extending/extending_elementwise_expr.ipynb | ###Markdown
Extending Ibis Part 1: Adding a New Elementwise Expression This notebook will show you how to add a new elementwise operation to an existing backend.We are going to add `julianday`, a function supported by the SQLite database, to the SQLite Ibis backend.The Julian day of a date, is the number of days since January 1st, 4713 BC. For more information check the [Julian day](https://en.wikipedia.org/wiki/Julian_day) wikipedia page. Step 1: Define the Operation Let's define the `julianday` operation as a function that takes one string input argument and returns a float.```pythondef julianday(date: str) -> float: """Julian date"""```
###Code
import ibis.expr.datatypes as dt
import ibis.expr.rules as rlz
from ibis.expr.operations import ValueOp, Arg
class JulianDay(ValueOp):
arg = Arg(rlz.string)
output_type = rlz.shape_like('arg', 'float')
###Output
_____no_output_____
###Markdown
We just defined a `JulianDay` class that takes one argument of type string and returns a float. Step 2: Define the API Because we know the output type of the operation, to make an expression out of ``JulianDay`` we simply need to construct it and call its `ibis.expr.types.Node.to_expr` method. We still need to add a method to `StringValue` and `BinaryValue` (this needs to work on both scalars and columns). When you add a method to any of the expression classes whose name matches `*Value`, both the scalar and column child classes will pick it up, making it easy to define operations for both scalars and columns in one place. We can do this by defining a function and assigning it to the appropriate class of expressions.
###Code
from ibis.expr.types import StringValue, BinaryValue
def julianday(string_value):
return JulianDay(string_value).to_expr()
StringValue.julianday = julianday
###Output
_____no_output_____
###Markdown
Interlude: Create some expressions with `julianday`
###Code
import ibis
t = ibis.table([('string_col', 'string')], name='t')
t.string_col.julianday()
###Output
_____no_output_____
###Markdown
Step 3: Turn the Expression into SQL
###Code
import sqlalchemy as sa
@ibis.sqlite.add_operation(JulianDay)
def _julianday(translator, expr):
# pull out the arguments to the expression
arg, = expr.op().args
# compile the argument
compiled_arg = translator.translate(arg)
# return a SQLAlchemy expression that calls into the SQLite julianday function
return sa.func.julianday(compiled_arg)
###Output
_____no_output_____
###Markdown
Step 4: Putting it all Together
###Code
import pathlib
import ibis
db_fname = str(pathlib.Path().resolve().parent.parent / 'tutorial' / 'data' / 'geography.db')
con = ibis.sqlite.connect(db_fname)
###Output
_____no_output_____
###Markdown
Create and execute a `julianday` expression
###Code
independence = con.table('independence')
independence
day = independence.independence_date.cast('string')
day
julianday_expr = day.julianday()
julianday_expr
sql_expr = julianday_expr.compile()
print(sql_expr)
result = julianday_expr.execute()
result.head()
###Output
_____no_output_____
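###Markdown
 As a sanity check on a single value: for dates at midnight, the Julian day equals Python's proleptic Gregorian ordinal plus the fixed offset 1721424.5, so one result can be verified against the standard library.
###Code
from datetime import date

# Julian day at midnight = proleptic Gregorian ordinal + 1721424.5
date(2010, 3, 14).toordinal() + 1721424.5
###Output
_____no_output_____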
###Markdown
Because we've defined our operation on `StringValue`, and not just on `StringColumn` we get operations on both string scalars *and* string columns for free
###Code
scalar = ibis.literal('2010-03-14')
scalar
julianday_scalar = scalar.julianday()
con.execute(julianday_scalar)
###Output
_____no_output_____
###Markdown
Extending Ibis Part 1: Adding a New Elementwise Expression This notebook will show you how to add a new elementwise operation to an existing backend.We are going to add `julianday`, a function supported by the SQLite database, to the SQLite Ibis backend.The Julian day of a date, is the number of days since January 1st, 4713 BC. For more information check the [Julian day](https://en.wikipedia.org/wiki/Julian_day) wikipedia page. Step 1: Define the Operation Let's define the `julianday` operation as a function that takes one string input argument and returns a float.```pythondef julianday(date: str) -> float: """Julian date"""```
###Code
import ibis.expr.datatypes as dt
import ibis.expr.rules as rlz
from ibis.expr.operations import ValueOp
class JulianDay(ValueOp):
arg = rlz.string
output_type = rlz.shape_like('arg', 'float')
###Output
_____no_output_____
###Markdown
We just defined a `JulianDay` class that takes one argument of type string or binary, and returns a float. Step 2: Define the API Because we know the output type of the operation, to make an expression out of ``JulianDay`` we simply need to construct it and call its `ibis.expr.types.Node.to_expr` method.We still need to add a method to `StringValue` and `BinaryValue` (this needs to work on both scalars and columns).When you add a method to any of the expression classes whose name matches `*Value` both the scalar and column child classes will pick it up, making it easy to define operations for both scalars and columns in one place.We can do this by defining a function and assigning it to the appropriate classof expressions.
###Code
from ibis.expr.types import StringValue, BinaryValue
def julianday(string_value):
return JulianDay(string_value).to_expr()
StringValue.julianday = julianday
###Output
_____no_output_____
###Markdown
Interlude: Create some expressions with `sha1`
###Code
import ibis
t = ibis.table([('string_col', 'string')], name='t')
t.string_col.julianday()
###Output
_____no_output_____
###Markdown
Step 3: Turn the Expression into SQL
###Code
import sqlalchemy as sa
@ibis.sqlite.add_operation(JulianDay)
def _julianday(translator, expr):
# pull out the arguments to the expression
arg, = expr.op().args
# compile the argument
compiled_arg = translator.translate(arg)
# return a SQLAlchemy expression that calls into the SQLite julianday function
return sa.func.julianday(compiled_arg)
###Output
_____no_output_____
###Markdown
Step 4: Putting it all Together
###Code
import pathlib
import ibis
db_fname = str(pathlib.Path().resolve().parent.parent / 'tutorial' / 'data' / 'geography.db')
con = ibis.sqlite.connect(db_fname)
###Output
_____no_output_____
###Markdown
Create and execute a `julianday` expression
###Code
independence = con.table('independence')
independence
day = independence.independence_date.cast('string')
day
julianday_expr = day.julianday()
julianday_expr
sql_expr = julianday_expr.compile()
print(sql_expr)
result = julianday_expr.execute()
result.head()
###Output
_____no_output_____
###Markdown
Because we've defined our operation on `StringValue`, and not just on `StringColumn` we get operations on both string scalars *and* string columns for free
###Code
scalar = ibis.literal('2010-03-14')
scalar
julianday_scalar = scalar.julianday()
con.execute(julianday_scalar)
###Output
_____no_output_____
###Markdown
Extending Ibis Part 1: Adding a New Elementwise Expression This notebook will show you how to add a new elementwise operation to an existing backend.We are going to add `julianday`, a function supported by the SQLite database, to the SQLite Ibis backend.The Julian day of a date, is the number of days since January 1st, 4713 BC. For more information check the [Julian day](https://en.wikipedia.org/wiki/Julian_day) wikipedia page. Step 1: Define the Operation Let's define the `julianday` operation as a function that takes one string input argument and returns a float.```pythondef julianday(date: str) -> float: """Julian date"""```
###Code
import ibis.expr.datatypes as dt
import ibis.expr.rules as rlz
from ibis.expr.operations import ValueOp
from ibis.expr.signature import Argument as Arg
class JulianDay(ValueOp):
arg = Arg(rlz.string)
output_type = rlz.shape_like('arg', 'float')
###Output
_____no_output_____
###Markdown
We just defined a `JulianDay` class that takes one argument of type string or binary, and returns a float. Step 2: Define the API Because we know the output type of the operation, to make an expression out of ``JulianDay`` we simply need to construct it and call its `ibis.expr.types.Node.to_expr` method.We still need to add a method to `StringValue` and `BinaryValue` (this needs to work on both scalars and columns).When you add a method to any of the expression classes whose name matches `*Value` both the scalar and column child classes will pick it up, making it easy to define operations for both scalars and columns in one place.We can do this by defining a function and assigning it to the appropriate classof expressions.
###Code
from ibis.expr.types import StringValue, BinaryValue
def julianday(string_value):
return JulianDay(string_value).to_expr()
StringValue.julianday = julianday
###Output
_____no_output_____
###Markdown
Interlude: Create some expressions with `sha1`
###Code
import ibis
t = ibis.table([('string_col', 'string')], name='t')
t.string_col.julianday()
###Output
_____no_output_____
###Markdown
Step 3: Turn the Expression into SQL
###Code
import sqlalchemy as sa
@ibis.sqlite.add_operation(JulianDay)
def _julianday(translator, expr):
# pull out the arguments to the expression
arg, = expr.op().args
# compile the argument
compiled_arg = translator.translate(arg)
# return a SQLAlchemy expression that calls into the SQLite julianday function
return sa.func.julianday(compiled_arg)
###Output
_____no_output_____
###Markdown
Step 4: Putting it all Together
###Code
import pathlib
import ibis
db_fname = str(pathlib.Path().resolve().parent.parent / 'tutorial' / 'data' / 'geography.db')
con = ibis.sqlite.connect(db_fname)
###Output
_____no_output_____
###Markdown
Create and execute a `julianday` expression
###Code
independence = con.table('independence')
independence
day = independence.independence_date.cast('string')
day
julianday_expr = day.julianday()
julianday_expr
sql_expr = julianday_expr.compile()
print(sql_expr)
result = julianday_expr.execute()
result.head()
###Output
_____no_output_____
###Markdown
Because we've defined our operation on `StringValue`, and not just on `StringColumn` we get operations on both string scalars *and* string columns for free
###Code
scalar = ibis.literal('2010-03-14')
scalar
julianday_scalar = scalar.julianday()
con.execute(julianday_scalar)
###Output
_____no_output_____
###Markdown
Extending Ibis Part 1: Adding a New Elementwise Expression This notebook will show you how to add a new elementwise operation to an existing backend.We are going to add `julianday`, a function supported by the SQLite database, to the SQLite Ibis backend.The Julian day of a date, is the number of days since January 1st, 4713 BC. For more information check the [Julian day](https://en.wikipedia.org/wiki/Julian_day) wikipedia page. Step 1: Define the Operation Let's define the `julianday` operation as a function that takes one string input argument and returns a float.```pythondef julianday(date: str) -> float: """Julian date"""```
###Code
import ibis.expr.datatypes as dt
import ibis.expr.rules as rlz
from ibis.expr.operations import ValueOp, Arg
class JulianDay(ValueOp):
arg = Arg(rlz.string)
output_type = rlz.shape_like('arg', 'float')
###Output
_____no_output_____
###Markdown
We just defined a `JulianDay` class that takes one argument of type string or binary, and returns a float. Step 2: Define the API Because we know the output type of the operation, to make an expression out of ``JulianDay`` we simply need to construct it and call its `ibis.expr.types.Node.to_expr` method.We still need to add a method to `StringValue` and `BinaryValue` (this needs to work on both scalars and columns).When you add a method to any of the expression classes whose name matches `*Value` both the scalar and column child classes will pick it up, making it easy to define operations for both scalars and columns in one place.We can do this by defining a function and assigning it to the appropriate classof expressions.
###Code
from ibis.expr.types import StringValue, BinaryValue
def julianday(string_value):
return JulianDay(string_value).to_expr()
StringValue.julianday = julianday
###Output
_____no_output_____
###Markdown
Interlude: Create some expressions with `sha1`
###Code
import ibis
t = ibis.table([('string_col', 'string')], name='t')
t.string_col.julianday()
###Output
_____no_output_____
###Markdown
Step 3: Turn the Expression into SQL
###Code
import sqlalchemy as sa
@ibis.sqlite.compiler.compiles(JulianDay)
def compile_julianday(translator, expr):
# pull out the arguments to the expression
arg, = expr.op().args
# compile the argument
compiled_arg = translator.translate(arg)
# return a SQLAlchemy expression that calls into the SQLite julianday function
return sa.func.julianday(compiled_arg)
###Output
_____no_output_____
###Markdown
Step 4: Putting it all Together
###Code
import pathlib
import ibis
db_fname = str(pathlib.Path().resolve().parent.parent / 'tutorial' / 'data' / 'geography.db')
con = ibis.sqlite.connect(db_fname)
###Output
_____no_output_____
###Markdown
Create and execute a `julianday` expression
###Code
independence = con.table('independence')
independence
day = independence.independence_date.cast('string')
day
julianday_expr = day.julianday()
julianday_expr
sql_expr = julianday_expr.compile()
print(sql_expr)
result = julianday_expr.execute()
result.head()
###Output
_____no_output_____
###Markdown
Because we've defined our operation on `StringValue`, and not just on `StringColumn` we get operations on both string scalars *and* string columns for free
###Code
scalar = ibis.literal('2010-03-14')
scalar
julianday_scalar = scalar.julianday()
con.execute(julianday_scalar)
###Output
_____no_output_____
###Markdown
Extending Ibis Part 1: Adding a New Elementwise Expression This notebook will show you how to add a new elementwise operation to an existing backend.We are going to add `julianday`, a function supported by the SQLite database, to the SQLite Ibis backend.The Julian day of a date, is the number of days since January 1st, 4713 BC. For more information check the [Julian day](https://en.wikipedia.org/wiki/Julian_day) wikipedia page. Step 1: Define the Operation Let's define the `julianday` operation as a function that takes one string input argument and returns a float.```pythondef julianday(date: str) -> float: """Julian date"""```
###Code
import ibis.expr.datatypes as dt
import ibis.expr.rules as rlz
from ibis.expr.operations import ValueOp, Arg
class JulianDay(ValueOp):
arg = Arg(rlz.string)
output_type = rlz.shape_like('arg', 'float')
###Output
_____no_output_____
###Markdown
We just defined a `JulianDay` class that takes one argument of type string or binary, and returns a float. Step 2: Define the API Because we know the output type of the operation, to make an expression out of ``JulianDay`` we simply need to construct it and call its `ibis.expr.types.Node.to_expr` method.We still need to add a method to `StringValue` and `BinaryValue` (this needs to work on both scalars and columns).When you add a method to any of the expression classes whose name matches `*Value` both the scalar and column child classes will pick it up, making it easy to define operations for both scalars and columns in one place.We can do this by defining a function and assigning it to the appropriate classof expressions.
###Code
from ibis.expr.types import StringValue, BinaryValue
def julianday(string_value):
return JulianDay(string_value).to_expr()
StringValue.julianday = julianday
###Output
_____no_output_____
###Markdown
Interlude: Create some expressions with `julianday`
###Code
import ibis
t = ibis.table([('string_col', 'string')], name='t')
t.string_col.julianday()
###Output
_____no_output_____
###Markdown
Step 3: Turn the Expression into SQL
###Code
import sqlalchemy as sa
@ibis.backends.sqlite.compiler.compiles(JulianDay)
def compile_julianday(translator, expr):
# pull out the arguments to the expression
arg, = expr.op().args
# compile the argument
compiled_arg = translator.translate(arg)
# return a SQLAlchemy expression that calls into the SQLite julianday function
return sa.func.julianday(compiled_arg)
###Output
_____no_output_____
###Markdown
Step 4: Putting it all Together
###Code
import pathlib
import ibis
db_fname = str(pathlib.Path().resolve().parent.parent / 'tutorial' / 'data' / 'geography.db')
con = ibis.sqlite.connect(db_fname)
###Output
_____no_output_____
###Markdown
Create and execute a `julianday` expression
###Code
independence = con.table('independence')
independence
day = independence.independence_date.cast('string')
day
julianday_expr = day.julianday()
julianday_expr
sql_expr = julianday_expr.compile()
print(sql_expr)
result = julianday_expr.execute()
result.head()
###Output
_____no_output_____
###Markdown
Because we've defined our operation on `StringValue`, and not just on `StringColumn` we get operations on both string scalars *and* string columns for free
###Code
scalar = ibis.literal('2010-03-14')
scalar
julianday_scalar = scalar.julianday()
con.execute(julianday_scalar)
###Output
_____no_output_____ |
5.deployment.ipynb | ###Markdown
Module 5. Amazon SageMaker Deployment ---In this module, we will learn how to deploy a hosted endpoint on SageMaker. AWS Managed Inference Container SageMaker inference ships pre-built deployment containers suited to each framework: TensorFlow Serving for TensorFlow, torchserve for PyTorch, MMS (Multi Model Server) for MXNet, and Flask for scikit-learn. PyTorch containers previously embedded MMS, but since late 2020 they have embedded torchserve, which Amazon and Facebook developed jointly. When a deployment container is launched, the serve command, which runs a RESTful API that accepts HTTP inference requests, is executed automatically and the endpoint starts. When the endpoint starts, SageMaker loads the external model artifacts, data, and other configuration information available to the Docker container into the /opt/ml folder of the deployment instance. The Dockerfiles are published as open source, and AWS provides a wide range of versions, from older releases to the latest.See the links below for each framework's Dockerfiles.- TensorFlow containers: https://github.com/aws/sagemaker-tensorflow-containers - PyTorch container: https://github.com/aws/sagemaker-pytorch-container - MXNet containers: https://github.com/aws/sagemaker-mxnet-containers- Chainer container: https://github.com/aws/sagemaker-chainer-container - Scikit-learn container: https://github.com/aws/sagemaker-scikit-learn-container- SparkML serving container: https://github.com/aws/sagemaker-sparkml-serving-containerYou can also easily check the supported versions for each framework using the AWS CLI.```sh$ aws ecr list-images --repository-name tensorflow-inference --registry-id 763104351884$ aws ecr list-images --repository-name pytorch-inference --registry-id 763104351884$ aws ecr list-images --repository-name mxnet-inference --registry-id 763104351884 EIA(Elastic Inference)$ aws ecr list-images --repository-name tensorflow-inference-eia --registry-id 763104351884$ aws ecr list-images --repository-name pytorch-inference-eia --registry-id 763104351884$ aws ecr list-images --repository-name mxnet-inference-eia --registry-id 763104351884``` 1. Inference script---The code cell below saves `inference.py`, the SageMaker inference script, to the `src` directory. This script uses the interface of the SageMaker inference toolkit, a high-level toolkit that makes it easy to deploy hosted endpoints on SageMaker; you only need to implement the handler functions defined by that interface. The interface below is shared by all frameworks except TensorFlow. - `model_fn()`: Defines how to load a model stored in S3 or a model zoo into the memory of the inference instance and return it.- `input_fn()`: A preprocessing function that converts the user's input into a form suitable for model inference; the input format can be checked via the content_type argument.- `predict_fn()`: Performs inference with the model returned by model_fn() and the data transformed by input_fn().- `output_fn()`: A postprocessing function that returns the inference result.Tip: Instead of implementing `input_fn(), predict_fn(), output_fn()` individually, it is also possible to bundle the three functions into a single `transform_fn()`. See the code snippet examples below.```python Option 1def model_fn(model_dir): model = Your_Model() return modeldef input_fn(request_body, content_type): if content_type == 'text/csv' ... else: pass: def predict_fn(request_body, content_type): Perform prediction return model(input_data) def output_fn(prediction, content_type): Serialize the prediction result ``````python Option 2def model_fn(model_dir): model = Your_Model() return modeldef transform_fn(model, input_data, content_type, accept): All-in-one function, including input_fn, predict_fn(), and output_fn()``` Because the SageMaker training container used PyTorch 1.6.0, we run local inference tests with the same version as well.
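As a cross-check of the CLI commands above, the container URI for a given framework version can also be resolved from Python. The sketch below is not part of the original module and assumes SageMaker Python SDK v2; the region and instance type are illustrative choices.
###Code
from sagemaker import image_uris

# Resolve the pre-built PyTorch inference container image URI.
uri = image_uris.retrieve(
    framework='pytorch',
    region='us-east-1',
    version='1.6.0',
    py_version='py3',
    instance_type='ml.m5.xlarge',
    image_scope='inference',
)
print(uri)
###Output
_____no_output_____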
###Code
%load_ext autoreload
%autoreload 2
%%writefile ./src/inference.py
from __future__ import absolute_import
import argparse
import json
import logging
import os
import sys
import time
import random
from os.path import join
import numpy as np
import io
import tarfile
from PIL import Image
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import lr_scheduler
import torch.optim as optim
import torchvision
import copy
import torch.utils.data
import torch.utils.data.distributed
from torchvision import datasets, transforms, models
from torch import topk
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(sys.stdout))
JSON_CONTENT_TYPE = 'application/json'
# Loads the model into memory from storage and return the model.
def model_fn(model_dir):
logger.info("==> model_dir : {}".format(model_dir))
model = models.resnet18(pretrained=True)
last_hidden_units = model.fc.in_features
model.fc = torch.nn.Linear(last_hidden_units, 186)
model.load_state_dict(torch.load(os.path.join(model_dir, 'model.pth')))
return model
# Deserialize the request body
def input_fn(request_body, request_content_type='application/x-image'):
print('An input_fn that loads a image tensor')
print(request_content_type)
if request_content_type == 'application/x-image':
img = np.array(Image.open(io.BytesIO(request_body)))
elif request_content_type == 'application/x-npy':
img = np.frombuffer(request_body, dtype='uint8').reshape(137, 236)
else:
raise ValueError(
'Requested unsupported ContentType in content_type : ' + request_content_type)
img = 255 - img
img = img[:,:,np.newaxis]
img = np.repeat(img, 3, axis=2)
test_transforms = transforms.Compose([
transforms.ToTensor()
])
img_tensor = test_transforms(img)
return img_tensor
# Predicts on the deserialized object with the model from model_fn()
def predict_fn(input_data, model):
logger.info('Entering the predict_fn function')
start_time = time.time()
input_data = input_data.unsqueeze(0)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
model.eval()
input_data = input_data.to(device)
result = {}
with torch.no_grad():
logits = model(input_data)
pred_probs = F.softmax(logits, dim=1).data.squeeze()
outputs = topk(pred_probs, 5)
result['score'] = outputs[0].detach().cpu().numpy()
result['class'] = outputs[1].detach().cpu().numpy()
print("--- Elapsed time: %s secs ---" % (time.time() - start_time))
return result
# Serialize the prediction result into the response content type
def output_fn(pred_output, accept=JSON_CONTENT_TYPE):
return json.dumps({'score': pred_output['score'].tolist(),
'class': pred_output['class'].tolist()}), accept
###Output
Overwriting ./src/inference.py
###Markdown
2. Local Endpoint Inference---Deploying a trained model straight into a real production environment without sufficient validation and testing carries many risks. Therefore, before launching an inference instance for deployment to the production environment, we recommend first deploying the model in the local environment of the notebook instance using local mode. This is called a Local Mode Endpoint. First, before deploying the local mode endpoint container, we perform inference directly in the local environment to check the results, and then deploy the local mode endpoint. Local Inference An example of running inference when `content_type='application/x-image'`.
###Code
from src.inference import model_fn, input_fn, predict_fn, output_fn
from PIL import Image
import numpy as np
import json
file_path = 'test_imgs/test_0.jpg'
with open(file_path, mode='rb') as file:
img_byte = bytearray(file.read())
data = input_fn(img_byte)
model = model_fn('./model')
result = predict_fn(data, model)
print(result)
###Output
An input_fn that loads a image tensor
application/x-image
==> model_dir : ./model
Entering the predict_fn function
--- Elapsed time: 3.429753065109253 secs ---
{'score': array([0.5855128 , 0.3301886 , 0.01439991, 0.01150947, 0.00949198],
dtype=float32), 'class': array([ 3, 2, 64, 179, 168])}
###Markdown
An example of running inference when `content_type='application/x-npy'`, in which the numpy array is sent as-is. This is faster than `content_type='application/x-image'`, but when the payload is produced with `tobytes()`, the numpy array's `dtype` and `shape` are not preserved, so separate handling is required.
###Code
img_arr = np.array(Image.open(file_path))
data = input_fn(img_arr.tobytes(), request_content_type='application/x-npy')
model = model_fn('./model')
result = predict_fn(data, model)
print(result)
###Output
An input_fn that loads a image tensor
application/x-npy
==> model_dir : ./model
Entering the predict_fn function
--- Elapsed time: 0.019454479217529297 secs ---
{'score': array([0.5855128 , 0.3301886 , 0.01439991, 0.01150947, 0.00949198],
dtype=float32), 'class': array([ 3, 2, 64, 179, 168])}
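###Markdown
One way to avoid the lost `dtype`/`shape` problem mentioned above is to serialize with `np.save`, which embeds both in the payload. The cell below is a sketch that is not part of the original notebook; the server-side `input_fn` would need a matching `np.load` branch under some (hypothetical) content type.
###Code
import io
import numpy as np

# Client side: np.save keeps dtype and shape, unlike raw tobytes().
buf = io.BytesIO()
np.save(buf, img_arr)
payload = buf.getvalue()

# Server side (what a matching input_fn branch could do):
restored = np.load(io.BytesIO(payload))
print(restored.dtype, restored.shape)  # uint8 (137, 236), without hard-coding
###Output
_____no_output_____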
###Markdown
Local Mode Endpoint
###Code
import os
import time
import sagemaker
from sagemaker.pytorch.model import PyTorchModel
role = sagemaker.get_execution_role()
###Output
_____no_output_____
###Markdown
After running the code cell below, check the logs. You can see the torchserve model-server settings.```bashAttaching to z1ciqmehg6-algo-1-wida0z1ciqmehg6-algo-1-wida0 | ['torchserve', '--start', '--model-store', '/.sagemaker/ts/models', '--ts-config', '/etc/sagemaker-ts.properties', '--log-config', '/opt/conda/lib/python3.6/site-packages/sagemaker_pytorch_serving_container/etc/log4j.properties', '--models', 'model.mar']z1ciqmehg6-algo-1-wida0 | 2021-04-22 04:09:27,813 [INFO ] main org.pytorch.serve.ModelServer - z1ciqmehg6-algo-1-wida0 | Torchserve version: 0.2.1z1ciqmehg6-algo-1-wida0 | TS Home: /opt/conda/lib/python3.6/site-packagesz1ciqmehg6-algo-1-wida0 | Current directory: /z1ciqmehg6-algo-1-wida0 | Temp directory: /home/model-server/tmpz1ciqmehg6-algo-1-wida0 | Number of GPUs: 0z1ciqmehg6-algo-1-wida0 | Number of CPUs: 8z1ciqmehg6-algo-1-wida0 | Max heap size: 15352 Mz1ciqmehg6-algo-1-wida0 | Python executable: /opt/conda/bin/pythonz1ciqmehg6-algo-1-wida0 | Config file: /etc/sagemaker-ts.propertiesz1ciqmehg6-algo-1-wida0 | Inference address: http://0.0.0.0:8080z1ciqmehg6-algo-1-wida0 | Management address: http://0.0.0.0:8080z1ciqmehg6-algo-1-wida0 | Metrics address: http://127.0.0.1:8082z1ciqmehg6-algo-1-wida0 | Model Store: /.sagemaker/ts/models...``` Debugging TipIf inference works fine locally but an error occurs at endpoint deployment, the most likely causes are a framework version mismatch or misconfigured container environment variables.For example, a model trained on PyTorch 1.6.0 cannot be served on PyTorch 1.3.1.Keep the framework versions identical whenever possible; if that is not possible, try the closest version. For example, a model trained with PyTorch 1.6.0 can be deployed on version 1.5.0.If inference still fails even on a similar version, you can also register a container of the identical version to Amazon ECR via BYOC (Bring Your Own Container).
###Code
local_model_path = f'file://{os.getcwd()}/model/model.tar.gz'
endpoint_name = "local-endpoint-bangali-classifier-{}".format(int(time.time()))
local_pytorch_model = PyTorchModel(model_data=local_model_path,
role=role,
entry_point='./src/inference.py',
framework_version='1.6.0',
py_version='py3')
local_pytorch_model.deploy(instance_type='local',
initial_instance_count=1,
endpoint_name=endpoint_name,
wait=True)
###Output
Attaching to z1ciqmehg6-algo-1-wida0
[36mz1ciqmehg6-algo-1-wida0 |[0m ['torchserve', '--start', '--model-store', '/.sagemaker/ts/models', '--ts-config', '/etc/sagemaker-ts.properties', '--log-config', '/opt/conda/lib/python3.6/site-packages/sagemaker_pytorch_serving_container/etc/log4j.properties', '--models', 'model.mar']
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:27,813 [INFO ] main org.pytorch.serve.ModelServer -
[36mz1ciqmehg6-algo-1-wida0 |[0m Torchserve version: 0.2.1
[36mz1ciqmehg6-algo-1-wida0 |[0m TS Home: /opt/conda/lib/python3.6/site-packages
[36mz1ciqmehg6-algo-1-wida0 |[0m Current directory: /
[36mz1ciqmehg6-algo-1-wida0 |[0m Temp directory: /home/model-server/tmp
[36mz1ciqmehg6-algo-1-wida0 |[0m Number of GPUs: 0
[36mz1ciqmehg6-algo-1-wida0 |[0m Number of CPUs: 8
[36mz1ciqmehg6-algo-1-wida0 |[0m Max heap size: 15352 M
[36mz1ciqmehg6-algo-1-wida0 |[0m Python executable: /opt/conda/bin/python
[36mz1ciqmehg6-algo-1-wida0 |[0m Config file: /etc/sagemaker-ts.properties
[36mz1ciqmehg6-algo-1-wida0 |[0m Inference address: http://0.0.0.0:8080
[36mz1ciqmehg6-algo-1-wida0 |[0m Management address: http://0.0.0.0:8080
[36mz1ciqmehg6-algo-1-wida0 |[0m Metrics address: http://127.0.0.1:8082
[36mz1ciqmehg6-algo-1-wida0 |[0m Model Store: /.sagemaker/ts/models
[36mz1ciqmehg6-algo-1-wida0 |[0m Initial Models: model.mar
[36mz1ciqmehg6-algo-1-wida0 |[0m Log dir: /logs
[36mz1ciqmehg6-algo-1-wida0 |[0m Metrics dir: /logs
[36mz1ciqmehg6-algo-1-wida0 |[0m Netty threads: 0
[36mz1ciqmehg6-algo-1-wida0 |[0m Netty client threads: 0
[36mz1ciqmehg6-algo-1-wida0 |[0m Default workers per model: 8
[36mz1ciqmehg6-algo-1-wida0 |[0m Blacklist Regex: N/A
[36mz1ciqmehg6-algo-1-wida0 |[0m Maximum Response Size: 6553500
[36mz1ciqmehg6-algo-1-wida0 |[0m Maximum Request Size: 6553500
[36mz1ciqmehg6-algo-1-wida0 |[0m Prefer direct buffer: false
[36mz1ciqmehg6-algo-1-wida0 |[0m Custom python dependency for model allowed: false
[36mz1ciqmehg6-algo-1-wida0 |[0m Metrics report format: prometheus
[36mz1ciqmehg6-algo-1-wida0 |[0m Enable metrics API: true
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:27,853 [INFO ] main org.pytorch.serve.ModelServer - Loading initial models: model.mar
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:28,720 [INFO ] main org.pytorch.serve.archive.ModelArchive - eTag 78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:28,732 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model model loaded.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:28,749 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: EpollServerSocketChannel.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,015 [INFO ] W-9005-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /home/model-server/tmp/.ts.sock.9005
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,018 [INFO ] W-9003-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /home/model-server/tmp/.ts.sock.9003
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,016 [INFO ] W-9000-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /home/model-server/tmp/.ts.sock.9000
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,016 [INFO ] W-9006-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /home/model-server/tmp/.ts.sock.9006
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,015 [INFO ] W-9001-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /home/model-server/tmp/.ts.sock.9001
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,019 [INFO ] W-9006-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]50
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,019 [INFO ] W-9002-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /home/model-server/tmp/.ts.sock.9002
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,019 [INFO ] W-9000-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]51
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,018 [INFO ] W-9003-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]54
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,020 [INFO ] W-9000-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,020 [INFO ] W-9003-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,018 [INFO ] W-9005-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]52
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,018 [INFO ] W-9007-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /home/model-server/tmp/.ts.sock.9007
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,022 [INFO ] W-9005-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,022 [INFO ] W-9005-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.13
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,022 [INFO ] W-9000-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.13
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,022 [INFO ] W-9003-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.13
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,020 [INFO ] W-9006-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,020 [INFO ] W-9001-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]57
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,023 [INFO ] W-9006-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.13
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,023 [INFO ] W-9001-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,023 [INFO ] W-9001-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.13
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,026 [INFO ] W-9007-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]56
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,027 [INFO ] W-9002-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]53
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,027 [INFO ] W-9007-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,027 [INFO ] W-9002-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,027 [INFO ] W-9007-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.13
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,027 [INFO ] W-9002-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.13
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,035 [INFO ] W-9003-model_1 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9003
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,035 [INFO ] W-9004-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /home/model-server/tmp/.ts.sock.9004
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,035 [INFO ] W-9006-model_1 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9006
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,036 [INFO ] W-9000-model_1 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9000
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,036 [INFO ] W-9002-model_1 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9002
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,036 [INFO ] W-9004-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]55
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,036 [INFO ] W-9007-model_1 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9007
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,036 [INFO ] W-9004-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,036 [INFO ] W-9004-model_1 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9004
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,036 [INFO ] W-9005-model_1 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9005
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,037 [INFO ] W-9004-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.13
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,036 [INFO ] W-9001-model_1 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9001
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,067 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://0.0.0.0:8080
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,067 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: EpollServerSocketChannel.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,070 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://127.0.0.1:8082
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,078 [INFO ] W-9003-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.ts.sock.9003.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,078 [INFO ] W-9000-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.ts.sock.9000.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,078 [INFO ] W-9007-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.ts.sock.9007.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,078 [INFO ] W-9001-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.ts.sock.9001.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,078 [INFO ] W-9004-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.ts.sock.9004.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,078 [INFO ] W-9006-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.ts.sock.9006.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,078 [INFO ] W-9005-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.ts.sock.9005.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,080 [INFO ] W-9002-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.ts.sock.9002.
[36mz1ciqmehg6-algo-1-wida0 |[0m Model server started.
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,349 [INFO ] pool-2-thread-1 TS_METRICS - CPUUtilization.Percent:100.0|#Level:Host|#hostname:698503716b79,timestamp:1619064569
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,352 [INFO ] pool-2-thread-1 TS_METRICS - DiskAvailable.Gigabytes:83.26641464233398|#Level:Host|#hostname:698503716b79,timestamp:1619064569
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,353 [INFO ] pool-2-thread-1 TS_METRICS - DiskUsage.Gigabytes:9.648555755615234|#Level:Host|#hostname:698503716b79,timestamp:1619064569
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,355 [INFO ] pool-2-thread-1 TS_METRICS - DiskUtilization.Percent:10.4|#Level:Host|#hostname:698503716b79,timestamp:1619064569
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,357 [INFO ] pool-2-thread-1 TS_METRICS - MemoryAvailable.Megabytes:46706.67578125|#Level:Host|#hostname:698503716b79,timestamp:1619064569
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,358 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUsed.Megabytes:14016.5078125|#Level:Host|#hostname:698503716b79,timestamp:1619064569
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:29,371 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUtilization.Percent:23.9|#Level:Host|#hostname:698503716b79,timestamp:1619064569
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,124 [INFO ] W-9005-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,126 [INFO ] W-9005-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,127 [INFO ] W-9000-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,142 [INFO ] W-9000-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,159 [INFO ] W-9002-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,174 [INFO ] W-9002-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,192 [INFO ] W-9007-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,193 [INFO ] W-9007-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,195 [INFO ] W-9001-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,196 [INFO ] W-9001-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,223 [INFO ] W-9006-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,226 [INFO ] W-9003-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,226 [INFO ] W-9003-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,258 [INFO ] W-9006-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,366 [INFO ] W-9004-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:30,366 [INFO ] W-9004-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - ==> model_dir : /home/model-server/tmp/models/78d4c7da070c452b88785461ef0555ba
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,036 [WARN ] W-9007-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to /root/.cache/torch/hub/checkpoints/resnet18-5c106cde.pth
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,036 [WARN ] W-9007-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle -
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,038 [WARN ] W-9006-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to /root/.cache/torch/hub/checkpoints/resnet18-5c106cde.pth
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,038 [WARN ] W-9006-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle -
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,046 [WARN ] W-9001-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to /root/.cache/torch/hub/checkpoints/resnet18-5c106cde.pth
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,046 [WARN ] W-9001-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle -
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,048 [WARN ] W-9002-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to /root/.cache/torch/hub/checkpoints/resnet18-5c106cde.pth
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,048 [WARN ] W-9002-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle -
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,048 [WARN ] W-9000-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to /root/.cache/torch/hub/checkpoints/resnet18-5c106cde.pth
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,049 [WARN ] W-9000-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle -
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,055 [WARN ] W-9005-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to /root/.cache/torch/hub/checkpoints/resnet18-5c106cde.pth
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,055 [WARN ] W-9005-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle -
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,057 [WARN ] W-9003-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to /root/.cache/torch/hub/checkpoints/resnet18-5c106cde.pth
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,057 [WARN ] W-9003-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle -
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,138 [WARN ] W-9007-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 0%| | 0.00/44.7M [00:00<?, ?B/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,140 [WARN ] W-9006-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 0%| | 0.00/44.7M [00:00<?, ?B/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,146 [WARN ] W-9001-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 0%| | 0.00/44.7M [00:00<?, ?B/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,149 [WARN ] W-9002-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 0%| | 0.00/44.7M [00:00<?, ?B/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,150 [WARN ] W-9000-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 0%| | 0.00/44.7M [00:00<?, ?B/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,155 [WARN ] W-9005-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 0%| | 0.00/44.7M [00:00<?, ?B/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,157 [WARN ] W-9003-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 0%| | 0.00/44.7M [00:00<?, ?B/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,183 [WARN ] W-9004-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to /root/.cache/torch/hub/checkpoints/resnet18-5c106cde.pth
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,183 [WARN ] W-9004-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle -
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,240 [WARN ] W-9007-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 10%|▉ | 4.31M/44.7M [00:00<00:00, 44.2MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,241 [WARN ] W-9006-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 10%|▉ | 4.38M/44.7M [00:00<00:00, 45.2MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,249 [WARN ] W-9001-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 8%|▊ | 3.37M/44.7M [00:00<00:01, 35.3MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,250 [WARN ] W-9002-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 10%|▉ | 4.42M/44.7M [00:00<00:00, 46.4MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,251 [WARN ] W-9000-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 11%|█ | 4.81M/44.7M [00:00<00:00, 49.8MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,283 [WARN ] W-9004-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 0%| | 0.00/44.7M [00:00<?, ?B/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,285 [WARN ] W-9005-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 12%|█▏ | 5.49M/44.7M [00:00<00:00, 57.5MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,287 [WARN ] W-9003-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 12%|█▏ | 5.45M/44.7M [00:00<00:00, 57.2MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,350 [WARN ] W-9001-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 15%|█▌ | 6.73M/44.7M [00:00<00:01, 34.7MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,351 [WARN ] W-9006-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 21%|██ | 9.25M/44.7M [00:00<00:00, 48.4MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,355 [WARN ] W-9002-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 21%|██▏ | 9.50M/44.7M [00:00<00:00, 50.1MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,356 [WARN ] W-9000-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 21%|██▏ | 9.57M/44.7M [00:00<00:00, 49.4MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,363 [WARN ] W-9007-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 21%|██ | 9.19M/44.7M [00:00<00:00, 48.0MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,386 [WARN ] W-9005-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 25%|██▍ | 11.0M/44.7M [00:00<00:00, 48.9MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,387 [WARN ] W-9003-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 24%|██▍ | 10.9M/44.7M [00:00<00:00, 48.6MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,395 [WARN ] W-9004-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 19%|█▉ | 8.44M/44.7M [00:00<00:00, 88.5MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,450 [WARN ] W-9001-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 31%|███ | 13.9M/44.7M [00:00<00:00, 52.7MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,451 [WARN ] W-9006-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 31%|███ | 13.9M/44.7M [00:00<00:00, 46.3MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,455 [WARN ] W-9002-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 32%|███▏ | 14.3M/44.7M [00:00<00:00, 48.9MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,456 [WARN ] W-9000-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 32%|███▏ | 14.3M/44.7M [00:00<00:00, 48.5MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,464 [WARN ] W-9007-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 31%|███ | 13.8M/44.7M [00:00<00:00, 43.3MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,486 [WARN ] W-9005-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 36%|███▋ | 16.3M/44.7M [00:00<00:00, 51.7MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,488 [WARN ] W-9003-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 36%|███▋ | 16.2M/44.7M [00:00<00:00, 51.6MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,537 [WARN ] W-9004-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 38%|███▊ | 16.9M/44.7M [00:00<00:00, 82.4MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,561 [WARN ] W-9000-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 45%|████▌ | 20.2M/44.7M [00:00<00:00, 53.9MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,563 [WARN ] W-9001-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 45%|████▌ | 20.1M/44.7M [00:00<00:00, 57.7MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,566 [WARN ] W-9006-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 46%|████▌ | 20.6M/44.7M [00:00<00:00, 55.5MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,570 [WARN ] W-9002-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 47%|████▋ | 20.9M/44.7M [00:00<00:00, 56.6MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,580 [WARN ] W-9007-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 46%|████▋ | 20.7M/44.7M [00:00<00:00, 54.2MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,588 [WARN ] W-9005-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 50%|████▉ | 22.2M/44.7M [00:00<00:00, 55.6MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,588 [WARN ] W-9003-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 48%|████▊ | 21.2M/44.7M [00:00<00:00, 51.9MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,661 [WARN ] W-9004-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 55%|█████▌ | 24.8M/44.7M [00:00<00:00, 69.7MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,663 [WARN ] W-9000-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 57%|█████▋ | 25.4M/44.7M [00:00<00:00, 53.1MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,664 [WARN ] W-9001-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 57%|█████▋ | 25.6M/44.7M [00:00<00:00, 55.1MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,666 [WARN ] W-9006-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 58%|█████▊ | 25.9M/44.7M [00:00<00:00, 52.8MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,670 [WARN ] W-9002-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 59%|█████▉ | 26.3M/44.7M [00:00<00:00, 53.8MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,681 [WARN ] W-9007-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 58%|█████▊ | 26.0M/44.7M [00:00<00:00, 51.7MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,688 [WARN ] W-9003-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 61%|██████ | 27.4M/44.7M [00:00<00:00, 56.0MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,690 [WARN ] W-9005-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 62%|██████▏ | 27.6M/44.7M [00:00<00:00, 55.4MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,762 [WARN ] W-9000-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 69%|██████▉ | 30.8M/44.7M [00:00<00:00, 54.1MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,764 [WARN ] W-9001-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 71%|███████ | 31.5M/44.7M [00:00<00:00, 57.2MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,770 [WARN ] W-9006-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 72%|███████▏ | 31.9M/44.7M [00:00<00:00, 56.2MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,771 [WARN ] W-9002-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 72%|███████▏ | 32.2M/44.7M [00:00<00:00, 56.5MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,776 [WARN ] W-9004-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 71%|███████ | 31.6M/44.7M [00:00<00:00, 65.1MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,780 [WARN ] W-9007-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 73%|███████▎ | 32.6M/44.7M [00:00<00:00, 57.3MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,789 [WARN ] W-9003-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 75%|███████▍ | 33.3M/44.7M [00:00<00:00, 58.2MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,796 [WARN ] W-9005-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 75%|███████▍ | 33.4M/44.7M [00:00<00:00, 56.9MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,862 [WARN ] W-9000-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 83%|████████▎ | 37.2M/44.7M [00:00<00:00, 58.4MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,864 [WARN ] W-9001-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 84%|████████▍ | 37.7M/44.7M [00:00<00:00, 59.5MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,866 [WARN ] W-9007-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 87%|████████▋ | 38.7M/44.7M [00:00<00:00, 59.3MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,867 [WARN ] W-9000-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 99%|█████████▉| 44.4M/44.7M [00:00<00:00, 63.5MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,867 [WARN ] W-9001-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 100%|█████████▉| 44.5M/44.7M [00:00<00:00, 63.2MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,868 [WARN ] W-9003-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 88%|████████▊ | 39.4M/44.7M [00:00<00:00, 59.6MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,870 [WARN ] W-9006-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 84%|████████▍ | 37.6M/44.7M [00:00<00:00, 56.5MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,871 [WARN ] W-9002-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 84%|████████▍ | 37.6M/44.7M [00:00<00:00, 56.5MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,875 [WARN ] W-9006-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 99%|█████████▉| 44.1M/44.7M [00:00<00:00, 60.2MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,876 [WARN ] W-9004-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 85%|████████▍ | 37.9M/44.7M [00:00<00:00, 62.6MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,876 [WARN ] W-9004-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 100%|██████████| 44.7M/44.7M [00:00<00:00, 64.8MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,876 [WARN ] W-9005-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 87%|████████▋ | 38.9M/44.7M [00:00<00:00, 55.9MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:31,880 [WARN ] W-9002-model_1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - 97%|█████████▋| 43.4M/44.7M [00:00<00:00, 57.7MB/s]
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:32,034 [INFO ] W-9007-model_1 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2861
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:32,035 [INFO ] W-9007-model_1 TS_METRICS - W-9007-model_1.ms:3291|#Level:Host|#hostname:698503716b79,timestamp:1619064572
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:32,394 [INFO ] pool-1-thread-9 ACCESS_LOG - /172.19.0.1:60114 "GET /ping HTTP/1.1" 200 11
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:32,395 [INFO ] pool-1-thread-9 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:698503716b79,timestamp:null
!
###Markdown
Because the container was deployed locally, we can confirm that it is currently running.
###Code
!docker ps
###Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
698503716b79 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:1.6.0-cpu-py3 "python /usr/local/b…" 18 seconds ago Up 16 seconds 0.0.0.0:8080->8080/tcp, 8081/tcp z1ciqmehg6-algo-1-wida0
###Markdown
You can also run inference with the SageMaker SDK `predict()` method, but this time we will run inference with boto3's `invoke_endpoint()` method. Boto3 is a service-level, low-level SDK; unlike the SageMaker SDK, a high-level SDK focused on ML experimentation in which some features are abstracted away, boto3 gives complete control over the SageMaker API and is well suited to production and automation work. Note that when creating the runtime client instance for calling `invoke_endpoint()`, in local deployment mode you must call `sagemaker.local.LocalSagemakerRuntimeClient()`.
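For reference, below is a minimal sketch (not in the original notebook) of the SageMaker SDK route: attach a `Predictor` to the already-deployed local endpoint and call `predict()`. The serializer/deserializer choices are assumptions matching this endpoint's content types, and the variable names are ours.
###Code
import sagemaker
from sagemaker.predictor import Predictor
from sagemaker.serializers import IdentitySerializer
from sagemaker.deserializers import JSONDeserializer

# endpoint_name here is the local endpoint created earlier in this notebook.
local_predictor = Predictor(
    endpoint_name=endpoint_name,
    sagemaker_session=sagemaker.local.LocalSession(),
    serializer=IdentitySerializer('application/x-image'),
    deserializer=JSONDeserializer(),
)
print(local_predictor.predict(img_byte))
###Output
_____no_output_____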
###Code
client = sagemaker.local.LocalSagemakerClient()
runtime_client = sagemaker.local.LocalSagemakerRuntimeClient()
endpoint_name = local_pytorch_model.endpoint_name
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name,
ContentType='application/x-npy',
Accept='application/json',
Body=img_arr.tobytes()
)
print(response['Body'].read().decode())
###Output
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:44,180 [INFO ] W-9007-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - An input_fn that loads a image tensor
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:44,180 [INFO ] W-9007-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - application/x-npy
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:44,180 [INFO ] W-9007-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Entering the predict_fn function
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:44,180 [INFO ] W-9007-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Entering the predict_fn function
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:44,216 [INFO ] W-9007-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - --- Elapsed time: 0.0353550910949707 secs ---
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:44,217 [INFO ] W-9007-model_1 org.pytorch.serve.wlm.WorkerThread - Backend response time: 39
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:44,216 [INFO ] W-9007-model_1-stdout MODEL_METRICS - PredictionTime.Milliseconds:36.83|#ModelName:model,Level:Model|#hostname:698503716b79,requestID:ee8b7769-d51f-431d-986f-a8a5b6b77f4f,timestamp:1619064584
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:44,218 [INFO ] W-9007-model_1 ACCESS_LOG - /172.19.0.1:60120 "POST /invocations HTTP/1.1" 200 48
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:44,218 [INFO ] W-9007-model_1 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:698503716b79,timestamp:null
{"score": [0.5855128169059753, 0.3301886022090912, 0.014399887062609196, 0.011509465985000134, 0.00949197169393301], "class": [3, 2, 64, 179, 168]}
###Markdown
An example of running inference with ContentType set to x-image. You can confirm that it produces the same result.
###Code
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name,
ContentType='application/x-image',
Accept='application/json',
Body=img_byte
)
print(json.loads(response['Body'].read().decode()))
###Output
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:47,039 [INFO ] W-9001-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - An input_fn that loads a image tensor
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:47,040 [INFO ] W-9001-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - application/x-image
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:47,040 [INFO ] W-9001-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Entering the predict_fn function
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:47,040 [INFO ] W-9001-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Entering the predict_fn function
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:47,092 [INFO ] W-9001-model_1 org.pytorch.serve.wlm.WorkerThread - Backend response time: 62
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:47,092 [INFO ] W-9001-model_1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - --- Elapsed time: 0.05186748504638672 secs ---
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:47,092 [INFO ] W-9001-model_1-stdout MODEL_METRICS - PredictionTime.Milliseconds:60.75|#ModelName:model,Level:Model|#hostname:698503716b79,requestID:c1305b6e-9baf-4774-a8c2-3bcc2062438d,timestamp:1619064587
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:47,092 [INFO ] W-9001-model_1 ACCESS_LOG - /172.19.0.1:60120 "POST /invocations HTTP/1.1" 200 63
[36mz1ciqmehg6-algo-1-wida0 |[0m 2021-04-22 04:09:47,093 [INFO ] W-9001-model_1 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:698503716b79,timestamp:null
{'score': [0.5855128169059753, 0.3301886022090912, 0.014399887062609196, 0.011509465985000134, 0.00949197169393301], 'class': [3, 2, 64, 179, 168]}
###Markdown
Local Mode Endpoint Clean-up If you are not going to keep using the endpoint, you should delete it. With the SageMaker SDK, it can be deleted simply with the `delete_endpoint()` method.
###Code
def delete_endpoint(client, endpoint_name):
response = client.describe_endpoint_config(EndpointConfigName=endpoint_name)
model_name = response['ProductionVariants'][0]['ModelName']
client.delete_model(ModelName=model_name)
client.delete_endpoint(EndpointName=endpoint_name)
client.delete_endpoint_config(EndpointConfigName=endpoint_name)
print(f'--- Deleted model: {model_name}')
print(f'--- Deleted endpoint: {endpoint_name}')
print(f'--- Deleted endpoint_config: {endpoint_name}')
delete_endpoint(client, endpoint_name)
###Output
Gracefully stopping... (press Ctrl+C again to force)
--- Deleted model: pytorch-inference-2021-04-22-04-09-21-366
--- Deleted endpoint: local-endpoint-bangali-classifier-1619064557
--- Deleted endpoint_config: local-endpoint-bangali-classifier-1619064557
###Markdown
You can confirm that the container has been removed.
###Code
!docker ps
###Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
###Markdown
3. SageMaker Hosted Endpoint Inference---Now let's deploy the endpoint to an actual production environment. Most of the code is the same as for the local mode endpoint; you only need to change the model artifact path (`model_data`) and the instance type (`instance_type`). Because provisioning the SageMaker-managed deployment cluster takes time, starting the inference service takes about 5-10 minutes.
###Code
import boto3
client = boto3.client('sagemaker')
runtime_client = boto3.client('sagemaker-runtime')
def get_model_path(sm_client, max_results=1, name_contains='pytorch-training'):
training_job = sm_client.list_training_jobs(MaxResults=max_results,
NameContains=name_contains,
SortBy='CreationTime',
SortOrder='Descending')
training_job_name = training_job['TrainingJobSummaries'][0]['TrainingJobName']
training_job_description = sm_client.describe_training_job(TrainingJobName=training_job_name)
model_path = training_job_description['ModelArtifacts']['S3ModelArtifacts']
return model_path
%%time
model_path = get_model_path(client, max_results=3)
endpoint_name = "endpoint-bangali-classifier-{}".format(int(time.time()))
print(model_path)
pytorch_model = PyTorchModel(model_data=model_path,
role=role,
entry_point='./src/inference.py',
framework_version='1.6.0',
py_version='py3')
predictor = pytorch_model.deploy(instance_type='ml.m5.xlarge',
initial_instance_count=1,
endpoint_name=endpoint_name,
wait=True)
endpoint_name = pytorch_model.endpoint_name
client.describe_endpoint(EndpointName = endpoint_name)
###Output
_____no_output_____
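###Markdown
As an alternative to blocking with `wait=True`, a sketch (not in the original notebook) that polls the endpoint status with boto3's built-in waiter:
###Code
# Wait until the endpoint reaches the InService state, then print its status.
waiter = client.get_waiter('endpoint_in_service')
waiter.wait(EndpointName=endpoint_name)
print(client.describe_endpoint(EndpointName=endpoint_name)['EndpointStatus'])
###Output
_____no_output_____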
###Markdown
Run inference. The code is identical to the local mode case.
###Code
response = runtime_client.invoke_endpoint(
EndpointName=endpoint_name,
ContentType='application/x-image',
Accept='application/json',
Body=img_byte
)
print(json.loads(response['Body'].read().decode()))
###Output
{'score': [0.5855132341384888, 0.33018821477890015, 0.01439987774938345, 0.011509452946484089, 0.009491969831287861], 'class': [3, 2, 64, 179, 168]}
###Markdown
SageMaker Hosted Endpoint Clean-up If you are not going to keep using the endpoint, you should delete it to avoid unnecessary charges. With the SageMaker SDK, it can be deleted simply with the `delete_endpoint()` method, and it can also be deleted easily from the UI.
###Code
delete_endpoint(client, endpoint_name)
###Output
_____no_output_____ |
appendix/algo_app/topological_quantum_walk.ipynb | ###Markdown
_*Topological Quantum Walks on IBM Q*_This notebook is based on the paper by Radhakrishnan Balu, Daniel Castillo, and George Siopsis, "Physical realization of topological quantum walks on IBM-Q and beyond" arXiv:1710.03615 \[quant-ph\](2017). ContributorsKeita Takeuchi (Univ. of Tokyo) and Rudy Raymond (IBM Research - Tokyo)*** Introduction: challenges in implementing topological walkIn this section, we introduce a model of quantum walk called the *split-step topological quantum walk*. We define the Hilbert spaces of quantum walker states and coin states as $\mathcal{H}_{\mathcal{w}}=\{\vert x \rangle, x\in\mathbb{Z}_N\}, \mathcal{H}_{\mathcal{c}}=\{\vert 0 \rangle, \vert 1 \rangle\}$, respectively. Then, the step operators are defined as$$S^+ := \vert 0 \rangle_c \langle 0 \vert \otimes L^+ + \vert 1 \rangle_c \langle 1 \vert \otimes \mathbb{I}\\S^- := \vert 0 \rangle_c \langle 0 \vert \otimes \mathbb{I} + \vert 1 \rangle_c \langle 1 \vert \otimes L^-,$$where$$L^{\pm}\vert x \rangle_{\mathcal w} := \vert (x\pm1)\ \rm{mod}\ N \rangle_{\mathcal w}$$is a shift operator. The boundary condition is included.Also, we define the coin operator as$$T(\theta):=e^{-i\theta Y} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}.$$One step of the quantum walk is the unitary operator defined below, which uses two modes of coins, i.e., $\theta_1$ and $\theta_2$: $$W := S^- T(\theta_2)S^+ T(\theta_1).$$Intuitively speaking, the walk consists of flipping the coin states and, based on the outcome of the coins, applying the shift operator to determine the next position of the walk. Next, we consider a walk with two phases that depend on the current position:$$(\theta_1,\theta_2) = \begin{cases} (\theta_{1}^{-},\ \theta_{2}^{-}) & 0 \leq x < \frac{N}{2} \\ (\theta_{1}^{+},\ \theta_{2}^{+}) & \frac{N}{2} \leq x < N. \end{cases}$$Then, the two coin operators are rewritten as$$\mathcal T_i = \sum^{N-1}_{x=0}e^{-i\theta_i(x) Y_c}\otimes \vert x \rangle_w \langle x \vert,\ i=1,2.$$By using this, one step of the quantum walk equals$$W = S^- \mathcal T_2 S^+ \mathcal T_1.$$ In principle, we can execute the quantum walk by multiplying $W$ many times, but then we need many circuit elements to construct it. This is not possible with the current approximate quantum computers due to large errors produced after each application of circuit elements (gates). Hamiltonian of topological walkAlternatively, we can think of the time evolution of the states. The Hamiltonian $H$ is defined through $e^{-iHt}=\lim_{s \to \infty}W^s$ (see below for further details). For example, when $(\theta_1,\ \theta_2) = (0,\ \pi/2)$, the Schrödinger equation is$$i\frac{d}{dt}\vert \Psi \rangle = H_{\rm I} \vert \Psi \rangle,\ H_{\rm I} = -Y\otimes [2\mathbb I+L^+ + L^-].$$If the Hamiltonian is time-independent, the solution of the Schrödinger equation is$$\vert \Psi(t) \rangle = e^{-iHt} \vert \Psi(0) \rangle,$$so we can get the final state at an arbitrary time $t$ at once, without applying $W$ step by step, if we know the corresponding Hamiltonian.The Hamiltonian can be computed as below.Set $(\theta_1,\ \theta_2) = (\epsilon,\ \pi/2+\epsilon)$, and take $\epsilon\to 0$ and the number of steps $s\to \infty$ while $s\epsilon=t/2$ is held finite. 
Then,\begin{align*} e^{-iH_{\rm I}t}&=\lim_{s \to \infty}W^s\\ \rm{(LHS)} &= \mathbb{I}-iH_{I}t+O(t^2)\\ \rm{(RHS)} &= \lim_{\substack{s\to \infty\\ \epsilon\to 0}}(W^4)^{s/4}= \lim_{\substack{s\to \infty\\ \epsilon\to0}}(\mathbb{I}+O(\epsilon))^{s/4}\\ &\simeq \lim_{\substack{s\to \infty\\ \epsilon\to 0}}\mathbb{I}+\frac{s}{4}O(\epsilon)\\ &= \lim_{\epsilon\to 0}\mathbb{I}+iY\otimes [2\mathbb I+L^+ + L^-]t+O(\epsilon).\end{align*}Therefore,$$H_{\rm I} = -Y\otimes [2\mathbb I+L^+ + L^-].$$ Computation model In order to check the correctness of the results of the quantum walk implementation on IBM Q, we investigate two models with different coin-phase features. Let the number of positions on the line be $N=4$.- $\rm I / \rm II:\ (\theta_1,\theta_2) = \begin{cases} (0,\ -\pi/2) & 0 \leq x < 2 \\ (0,\ \pi/2) & 2 \leq x < 4 \end{cases}$- $\rm I:\ (\theta_1,\theta_2)=(0,\ \pi/2),\ 0 \leq x < 4$That is, the former is a quantum walk on a line with two phases of coins, while the latter has only one phase of coins.Figure 1. Quantum Walk on a line with two phases The Hamiltonian operators for each of the walks on the line are, respectively, $$H_{\rm I/II} = Y \otimes \mathbb I \otimes \frac{\mathbb I + Z}{2}\\H_{\rm I} = Y\otimes (2\mathbb I\otimes \mathbb I + \mathbb I\otimes X + X \otimes X).$$Then, we want to implement the above Hamiltonian operators as products of two-qubit CNOT and CZ gates and single-qubit rotation gates. Notice that the CNOT and CZ gates are\begin{align*} \rm{CNOT_{ct}}&=\left |0\right\rangle_c\left\langle0\right | \otimes I_t + \left |1\right\rangle_c\left\langle1\right | \otimes X_t\\ \rm{CZ_{ct}}&=\left |0\right\rangle_c\left\langle0\right | \otimes I_t + \left |1\right\rangle_c\left\langle1\right | \otimes Z_t.\end{align*}Below is a reference for converting Hamiltonian terms into products of elementary gates, which is useful for the topological quantum walk.Table 1. Relation between the unitary operator and product of elementary gates|unitary operator|product of circuit elements||:-:|:-:||$e^{-i\theta X_c X_j}$|$\rm{CNOT_{cj}}\cdot e^{-i\theta X_c}\cdot \rm{CNOT_{cj}}$||$e^{-i\theta X_c Z_j}$|$\rm{CZ_{cj}}\cdot e^{-i\theta X_c}\cdot \rm{CZ_{cj}}$||$e^{-i\theta Y_c X_j}$|$\rm{CNOT_{cj}}\cdot e^{-i\theta Y_c}\cdot \rm{CNOT_{cj}}$||$e^{-i\theta Y_c Z_j}$|$\rm{CNOT_{jc}}\cdot e^{-i\theta Y_c}\cdot \rm{CNOT_{jc}}$||$e^{-i\theta Z_c X_j}$|$\rm{CZ_{cj}}\cdot e^{-i\theta X_j}\cdot \rm{CZ_{cj}}$||$e^{-i\theta Z_c Z_j}$|$\rm{CNOT_{jc}}\cdot e^{-i\theta Z_c}\cdot \rm{CNOT_{jc}}$|By using these formulas, the unitary operators are represented by only CNOT, CZ, and rotation matrices, so we can implement them on IBM Q, as below. Phase I/II:\begin{align*} e^{-iH_{I/II}t}=~&e^{-itY_c \otimes \mathbb I_0 \otimes \frac{\mathbb I_1 + Z_1}{2}}\\ =~& e^{-i\frac{t}{2}Y_c}\,e^{-i\frac{t}{2}Y_c\otimes Z_1}\\ =~& e^{-i\frac{t}{2}Y_c}\cdot\rm{CNOT_{1c}}\cdot e^{-i\frac{t}{2}Y_c}\cdot\rm{CNOT_{1c}},\end{align*}where each $e^{-i\frac{t}{2}Y_c}$ is implemented by the rotation ${\rm u3}(t,0,0)=R_y(t)$ in the circuit below.Figure 2. Phase I/II on $N=4$ lattice $(t=8)$ - $q[0]:2^0,\ q[1]:coin,\ q[2]:2^1$ Phase I:\begin{align*} e^{-iH_I t}=~&e^{-itY_c\otimes (2\mathbb I_0\otimes \mathbb I_1 + \mathbb I_0\otimes X_1 + X_0 \otimes X_1)}\\ =~&e^{-2itY_c}e^{-itY_c\otimes X_1}e^{-itY_c\otimes X_0 \otimes X_1}\\ =~&e^{-2iY_c t}\cdot\rm{CNOT_{c1}}\cdot\rm{CNOT_{c0}}\cdot e^{-iY_c t}\cdot\rm{CNOT_{c0}}\cdot e^{-iY_c t}\cdot\rm{CNOT_{c1}}\end{align*}Figure 3. Phase I on $N=4$ lattice $(t=8)$ - $q[0]:2^0,\ q[1]:2^1,\ q[2]:coin$ Implementation
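Before building the circuits, we can sanity-check one row of Table 1 numerically. The cell below is an addition to the original notebook; it uses plain numpy/scipy rather than IBM Q, with qubit ordering c (first tensor factor) and j (second tensor factor).
###Code
# Verify e^{-i theta Y_c Z_j} = CNOT_jc . e^{-i theta Y_c} . CNOT_jc numerically.
import numpy as np
from scipy.linalg import expm

theta = 0.7  # arbitrary test angle
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

# CNOT with control j (second factor) and target c (first factor)
CNOT_jc = np.kron(I2, P0) + np.kron(X, P1)

lhs = expm(-1j * theta * np.kron(Y, Z))
rhs = CNOT_jc @ expm(-1j * theta * np.kron(Y, I2)) @ CNOT_jc
print(np.allclose(lhs, rhs))  # expected: True
###Output
_____no_output_____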
###Code
#initialization
import sys
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# importing the QISKit
from qiskit import QuantumProgram
try:
    sys.path.append("../../")  # go to parent dir
    import Qconfig
    qx_config = {
        "APItoken": Qconfig.APItoken,
        "url": Qconfig.config['url']}
except Exception:  # fall back to a placeholder when Qconfig.py is unavailable
    qx_config = {
        "APItoken": "YOUR_TOKEN_HERE",
        "url": "https://quantumexperience.ng.bluemix.net/api"}
# import basic plot tools
from qiskit.tools.visualization import plot_histogram
#set api
from IBMQuantumExperience import IBMQuantumExperience
api = IBMQuantumExperience(token=qx_config['APItoken'], config={'url': qx_config['url']})
#prepare backends
from qiskit.backends import discover_local_backends, discover_remote_backends, get_backend_instance
remote_backends = discover_remote_backends(api) #we have to call this to connect to remote backends
local_backends = discover_local_backends()
# Quantum program setup
Q_program = QuantumProgram()
# helper for plotting histograms (works for results on many qubits, e.g. 16)
from collections import Counter  # numpy and matplotlib are already imported above
def plot_histogram5(data, number_to_keep=False):
"""Plot a histogram of data.
data is a dictionary of {'000': 5, '010': 113, ...}
number_to_keep is the number of terms to plot and rest is made into a
single bar called other values
"""
if number_to_keep is not False:
data_temp = dict(Counter(data).most_common(number_to_keep))
data_temp["rest"] = sum(data.values()) - sum(data_temp.values())
data = data_temp
labels = sorted(data)
values = np.array([data[key] for key in labels], dtype=float)
pvalues = values / sum(values)
numelem = len(values)
ind = np.arange(numelem) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
rects = ax.bar(ind, pvalues, width, color='seagreen')
# add some text for labels, title, and axes ticks
ax.set_ylabel('Probabilities', fontsize=12)
ax.set_xticks(ind)
ax.set_xticklabels(labels, fontsize=12, rotation=70)
ax.set_ylim([0., min([1.2, max([1.2 * val for val in pvalues])])])
# attach some text labels
for rect in rects:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width() / 2., 1.05 * height,
'%f' % float(height),
ha='center', va='bottom', fontsize=8)
plt.show()
###Output
_____no_output_____
###Markdown
**Quantum walk, phase I/II on $N=4$ lattice $(t=8)$**
###Code
q1_2 = Q_program.create_quantum_register("q1_2", 3)
c1_2 = Q_program.create_classical_register("c1_2", 3)
qw1_2 = Q_program.create_circuit("qw1_2", [q1_2], [c1_2])
t = 8  # evolution time
qw1_2.x(q1_2[2])            # start the walker at x = 2 (q[2] is the 2^1 position bit), the boundary of the two phases
qw1_2.u3(t, 0, 0, q1_2[1])  # e^{-iY_c t} on the coin qubit q[1]
qw1_2.cx(q1_2[2], q1_2[1])  # together with the second u3 and cx, implements e^{-it Y_c Z} (Table 1)
qw1_2.u3(t, 0, 0, q1_2[1])
qw1_2.cx(q1_2[2], q1_2[1])
# reorder at readout: c[0] and c[1] hold the position bits, c[2] the coin
qw1_2.measure(q1_2[0], c1_2[0])
qw1_2.measure(q1_2[1], c1_2[2])
qw1_2.measure(q1_2[2], c1_2[1])
print(qw1_2.qasm())
###Output
OPENQASM 2.0;
include "qelib1.inc";
qreg q1_2[3];
creg c1_2[3];
x q1_2[2];
u3(8,0,0) q1_2[1];
cx q1_2[2],q1_2[1];
u3(8,0,0) q1_2[1];
cx q1_2[2],q1_2[1];
measure q1_2[0] -> c1_2[0];
measure q1_2[1] -> c1_2[2];
measure q1_2[2] -> c1_2[1];
###Markdown
Below is the result when executing the circuit on the simulator.
###Code
result = Q_program.execute(["qw1_2"], backend='local_qiskit_simulator', shots=1000)
plot_histogram5(result.get_counts("qw1_2"))
###Output
_____no_output_____
###Markdown
And below is the result when executing the circuit on the real device.
###Code
result = Q_program.execute(["qw1_2"], backend='ibmqx4', shots=1000, max_credits=3, wait=10, timeout=1024)
plot_histogram5(result.get_counts("qw1_2"))
###Output
_____no_output_____
###Markdown
**Conclusion**: The walker remains bound at its initial site, the boundary between the two phases, when the quantum walk on the line has two phases. **Quantum walk, phase I on $N=4$ lattice $(t=8)$**
###Code
q1 = Q_program.create_quantum_register("q1", 3)
c1 = Q_program.create_classical_register("c1", 3)
qw1 = Q_program.create_circuit("qw1", [q1], [c1])
t = 8  # evolution time
qw1.x(q1[1])               # start the walker at x = 2 (q[1] is the 2^1 position bit)
qw1.cx(q1[2], q1[1])       # the paired CNOTs implement the e^{-it Y X} terms (Table 1)
qw1.u3(t, 0, 0, q1[2])     # e^{-iY_c t} on the coin qubit q[2]
qw1.cx(q1[2], q1[0])
qw1.u3(t, 0, 0, q1[2])
qw1.cx(q1[2], q1[0])
qw1.cx(q1[2], q1[1])
qw1.u3(2*t, 0, 0, q1[2])   # e^{-2iY_c t}
qw1.measure(q1[0], c1[0])
qw1.measure(q1[1], c1[1])
qw1.measure(q1[2], c1[2])
print(qw1.qasm())
###Output
OPENQASM 2.0;
include "qelib1.inc";
qreg q1[3];
creg c1[3];
x q1[1];
cx q1[2],q1[1];
u3(8,0,0) q1[2];
cx q1[2],q1[0];
u3(8,0,0) q1[2];
cx q1[2],q1[0];
cx q1[2],q1[1];
u3(16,0,0) q1[2];
measure q1[0] -> c1[0];
measure q1[1] -> c1[1];
measure q1[2] -> c1[2];
###Markdown
Below are the results when executing the circuit on the simulator and then on the real device.
###Code
result = Q_program.execute(["qw1"], backend='local_qiskit_simulator')
plot_histogram5(result.get_counts("qw1"))
result = Q_program.execute(["qw1"], backend='ibmqx4', shots=1000, max_credits=3, wait=10, timeout=600)
plot_histogram5(result.get_counts("qw1"))
###Output
_____no_output_____ |
Documentation for Covid Class.ipynb | ###Markdown
Mexico's COVID19 Data Description and Analysis
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as snn
import joblib
from datetime import datetime, timedelta
from collections import OrderedDict
# Data helper modules
from Covid_suite import Covid
from constants import *
database = {'confirmed' : 'Casos_Diarios_Estado_Nacional_Confirmados_20200804.csv',
'suspicious' : 'Casos_Diarios_Estado_Nacional_Sospechosos_20200804.csv',
'negatives' : 'Casos_Diarios_Estado_Nacional_Negativos_20200804.csv',
'deaths' : 'Casos_Diarios_Estado_Nacional_Defunciones_20200804.csv',
'patients' : '200804COVID19MEXICO.csv'}
# Update the database by writing the URLs of the CSVs and running the update_data classmethod:
Covid.update_data(database)
###Output
_____no_output_____
###Markdown
This notebook revolves around the Covid class. Each object of the Covid class corresponds to a Mexican state. Here is a list of the states:
###Code
patients_codes['states']
###Output
_____no_output_____
###Markdown
General methods for Covid objects: `__init__()` The constructor takes just the name of the state: mexico_city = Covid('cdmx')
###Code
mexico_city = Covid('cdmx')
print(mexico_city)
###Output
Data for DISTRITO FEDERAL,
state code: 9,
population: 9018645
###Markdown
population() This returns the population of the state:
###Code
mexico_city.population()
###Output
_____no_output_____
###Markdown
discrete() and cummulative() These are two different formats for the data contained in the 'confirmed', 'suspicious', 'deaths' and 'negatives' databases. Those are the data_type values that you pass to the method: Mexico City confirmed cases in discrete format:
###Code
mexico_city.discrete('confirmed')
###Output
_____no_output_____
###Markdown
Mexico city confirmed cases in cummulative format:
###Code
mexico_city.cummulative('confirmed')
###Output
_____no_output_____
###Markdown
actives() Actives calculates the record of active Covid19 patients. It uses a default infection window of 14 days. Since this process can be slow depending on the hardware, the active data is stored in .pkl files by joblib in the '/tmp' folder, so only the first call takes the full time to calculate.
###Code
mexico_city.actives(window = 14)
###Output
_____no_output_____
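###Markdown
The caching pattern described above might look roughly like this (a hypothetical sketch; the file naming inside the class may differ):
###Code
import os
import joblib

def cached(key, compute, cache_dir='/tmp'):
    # Memoize an expensive computation on disk so only the first call is slow.
    path = os.path.join(cache_dir, '{}.pkl'.format(key))
    if os.path.exists(path):
        return joblib.load(path)   # fast path: reuse the stored result
    result = compute()             # slow path: compute once
    joblib.dump(result, path)
    return result

# e.g. cached('cdmx_actives_14', lambda: Covid('cdmx').actives(window=14))
###Output
_____no_output_____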
###Markdown
patients() This method creates an instance of the inner class **Patients()**; we'll see it later. Plotting general data plot_cummulative() **Data** can be a single instance of cummulative(data_type) or a **list** of several: [ cummulative(data_type1), cummulative(data_type2) ]. If data is a list then **names** should also be a list of strings of the same length. **trim** sets the first day shown on the x-axis, since the first days of data are usually empty.
###Code
Covid.plot_cummulative(data =[Covid('cdmx').cummulative('confirmed'),
Covid('cdmx').cummulative('deaths'),
Covid('cdmx').cummulative('suspicious')],
names =['Confirmed', 'Deaths', 'Suspicious'],
title = 'Mexico city Confirmed and Deaths cases',
trim = 70)
###Output
_____no_output_____
###Markdown
plot_discrete() Same specifications as plot_cummulative().
###Code
Covid.plot_discrete(data =[Covid('cdmx').discrete('confirmed'),
Covid('cdmx').discrete('suspicious'),
Covid('cdmx').discrete('deaths')],
names =['Confirmed', 'Suspicious','Deaths'],
title = 'Mexico city Confirmed and Deaths cases',
trim = 70)
###Output
_____no_output_____
###Markdown
plot_actives() Same specifications as the previous two methods.
###Code
Covid.plot_actives(data =[Covid('cdmx').actives(),
Covid('MEXICO').actives(),
Covid('TABASCO').actives()],
                  names =['MexicoCity', 'MexicoState', 'Tabasco'],
                  title = 'Mexico City, Mexico State and Tabasco active Covid19 patients',
trim = 40)
###Output
_____no_output_____
###Markdown
Getting several databases at once get_max_to_min() This function returns an ordered dictionary with data from all states. **dtype** can be: 'actives', 'deaths', 'suspicious', 'confirmed' or 'negatives'. By default the national data is omitted, since it is much bigger than any other; to include it set **include_national** to True. To return the databases ordered from min to max set **max_to_min** to False.
###Code
deaths_max_to_min = Covid.get_max_to_min('deaths')
deaths_max_to_min
###Output
_____no_output_____
###Markdown
plot_max_to_min() This method plots the max-to-min data through the plot_cummulative() or plot_actives() methods. **n** is the number of states to plot.
###Code
Covid.plot_max_to_min('deaths', n = 16, title = 'States with more deaths',trim = 60)
Covid.plot_max_to_min('actives', n = 16, title = 'States with more active patients of Covid19',trim = 60)
###Output
_____no_output_____
###Markdown
Patients() inner class: This class creates an instance for each state from the patients database:
###Code
mexico_city_patients = Covid('cdmx').patients()
mexico_city_patients.data.head()
###Output
_____no_output_____
###Markdown
Patients objects have filter methods:
###Code
mexico_city_women = Covid('cdmx').patients().women()
mexico_city_man = Covid('cdmx').patients().men()
mexico_city_ages_12_to_45 = Covid('cdmx').patients().age(12,45)
###Output
_____no_output_____
###Markdown
Filters can be chained together: not-infected women alive, from Mexico City, between 20 and 45
###Code
MexicoCity_women_not_infected_alive_between_20_and_45 = Covid('cdmx').patients().women().not_infected().alive().age(start = 20, end = 45)
print(MexicoCity_women_not_infected_alive_between_20_and_45)
MexicoCity_women_not_infected_alive_between_20_and_45.data.age.values[:10]
###Output
31868 Patients data from: DISTRITO FEDERAL
###Markdown
Men dead from Mexico City, between 60 and 80
###Code
MexicoCity_men_dead_between_60_and_80 = Covid('cdmx').patients().men().deaths().age(start = 60, end = 80)
print(MexicoCity_men_dead_between_60_and_80)
MexicoCity_men_dead_between_60_and_80.data.age.values[:10]
###Output
3103 Patients data from: DISTRITO FEDERAL
###Markdown
Patients objects also have descriptor methods: describe() This method returns a pandas DataFrame with the proportions of illness, sex, deaths and infection for the data population
###Code
MexicoCity_women_not_infected_alive_between_20_and_45.describe()
MexicoCity_men_dead_between_60_and_80.describe()
###Output
_____no_output_____
###Markdown
illness() This method creates a DataFrame with just the illness, age and sex columns. **age** is normalized over the whole population. **sex**: 0 == women, 1 == men. For every **illness** column: 0 == not present, 1 == present
###Code
MexicoCity_women_not_infected_alive_between_20_and_45.illness().head()
MexicoCity_men_dead_between_60_and_80.illness().head()
###Output
_____no_output_____
###Markdown
There is also a method to plot the general illness in the entire population:
###Code
Covid('all').patients().plot_illness()
###Output
_____no_output_____
###Markdown
sectors() This method plots the different institutions that take or took care of the subset of patients.
###Code
print('Mexico institutions that took or take care of patients: ')
Covid('all').patients().sectors()
###Output
Mexico institutions that took or take care of patients:
###Markdown
Regressor: So I tried to train a regressor to predict the probability of dying given the illness, age and sex of a particular patient. It didn't work, for the reason that every combination of illness, age and sex found among deceased patients also exists among the survivors, so the features that could predict death from Covid19 are not in the database:
###Code
covid19_deads = Covid('all').patients().deaths().infected().illness().values
covid19_survivors = Covid('all').patients().alive().infected().illness().values
print('Covid19 dead patients: ',len(covid19_deads))
same_data = 0
for patient in covid19_deads:
if patient in covid19_survivors:
same_data += 1
print('Number of survivor patients that has the same data that dead ones: ', same_data)
###Output
Covid19 dead patients: 48869
Number of survivor patients that has the same data that dead ones: 48869
###Markdown
Anyway you can train a classifier just for fun:
###Code
model = Covid.xgboost_classifier()
###Output
Starting preprocess of X_train, X_test, y_train, y_test...
###############
Preprocess done...
###############
X_train: 687616
X_test: 338678
Deaths on train set: 42193
Deaths on test set: 20755
###############
###############
Training the model...
###############
Predicting the Test set:
###############
Confusion Matrix:
[ Alive Dead]
[[317741 182]
[ 20668 87]]
###############
Calculating the ROC curves:
###############
No Skill: ROC AUC=0.500
Model: ROC AUC=0.851
|
recombination_problem.ipynb | ###Markdown
Physics 112: Section 1: Professor Holzapfel HW8, Computational problem

Saha Equation and Recombination of the Early Universe

Copyright 2021 by the Regents of the University of California. All rights reserved.

In this problem we explore the equilibrium ionization fraction of hydrogen as a function of temperature and density and apply it to the study of recombination in the early universe. As shown in Problem 1 of HW8, the Saha equation that we derived in class considering only the ground state of the hydrogen atom is applicable over a wide range of temperatures and densities. The Saha equation relates the equilibrium ionization fraction $x=n_e/(n_p+n_H) = n_e/n_t$ to the temperature $\tau$ and total density $n_t$ as$$\frac{x^2}{1-x}=\frac{n_q}{n_t} \exp\left(-\frac{\epsilon_0}{\tau}\right),$$ where $n_q=(m_e\tau/2\pi\hbar^2)^{3/2}$ and $\epsilon_0=13.6\,$eV. Solving this quadratic equation for x will produce two solutions, only one of which will be positive. The positive solution is the physically meaningful one for this problem. You will have to take care in solving the quadratic equation for low density and high temperature, where numerical precision is critical.

a) Plot the ionization fraction as a function of temperature over the range $1,000K<T<20,000K$ for total hydrogen densities of $n=10^3, 10^9, 10^{15}, {\rm and}\, 10^{21}\, {\rm cm}^{-3}$. Note the relatively abrupt change from neutral to fully ionized and how the temperature of this transition changes with density.
###Code
#a) Solution Here
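# One possible approach (a sketch, not the official solution); the constants in
# eV-based units below are assumptions, and the quadratic is rewritten to stay
# numerically stable when the exponential factor is very large or very small.
import numpy as np
import matplotlib.pyplot as plt

k_B = 8.617333e-5      # Boltzmann constant [eV/K]
eps0 = 13.6            # hydrogen binding energy [eV]
m_e_c2 = 5.10999e5     # electron rest energy [eV]
hbar_c = 1.973269e-5   # hbar * c [eV cm]

def ionization_fraction(T, n_t):
    # Solve x**2/(1 - x) = A; the positive root of x^2 + A x - A = 0 is
    # rewritten as 2/(1 + sqrt(1 + 4/A)) to avoid cancellation.
    tau = k_B * T
    n_q = (m_e_c2 * tau / (2.0 * np.pi * hbar_c**2))**1.5  # quantum concentration [cm^-3]
    A = (n_q / n_t) * np.exp(-eps0 / tau)
    return 2.0 / (1.0 + np.sqrt(1.0 + 4.0 / A))

T = np.linspace(1e3, 2e4, 500)
for n_t in [1e3, 1e9, 1e15, 1e21]:
    plt.plot(T, ionization_fraction(T, n_t), label='n = %.0e cm^-3' % n_t)
plt.xlabel('T [K]'); plt.ylabel('ionization fraction x'); plt.legend(); plt.show()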
###Output
_____no_output_____
###Markdown
b) Low density gases become ionized at temperatures much lower than you might have naively anticipated from the electron binding energy, $\frac{\epsilon_0}{k_b} = 157,800\,$K. If we define ionization as occurring when the hydrogen gas is 50% ionized, make a plot of the ionization temperature as a function of the total hydrogen density over the range $1\,{\rm cm}^{-3}<n_t<10^{21}{\rm cm}^{-3}$.
###Code
#b) Solution Here
###Output
_____no_output_____
###Markdown
c) The early Universe is a hot and dense plasma. As the Universe ages, it expands and cools. When the Universe reaches a critical combination of density and temperature, the plasma combines to form a neutral gas of mostly hydrogen. In the study of cosmology, distances are parameterized in terms of the redshift, $z = \frac{\lambda_{obs}-\lambda_{emit}}{\lambda_{emit}}$, where $\lambda_{emit}$ is the wavelength of light originally emitted by a distant source and $\lambda_{obs}$ is the observed wavelength. At a redshift $z$, the Universe was a factor of $(1+z)$ smaller than it is today. The temperature of the Universe was higher, $T(z) = T_{0}(1+z)$, where $T_{0}=T_{CMB}=2.73\,$K is the average temperature of the Universe today. The density of hydrogen changes with redshift as $n_t(z) = n_0 (1+z)^3$ where $n_0\sim 1\,{\rm cm}^{-3}$ is the current average density of the Universe. Using the scaling of the temperature and density with redshift, plot the ionization fraction of the Universe as a function of redshift z.

d) If we define recombination as occurring when the ionization fraction drops below 50%, solve for the redshift where this occurs. Use the value of this redshift to determine the temperature and density of the Universe at the time (redshift) of recombination. The detailed solution of this problem is more involved because the Universe contains helium (~25% of mass) as well as hydrogen, and is evolving in time and therefore can not be considered to be in thermal equilibrium. Nonetheless, the simple treatment here gives results that are roughly correct.
###Code
#d) Solution Here
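# One possible approach (a sketch): find z where x(T(z), n(z)) = 0.5 with a
# root finder, reusing ionization_fraction from the part (a) sketch. T0 and n0
# are the values quoted in the problem statement.
from scipy.optimize import brentq

T0, n0 = 2.73, 1.0     # K, cm^-3
f = lambda z: ionization_fraction(T0 * (1.0 + z), n0 * (1.0 + z)**3) - 0.5
z_rec = brentq(f, 500.0, 3000.0)   # bracket chosen by inspecting the part (c) plot
print('z_rec ~ %.0f -> T ~ %.0f K, n_t ~ %.2e cm^-3'
      % (z_rec, T0 * (1.0 + z_rec), n0 * (1.0 + z_rec)**3))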
###Output
_____no_output_____ |
IntroDataScience/ejercicios/19/tsne_digitos.ipynb | ###Markdown
t-SNE

t-SNE is short for t-distributed Stochastic Neighbor Embedding. t-SNE reduces the dimensionality of a dataset: it maps points that are close (in the Euclidean sense) in high dimension to points that are close in two dimensions. The algorithm is stochastic, so different initializations will give different results. A good place to start learning about the details is the page of one of the creators of the algorithm: https://lvdmaaten.github.io/tsne/

The learning goals of this notebook are:
- Learn to use the scikit-learn implementation of t-SNE.
- Visualize the results of t-SNE.
- Apply k-means clustering to the t-SNE results.

Training dataset

We will use the `sklearn.datasets.load_digits()` dataset, which contains 1797 handwritten digits represented as 8-by-8-pixel images. That is, each element lives in 64 dimensions. The goal of applying t-SNE is to see whether 10 clusters corresponding to the individual digits actually show up. We start by loading the data. We will call the images `X` and the integers indicating which digit each image corresponds to `Y`.
###Code
# imports (added here: the notebook uses numpy, matplotlib and scikit-learn throughout)
import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets, sklearn.manifold, sklearn.cluster

numeros = sklearn.datasets.load_digits()
imagenes = numeros['images'] # there are 1797 digits represented as 8x8 images
n_imagenes = len(imagenes)
X = imagenes.reshape((n_imagenes, -1)) # to get the data back as images just do data.reshape((n_imagenes, 8, 8))
Y = numeros['target']
print(np.shape(X), np.shape(Y))
###Output
(1797, 64) (1797,)
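###Markdown
Since the algorithm is stochastic (as noted above), fixing `random_state` makes runs reproducible; a minimal sketch:
###Code
# Two runs with the same random_state yield the same embedding.
tsne_reproducible = sklearn.manifold.TSNE(perplexity=20, random_state=0)
###Output
_____no_output_____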
###Markdown
Training the algorithm. We initialize the TSNE object. The most relevant parameter is `perplexity`. It is equivalent to changing the number of nearest neighbors the algorithm uses to define its local neighborhood. It is typically recommended to vary it between 5 and 50.
###Code
tsne = sklearn.manifold.TSNE(perplexity=20)
###Output
_____no_output_____
###Markdown
Now we run the learning phase
###Code
tsne.fit(X)
###Output
_____no_output_____
###Markdown
Visualizing the results. We extract the representation of the data in the two-dimensional space
###Code
embedding = tsne.embedding_
###Output
_____no_output_____
###Markdown
Finally we plot the data in its new representation. To visualize the relationship between the new location of the data and its original label, we use colors.
###Code
plt.scatter(embedding[:,0], embedding[:,1], c=Y, cmap='Paired', s=1.0)
plt.colorbar(boundaries=np.arange(11)-0.5).set_ticks(np.arange(10))
###Output
_____no_output_____
###Markdown
Indeed, we see that images corresponding to the same digits now lie in neighboring groups. **It is important to note that points close together within a cluster are similar, but points in nearby clusters are not necessarily similar.** With this in mind, we now apply k-means clustering to the data in its new representation, looking for 10 clusters. k-means clustering on the results
###Code
# cluster the t-SNE results
n_clusters = 10
k_means = sklearn.cluster.KMeans(n_clusters=n_clusters)
k_means.fit(embedding) # training
cluster = k_means.predict(embedding) # predicts which cluster each element belongs to
distance = k_means.transform(embedding) # computes the distance from each element to every cluster center
###Output
_____no_output_____
###Markdown
Now we plot the points colored by the cluster predicted by k-means clustering
###Code
plt.scatter(embedding[:,0], embedding[:,1], c=cluster, cmap='Paired', s=1.0)
plt.colorbar(boundaries=np.arange(11)-0.5).set_ticks(np.arange(10))
###Output
_____no_output_____
###Markdown
Visualizing images in the same cluster. To finish, we plot 10 example images from each of the clusters
###Code
plt.figure(figsize=(10,12))
for i in range(n_clusters):
    ii = np.argsort(distance[:,i]) # indices ordered from smallest to largest distance to cluster i's center
n_seq = 10
for l, ind in enumerate(ii[:n_seq]):
plt.subplot(n_clusters,n_seq,i*n_seq +l +1)
plt.imshow(imagenes[ind].reshape(8,8))
plt.title("D = {:.2f}".format(distance[ind,i]))
plt.axis('off')
###Output
_____no_output_____ |
source/AML-classifier_final.ipynb | ###Markdown
Acute Myeloid Leukemia classification using flow cytometry data

Note:
- Raw data has been preprocessed with R.
- 280 features have been engineered.
- Data was subsampled to balance the case and control groups.

Basic Imports
###Code
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn >= 0.20
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import export_graphviz
from sklearn.svm import SVC, LinearSVC
from sklearn import metrics
from sklearn.metrics import accuracy_score
from sklearn import preprocessing
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
from ggplot import *
%pylab inline
###Output
_____no_output_____
###Markdown
Load the data prepared with R
###Code
data = pd.read_table("down_data2.txt", delim_whitespace=True)
data1 = pd.read_table("down_data1.txt", delim_whitespace=True)
data2 = pd.read_table("down_data2.txt", delim_whitespace=True)
data3 = pd.read_table("down_data3.txt", delim_whitespace=True)
data4 = pd.read_table("down_data4.txt", delim_whitespace=True)
data5 = pd.read_table("down_data5.txt", delim_whitespace=True)
data = data.drop("SampleNumber", axis=1)
X = data.drop("Class", axis=1)
Y = data["Class"]
Y = pd.get_dummies(Y).aml
###Output
_____no_output_____
###Markdown
Split the dataset in training and testing
###Code
X_train, X_test, Y_train, Y_test = train_test_split(
X, Y, test_size=0.6, random_state=0)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred_log = logreg.predict(X_test)
logreg_accur=accuracy_score(Y_test, Y_pred_log)
logreg.score(X_train, Y_train)
logreg_accur
###Output
_____no_output_____
###Markdown
Support Vector Machines
###Code
## Scaling data for SVM
X_train_scaled = preprocessing.scale(X_train)
X_test_scaled = preprocessing.scale(X_test)
svc = SVC(probability=True)
svc.fit(X_train_scaled, Y_train)
Y_pred_svc = svc.predict(X_test_scaled)
svc.score(X_train_scaled, Y_train)
svc_accur = accuracy_score(Y_test, Y_pred_svc)
svc_accur
###Output
_____no_output_____
###Markdown
Random Forests
###Code
random_forest = RandomForestClassifier(n_estimators=15, random_state=0)
random_forest.fit(X_train, Y_train)
Y_pred_rf = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
rf_accur = accuracy_score(Y_test, Y_pred_rf)
rf_accur
###Output
_____no_output_____
###Markdown
Findings: * According to balanced accuracy, SVM and random forest perform much better than logistic regression. * Random forest performs slightly better than SVM. * Although SVM performs comparably well, the random forest method was chosen as the final model because it is easier to interpret and doesn't require a data scaling step. In addition, random forests naturally guard against overfitting, which could be a potential issue when the number of features is larger than the number of observations.
###Code
plt.figure(0).clf()
preds = logreg.predict_proba(X_test)[:,1]
fpr, tpr, _ = metrics.roc_curve(Y_test, preds)
auc = metrics.roc_auc_score(Y_test, preds)
df_lr = pd.DataFrame(dict(fpr=fpr, tpr=tpr))
plt.plot(fpr,tpr,label="Logistic Regression, auc= %0.3f" %(auc))
preds = random_forest.predict_proba(X_test)[:,1]
fpr, tpr, _ = metrics.roc_curve(Y_test, preds)
auc = metrics.roc_auc_score(Y_test, preds)
df_rf = pd.DataFrame(dict(fpr=fpr, tpr=tpr))
plt.plot(fpr,tpr,label="Random Forest, auc= %0.3f" %(auc))
preds = svc.predict_proba(X_test_scaled)[:, 1]
fpr, tpr, thresholds = metrics.roc_curve(Y_test, preds)
auc = metrics.roc_auc_score(Y_test, preds)
df_svc = pd.DataFrame(dict(fpr=fpr, tpr=tpr))
plt.plot(fpr,tpr,label="SVM, auc= %0.3f" %(auc))
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.legend(loc=0)
###Output
_____no_output_____
###Markdown
Discussion * ROC curves indicate that SVM and random forest perform better than logistic regression. * Comment: the ROC curves are not smooth because of the small sample size.
###Code
accuracy = [logreg_accur, rf_accur, svc_accur]
objects = ('Logistic Regression', 'SVM', 'Random Forest' )
y_pos = np.arange(len(objects))
bar_width = 0.35
plt.bar(y_pos, accuracy, align='center', alpha=0.5, color=['b', 'g', 'r'] )
plt.xticks(y_pos, objects)
plt.ylabel('Accuracy score')
plt.show()
###Output
_____no_output_____
###Markdown
An accuracy function to get the scores from LR, SVM and RF
###Code
def accurcy(data):
#prepared data for analysis
data = data.drop("SampleNumber", axis=1)
X = data.drop("Class", axis=1)
Y = data["Class"]
Y = pd.get_dummies(Y).aml
# Split the dataset in two equal parts
X_train, X_test, Y_train, Y_test = train_test_split(
X, Y, test_size=0.6, random_state=0)
#Logestic regression
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred_log = logreg.predict(X_test)
logreg_accur=accuracy_score(Y_test, Y_pred_log)
##Support Vector Machines
## Scaling data for SVM
X_train_scaled = preprocessing.scale(X_train)
X_test_scaled = preprocessing.scale(X_test)
svc = SVC(probability=True)
svc.fit(X_train_scaled, Y_train)
Y_pred_svc = svc.predict(X_test_scaled)
svc_accur = accuracy_score(Y_test, Y_pred_svc)
# Random Forests
random_forest = RandomForestClassifier(n_estimators=15, random_state=0)
#random_forest = RandomForestClassifier(n_estimators=100, criterion='entropy', max_depth=10, max_features='sqrt', min_samples_split=5)
random_forest.fit(X_train, Y_train)
Y_pred_rf = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
rf_accur = accuracy_score(Y_test, Y_pred_rf)
return [(logreg_accur, svc_accur, rf_accur)]
###Output
_____no_output_____
###Markdown
To ensure that the developed model generalizes, subsampling has been repeated 5 times.
###Code
# method comparison with 5 random subsamplings
methodcom = accurcy(data1) + accurcy(data2) + accurcy(data3) + accurcy(data4) + accurcy(data5)
methodcom = pd.DataFrame(methodcom, columns = ["Logistic Regression", "SVM", "Random Forest"])
means = methodcom.mean()*100
errors = methodcom.std()*100
ax = (means.plot(yerr=errors, kind='bar', color=['b', 'g', 'r'], rot=75,
error_kw=dict(ecolor='black', lw=2, capsize=10, capthick=2)))
ax.set_ylabel("Accurcy score (%)", fontsize=16)
ax.set_ylim(50,105)
#plt.axhline(y=91, color='g', linestyle='--')
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label] +
ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(16)
rects = ax.patches
# make labels
labels = round(means,1)
for rect, label, error in zip(rects, labels, errors):
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2, height + 5, label, ha='center', va='bottom')
###Output
_____no_output_____
###Markdown
Comments * Error bars indicate the standard deviation of the mean accuracy over the 5 randomizations. List important features
###Code
importances = random_forest.feature_importances_
indices = np.argsort(importances)[::-1]
feature_labels = X_train.columns
# Top 5 important features
xrange=range(5)
topimportances = importances[indices][:5]
topfeatures = feature_labels[indices][:5]
topfeatures
# hardcoded feature labels for plotting (intended to match the computed topfeatures above)
topfeatures = ['CD16.SD', 'CD16.median', 'SSC.mean', 'CD45.skew',
               'CD10.median']
###Output
_____no_output_____
###Markdown
Top 5 important features
###Code
matplotlib.rcParams.update({'font.size': 16})
plt.figure()
plt.title("Feature importances")
plt.bar(xrange, topimportances,
color=['r', 'r', 'b', 'b', 'b'], align="center")
plt.xticks(xrange, topfeatures, rotation=60)
plt.ylabel('Importance')
plt.show()
###Output
_____no_output_____
###Markdown
The top 2 most important features are related to CD16. The CD16 biomarker information was detected in test D. The next question I asked is whether I could use less biomarker information to classify AML.
###Code
X_train_testD = X_train_scaled[ :, 105:140]
X_train_CD16 = X_train_scaled[ :, 131:135]
X_train_CD34 = X_train_scaled[ :, 200:205]
X_test_testD = X_test_scaled[ :, 105:140]
X_test_CD16 = X_test_scaled[ :, 131:135]
X_test_CD34 = X_test_scaled[ :, 200:205]
# Random Forests with test D
random_forest_testD = RandomForestClassifier(n_estimators=15, random_state=0)
random_forest.fit(X_train_testD, Y_train)
Y_pred_rf_testD = random_forest.predict(X_test_testD)
random_forest.score(X_train_testD, Y_train)
rf_accur_testD = accuracy_score(Y_test, Y_pred_rf_testD)
rf_accur_testD
# Random Forests with CD16 features
random_forest_CD16 = RandomForestClassifier(n_estimators=15, random_state=0)
random_forest.fit(X_train_CD16, Y_train)
Y_pred_rf_CD16 = random_forest.predict(X_test_CD16)
random_forest.score(X_train_CD16, Y_train)
rf_accur_CD16 = accuracy_score(Y_test, Y_pred_rf_CD16)
rf_accur_CD16
# Random Forests with CD34 features
random_forest_CD34 = RandomForestClassifier(n_estimators=15, random_state=0)
random_forest.fit(X_train_CD34, Y_train)
Y_pred_rf_CD34 = random_forest.predict(X_test_CD34)
random_forest.score(X_train_CD34, Y_train)
rf_accur_CD34 = accuracy_score(Y_test, Y_pred_rf_CD34)
rf_accur_CD34
confusion_matrix(Y_test, Y_pred_rf_testD)
###Output
_____no_output_____
###Markdown
Weighted Random Forest Besides using subsampling to balance the dataset, another option is a weighted random forest model.
###Code
alldata = pd.read_table("alldataprep2.txt", delim_whitespace=True)
alldata = alldata.drop("SampleNumber", axis=1)
X = alldata.drop("Label", axis=1)
Y = alldata["Label"]
Y = pd.get_dummies(Y).aml
# Split the dataset in two equal parts
X_train, X_test, Y_train, Y_test = train_test_split(
X, Y, test_size=0.6, random_state=100)
# Random Forests
random_forest = RandomForestClassifier(n_estimators=15, random_state=0)
random_forest.fit(X_train, Y_train)
Y_pred_rf = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
rf_accur = accuracy_score(Y_test, Y_pred_rf)
rf_accur
# Random Forests with weight using all data
random_forest = RandomForestClassifier(n_estimators=100, random_state=10, class_weight="balanced")
random_forest.fit(X_train, Y_train)
Y_pred_rf = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
rf_accur = accuracy_score(Y_test, Y_pred_rf)
rf_accur
confusion_matrix(Y_test, Y_pred_rf)
pd.crosstab(Y_test, Y_pred_rf, rownames=['True'], colnames=['Predicted'], margins=True)
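# Added illustration: derive sensitivity and specificity from the confusion matrix
# (rows = true class [alive, dead], columns = predicted class).
tn, fp, fn, tp = confusion_matrix(Y_test, Y_pred_rf).ravel()
print('sensitivity = %.3f, specificity = %.3f' % (tp / (tp + fn), tn / (tn + fp)))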
###Output
_____no_output_____
###Markdown
Comments: The sensitivity and specificity are 90% and 98.4% for the weighted random forest model. Cross validation: "By default random forest picks up 2/3rd data for training and rest for testing for regression and almost 70% data for training and rest for testing during classification. By principle since it randomizes the variable selection during each tree split it's not prone to overfit unlike other models".
###Code
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation was removed in scikit-learn >= 0.20
random_forest = RandomForestClassifier(n_estimators=15, random_state=0, class_weight="balanced")
scores = cross_val_score(random_forest, X, Y, cv=5)
scores
# use average accuracy as an estimate of out-of-sample accuracy
print(scores.mean())
###Output
_____no_output_____ |
docs/contents/Quick_Guide.ipynb | ###Markdown
Other output forms can be specified with the input argument `to_form`
###Code
q2 = puw.standardize(q, to_form='simtk.unit')
print('q2 is now a {} quantity expressed in {}.'.format(puw.get_form(q2), puw.get_unit(q2)))
###Output
q2 is now a simtk.unit quantity expressed in 0.0019999999999999996 nm.
###Markdown
As you may have noticed, we mentioned that `pyunitwizard.get_standard_units` and `pyunitwizard.standardize` return results in the compatible default standard units. Combinations of the standard units are also considered:
###Code
q = puw.quantity('100 angstroms**3')
print('The standard of q is:', puw.get_standard_units(q))
q = puw.quantity('3000.0 pm/ns')
print('The standard of q is:', puw.get_standard_units(q))
q = puw.quantity('1.4 kJ/mol')
print('The standard of q is:', puw.get_standard_units(q))
###Output
The standard of q is: kilocalorie / mole
###Markdown
Again, and finally, `pyunitwizard.standardize` can help you to produce homogeneous outputs in your library:
###Code
q = puw.quantity(1.4, 'kJ/mol', form='simtk.unit')
output = puw.standardize(q)
print('{} as {} quantity'.format(output, puw.get_form(output)))
###Output
0.3346080305927342 kilocalorie / mole as pint quantity
###Markdown
Quick Guide *-Brief tutorial for those in a hurry-* There are several Python libraries to work with physical quantities, such as pint, unyt or openmm.unit. Now imagine that your project requires interaction with different tools, and those tools don't operate with the same physical-quantity objects. Wouldn't having a library with a unique API to work with different forms of physical quantities be a relief? PyUnitWizard allows you to work with more than one physical units library in Python -such as pint, unyt or openmm.unit- with a unique API. PyUnitWizard works as the man in the middle between your code and those libraries. Import PyUnitWizard and choose the libraries you are going to work with.
###Code
import pyunitwizard as puw
puw.configure.get_libraries_supported()
puw.configure.get_libraries_found()
puw.configure.load_library(['pint', 'openmm.unit'])
puw.configure.get_default_form()
puw.configure.get_default_parser()
###Output
_____no_output_____
###Markdown
Let's play a bit with quantities and units. Let's make a quantity from its value and unit name:
###Code
q = puw.quantity(2.5, 'nanometers/picoseconds')
q
###Output
_____no_output_____
###Markdown
We can check that **q** is indeed a Pint quantity:
###Code
puw.is_quantity(q)
puw.get_form(q)
###Output
_____no_output_____
###Markdown
Let's now extract its value and units:
###Code
puw.get_value(q)
puw.get_unit(q)
puw.dimensionality(q)
###Output
_____no_output_____
###Markdown
We can now translate **q** from Pint to openmm.unit:
###Code
q2 = puw.convert(q, to_form='openmm.unit')
puw.get_form(q2)
q2
###Output
_____no_output_____
###Markdown
Finally, let's convert `q2` into other units:
###Code
q3 = puw.convert(q2, to_unit='angstroms/picoseconds')
print('{} was converted to angstroms/picoseconds as {}'.format(q2, q3))
puw.dimensionality(q3)
puw.compatibility(q, q3)
###Output
_____no_output_____
###Markdown
Let's now make a unit from its name or symbol:
###Code
u = puw.unit('kJ/mol', form='openmm.unit')
u
###Output
_____no_output_____
###Markdown
We can check that `u` is indeed an openmm.unit unit.
###Code
puw.get_form(u)
puw.is_unit(u)
puw.dimensionality(u)
###Output
_____no_output_____
###Markdown
Units and quantities can be turned into strings:
###Code
puw.convert(u, to_form='string')
###Output
_____no_output_____
###Markdown
Quantities and units can also be created from algebraic expressions mixing values and units:
###Code
q = puw.convert('3.5N/(2.0nm**2)', to_form='openmm.unit')
q
puw.convert(q, to_form='string')
u = puw.convert('K', to_form='pint', to_type='unit')
u
puw.convert(u, to_form='string')
###Output
_____no_output_____
###Markdown
The default quantity form: PyUnitWizard takes the first unit library loaded as the default quantity form:
###Code
puw.configure.get_libraries_loaded()
puw.configure.get_default_form()
###Output
_____no_output_____
###Markdown
The default form is used when a method is invoked without specifying the quantity or unit form:
###Code
q1 = puw.convert('3.5N/(2.0nm**2)')
q2 = puw.quantity(300.0, 'kelvin')
print('q1 is a {} quantity'.format(puw.get_form(q1)))
print('q2 is a {} quantity'.format(puw.get_form(q2)))
###Output
q1 is a pint quantity
q2 is a pint quantity
###Markdown
The default form can be changed with the following method:
###Code
puw.configure.set_default_form('openmm.unit')
q1 = puw.convert('3.5N/(2.0nm**2)')
q2 = puw.quantity(300.0, 'kelvin')
print('q1 is a {} quantity'.format(puw.get_form(q1)))
print('q2 is a {} quantity'.format(puw.get_form(q2)))
###Output
q1 is a openmm.unit quantity
q2 is a openmm.unit quantity
###Markdown
The standards PyUnitWizard includes the possibility to define standard units for your library or Python script. Let's suppose your quantities will always be expressed in 'nm', 'ps' and 'kcal/mol' as Pint quantities. These next two lines set this choice as the default standards and form:
###Code
puw.configure.set_standard_units(['nm', 'ps', 'kcal', 'mole'])
puw.configure.set_default_form('pint')
###Output
_____no_output_____
###Markdown
We can check that these values were indeed stored:
###Code
puw.configure.get_standard_units()
puw.configure.get_default_form()
###Output
_____no_output_____
###Markdown
The method `pyunitwizard.get_standard_units()` shows the standardized compatible units of a quantity:
###Code
q = puw.quantity('2.0 pm', form='openmm.unit')
puw.get_standard_units(q)
print('The standard of q is:', puw.get_standard_units(q))
###Output
The standard of q is: nm
###Markdown
And the method `pyunitwizard.standardize()` converts and translates the input quantity into the defined default standard compatible units and form:
###Code
q2 = puw.standardize(q)
print('q2 is now a {} quantity expressed in {}.'.format(puw.get_form(q2), puw.get_unit(q2)))
###Output
q2 is now a pint quantity expressed in 0.0019999999999999996 nanometer.
###Markdown
Other output forms can be specified with the input argument `to_form`
###Code
q2 = puw.standardize(q, to_form='openmm.unit')
print('q2 is now a {} quantity expressed in {}.'.format(puw.get_form(q2), puw.get_unit(q2)))
###Output
q2 is now a openmm.unit quantity expressed in 0.0019999999999999996 nm.
###Markdown
As you may have noticed, we mentioned that `pyunitwizard.get_standard_units` and `pyunitwizard.standardize` return results in the compatible default standard units. Combinations of the standard units are also considered:
###Code
q = puw.quantity('100 angstroms**3')
print('The standard of q is:', puw.get_standard_units(q))
q = puw.quantity('3000.0 pm/ns')
print('The standard of q is:', puw.get_standard_units(q))
q = puw.quantity('1.4 kJ/mol')
print('The standard of q is:', puw.get_standard_units(q))
###Output
The standard of q is: kilocalorie / mole
###Markdown
Again, and finally, `pyunitwizard.standardize` can help you to produce homogeneous outputs in your library:
###Code
q = puw.quantity(1.4, 'kJ/mol', form='openmm.unit')
output = puw.standardize(q)
print('{} as {} quantity'.format(output, puw.get_form(output)))
###Output
0.3346080305927342 kilocalorie / mole as pint quantity
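###Markdown
In your own code, a tiny wrapper can make any public function return standardized quantities (a hypothetical sketch; `to_standard` is a made-up name):
###Code
def to_standard(quantity):
    # Accept a quantity in any supported form/unit and return the standardized one.
    return puw.standardize(quantity)

to_standard(puw.quantity(2.0, 'kJ/mol', form='openmm.unit'))
###Output
_____no_output_____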
###Markdown
Quick Guide *-Brief tutorial for those in a hurry-* Evidence allows you to work with values annotated with the bibliographic references that support them (DOI, PubMed, UniProtKB, PDB, ...), and to compare and merge such annotated data by value and provenance.
###Code
import evidence as evi
datum1 = evi.Evidence(3.12)
datum1.add_reference({'database':'DOI', 'id':'AAA'})
datum1.add_reference({'database':'PubMed', 'id':'BBB'})
datum1
print(datum1)
datum1.value
datum1.references
type(datum1.references[0])
datum1.references[0].database
datum1.references[0].id
datum2 = evi.Evidence(3.12)
datum2.add_reference({'database':'PubMed', 'id':'BBB'})
datum3 = evi.Evidence(3.12)
datum3.add_reference({'database':'PubMed', 'id':'CCC'})
datum4 = evi.Evidence(3.12)
datum4.add_reference({'database':'UniProtKB', 'id':'DDD'})
datum4.add_reference({'database':'PDB', 'id':'EEE'})
evi.identity(datum1, datum1)
evi.identity(datum1, datum2)
evi.is_subset(datum2, datum1)
evi.is_subset(datum1, datum2)
evi.same_value([datum1, datum2, datum3, datum4])
datum = evi.join([datum1, datum2, datum3, datum4])
datum
###Output
_____no_output_____
###Markdown
Quick Guide *-Brief tutorial for those in a hurry-* There are several Python libraries to work with physical quantities, such as pint, unyt or simtk.unit. Now imagine that your project requires interaction with different tools, and those tools don't operate with the same physical-quantity objects. Wouldn't having a library with a unique API to work with different forms of physical quantities be a relief? PyUnitWizard allows you to work with more than one physical units library in Python -such as pint, unyt or simtk.unit- with a unique API. PyUnitWizard works as the man in the middle between your code and those libraries. Import PyUnitWizard and choose the libraries you are going to work with.
###Code
import pyunitwizard as puw
puw.configure.get_libraries_supported()
puw.configure.get_libraries_found()
puw.configure.load_library(['pint', 'simtk.unit'])
puw.configure.get_default_form()
puw.configure.get_default_parser()
###Output
_____no_output_____
###Markdown
Let's play a bit with quantities and units. Let's make a quantity from its value and unit name:
###Code
q = puw.quantity(2.5, 'nanometers/picoseconds')
q
###Output
_____no_output_____
###Markdown
We can check that **q** is indeed a Pint quantity:
###Code
puw.is_quantity(q)
puw.get_form(q)
###Output
_____no_output_____
###Markdown
Let's now extract its value and units:
###Code
puw.get_value(q)
puw.get_unit(q)
puw.dimensionality(q)
###Output
_____no_output_____
###Markdown
We can now translate **q** from Pint to simtk.unit:
###Code
q2 = puw.convert(q, to_form='simtk.unit')
puw.get_form(q2)
q2
###Output
_____no_output_____
###Markdown
Finally, let's convert `q2` into other units:
###Code
q3 = puw.convert(q2, to_unit='angstroms/picoseconds')
print('{} was converted to angstroms/picoseconds as {}'.format(q2, q3))
puw.dimensionality(q3)
puw.compatibility(q, q3)
###Output
_____no_output_____
###Markdown
Let's now make a unit from its name or symbol:
###Code
u = puw.unit('kJ/mol', form='simtk.unit')
u
###Output
_____no_output_____
###Markdown
We can check that `u` is indeed a simtk.unit unit.
###Code
puw.get_form(u)
puw.is_unit(u)
puw.dimensionality(u)
###Output
_____no_output_____
###Markdown
Units and quantities can be turned into strings:
###Code
puw.convert(u, to_form='string')
###Output
_____no_output_____
###Markdown
Quantities and units can also be created from algebraic expressions mixing values and units:
###Code
q = puw.convert('3.5N/(2.0nm**2)', to_form='simtk.unit')
q
puw.convert(q, to_form='string')
u = puw.convert('K', to_form='pint', to_type='unit')
u
puw.convert(u, to_form='string')
###Output
_____no_output_____
###Markdown
The default quantity form: PyUnitWizard takes the first unit library loaded as the default quantity form:
###Code
puw.configure.get_libraries_loaded()
puw.configure.get_default_form()
###Output
_____no_output_____
###Markdown
The default form is used when a method is invoked without specifying the quantity or unit form:
###Code
q1 = puw.convert('3.5N/(2.0nm**2)')
q2 = puw.quantity(300.0, 'kelvin')
print('q1 is a {} quantity'.format(puw.get_form(q1)))
print('q2 is a {} quantity'.format(puw.get_form(q2)))
###Output
q1 is a pint quantity
q2 is a pint quantity
###Markdown
The default form can be changed with the following method:
###Code
puw.configure.set_default_form('simtk.unit')
q1 = puw.convert('3.5N/(2.0nm**2)')
q2 = puw.quantity(300.0, 'kelvin')
print('q1 is a {} quantity'.format(puw.get_form(q1)))
print('q2 is a {} quantity'.format(puw.get_form(q2)))
###Output
q1 is a simtk.unit quantity
q2 is a simtk.unit quantity
###Markdown
The standards PyUnitWizard includes the possibility to define standard units for your library or Python script. Let's suppose your quantities will always be expressed in 'nm', 'ps' and 'kcal/mol' as Pint quantities. These next two lines set this choice as the default standards and form:
###Code
puw.configure.set_standard_units(['nm', 'ps', 'kcal', 'mole'])
puw.configure.set_default_form('pint')
###Output
_____no_output_____
###Markdown
We can check that these values were indeed stored:
###Code
puw.configure.get_standard_units()
puw.configure.get_default_form()
###Output
_____no_output_____
###Markdown
The method `pyunitwizard.get_standard_units()` shows the standardized compatible units of a quantity:
###Code
q = puw.quantity('2.0 pm', form='simtk.unit')
puw.get_standard_units(q)
print('The standard of q is:', puw.get_standard_units(q))
###Output
The standard of q is: nm
###Markdown
And the method `pyunitwizard.standardize()` converts and translates the input quantity into the defined default standard compatible units and form:
###Code
q2 = puw.standardize(q)
print('q2 is now a {} quantity expressed in {}.'.format(puw.get_form(q2), puw.get_unit(q2)))
###Output
q2 is now a pint quantity expressed in 0.0019999999999999996 nanometer.
|
tutorials/01-basics/pytorch_basics/PyTorch Basics.ipynb | ###Markdown
Getting started with PyTorch. The introductory material comes from two sources. The first is the official tutorials: [WELCOME TO PYTORCH TUTORIALS](https://pytorch.org/tutorials/), especially the official sixty-minute introduction [DEEP LEARNING WITH PYTORCH: A 60 MINUTE BLITZ](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html). The second is the repo of Yunjey Choi: [pytorch-tutorial](https://github.com/yunjey/pytorch-tutorial), whose code is clean and tidy. **Purpose**: I moved the Python code of Yunjey's tutorial directly into a Jupyter Notebook, so that the run results can be seen and so that comments and links to related material can be added, which makes later reference convenient. By the way, my PyTorch version is **0.4.1**
###Code
import torch
print(torch.__version__)
###Output
0.4.1
###Markdown
PyTorch Basics (1): PyTorch fundamentals **[Reference code](https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/01-basics/pytorch_basics/main.py)**
###Code
# Packages
import torch
import torchvision
import torch.nn as nn
import numpy as np
import torchvision.transforms as transforms
###Output
_____no_output_____
###Markdown
autograd (automatic differentiation): basic example 1
###Code
# Create tensors.
x = torch.tensor(1., requires_grad=True)
w = torch.tensor(2., requires_grad=True)
b = torch.tensor(3., requires_grad=True)
# Build a computational graph: forward pass.
y = w * x + b # y = 2 * x + 3
# Backpropagation: compute the gradients.
y.backward()
# Print the gradients.
print(x.grad) # x.grad = 2
print(w.grad) # w.grad = 1
print(b.grad) # b.grad = 1
###Output
tensor(2.)
tensor(1.)
tensor(1.)
###Markdown
autograd (automatic differentiation): basic example 2
###Code
# Create tensors of shape (10, 3) and (10, 2).
x = torch.randn(10, 3)
y = torch.randn(10, 2)
# Build a fully connected layer.
linear = nn.Linear(3, 2)
print ('w: ', linear.weight)
print ('b: ', linear.bias)
# Build the loss function and optimizer.
# The loss function is mean squared error.
# The optimizer is stochastic gradient descent; lr is the learning rate.
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(linear.parameters(), lr=0.01)
# Forward pass.
pred = linear(x)
# Compute the loss.
loss = criterion(pred, y)
print('loss: ', loss.item())
# Backward pass.
loss.backward()
# Print the gradients.
print ('dL/dw: ', linear.weight.grad)
print ('dL/db: ', linear.bias.grad)
# Perform one step of gradient descent.
optimizer.step()
# The lower-level equivalent would be:
# linear.weight.data.sub_(0.01 * linear.weight.grad.data)
# linear.bias.data.sub_(0.01 * linear.bias.grad.data)
# After one step of gradient descent, print the new prediction loss.
# The loss has indeed decreased.
pred = linear(x)
loss = criterion(pred, y)
print('loss after 1 step optimization: ', loss.item())
###Output
w: Parameter containing:
tensor([[ 0.5180, 0.2238, -0.5470],
[ 0.1531, 0.2152, -0.4022]], requires_grad=True)
b: Parameter containing:
tensor([-0.2110, -0.2629], requires_grad=True)
loss: 0.8057981729507446
dL/dw: tensor([[-0.0315, 0.1169, -0.8623],
[ 0.4858, 0.5005, -0.0223]])
dL/db: tensor([0.1065, 0.0955])
loss after 1 step optimization: 0.7932316660881042
###Markdown
Loading data from NumPy
###Code
# Create a NumPy array.
x = np.array([[1, 2], [3, 4]])
print(x)
# Convert the NumPy array to a torch tensor.
y = torch.from_numpy(x)
print(y)
# Convert the torch tensor back to a NumPy array.
z = y.numpy()
print(z)
###Output
[[1 2]
[3 4]]
tensor([[1, 2],
[3, 4]])
[[1 2]
[3 4]]
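###Markdown
Note that `torch.from_numpy` shares memory with the source array, so in-place changes on one side are visible on the other:
###Code
# Mutating the NumPy array also changes the tensor created with from_numpy.
x[0, 0] = 100
print(y)  # the tensor now shows 100 in its first entry
###Output
_____no_output_____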
###Markdown
Input pipeline
###Code
# Download and construct the CIFAR-10 dataset.
# CIFAR-10 dataset description: https://www.cs.toronto.edu/~kriz/cifar.html
train_dataset = torchvision.datasets.CIFAR10(root='../../../data/',
train=True,
transform=transforms.ToTensor(),
download=True)
# Fetch one data pair (reads from disk).
image, label = train_dataset[0]
print (image.size())
print (label)
# Data loader (provides a simple implementation of queues and threads).
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=64,
shuffle=True)
# Using the iterator:
# when iteration starts, queues and threads begin loading data from the files.
data_iter = iter(train_loader)
# Fetch one mini-batch.
images, labels = next(data_iter)  # Python 3 iterator protocol (data_iter.next() is Python 2 style)
# Typical usage looks like this:
for images, labels in train_loader:
    # add the training code here
pass
###Output
Files already downloaded and verified
torch.Size([3, 32, 32])
6
###Markdown
Input pipeline for a custom dataset
###Code
# You can build a custom dataset as follows:
class CustomDataset(torch.utils.data.Dataset):
def __init__(self):
# TODO
        # 1. Initialize file paths or a list of file names.
pass
def __getitem__(self, index):
# TODO
        # 1. Read one data sample from file (e.g. using numpy.fromfile, PIL.Image.open).
        # 2. Preprocess the data (e.g. using torchvision.transforms).
        # 3. Return a data pair (e.g. image and label).
pass
def __len__(self):
        # Replace 0 with the total size of your dataset.
return 0
# Then you can use the prebuilt data loader:
custom_dataset = CustomDataset()
train_loader = torch.utils.data.DataLoader(dataset=custom_dataset,
batch_size=64,
shuffle=True)
###Output
_____no_output_____
###Markdown
Pretrained models
###Code
# Download and load the pretrained ResNet-18.
resnet = torchvision.models.resnet18(pretrained=True)
# To fine-tune only the top layer of the model, set the following:
# with requires_grad set to False, the parameters receive no gradient updates and keep their original values.
for param in resnet.parameters():
param.requires_grad = False
# Replace the top layer; only this layer will be fine-tuned.
resnet.fc = nn.Linear(resnet.fc.in_features, 100) # 100 is an example.
# Forward pass.
images = torch.randn(64, 3, 224, 224)
outputs = resnet(images)
print (outputs.size()) # (64, 100)
###Output
torch.Size([64, 100])
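###Markdown
When fine-tuning only the top layer as above, pass just that layer's parameters to the optimizer (a minimal sketch):
###Code
# Only resnet.fc has requires_grad=True, so optimize just those parameters.
optimizer = torch.optim.SGD(resnet.fc.parameters(), lr=0.001, momentum=0.9)
###Output
_____no_output_____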
###Markdown
Saving and loading models
###Code
# Save and load the entire model.
torch.save(resnet, 'model.ckpt')
model = torch.load('model.ckpt')
# Save and load only the model parameters (recommended).
torch.save(resnet.state_dict(), 'params.ckpt')
resnet.load_state_dict(torch.load('params.ckpt'))
###Output
_____no_output_____ |
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/tfrecord-tf.example.ipynb | ###Markdown
TFRecord and tf.Example

**Learning Objectives**
1. Understand the TFRecord format for storing data
2. Understand the tf.Example message type
3. Read and write a TFRecord file

Introduction

In this notebook, you create, parse, and use the `tf.Example` message, and then serialize, write, and read `tf.Example` messages to and from `.tfrecord` files. To read data efficiently it can be helpful to serialize your data and store it in a set of files (100-200MB each) that can each be read linearly. This is especially true if the data is being streamed over a network. This can also be useful for caching any data-preprocessing. Each learning objective will correspond to a __TODO__ in the [student lab notebook](../labs/tfrecord-tf.example.ipynb) -- try to complete that notebook first before reviewing this solution notebook.

The TFRecord format

The TFRecord format is a simple format for storing a sequence of binary records. [Protocol buffers](https://developers.google.com/protocol-buffers/) are a cross-platform, cross-language library for efficient serialization of structured data. Protocol messages are defined by `.proto` files; these are often the easiest way to understand a message type. The `tf.Example` message (or protobuf) is a flexible message type that represents a `{"string": value}` mapping. It is designed for use with TensorFlow and is used throughout the higher-level APIs such as [TFX](https://www.tensorflow.org/tfx/). Note: While useful, these structures are optional. There is no need to convert existing code to use TFRecords, unless you are using [`tf.data`](https://www.tensorflow.org/guide/datasets) and reading data is still the bottleneck to training. See [Data Input Pipeline Performance](https://www.tensorflow.org/guide/performance/datasets) for dataset performance tips.

Load necessary libraries

We will start by importing the necessary libraries for this lab.
###Code
# Run the chown command to change the ownership of the repository
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the results of that search
# to a name in the local scope.
#!pip install --upgrade tensorflow==2.5
import tensorflow as tf
import numpy as np
import IPython.display as display
print("TensorFlow version: ",tf.version.VERSION)
###Output
[31mERROR: tensorflow 2.1.0 has requirement gast==0.2.2, but you'll have gast 0.3.3 which is incompatible.[0m
[31mERROR: witwidget 1.6.0 has requirement oauth2client>=4.1.3, but you'll have oauth2client 3.0.0 which is incompatible.[0m
[31mERROR: tensorflow-probability 0.8.0 has requirement cloudpickle==1.1.1, but you'll have cloudpickle 1.3.0 which is incompatible.[0m
[31mERROR: tensorflow-probability 0.8.0 has requirement gast<0.3,>=0.2, but you'll have gast 0.3.3 which is incompatible.[0m
[31mERROR: tensorflow-io 0.9.10 has requirement tensorflow==2.1.0rc0, but you'll have tensorflow 2.1.0 which is incompatible.[0m
[33mWARNING: You are using pip version 20.1; however, version 20.1.1 is available.
You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.[0m
TensorFlow version: 2.3.0-dev20200613
###Markdown
Please ignore any incompatibility warnings and errors. `tf.Example` Data types for `tf.Example` Fundamentally, a `tf.Example` is a `{"string": tf.train.Feature}` mapping.The `tf.train.Feature` message type can accept one of the following three types (See the [`.proto` file](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto) for reference). Most other generic types can be coerced into one of these:1. `tf.train.BytesList` (the following types can be coerced) - `string` - `byte`1. `tf.train.FloatList` (the following types can be coerced) - `float` (`float32`) - `double` (`float64`)1. `tf.train.Int64List` (the following types can be coerced) - `bool` - `enum` - `int32` - `uint32` - `int64` - `uint64` In order to convert a standard TensorFlow type to a `tf.Example`-compatible `tf.train.Feature`, you can use the shortcut functions below. Note that each function takes a scalar input value and returns a `tf.train.Feature` containing one of the three `list` types above:
###Code
# TODO 1a
# The following functions can be used to convert a value to a type compatible
# with tf.Example.
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
###Output
_____no_output_____
###Markdown
Note: To keep things simple, this example only uses scalar inputs. The simplest way to handle non-scalar features is to use `tf.io.serialize_tensor` to convert tensors to binary-strings (strings are scalars in TensorFlow) and `tf.io.parse_tensor` to convert the binary-string back to a tensor. Below are some examples of how these functions work. Note the varying input types and the standardized output types. If the input type for a function does not match one of the coercible types stated above, the function will raise an exception (e.g. `_int64_feature(1.0)` will error out, since `1.0` is a float and should be used with the `_float_feature` function instead):
###Code
print(_bytes_feature(b'test_string'))
print(_bytes_feature(u'test_bytes'.encode('utf-8')))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
###Output
bytes_list {
value: "test_string"
}
bytes_list {
value: "test_bytes"
}
float_list {
value: 2.7182817459106445
}
int64_list {
value: 1
}
int64_list {
value: 1
}
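###Markdown
For non-scalar features, one option (a sketch following the note above) is to serialize the tensor to a binary-string and store it with `_bytes_feature`:
###Code
# Serialize a non-scalar tensor into a bytes feature, then recover it.
t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
serialized_t = tf.io.serialize_tensor(t)
print(_bytes_feature(serialized_t))
print(tf.io.parse_tensor(serialized_t, out_type=tf.float32))
###Output
_____no_output_____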
###Markdown
All proto messages can be serialized to a binary-string using the `.SerializeToString` method:
###Code
# TODO 1b
feature = _float_feature(np.exp(1))
# `SerializeToString()` serializes the message and returns it as a string
feature.SerializeToString()
###Output
_____no_output_____
###Markdown
Creating a `tf.Example` message Suppose you want to create a `tf.Example` message from existing data. In practice, the dataset may come from anywhere, but the procedure of creating the `tf.Example` message from a single observation will be the same: 1. Within each observation, each value needs to be converted to a `tf.train.Feature` containing one of the 3 compatible types, using one of the functions above. 2. You create a map (dictionary) from the feature name string to the encoded feature value produced in step 1. 3. The map produced in step 2 is converted to a [`Features` message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto#L85). In this notebook, you will create a dataset using NumPy. This dataset will have 4 features: * a boolean feature, `False` or `True` with equal probability * an integer feature uniformly randomly chosen from `[0, 5)` * a string feature generated from a string table by using the integer feature as an index * a float feature from a standard normal distribution Consider a sample consisting of 10,000 independently and identically distributed observations from each of the above distributions:
###Code
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature
strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat'])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution
feature3 = np.random.randn(n_observations)
###Output
_____no_output_____
###Markdown
Each of these features can be coerced into a `tf.Example`-compatible type using one of `_bytes_feature`, `_float_feature`, `_int64_feature`. You can then create a `tf.Example` message from these encoded features:
###Code
def serialize_example(feature0, feature1, feature2, feature3):
"""
Creates a tf.Example message ready to be written to a file.
"""
# Create a dictionary mapping the feature name to the tf.Example-compatible
# data type.
feature = {
'feature0': _int64_feature(feature0),
'feature1': _int64_feature(feature1),
'feature2': _bytes_feature(feature2),
'feature3': _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
###Output
_____no_output_____
###Markdown
For example, suppose you have a single observation from the dataset, `[False, 4, bytes('goat'), 0.9876]`. You can create and print the `tf.Example` message for this observation using `serialize_example()`. Each single observation will be written as a `Features` message as per the above. Note that the `tf.Example` [message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/example.proto#L88) is just a wrapper around the `Features` message:
###Code
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b'goat', 0.9876)
serialized_example
###Output
_____no_output_____
###Markdown
You can parse TFRecords using the standard protocol buffer `.FromString` method. To decode the message, use the `tf.train.Example.FromString` method.
###Code
# TODO 1c
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
###Output
_____no_output_____
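###Markdown
Once decoded, individual fields can be read back through the proto accessors. A minimal sketch (not part of the original lab):
###Code
# Sketch: pull feature2 back out of the decoded proto.
print(example_proto.features.feature['feature2'].bytes_list.value[0])
###Output
_____no_output_____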
###Markdown
TFRecords format details A TFRecord file contains a sequence of records. The file can only be read sequentially. Each record contains a byte-string for the data-payload, plus the data-length, and CRC32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking. Each record is stored in the following format: uint64 length uint32 masked_crc32_of_length byte data[length] uint32 masked_crc32_of_data The records are concatenated together to produce the file. CRCs are [described here](https://en.wikipedia.org/wiki/Cyclic_redundancy_check), and the mask of a CRC is: masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul Note: There is no requirement to use `tf.Example` in TFRecord files. `tf.Example` is just a method of serializing dictionaries to byte-strings. Lines of text, encoded image data, or serialized tensors (written with `tf.io.serialize_tensor` and read back with `tf.io.parse_tensor`) can also be stored. See the `tf.io` module for more options. TFRecord files using `tf.data` The `tf.data` module also provides tools for reading and writing data in TensorFlow. Writing a TFRecord file The easiest way to get the data into a dataset is to use the `from_tensor_slices` method. Applied to an array, it returns a dataset of scalars:
###Code
tf.data.Dataset.from_tensor_slices(feature1)
###Output
_____no_output_____
###Markdown
Applied to a tuple of arrays, it returns a dataset of tuples:
###Code
features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3))
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0,f1,f2,f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
###Output
tf.Tensor(False, shape=(), dtype=bool)
tf.Tensor(1, shape=(), dtype=int64)
tf.Tensor(b'dog', shape=(), dtype=string)
tf.Tensor(-0.6086492521118764, shape=(), dtype=float64)
###Markdown
Use the `tf.data.Dataset.map` method to apply a function to each element of a `Dataset`. The mapped function must operate in TensorFlow graph mode—it must operate on and return `tf.Tensors`. A non-tensor function, like `serialize_example`, can be wrapped with `tf.py_function` to make it compatible. Using `tf.py_function` requires specifying the shape and type information that is otherwise unavailable:
###Code
# TODO 2a
def tf_serialize_example(f0,f1,f2,f3):
tf_string = tf.py_function(
serialize_example,
(f0,f1,f2,f3), # pass these args to the above function.
tf.string) # the return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar
tf_serialize_example(f0,f1,f2,f3)
###Output
_____no_output_____
###Markdown
Apply this function to each element in the dataset:
###Code
# TODO 2b
# `.map` function maps across the elements of the dataset.
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
def generator():
for features in features_dataset:
yield serialize_example(*features)
# Create a Dataset whose elements are generated by generator using `.from_generator` function
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=())
serialized_features_dataset
###Output
_____no_output_____
###Markdown
And write them to a TFRecord file:
###Code
filename = 'test.tfrecord'
# `.TFRecordWriter` function writes a dataset to a TFRecord file
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
###Output
_____no_output_____
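###Markdown
To make the record framing from the "TFRecords format details" section concrete, here is a hedged sketch (not part of the original lab) that reads the length field of the first record in `test.tfrecord` directly:
###Code
# Sketch: peek at the on-disk framing of the first record:
# uint64 length (little-endian), uint32 masked CRC of the length, then payload.
import struct

with open(filename, 'rb') as f:
    (length,) = struct.unpack('<Q', f.read(8))  # uint64 length
    f.read(4)                                   # skip masked CRC of the length
    payload = f.read(length)                    # serialized tf.Example bytes
print('first record payload is', len(payload), 'bytes')
###Output
_____no_output_____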
###Markdown
Reading a TFRecord file You can also read the TFRecord file using the `tf.data.TFRecordDataset` class. More information on consuming TFRecord files using `tf.data` can be found [here](https://www.tensorflow.org/guide/datasets#consuming_tfrecord_data). Using `TFRecordDataset`s can be useful for standardizing input data and optimizing performance.
###Code
# TODO 2c
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
###Output
_____no_output_____
###Markdown
At this point the dataset contains serialized `tf.train.Example` messages. When iterated over, it returns these as scalar string tensors. Use the `.take` method to show only the first 10 records. Note: iterating over a `tf.data.Dataset` only works with eager execution enabled.
###Code
# Use the `.take` method to pull ten examples from the dataset.
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
###Output
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p\xd0\x1b\xbf\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xa6\xbf\xba\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xaa\x05/@'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04C\x96\n?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04^\x06\x96>\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x057\x8c\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xbco\xab\xbe\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p[|\xbd'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nU\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x02\n\x17\n\x08feature2\x12\x0b\n\t\n\x07chicken\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xba.\xb6\xbf'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x96tf?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat'>
###Markdown
These tensors can be parsed using the function below. Note that the `feature_description` is necessary here because datasets use graph-execution, and need this description to build their shape and type signature:
###Code
# Create a description of the features.
feature_description = {
'feature0': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature1': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature2': tf.io.FixedLenFeature([], tf.string, default_value=''),
'feature3': tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
###Output
_____no_output_____
###Markdown
Alternatively, use `tf.io.parse_example` to parse the whole batch at once; a hedged sketch follows the next cell. Apply this function to each item in the dataset using the `tf.data.Dataset.map` method:
###Code
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
###Output
_____no_output_____
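###Markdown
As mentioned above, `tf.io.parse_example` can parse a whole batch of serialized messages in one call. A minimal sketch (not part of the original lab) using the same `feature_description`:
###Code
# Sketch: batch the serialized records, then parse each batch at once.
batched_parsed = raw_dataset.batch(4).map(
    lambda batch: tf.io.parse_example(batch, feature_description))
for batch in batched_parsed.take(1):
    print(batch['feature2'])
###Output
_____no_output_____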
###Markdown
Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a `tf.Tensor`, and the `numpy` element of this tensor displays the value of the feature:
###Code
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
###Output
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.60864925>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.3647434>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=2.7347207>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.5413553>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.29301733>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.27385727>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.33483684>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.06161064>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'chicken'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=2>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-1.423301>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.90021646>}
###Markdown
Here, the `tf.io.parse_single_example` function unpacks the `tf.Example` fields into standard tensors. TFRecord files in Python The `tf.io` module also contains pure-Python functions for reading and writing TFRecord files. Writing a TFRecord file Next, write the 10,000 observations to the file `test.tfrecord`. Each observation is converted to a `tf.Example` message, then written to file. You can then verify that the file `test.tfrecord` has been created:
###Code
# Write the `tf.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
# `du` stands for disk usage and is used to estimate the amount of disk space used by a given file or directory.
!du -sh {filename}
###Output
984K test.tfrecord
###Markdown
Reading a TFRecord file These serialized tensors can be easily parsed using `tf.train.Example.ParseFromString`:
###Code
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
###Output
features {
feature {
key: "feature0"
value {
int64_list {
value: 0
}
}
}
feature {
key: "feature1"
value {
int64_list {
value: 1
}
}
}
feature {
key: "feature2"
value {
bytes_list {
value: "dog"
}
}
}
feature {
key: "feature3"
value {
float_list {
value: -0.6086492538452148
}
}
}
}
###Markdown
Walkthrough: Reading and writing image data This is an end-to-end example of how to read and write image data using TFRecords. Using an image as input data, you will write the data as a TFRecord file, then read the file back and display the image. This can be useful if, for example, you want to use several models on the same input dataset. Instead of storing the image data raw, it can be preprocessed into the TFRecords format, and that can be used in all further processing and modelling. First, let's download [this image](https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg) of a cat in the snow and [this photo](https://upload.wikimedia.org/wikipedia/commons/f/fe/New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg) of the Williamsburg Bridge, NYC under construction. Fetch the images
###Code
# Downloads a file from a URL if it is not already in the cache, using the `tf.keras.utils.get_file` function.
cat_in_snow = tf.keras.utils.get_file('320px-Felis_catus-cat_on_snow.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg')
williamsburg_bridge = tf.keras.utils.get_file('194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg')
# Check the image file
display.display(display.Image(filename=cat_in_snow))
display.display(display.HTML('Image cc-by: <a href="https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
display.display(display.Image(filename=williamsburg_bridge))
display.display(display.HTML('<a href="https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'))
###Output
_____no_output_____
###Markdown
Write the TFRecord file As before, encode the features as types compatible with `tf.Example`. This stores the raw image string feature, as well as the height, width, depth, and arbitrary `label` feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use `0` for the cat image, and `1` for the bridge image:
###Code
image_labels = {
cat_in_snow : 0,
williamsburg_bridge : 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, 'rb').read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.image.decode_jpeg(image_string).shape
feature = {
'height': _int64_feature(image_shape[0]),
'width': _int64_feature(image_shape[1]),
'depth': _int64_feature(image_shape[2]),
'label': _int64_feature(label),
'image_raw': _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split('\n')[:15]:
print(line)
print('...')
###Output
features {
feature {
key: "depth"
value {
int64_list {
value: 3
}
}
}
feature {
key: "height"
value {
int64_list {
value: 213
}
...
###Markdown
Notice that all of the features are now stored in the `tf.Example` message. Next, functionalize the code above and write the example messages to a file named `images.tfrecords`:
###Code
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = 'images.tfrecords'
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, 'rb').read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
# `du` stands for disk usage and is used to estimate the amount of disk space used by a given file or directory.
!du -sh {record_file}
###Output
36K images.tfrecords
###Markdown
Read the TFRecord file You now have the file—`images.tfrecords`—and can iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you need is the raw image string. Extract it using the getters described above, namely `example.features.feature['image_raw'].bytes_list.value[0]`; a hedged sketch of this getter route follows the next cell. You can also use the labels to determine which record is the cat and which one is the bridge:
###Code
raw_image_dataset = tf.data.TFRecordDataset('images.tfrecords')
# Create a dictionary describing the features.
image_feature_description = {
'height': tf.io.FixedLenFeature([], tf.int64),
'width': tf.io.FixedLenFeature([], tf.int64),
'depth': tf.io.FixedLenFeature([], tf.int64),
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
###Output
_____no_output_____
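###Markdown
The markdown above also mentions the raw proto "getter" route. A hedged sketch (not part of the original lab) that pulls the label and raw image bytes straight out of a `tf.train.Example`, without the feature-description map:
###Code
# Sketch: use the proto accessors directly on the first serialized record.
for raw_record in raw_image_dataset.take(1):
    example = tf.train.Example()
    example.ParseFromString(raw_record.numpy())
    raw_img = example.features.feature['image_raw'].bytes_list.value[0]
    label_value = example.features.feature['label'].int64_list.value[0]
    print('label:', label_value, '| raw image bytes:', len(raw_img))
###Output
_____no_output_____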
###Markdown
Recover the images from the TFRecord file:
###Code
for image_features in parsed_image_dataset:
image_raw = image_features['image_raw'].numpy()
display.display(display.Image(data=image_raw))
###Output
_____no_output_____
###Markdown
TFRecord and tf.Example **Learning Objectives** 1. Understand the TFRecord format for storing data 2. Understand the tf.Example message type 3. Read and write a TFRecord file Introduction In this notebook, you create, parse, and use the `tf.Example` message, and then serialize, write, and read `tf.Example` messages to and from `.tfrecord` files. To read data efficiently it can be helpful to serialize your data and store it in a set of files (100-200MB each) that can each be read linearly. This is especially true if the data is being streamed over a network. This can also be useful for caching any data-preprocessing. Each learning objective will correspond to a **TODO** in the [student lab notebook](courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/tfrecord-tf.example.ipynb) -- try to complete that notebook first before reviewing this solution notebook. The TFRecord format The TFRecord format is a simple format for storing a sequence of binary records. [Protocol buffers](https://developers.google.com/protocol-buffers/) are a cross-platform, cross-language library for efficient serialization of structured data. Protocol messages are defined by `.proto` files; these are often the easiest way to understand a message type. The `tf.Example` message (or protobuf) is a flexible message type that represents a `{"string": value}` mapping. It is designed for use with TensorFlow and is used throughout the higher-level APIs such as [TFX](https://www.tensorflow.org/tfx/). Note: While useful, these structures are optional. There is no need to convert existing code to use TFRecords, unless you are using [`tf.data`](https://www.tensorflow.org/guide/datasets) and reading data is still the bottleneck to training. See [Data Input Pipeline Performance](https://www.tensorflow.org/guide/performance/datasets) for dataset performance tips. Load necessary libraries We will start by importing the necessary libraries for this lab.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install -q tf-nightly
import tensorflow as tf
import numpy as np
import IPython.display as display
print("TensorFlow version: ",tf.version.VERSION)
###Output
ERROR: tensorflow 2.1.0 has requirement gast==0.2.2, but you'll have gast 0.3.3 which is incompatible.
ERROR: witwidget 1.6.0 has requirement oauth2client>=4.1.3, but you'll have oauth2client 3.0.0 which is incompatible.
ERROR: tensorflow-probability 0.8.0 has requirement cloudpickle==1.1.1, but you'll have cloudpickle 1.3.0 which is incompatible.
ERROR: tensorflow-probability 0.8.0 has requirement gast<0.3,>=0.2, but you'll have gast 0.3.3 which is incompatible.
ERROR: tensorflow-io 0.9.10 has requirement tensorflow==2.1.0rc0, but you'll have tensorflow 2.1.0 which is incompatible.
WARNING: You are using pip version 20.1; however, version 20.1.1 is available.
You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.
TensorFlow version: 2.3.0-dev20200613
###Markdown
Please ignore any incompatibility warnings and errors. `tf.Example` Data types for `tf.Example` Fundamentally, a `tf.Example` is a `{"string": tf.train.Feature}` mapping. The `tf.train.Feature` message type can accept one of the following three types (see the [`.proto` file](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto) for reference). Most other generic types can be coerced into one of these: 1. `tf.train.BytesList` (the following types can be coerced) - `string` - `byte` 2. `tf.train.FloatList` (the following types can be coerced) - `float` (`float32`) - `double` (`float64`) 3. `tf.train.Int64List` (the following types can be coerced) - `bool` - `enum` - `int32` - `uint32` - `int64` - `uint64` In order to convert a standard TensorFlow type to a `tf.Example`-compatible `tf.train.Feature`, you can use the shortcut functions below. Note that each function takes a scalar input value and returns a `tf.train.Feature` containing one of the three `list` types above:
###Code
# TODO 1a
# The following functions can be used to convert a value to a type compatible
# with tf.Example.
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
###Output
_____no_output_____
###Markdown
Note: To keep things simple, this example only uses scalar inputs. The simplest way to handle non-scalar features is to use `tf.io.serialize_tensor` to convert tensors to binary-strings (strings are scalars in TensorFlow), and `tf.io.parse_tensor` to convert the binary-string back to a tensor. Below are some examples of how these functions work. Note the varying input types and the standardized output types. If the input type for a function does not match one of the coercible types stated above, the function will raise an exception (e.g. `_int64_feature(1.0)` will error out, since `1.0` is a float and should be used with the `_float_feature` function instead):
###Code
print(_bytes_feature(b'test_string'))
print(_bytes_feature(u'test_bytes'.encode('utf-8')))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
###Output
bytes_list {
value: "test_string"
}
bytes_list {
value: "test_bytes"
}
float_list {
value: 2.7182817459106445
}
int64_list {
value: 1
}
int64_list {
value: 1
}
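###Markdown
The helpers above each wrap a single scalar, but the underlying list types can hold several values in one `tf.train.Feature`. A minimal sketch (not part of the original lab):
###Code
# Sketch: a Feature whose Int64List carries multiple values at once.
multi = tf.train.Feature(int64_list=tf.train.Int64List(value=[1, 2, 3]))
print(multi)
###Output
_____no_output_____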
###Markdown
All proto messages can be serialized to a binary-string using the `.SerializeToString` method:
###Code
# TODO 1b
feature = _float_feature(np.exp(1))
feature.SerializeToString()
###Output
_____no_output_____
###Markdown
Creating a `tf.Example` message Suppose you want to create a `tf.Example` message from existing data. In practice, the dataset may come from anywhere, but the procedure of creating the `tf.Example` message from a single observation will be the same: 1. Within each observation, each value needs to be converted to a `tf.train.Feature` containing one of the 3 compatible types, using one of the functions above. 2. You create a map (dictionary) from the feature name string to the encoded feature value produced in step 1. 3. The map produced in step 2 is converted to a [`Features` message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto#L85). In this notebook, you will create a dataset using NumPy. This dataset will have 4 features: * a boolean feature, `False` or `True` with equal probability * an integer feature uniformly randomly chosen from `[0, 5)` * a string feature generated from a string table by using the integer feature as an index * a float feature from a standard normal distribution Consider a sample consisting of 10,000 independently and identically distributed observations from each of the above distributions:
###Code
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature
strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat'])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution
feature3 = np.random.randn(n_observations)
###Output
_____no_output_____
###Markdown
Each of these features can be coerced into a `tf.Example`-compatible type using one of `_bytes_feature`, `_float_feature`, `_int64_feature`. You can then create a `tf.Example` message from these encoded features:
###Code
def serialize_example(feature0, feature1, feature2, feature3):
"""
Creates a tf.Example message ready to be written to a file.
"""
# Create a dictionary mapping the feature name to the tf.Example-compatible
# data type.
feature = {
'feature0': _int64_feature(feature0),
'feature1': _int64_feature(feature1),
'feature2': _bytes_feature(feature2),
'feature3': _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
###Output
_____no_output_____
###Markdown
For example, suppose you have a single observation from the dataset, `[False, 4, bytes('goat'), 0.9876]`. You can create and print the `tf.Example` message for this observation using `serialize_example()`. Each single observation will be written as a `Features` message as per the above. Note that the `tf.Example` [message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/example.proto#L88) is just a wrapper around the `Features` message:
###Code
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b'goat', 0.9876)
serialized_example
###Output
_____no_output_____
###Markdown
To decode the message, use the `tf.train.Example.FromString` method.
###Code
# TODO 1c
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
###Output
_____no_output_____
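###Markdown
For a dictionary view of the decoded proto, the protobuf helpers can be used. A hedged sketch (assuming the `google.protobuf` package that ships as a TensorFlow dependency):
###Code
# Sketch: convert the decoded tf.train.Example into a plain Python dict.
from google.protobuf.json_format import MessageToDict

print(MessageToDict(example_proto))
###Output
_____no_output_____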
###Markdown
TFRecords format details A TFRecord file contains a sequence of records. The file can only be read sequentially. Each record contains a byte-string for the data-payload, plus the data-length, and CRC32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking. Each record is stored in the following format: uint64 length uint32 masked_crc32_of_length byte data[length] uint32 masked_crc32_of_data The records are concatenated together to produce the file. CRCs are [described here](https://en.wikipedia.org/wiki/Cyclic_redundancy_check), and the mask of a CRC is: masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul Note: There is no requirement to use `tf.Example` in TFRecord files. `tf.Example` is just a method of serializing dictionaries to byte-strings. Lines of text, encoded image data, or serialized tensors (written with `tf.io.serialize_tensor` and read back with `tf.io.parse_tensor`) can also be stored. See the `tf.io` module for more options. TFRecord files using `tf.data` The `tf.data` module also provides tools for reading and writing data in TensorFlow. Writing a TFRecord file The easiest way to get the data into a dataset is to use the `from_tensor_slices` method. Applied to an array, it returns a dataset of scalars:
###Code
tf.data.Dataset.from_tensor_slices(feature1)
###Output
_____no_output_____
###Markdown
Applied to a tuple of arrays, it returns a dataset of tuples:
###Code
features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3))
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0,f1,f2,f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
###Output
tf.Tensor(False, shape=(), dtype=bool)
tf.Tensor(1, shape=(), dtype=int64)
tf.Tensor(b'dog', shape=(), dtype=string)
tf.Tensor(-0.6086492521118764, shape=(), dtype=float64)
###Markdown
Use the `tf.data.Dataset.map` method to apply a function to each element of a `Dataset`. The mapped function must operate in TensorFlow graph mode—it must operate on and return `tf.Tensors`. A non-tensor function, like `serialize_example`, can be wrapped with `tf.py_function` to make it compatible. Using `tf.py_function` requires specifying the shape and type information that is otherwise unavailable:
###Code
# TODO 2a
def tf_serialize_example(f0,f1,f2,f3):
tf_string = tf.py_function(
serialize_example,
(f0,f1,f2,f3), # pass these args to the above function.
tf.string) # the return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar
tf_serialize_example(f0,f1,f2,f3)
###Output
_____no_output_____
###Markdown
Apply this function to each element in the dataset:
###Code
# TODO 2b
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
def generator():
for features in features_dataset:
yield serialize_example(*features)
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=())
serialized_features_dataset
###Output
_____no_output_____
###Markdown
And write them to a TFRecord file:
###Code
filename = 'test.tfrecord'
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
###Output
_____no_output_____
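###Markdown
The "TFRecords format details" section above gives the CRC masking formula. A hedged sketch of just that arithmetic (computing CRC32C itself would need a third-party package, which this sketch deliberately avoids):
###Code
# Sketch: the 32-bit CRC mask used by the TFRecord format,
# masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8, in uint32 arithmetic.
def masked_crc32(crc):
    rotated = ((crc >> 15) | (crc << 17)) & 0xFFFFFFFF  # 32-bit right-rotate by 15
    return (rotated + 0xA282EAD8) & 0xFFFFFFFF

print(hex(masked_crc32(0x12345678)))
###Output
_____no_output_____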
###Markdown
Reading a TFRecord file You can also read the TFRecord file using the `tf.data.TFRecordDataset` class. More information on consuming TFRecord files using `tf.data` can be found [here](https://www.tensorflow.org/guide/datasets#consuming_tfrecord_data). Using `TFRecordDataset`s can be useful for standardizing input data and optimizing performance.
###Code
# TODO 2c
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
###Output
_____no_output_____
###Markdown
At this point the dataset contains serialized `tf.train.Example` messages. When iterated over, it returns these as scalar string tensors. Use the `.take` method to show only the first 10 records. Note: iterating over a `tf.data.Dataset` only works with eager execution enabled.
###Code
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
###Output
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p\xd0\x1b\xbf\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xa6\xbf\xba\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xaa\x05/@'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04C\x96\n?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04^\x06\x96>\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x057\x8c\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xbco\xab\xbe\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p[|\xbd'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nU\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x02\n\x17\n\x08feature2\x12\x0b\n\t\n\x07chicken\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xba.\xb6\xbf'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x96tf?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat'>
###Markdown
These tensors can be parsed using the function below. Note that the `feature_description` is necessary here because datasets use graph-execution, and need this description to build their shape and type signature:
###Code
# Create a description of the features.
feature_description = {
'feature0': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature1': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature2': tf.io.FixedLenFeature([], tf.string, default_value=''),
'feature3': tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
###Output
_____no_output_____
###Markdown
Alternatively, use `tf.io.parse_example` to parse the whole batch at once. Apply this function to each item in the dataset using the `tf.data.Dataset.map` method:
###Code
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
###Output
_____no_output_____
###Markdown
Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a `tf.Tensor`, and the `numpy` element of this tensor displays the value of the feature:
###Code
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
###Output
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.60864925>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.3647434>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=2.7347207>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.5413553>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.29301733>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.27385727>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.33483684>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.06161064>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'chicken'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=2>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-1.423301>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.90021646>}
###Markdown
Here, the `tf.io.parse_single_example` function unpacks the `tf.Example` fields into standard tensors. TFRecord files in Python The `tf.io` module also contains pure-Python functions for reading and writing TFRecord files. Writing a TFRecord file Next, write the 10,000 observations to the file `test.tfrecord`. Each observation is converted to a `tf.Example` message, then written to file. You can then verify that the file `test.tfrecord` has been created:
###Code
# Write the `tf.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
!du -sh {filename}
###Output
984K test.tfrecord
###Markdown
Reading a TFRecord file These serialized tensors can be easily parsed using `tf.train.Example.ParseFromString`:
###Code
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
###Output
features {
feature {
key: "feature0"
value {
int64_list {
value: 0
}
}
}
feature {
key: "feature1"
value {
int64_list {
value: 1
}
}
}
feature {
key: "feature2"
value {
bytes_list {
value: "dog"
}
}
}
feature {
key: "feature3"
value {
float_list {
value: -0.6086492538452148
}
}
}
}
###Markdown
Walkthrough: Reading and writing image data This is an end-to-end example of how to read and write image data using TFRecords. Using an image as input data, you will write the data as a TFRecord file, then read the file back and display the image. This can be useful if, for example, you want to use several models on the same input dataset. Instead of storing the image data raw, it can be preprocessed into the TFRecords format, and that can be used in all further processing and modelling. First, let's download [this image](https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg) of a cat in the snow and [this photo](https://upload.wikimedia.org/wikipedia/commons/f/fe/New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg) of the Williamsburg Bridge, NYC under construction. Fetch the images
###Code
cat_in_snow = tf.keras.utils.get_file('320px-Felis_catus-cat_on_snow.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg')
williamsburg_bridge = tf.keras.utils.get_file('194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg')
display.display(display.Image(filename=cat_in_snow))
display.display(display.HTML('Image cc-by: <a href="https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
display.display(display.Image(filename=williamsburg_bridge))
display.display(display.HTML('<a href="https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'))
###Output
_____no_output_____
###Markdown
Write the TFRecord file As before, encode the features as types compatible with `tf.Example`. This stores the raw image string feature, as well as the height, width, depth, and arbitrary `label` feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use `0` for the cat image, and `1` for the bridge image:
###Code
image_labels = {
cat_in_snow : 0,
williamsburg_bridge : 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, 'rb').read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.image.decode_jpeg(image_string).shape
feature = {
'height': _int64_feature(image_shape[0]),
'width': _int64_feature(image_shape[1]),
'depth': _int64_feature(image_shape[2]),
'label': _int64_feature(label),
'image_raw': _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split('\n')[:15]:
print(line)
print('...')
###Output
features {
feature {
key: "depth"
value {
int64_list {
value: 3
}
}
}
feature {
key: "height"
value {
int64_list {
value: 213
}
...
###Markdown
Notice that all of the features are now stored in the `tf.Example` message. Next, functionalize the code above and write the example messages to a file named `images.tfrecords`:
###Code
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = 'images.tfrecords'
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, 'rb').read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
!du -sh {record_file}
###Output
36K images.tfrecords
###Markdown
Read the TFRecord file You now have the file—`images.tfrecords`—and can iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you need is the raw image string. Extract it using the getters described above, namely `example.features.feature['image_raw'].bytes_list.value[0]`. You can also use the labels to determine which record is the cat and which one is the bridge:
###Code
raw_image_dataset = tf.data.TFRecordDataset('images.tfrecords')
# Create a dictionary describing the features.
image_feature_description = {
'height': tf.io.FixedLenFeature([], tf.int64),
'width': tf.io.FixedLenFeature([], tf.int64),
'depth': tf.io.FixedLenFeature([], tf.int64),
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
###Output
_____no_output_____
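###Markdown
Before displaying the images, a hedged sketch (not part of the original lab) that decodes the stored bytes back into a tensor and checks it against the recorded height and width:
###Code
# Sketch: decode the raw JPEG bytes and compare with the stored shape fields.
for features in parsed_image_dataset.take(1):
    img = tf.io.decode_jpeg(features['image_raw'])
    print('decoded shape:', img.shape,
          '| stored:', features['height'].numpy(), 'x', features['width'].numpy())
###Output
_____no_output_____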
###Markdown
Recover the images from the TFRecord file:
###Code
for image_features in parsed_image_dataset:
image_raw = image_features['image_raw'].numpy()
display.display(display.Image(data=image_raw))
###Output
_____no_output_____
###Markdown
TFRecord and tf.Example **Learning Objectives** 1. Understand the TFRecord format for storing data 2. Understand the tf.Example message type 3. Read and write a TFRecord file Introduction In this notebook, you create, parse, and use the `tf.Example` message, and then serialize, write, and read `tf.Example` messages to and from `.tfrecord` files. To read data efficiently it can be helpful to serialize your data and store it in a set of files (100-200MB each) that can each be read linearly. This is especially true if the data is being streamed over a network. This can also be useful for caching any data-preprocessing. Each learning objective will correspond to a **TODO** in the [student lab notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/ml_on_gcloud_v2/labs/03_tfrecord-tf.example.ipynb) -- try to complete that notebook first before reviewing this solution notebook. The TFRecord format The TFRecord format is a simple format for storing a sequence of binary records. [Protocol buffers](https://developers.google.com/protocol-buffers/) are a cross-platform, cross-language library for efficient serialization of structured data. Protocol messages are defined by `.proto` files; these are often the easiest way to understand a message type. The `tf.Example` message (or protobuf) is a flexible message type that represents a `{"string": value}` mapping. It is designed for use with TensorFlow and is used throughout the higher-level APIs such as [TFX](https://www.tensorflow.org/tfx/). Note: While useful, these structures are optional. There is no need to convert existing code to use TFRecords, unless you are using [`tf.data`](https://www.tensorflow.org/guide/datasets) and reading data is still the bottleneck to training. See [Data Input Pipeline Performance](https://www.tensorflow.org/guide/performance/datasets) for dataset performance tips. Load necessary libraries We will start by importing the necessary libraries for this lab.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install -q tf-nightly
import tensorflow as tf
import numpy as np
import IPython.display as display
print("TensorFlow version: ",tf.version.VERSION)
###Output
ERROR: tensorflow 2.1.0 has requirement gast==0.2.2, but you'll have gast 0.3.3 which is incompatible.
ERROR: witwidget 1.6.0 has requirement oauth2client>=4.1.3, but you'll have oauth2client 3.0.0 which is incompatible.
ERROR: tensorflow-probability 0.8.0 has requirement cloudpickle==1.1.1, but you'll have cloudpickle 1.3.0 which is incompatible.
ERROR: tensorflow-probability 0.8.0 has requirement gast<0.3,>=0.2, but you'll have gast 0.3.3 which is incompatible.
ERROR: tensorflow-io 0.9.10 has requirement tensorflow==2.1.0rc0, but you'll have tensorflow 2.1.0 which is incompatible.
WARNING: You are using pip version 20.1; however, version 20.1.1 is available.
You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.
TensorFlow version: 2.3.0-dev20200613
###Markdown
Please ignore any incompatibility warnings and errors. `tf.Example` Data types for `tf.Example` Fundamentally, a `tf.Example` is a `{"string": tf.train.Feature}` mapping. The `tf.train.Feature` message type can accept one of the following three types (see the [`.proto` file](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto) for reference). Most other generic types can be coerced into one of these: 1. `tf.train.BytesList` (the following types can be coerced) - `string` - `byte` 2. `tf.train.FloatList` (the following types can be coerced) - `float` (`float32`) - `double` (`float64`) 3. `tf.train.Int64List` (the following types can be coerced) - `bool` - `enum` - `int32` - `uint32` - `int64` - `uint64` In order to convert a standard TensorFlow type to a `tf.Example`-compatible `tf.train.Feature`, you can use the shortcut functions below. Note that each function takes a scalar input value and returns a `tf.train.Feature` containing one of the three `list` types above:
###Code
# The following functions can be used to convert a value to a type compatible
# with tf.Example.
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
###Output
_____no_output_____
###Markdown
Note: To keep things simple, this example only uses scalar inputs. The simplest way to handle non-scalar features is to use `tf.io.serialize_tensor` to convert tensors to binary-strings (strings are scalars in TensorFlow), and `tf.io.parse_tensor` to convert the binary-string back to a tensor. Below are some examples of how these functions work. Note the varying input types and the standardized output types. If the input type for a function does not match one of the coercible types stated above, the function will raise an exception (e.g. `_int64_feature(1.0)` will error out, since `1.0` is a float and should be used with the `_float_feature` function instead):
###Code
print(_bytes_feature(b'test_string'))
print(_bytes_feature(u'test_bytes'.encode('utf-8')))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
###Output
bytes_list {
value: "test_string"
}
bytes_list {
value: "test_bytes"
}
float_list {
value: 2.7182817459106445
}
int64_list {
value: 1
}
int64_list {
value: 1
}
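###Markdown
Note that `FloatList` stores 32-bit floats, which is why the value printed above differs from `np.exp(1)` in its last digits. A small sketch (not part of the original lab) making the precision loss explicit:
###Code
# Sketch: FloatList values are float32, so a float64 input loses precision.
original = np.exp(1)                                   # float64
stored = _float_feature(original).float_list.value[0]  # rounded to float32
print(original, '->', stored)
###Output
_____no_output_____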
###Markdown
All proto messages can be serialized to a binary-string using the `.SerializeToString` method:
###Code
feature = _float_feature(np.exp(1))
feature.SerializeToString()
###Output
_____no_output_____
###Markdown
Creating a `tf.Example` message Suppose you want to create a `tf.Example` message from existing data. In practice, the dataset may come from anywhere, but the procedure of creating the `tf.Example` message from a single observation will be the same: 1. Within each observation, each value needs to be converted to a `tf.train.Feature` containing one of the 3 compatible types, using one of the functions above. 2. You create a map (dictionary) from the feature name string to the encoded feature value produced in step 1. 3. The map produced in step 2 is converted to a [`Features` message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto#L85). In this notebook, you will create a dataset using NumPy. This dataset will have 4 features: * a boolean feature, `False` or `True` with equal probability * an integer feature uniformly randomly chosen from `[0, 5)` * a string feature generated from a string table by using the integer feature as an index * a float feature from a standard normal distribution Consider a sample consisting of 10,000 independently and identically distributed observations from each of the above distributions:
###Code
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature
strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat'])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution
feature3 = np.random.randn(n_observations)
###Output
_____no_output_____
###Markdown
Each of these features can be coerced into a `tf.Example`-compatible type using one of `_bytes_feature`, `_float_feature`, `_int64_feature`. You can then create a `tf.Example` message from these encoded features:
###Code
def serialize_example(feature0, feature1, feature2, feature3):
"""
Creates a tf.Example message ready to be written to a file.
"""
# Create a dictionary mapping the feature name to the tf.Example-compatible
# data type.
feature = {
'feature0': _int64_feature(feature0),
'feature1': _int64_feature(feature1),
'feature2': _bytes_feature(feature2),
'feature3': _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
###Output
_____no_output_____
###Markdown
For example, suppose you have a single observation from the dataset, `[False, 4, bytes('goat'), 0.9876]`. You can create and print the `tf.Example` message for this observation using `serialize_example()`. Each single observation will be written as a `Features` message as per the above. Note that the `tf.Example` [message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/example.proto#L88) is just a wrapper around the `Features` message:
###Code
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b'goat', 0.9876)
serialized_example
###Output
_____no_output_____
###Markdown
To decode the message, use the `tf.train.Example.FromString` method.
###Code
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
###Output
_____no_output_____
###Markdown
TFRecords format details A TFRecord file contains a sequence of records. The file can only be read sequentially. Each record contains a byte-string for the data-payload, plus the data-length, and CRC32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking. Each record is stored in the following format: uint64 length uint32 masked_crc32_of_length byte data[length] uint32 masked_crc32_of_data The records are concatenated together to produce the file. CRCs are [described here](https://en.wikipedia.org/wiki/Cyclic_redundancy_check), and the mask of a CRC is: masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul Note: There is no requirement to use `tf.Example` in TFRecord files. `tf.Example` is just a method of serializing dictionaries to byte-strings. Lines of text, encoded image data, or serialized tensors (written with `tf.io.serialize_tensor` and read back with `tf.io.parse_tensor`) can also be stored. See the `tf.io` module for more options. TFRecord files using `tf.data` The `tf.data` module also provides tools for reading and writing data in TensorFlow. Writing a TFRecord file The easiest way to get the data into a dataset is to use the `from_tensor_slices` method. Applied to an array, it returns a dataset of scalars:
###Code
tf.data.Dataset.from_tensor_slices(feature1)
###Output
_____no_output_____
###Markdown
Applied to a tuple of arrays, it returns a dataset of tuples:
###Code
features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3))
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0,f1,f2,f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
###Output
tf.Tensor(False, shape=(), dtype=bool)
tf.Tensor(1, shape=(), dtype=int64)
tf.Tensor(b'dog', shape=(), dtype=string)
tf.Tensor(-0.6086492521118764, shape=(), dtype=float64)
###Markdown
Use the `tf.data.Dataset.map` method to apply a function to each element of a `Dataset`. The mapped function must operate in TensorFlow graph mode: it must operate on and return `tf.Tensors`. A non-tensor function, like `serialize_example`, can be wrapped with `tf.py_function` to make it compatible. Using `tf.py_function` requires you to specify the shape and type information that is otherwise unavailable:
###Code
def tf_serialize_example(f0,f1,f2,f3):
tf_string = tf.py_function(
serialize_example,
(f0,f1,f2,f3), # pass these args to the above function.
tf.string) # the return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar
tf_serialize_example(f0,f1,f2,f3)
###Output
_____no_output_____
###Markdown
Apply this function to each element in the dataset:
###Code
# `.map` applies tf_serialize_example across the elements of the dataset.
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
# An equivalent generator-based approach: yield one serialized example at a time.
def generator():
    for features in features_dataset:
        yield serialize_example(*features)
# Create a Dataset whose elements are produced by `generator`.
serialized_features_dataset = tf.data.Dataset.from_generator(
    generator, output_types=tf.string, output_shapes=())
serialized_features_dataset
###Output
_____no_output_____
###Markdown
And write them to a TFRecord file:
###Code
filename = 'test.tfrecord'
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
###Output
_____no_output_____
###Markdown
Reading a TFRecord file You can also read the TFRecord file using the `tf.data.TFRecordDataset` class. More information on consuming TFRecord files using `tf.data` can be found [here](https://www.tensorflow.org/guide/datasets#consuming_tfrecord_data). Using `TFRecordDataset`s can be useful for standardizing input data and optimizing performance.
###Code
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
###Output
_____no_output_____
###Markdown
At this point the dataset contains serialized `tf.train.Example` messages. When iterated over, it returns these as scalar string tensors. Use the `.take` method to only show the first 10 records. Note: iterating over a `tf.data.Dataset` only works with eager execution enabled.
###Code
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
###Output
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p\xd0\x1b\xbf\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xa6\xbf\xba\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xaa\x05/@'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04C\x96\n?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04^\x06\x96>\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x057\x8c\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xbco\xab\xbe\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p[|\xbd'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nU\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x02\n\x17\n\x08feature2\x12\x0b\n\t\n\x07chicken\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xba.\xb6\xbf'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x96tf?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat'>
###Markdown
These tensors can be parsed using the function below. Note that the `feature_description` is necessary here because datasets use graph-execution, and need this description to build their shape and type signature:
###Code
# Create a description of the features.
feature_description = {
'feature0': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature1': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature2': tf.io.FixedLenFeature([], tf.string, default_value=''),
'feature3': tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
###Output
_____no_output_____
###Markdown
Alternatively, use `tf.io.parse_example` to parse the whole batch at once. Apply this function to each item in the dataset using the `tf.data.Dataset.map` method:
###Code
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
###Output
_____no_output_____
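###Markdown
The batch-at-once alternative mentioned above can be sketched as follows: batch the serialized protos first, then hand each batch to `tf.io.parse_example` with the same `feature_description`. This is a minimal sketch; the batch size of 4 is arbitrary.
###Code
# Parse whole batches of serialized protos at once with `tf.io.parse_example`.
batched_dataset = raw_dataset.batch(4).map(
    lambda protos: tf.io.parse_example(protos, feature_description))
# Each element is now a dict of batched tensors with leading dimension 4.
for batch in batched_dataset.take(1):
    print(batch['feature2'])
###Output
_____no_output_____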
###Markdown
Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a `tf.Tensor`, and the `numpy` element of this tensor displays the value of the feature:
###Code
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
###Output
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.60864925>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.3647434>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=2.7347207>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.5413553>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.29301733>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.27385727>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.33483684>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.06161064>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'chicken'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=2>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-1.423301>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.90021646>}
###Markdown
Here, the `tf.io.parse_example` function unpacks the `tf.Example` fields into standard tensors. TFRecord files in Python The `tf.io` module also contains pure-Python functions for reading and writing TFRecord files. Writing a TFRecord file Next, write the 10,000 observations to the file `test.tfrecord`. Each observation is converted to a `tf.Example` message, then written to file. You can then verify that the file `test.tfrecord` has been created:
###Code
# Write the `tf.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
!du -sh {filename}
###Output
984K test.tfrecord
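###Markdown
To make the record layout from the format-details section concrete, here is a minimal pure-Python sketch that walks `test.tfrecord` record by record. It skips CRC verification entirely, so treat it as an illustration of the layout rather than a robust reader.
###Code
import struct

def iter_raw_records(path):
    """Yield each record's data payload, following the layout described
    above: uint64 length, masked CRC of the length, the data bytes,
    then a masked CRC of the data."""
    with open(path, 'rb') as f:
        while True:
            header = f.read(8)      # uint64 length (little-endian)
            if len(header) < 8:
                break
            length, = struct.unpack('<Q', header)
            f.read(4)               # masked crc32 of the length (not checked)
            data = f.read(length)   # the serialized tf.Example payload
            f.read(4)               # masked crc32 of the data (not checked)
            yield data

first_payload = next(iter_raw_records(filename))
print(first_payload is not None and len(first_payload), 'bytes in the first record')
###Output
_____no_output_____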
###Markdown
Reading a TFRecord file These serialized tensors can be easily parsed using `tf.train.Example.ParseFromString`:
###Code
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
###Output
features {
feature {
key: "feature0"
value {
int64_list {
value: 0
}
}
}
feature {
key: "feature1"
value {
int64_list {
value: 1
}
}
}
feature {
key: "feature2"
value {
bytes_list {
value: "dog"
}
}
}
feature {
key: "feature3"
value {
float_list {
value: -0.6086492538452148
}
}
}
}
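###Markdown
If you prefer a plain Python view of the parsed proto, protobuf's `json_format` helpers can convert it to a dict. This is a convenience sketch, not part of the lab; it assumes the `google.protobuf` package is available, which ships as a TensorFlow dependency.
###Code
from google.protobuf.json_format import MessageToDict

# Convert the first parsed tf.train.Example into a nested Python dict.
for raw_record in raw_dataset.take(1):
    example = tf.train.Example()
    example.ParseFromString(raw_record.numpy())
    print(MessageToDict(example))
###Output
_____no_output_____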
###Markdown
Walkthrough: Reading and writing image data This is an end-to-end example of how to read and write image data using TFRecords. Using an image as input data, you will write the data as a TFRecord file, then read the file back and display the image.This can be useful if, for example, you want to use several models on the same input dataset. Instead of storing the image data raw, it can be preprocessed into the TFRecords format, and that can be used in all further processing and modelling.First, let's download [this image](https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg) of a cat in the snow and [this photo](https://upload.wikimedia.org/wikipedia/commons/f/fe/New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg) of the Williamsburg Bridge, NYC under construction. Fetch the images
###Code
cat_in_snow = tf.keras.utils.get_file('320px-Felis_catus-cat_on_snow.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg')
williamsburg_bridge = tf.keras.utils.get_file('194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg')
display.display(display.Image(filename=cat_in_snow))
display.display(display.HTML('Image cc-by: <a href="https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
display.display(display.Image(filename=williamsburg_bridge))
display.display(display.HTML('<a href="https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'))
###Output
_____no_output_____
###Markdown
Write the TFRecord file As before, encode the features as types compatible with `tf.Example`. This stores the raw image string feature, as well as the height, width, depth, and arbitrary `label` feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use `0` for the cat image, and `1` for the bridge image:
###Code
image_labels = {
cat_in_snow : 0,
williamsburg_bridge : 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, 'rb').read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.image.decode_jpeg(image_string).shape
feature = {
'height': _int64_feature(image_shape[0]),
'width': _int64_feature(image_shape[1]),
'depth': _int64_feature(image_shape[2]),
'label': _int64_feature(label),
'image_raw': _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split('\n')[:15]:
print(line)
print('...')
###Output
features {
feature {
key: "depth"
value {
int64_list {
value: 3
}
}
}
feature {
key: "height"
value {
int64_list {
value: 213
}
...
###Markdown
Notice that all of the features are now stored in the `tf.Example` message. Next, functionalize the code above and write the example messages to a file named `images.tfrecords`:
###Code
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = 'images.tfrecords'
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, 'rb').read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
!du -sh {record_file}
###Output
36K images.tfrecords
###Markdown
Read the TFRecord file You now have the file `images.tfrecords` and can iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you will need is the raw image string. Extract it using the getters described above, namely `example.features.feature['image_raw'].bytes_list.value[0]`. You can also use the labels to determine which record is the cat and which one is the bridge:
###Code
raw_image_dataset = tf.data.TFRecordDataset('images.tfrecords')
# Create a dictionary describing the features.
image_feature_description = {
'height': tf.io.FixedLenFeature([], tf.int64),
'width': tf.io.FixedLenFeature([], tf.int64),
'depth': tf.io.FixedLenFeature([], tf.int64),
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
###Output
_____no_output_____
###Markdown
Recover the images from the TFRecord file:
###Code
for image_features in parsed_image_dataset:
image_raw = image_features['image_raw'].numpy()
display.display(display.Image(data=image_raw))
###Output
_____no_output_____
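###Markdown
As a quick sanity check (a sketch that assumes both stored images are JPEGs, as they are here), you can decode the recovered bytes and compare the shape against the stored metadata:
###Code
for image_features in parsed_image_dataset.take(1):
    image = tf.io.decode_jpeg(image_features['image_raw'])
    print('decoded shape:', image.shape)
    print('stored shape: ', (image_features['height'].numpy(),
                             image_features['width'].numpy(),
                             image_features['depth'].numpy()))
###Output
_____no_output_____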
###Markdown
TFRecord and tf.Example**Learning Objectives**1. Understand the TFRecord format for storing data2. Understand the tf.Example message type3. Read and Write a TFRecord file Introduction In this notebook, you create, parse, and use the `tf.Example` message, and then serialize, write, and read `tf.Example` messages to and from `.tfrecord` files. To read data efficiently it can be helpful to serialize your data and store it in a set of files (100-200MB each) that can each be read linearly. This is especially true if the data is being streamed over a network. This can also be useful for caching any data-preprocessing. Each learning objective will correspond to a __TODO__ in the [student lab notebook](../labs/tfrecord-tf.example.ipynb) -- try to complete that notebook first before reviewing this solution notebook. The TFRecord format The TFRecord format is a simple format for storing a sequence of binary records. [Protocol buffers](https://developers.google.com/protocol-buffers/) are a cross-platform, cross-language library for efficient serialization of structured data. Protocol messages are defined by `.proto` files; these are often the easiest way to understand a message type. The `tf.Example` message (or protobuf) is a flexible message type that represents a `{"string": value}` mapping. It is designed for use with TensorFlow and is used throughout the higher-level APIs such as [TFX](https://www.tensorflow.org/tfx/). Note: While useful, these structures are optional. There is no need to convert existing code to use TFRecords, unless you are using [`tf.data`](https://www.tensorflow.org/guide/datasets) and reading data is still the bottleneck to training. See [Data Input Pipeline Performance](https://www.tensorflow.org/guide/performance/datasets) for dataset performance tips. Load necessary libraries We will start by importing the necessary libraries for this lab.
###Code
# Run the chown command to change the ownership of the repository
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the results of that search
# to a name in the local scope.
!pip install --upgrade tensorflow==2.5
import tensorflow as tf
import numpy as np
import IPython.display as display
print("TensorFlow version: ",tf.version.VERSION)
###Output
ERROR: tensorflow 2.1.0 has requirement gast==0.2.2, but you'll have gast 0.3.3 which is incompatible.
ERROR: witwidget 1.6.0 has requirement oauth2client>=4.1.3, but you'll have oauth2client 3.0.0 which is incompatible.
ERROR: tensorflow-probability 0.8.0 has requirement cloudpickle==1.1.1, but you'll have cloudpickle 1.3.0 which is incompatible.
ERROR: tensorflow-probability 0.8.0 has requirement gast<0.3,>=0.2, but you'll have gast 0.3.3 which is incompatible.
ERROR: tensorflow-io 0.9.10 has requirement tensorflow==2.1.0rc0, but you'll have tensorflow 2.1.0 which is incompatible.
WARNING: You are using pip version 20.1; however, version 20.1.1 is available.
You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.
TensorFlow version: 2.3.0-dev20200613
###Markdown
Please ignore any incompatibility warnings and errors. `tf.Example` Data types for `tf.Example` Fundamentally, a `tf.Example` is a `{"string": tf.train.Feature}` mapping.The `tf.train.Feature` message type can accept one of the following three types (See the [`.proto` file](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto) for reference). Most other generic types can be coerced into one of these:1. `tf.train.BytesList` (the following types can be coerced) - `string` - `byte`1. `tf.train.FloatList` (the following types can be coerced) - `float` (`float32`) - `double` (`float64`)1. `tf.train.Int64List` (the following types can be coerced) - `bool` - `enum` - `int32` - `uint32` - `int64` - `uint64` In order to convert a standard TensorFlow type to a `tf.Example`-compatible `tf.train.Feature`, you can use the shortcut functions below. Note that each function takes a scalar input value and returns a `tf.train.Feature` containing one of the three `list` types above:
###Code
# TODO 1a
# The following functions can be used to convert a value to a type compatible
# with tf.Example.
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
###Output
_____no_output_____
###Markdown
Note: To keep things simple, this example only uses scalar inputs. The simplest way to handle non-scalar features is to use `tf.io.serialize_tensor` to convert tensors to binary-strings. Strings are scalars in TensorFlow. Use `tf.io.parse_tensor` to convert the binary-string back to a tensor. Below are some examples of how these functions work. Note the varying input types and the standardized output types. If the input type for a function does not match one of the coercible types stated above, the function will raise an exception (e.g. `_int64_feature(1.0)` will error out, since `1.0` is a float, so it should be used with the `_float_feature` function instead):
###Code
print(_bytes_feature(b'test_string'))
print(_bytes_feature(u'test_bytes'.encode('utf-8')))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
###Output
bytes_list {
value: "test_string"
}
bytes_list {
value: "test_bytes"
}
float_list {
value: 2.7182817459106445
}
int64_list {
value: 1
}
int64_list {
value: 1
}
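###Markdown
To make the non-scalar note above concrete, here is a minimal round-trip sketch: serialize a small tensor to a byte-string, wrap it with `_bytes_feature`, and recover it with `tf.io.parse_tensor`. The tensor values are arbitrary.
###Code
t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
serialized_t = tf.io.serialize_tensor(t)   # a scalar byte-string tensor
tensor_feature = _bytes_feature(serialized_t)
# Pull the bytes back out of the feature and parse them into a tensor.
round_trip = tf.io.parse_tensor(
    tensor_feature.bytes_list.value[0], out_type=tf.float32)
print(round_trip)
###Output
_____no_output_____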
###Markdown
All proto messages can be serialized to a binary-string using the `.SerializeToString` method:
###Code
# TODO 1b
feature = _float_feature(np.exp(1))
# `SerializeToString()` serializes the message and returns it as a string
feature.SerializeToString()
###Output
_____no_output_____
###Markdown
Creating a `tf.Example` message Suppose you want to create a `tf.Example` message from existing data. In practice, the dataset may come from anywhere, but the procedure of creating the `tf.Example` message from a single observation will be the same:1. Within each observation, each value needs to be converted to a `tf.train.Feature` containing one of the 3 compatible types, using one of the functions above.1. You create a map (dictionary) from the feature name string to the encoded feature value produced in 1.1. The map produced in step 2 is converted to a [`Features` message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto#L85). In this notebook, you will create a dataset using NumPy. This dataset will have 4 features:* a boolean feature, `False` or `True` with equal probability* an integer feature uniformly randomly chosen from `[0, 5)`* a string feature generated from a string table by using the integer feature as an index* a float feature from a standard normal distribution. Consider a sample consisting of 10,000 independently and identically distributed observations from each of the above distributions:
###Code
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature
strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat'])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution
feature3 = np.random.randn(n_observations)
###Output
_____no_output_____
###Markdown
Each of these features can be coerced into a `tf.Example`-compatible type using one of `_bytes_feature`, `_float_feature`, `_int64_feature`. You can then create a `tf.Example` message from these encoded features:
###Code
def serialize_example(feature0, feature1, feature2, feature3):
"""
Creates a tf.Example message ready to be written to a file.
"""
# Create a dictionary mapping the feature name to the tf.Example-compatible
# data type.
feature = {
'feature0': _int64_feature(feature0),
'feature1': _int64_feature(feature1),
'feature2': _bytes_feature(feature2),
'feature3': _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
###Output
_____no_output_____
###Markdown
For example, suppose you have a single observation from the dataset, `[False, 4, bytes('goat'), 0.9876]`. You can create and print the `tf.Example` message for this observation using `serialize_example()`. Each single observation will be written as a `Features` message as per the above. Note that the `tf.Example` [message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/example.proto#L88) is just a wrapper around the `Features` message:
###Code
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b'goat', 0.9876)
serialized_example
###Output
_____no_output_____
###Markdown
You can parse TFRecords using the standard protocol buffer `.FromString` method. To decode the message, use the `tf.train.Example.FromString` method.
###Code
# TODO 1c
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
###Output
_____no_output_____
###Markdown
TFRecords format details A TFRecord file contains a sequence of records. The file can only be read sequentially. Each record contains a byte-string for the data payload, plus the data length, and CRC32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking. Each record is stored in the following format: uint64 length uint32 masked_crc32_of_length byte data[length] uint32 masked_crc32_of_data The records are concatenated together to produce the file. CRCs are [described here](https://en.wikipedia.org/wiki/Cyclic_redundancy_check), and the mask of a CRC is: masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul Note: There is no requirement to use `tf.Example` in TFRecord files. `tf.Example` is just a method of serializing dictionaries to byte-strings. Any byte-string can be stored: lines of text, encoded image data, or serialized tensors (written with `tf.io.serialize_tensor` and read back with `tf.io.parse_tensor` when loading). See the `tf.io` module for more options. TFRecord files using `tf.data` The `tf.data` module also provides tools for reading and writing data in TensorFlow. Writing a TFRecord file The easiest way to get the data into a dataset is to use the `from_tensor_slices` method. Applied to an array, it returns a dataset of scalars:
###Code
tf.data.Dataset.from_tensor_slices(feature1)
###Output
_____no_output_____
###Markdown
Applied to a tuple of arrays, it returns a dataset of tuples:
###Code
features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3))
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0,f1,f2,f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
###Output
tf.Tensor(False, shape=(), dtype=bool)
tf.Tensor(1, shape=(), dtype=int64)
tf.Tensor(b'dog', shape=(), dtype=string)
tf.Tensor(-0.6086492521118764, shape=(), dtype=float64)
###Markdown
Use the `tf.data.Dataset.map` method to apply a function to each element of a `Dataset`. The mapped function must operate in TensorFlow graph mode: it must operate on and return `tf.Tensors`. A non-tensor function, like `serialize_example`, can be wrapped with `tf.py_function` to make it compatible. Using `tf.py_function` requires you to specify the shape and type information that is otherwise unavailable:
###Code
# TODO 2a
def tf_serialize_example(f0,f1,f2,f3):
tf_string = tf.py_function(
serialize_example,
(f0,f1,f2,f3), # pass these args to the above function.
tf.string) # the return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar
tf_serialize_example(f0,f1,f2,f3)
###Output
_____no_output_____
###Markdown
Apply this function to each element in the dataset:
###Code
# TODO 2b
# `.map` function maps across the elements of the dataset.
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
def generator():
for features in features_dataset:
yield serialize_example(*features)
# Create a Dataset whose elements are generated by generator using `.from_generator` function
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=())
serialized_features_dataset
###Output
_____no_output_____
###Markdown
And write them to a TFRecord file:
###Code
filename = 'test.tfrecord'
# `.TFRecordWriter` function writes a dataset to a TFRecord file
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
###Output
_____no_output_____
###Markdown
Reading a TFRecord file You can also read the TFRecord file using the `tf.data.TFRecordDataset` class. More information on consuming TFRecord files using `tf.data` can be found [here](https://www.tensorflow.org/guide/datasets#consuming_tfrecord_data). Using `TFRecordDataset`s can be useful for standardizing input data and optimizing performance.
###Code
# TODO 2c
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
###Output
_____no_output_____
###Markdown
At this point the dataset contains serialized `tf.train.Example` messages. When iterated over, it returns these as scalar string tensors. Use the `.take` method to only show the first 10 records. Note: iterating over a `tf.data.Dataset` only works with eager execution enabled.
###Code
# Use the `.take` method to pull ten examples from the dataset.
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
###Output
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p\xd0\x1b\xbf\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xa6\xbf\xba\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xaa\x05/@'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04C\x96\n?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04^\x06\x96>\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x057\x8c\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xbco\xab\xbe\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p[|\xbd'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nU\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x02\n\x17\n\x08feature2\x12\x0b\n\t\n\x07chicken\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xba.\xb6\xbf'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x96tf?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat'>
###Markdown
These tensors can be parsed using the function below. Note that the `feature_description` is necessary here because datasets use graph-execution, and need this description to build their shape and type signature:
###Code
# Create a description of the features.
feature_description = {
'feature0': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature1': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature2': tf.io.FixedLenFeature([], tf.string, default_value=''),
'feature3': tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
###Output
_____no_output_____
###Markdown
Alternatively, use `tf.io.parse_example` to parse the whole batch at once. Apply this function to each item in the dataset using the `tf.data.Dataset.map` method:
###Code
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
###Output
_____no_output_____
###Markdown
Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a `tf.Tensor`, and the `numpy` element of this tensor displays the value of the feature:
###Code
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
###Output
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.60864925>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.3647434>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=2.7347207>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.5413553>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.29301733>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.27385727>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.33483684>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.06161064>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'chicken'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=2>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-1.423301>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.90021646>}
###Markdown
Here, the `tf.io.parse_example` function unpacks the `tf.Example` fields into standard tensors. TFRecord files in Python The `tf.io` module also contains pure-Python functions for reading and writing TFRecord files. Writing a TFRecord file Next, write the 10,000 observations to the file `test.tfrecord`. Each observation is converted to a `tf.Example` message, then written to file. You can then verify that the file `test.tfrecord` has been created:
###Code
# Write the `tf.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
# `du` stands for disk usage and is used to estimate the amount of disk space used by a given file or directory.
!du -sh {filename}
###Output
984K test.tfrecord
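###Markdown
The CRC masking from the format-details section is plain 32-bit arithmetic and can be sketched directly. This assumes you already have a CRC32C value from somewhere (for instance the third-party `crc32c` package); the input below is an arbitrary placeholder.
###Code
def mask_crc(crc):
    """Apply the masking described earlier: rotate the 32-bit CRC right
    by 15 bits, then add the magic constant, keeping 32-bit wraparound."""
    rotated = (crc >> 15) | ((crc << 17) & 0xFFFFFFFF)
    return (rotated + 0xA282EAD8) & 0xFFFFFFFF

print(hex(mask_crc(0x12345678)))
###Output
_____no_output_____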
###Markdown
Reading a TFRecord file These serialized tensors can be easily parsed using `tf.train.Example.ParseFromString`:
###Code
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
###Output
features {
feature {
key: "feature0"
value {
int64_list {
value: 0
}
}
}
feature {
key: "feature1"
value {
int64_list {
value: 1
}
}
}
feature {
key: "feature2"
value {
bytes_list {
value: "dog"
}
}
}
feature {
key: "feature3"
value {
float_list {
value: -0.6086492538452148
}
}
}
}
###Markdown
Walkthrough: Reading and writing image data This is an end-to-end example of how to read and write image data using TFRecords. Using an image as input data, you will write the data as a TFRecord file, then read the file back and display the image.This can be useful if, for example, you want to use several models on the same input dataset. Instead of storing the image data raw, it can be preprocessed into the TFRecords format, and that can be used in all further processing and modelling.First, let's download [this image](https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg) of a cat in the snow and [this photo](https://upload.wikimedia.org/wikipedia/commons/f/fe/New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg) of the Williamsburg Bridge, NYC under construction. Fetch the images
###Code
# Downloads a file from a URL if it not already in the cache using `tf.keras.utils.get_file` function.
cat_in_snow = tf.keras.utils.get_file('320px-Felis_catus-cat_on_snow.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg')
williamsburg_bridge = tf.keras.utils.get_file('194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg')
# Check the image file
display.display(display.Image(filename=cat_in_snow))
display.display(display.HTML('Image cc-by: <a href="https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
display.display(display.Image(filename=williamsburg_bridge))
display.display(display.HTML('<a href="https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'))
###Output
_____no_output_____
###Markdown
Write the TFRecord file As before, encode the features as types compatible with `tf.Example`. This stores the raw image string feature, as well as the height, width, depth, and arbitrary `label` feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use `0` for the cat image, and `1` for the bridge image:
###Code
image_labels = {
cat_in_snow : 0,
williamsburg_bridge : 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, 'rb').read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.image.decode_jpeg(image_string).shape
feature = {
'height': _int64_feature(image_shape[0]),
'width': _int64_feature(image_shape[1]),
'depth': _int64_feature(image_shape[2]),
'label': _int64_feature(label),
'image_raw': _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split('\n')[:15]:
print(line)
print('...')
###Output
features {
feature {
key: "depth"
value {
int64_list {
value: 3
}
}
}
feature {
key: "height"
value {
int64_list {
value: 213
}
...
###Markdown
Notice that all of the features are now stored in the `tf.Example` message. Next, functionalize the code above and write the example messages to a file named `images.tfrecords`:
###Code
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = 'images.tfrecords'
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, 'rb').read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
# `du` stands for disk usage and is used to estimate the amount of disk space used by a given file or directory.
!du -sh {record_file}
###Output
36K images.tfrecords
###Markdown
Read the TFRecord file You now have the file `images.tfrecords` and can iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you will need is the raw image string. Extract it using the getters described above, namely `example.features.feature['image_raw'].bytes_list.value[0]`. You can also use the labels to determine which record is the cat and which one is the bridge:
###Code
raw_image_dataset = tf.data.TFRecordDataset('images.tfrecords')
# Create a dictionary describing the features.
image_feature_description = {
'height': tf.io.FixedLenFeature([], tf.int64),
'width': tf.io.FixedLenFeature([], tf.int64),
'depth': tf.io.FixedLenFeature([], tf.int64),
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
###Output
_____no_output_____
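###Markdown
Before recovering the images through `tf.data`, the getter path mentioned above can be tried directly on one raw record. This is a quick sketch, not one of the lab TODOs:
###Code
# Parse one raw record into a tf.train.Example and use the proto getters.
for raw_record in raw_image_dataset.take(1):
    example = tf.train.Example()
    example.ParseFromString(raw_record.numpy())
    print('label:', example.features.feature['label'].int64_list.value[0])
    print('image_raw bytes:',
          len(example.features.feature['image_raw'].bytes_list.value[0]))
###Output
_____no_output_____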
###Markdown
Recover the images from the TFRecord file:
###Code
for image_features in parsed_image_dataset:
image_raw = image_features['image_raw'].numpy()
display.display(display.Image(data=image_raw))
###Output
_____no_output_____
###Markdown
TFRecord and tf.Example**Learning Objectives**1. Understand the TFRecord format for storing data2. Understand the tf.Example message type3. Read and Write a TFRecord file Introduction In this notebook, you create, parse, and use the `tf.Example` message, and then serialize, write, and read `tf.Example` messages to and from `.tfrecord` files. To read data efficiently it can be helpful to serialize your data and store it in a set of files (100-200MB each) that can each be read linearly. This is especially true if the data is being streamed over a network. This can also be useful for caching any data-preprocessing. Each learning objective will correspond to a __TODO__ in the [student lab notebook](../labs/tfrecord-tf.example.ipynb) -- try to complete that notebook first before reviewing this solution notebook. The TFRecord format The TFRecord format is a simple format for storing a sequence of binary records. [Protocol buffers](https://developers.google.com/protocol-buffers/) are a cross-platform, cross-language library for efficient serialization of structured data. Protocol messages are defined by `.proto` files; these are often the easiest way to understand a message type. The `tf.Example` message (or protobuf) is a flexible message type that represents a `{"string": value}` mapping. It is designed for use with TensorFlow and is used throughout the higher-level APIs such as [TFX](https://www.tensorflow.org/tfx/). Note: While useful, these structures are optional. There is no need to convert existing code to use TFRecords, unless you are using [`tf.data`](https://www.tensorflow.org/guide/datasets) and reading data is still the bottleneck to training. See [Data Input Pipeline Performance](https://www.tensorflow.org/datasets/performances) for dataset performance tips. Load necessary libraries We will start by importing the necessary libraries for this lab.
###Code
# Run the chown command to change the ownership of the repository
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the results of that search
# to a name in the local scope.
#!pip install --upgrade tensorflow==2.5
import tensorflow as tf
import numpy as np
import IPython.display as display
print("TensorFlow version: ",tf.version.VERSION)
###Output
ERROR: tensorflow 2.1.0 has requirement gast==0.2.2, but you'll have gast 0.3.3 which is incompatible.
ERROR: witwidget 1.6.0 has requirement oauth2client>=4.1.3, but you'll have oauth2client 3.0.0 which is incompatible.
ERROR: tensorflow-probability 0.8.0 has requirement cloudpickle==1.1.1, but you'll have cloudpickle 1.3.0 which is incompatible.
ERROR: tensorflow-probability 0.8.0 has requirement gast<0.3,>=0.2, but you'll have gast 0.3.3 which is incompatible.
ERROR: tensorflow-io 0.9.10 has requirement tensorflow==2.1.0rc0, but you'll have tensorflow 2.1.0 which is incompatible.
WARNING: You are using pip version 20.1; however, version 20.1.1 is available.
You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.
TensorFlow version: 2.3.0-dev20200613
###Markdown
Please ignore any incompatibility warnings and errors. `tf.Example` Data types for `tf.Example` Fundamentally, a `tf.Example` is a `{"string": tf.train.Feature}` mapping.The `tf.train.Feature` message type can accept one of the following three types (See the [`.proto` file](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto) for reference). Most other generic types can be coerced into one of these:1. `tf.train.BytesList` (the following types can be coerced) - `string` - `byte`1. `tf.train.FloatList` (the following types can be coerced) - `float` (`float32`) - `double` (`float64`)1. `tf.train.Int64List` (the following types can be coerced) - `bool` - `enum` - `int32` - `uint32` - `int64` - `uint64` In order to convert a standard TensorFlow type to a `tf.Example`-compatible `tf.train.Feature`, you can use the shortcut functions below. Note that each function takes a scalar input value and returns a `tf.train.Feature` containing one of the three `list` types above:
###Code
# TODO 1a
# The following functions can be used to convert a value to a type compatible
# with tf.Example.
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
###Output
_____no_output_____
###Markdown
Note: To keep things simple, this example only uses scalar inputs. The simplest way to handle non-scalar features is to use `tf.io.serialize_tensor` to convert tensors to binary-strings. Strings are scalars in TensorFlow. Use `tf.io.parse_tensor` to convert the binary-string back to a tensor. Below are some examples of how these functions work. Note the varying input types and the standardized output types. If the input type for a function does not match one of the coercible types stated above, the function will raise an exception (e.g. `_int64_feature(1.0)` will error out, since `1.0` is a float, so it should be used with the `_float_feature` function instead):
###Code
print(_bytes_feature(b'test_string'))
print(_bytes_feature(u'test_bytes'.encode('utf-8')))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
###Output
bytes_list {
value: "test_string"
}
bytes_list {
value: "test_bytes"
}
float_list {
value: 2.7182817459106445
}
int64_list {
value: 1
}
int64_list {
value: 1
}
###Markdown
All proto messages can be serialized to a binary-string using the `.SerializeToString` method:
###Code
# TODO 1b
feature = _float_feature(np.exp(1))
# `SerializeToString()` serializes the message and returns it as a string
feature.SerializeToString()
###Output
_____no_output_____
###Markdown
Creating a `tf.Example` message Suppose you want to create a `tf.Example` message from existing data. In practice, the dataset may come from anywhere, but the procedure of creating the `tf.Example` message from a single observation will be the same:1. Within each observation, each value needs to be converted to a `tf.train.Feature` containing one of the 3 compatible types, using one of the functions above.1. You create a map (dictionary) from the feature name string to the encoded feature value produced in 1.1. The map produced in step 2 is converted to a [`Features` message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto#L85). In this notebook, you will create a dataset using NumPy. This dataset will have 4 features:* a boolean feature, `False` or `True` with equal probability* an integer feature uniformly randomly chosen from `[0, 5)`* a string feature generated from a string table by using the integer feature as an index* a float feature from a standard normal distribution. Consider a sample consisting of 10,000 independently and identically distributed observations from each of the above distributions:
###Code
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature
strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat'])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution
feature3 = np.random.randn(n_observations)
###Output
_____no_output_____
###Markdown
Each of these features can be coerced into a `tf.Example`-compatible type using one of `_bytes_feature`, `_float_feature`, `_int64_feature`. You can then create a `tf.Example` message from these encoded features:
###Code
def serialize_example(feature0, feature1, feature2, feature3):
"""
Creates a tf.Example message ready to be written to a file.
"""
# Create a dictionary mapping the feature name to the tf.Example-compatible
# data type.
feature = {
'feature0': _int64_feature(feature0),
'feature1': _int64_feature(feature1),
'feature2': _bytes_feature(feature2),
'feature3': _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
###Output
_____no_output_____
###Markdown
For example, suppose you have a single observation from the dataset, `[False, 4, bytes('goat'), 0.9876]`. You can create and print the `tf.Example` message for this observation using `serialize_example()`. Each single observation will be written as a `Features` message as per the above. Note that the `tf.Example` [message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/example.proto#L88) is just a wrapper around the `Features` message:
###Code
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b'goat', 0.9876)
serialized_example
###Output
_____no_output_____
###Markdown
You can parse TFRecords using the standard protocol buffer `.FromString` method. To decode the message, use the `tf.train.Example.FromString` method.
###Code
# TODO 1c
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
###Output
_____no_output_____
###Markdown
TFRecords format details A TFRecord file contains a sequence of records. The file can only be read sequentially. Each record contains a byte-string for the data payload, plus the data length, and CRC32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking. Each record is stored in the following format: uint64 length uint32 masked_crc32_of_length byte data[length] uint32 masked_crc32_of_data The records are concatenated together to produce the file. CRCs are [described here](https://en.wikipedia.org/wiki/Cyclic_redundancy_check), and the mask of a CRC is: masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul Note: There is no requirement to use `tf.Example` in TFRecord files. `tf.Example` is just a method of serializing dictionaries to byte-strings. Any byte-string can be stored: lines of text, encoded image data, or serialized tensors (written with `tf.io.serialize_tensor` and read back with `tf.io.parse_tensor` when loading). See the `tf.io` module for more options. TFRecord files using `tf.data` The `tf.data` module also provides tools for reading and writing data in TensorFlow. Writing a TFRecord file The easiest way to get the data into a dataset is to use the `from_tensor_slices` method. Applied to an array, it returns a dataset of scalars:
###Code
tf.data.Dataset.from_tensor_slices(feature1)
###Output
_____no_output_____
###Markdown
Applied to a tuple of arrays, it returns a dataset of tuples:
###Code
features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3))
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0,f1,f2,f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
###Output
tf.Tensor(False, shape=(), dtype=bool)
tf.Tensor(1, shape=(), dtype=int64)
tf.Tensor(b'dog', shape=(), dtype=string)
tf.Tensor(-0.6086492521118764, shape=(), dtype=float64)
###Markdown
Use the `tf.data.Dataset.map` method to apply a function to each element of a `Dataset`. The mapped function must operate in TensorFlow graph mode: it must operate on and return `tf.Tensors`. A non-tensor function, like `serialize_example`, can be wrapped with `tf.py_function` to make it compatible. Using `tf.py_function` requires you to specify the shape and type information that is otherwise unavailable:
###Code
# TODO 2a
def tf_serialize_example(f0,f1,f2,f3):
tf_string = tf.py_function(
serialize_example,
(f0,f1,f2,f3), # pass these args to the above function.
tf.string) # the return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar
tf_serialize_example(f0,f1,f2,f3)
###Output
_____no_output_____
###Markdown
Apply this function to each element in the dataset:
###Code
# TODO 2b
# `.map` function maps across the elements of the dataset.
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
def generator():
for features in features_dataset:
yield serialize_example(*features)
# Create a Dataset whose elements are generated by generator using `.from_generator` function
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=())
serialized_features_dataset
###Output
_____no_output_____
###Markdown
And write them to a TFRecord file:
###Code
filename = 'test.tfrecord'
# `.TFRecordWriter` function writes a dataset to a TFRecord file
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
###Output
_____no_output_____
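###Markdown
Large datasets are usually split across several shard files (the introduction suggests 100-200MB each so they can be read linearly and in parallel). Below is a minimal sketch of sharding with `Dataset.shard`; the shard file names are hypothetical:
###Code
# Write the same serialized dataset out as 4 shard files.
num_shards = 4
for shard_index in range(num_shards):
    shard_name = 'test-{:05d}-of-{:05d}.tfrecord'.format(shard_index, num_shards)
    shard = serialized_features_dataset.shard(num_shards, shard_index)
    tf.data.experimental.TFRecordWriter(shard_name).write(shard)
###Output
_____no_output_____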
###Markdown
Reading a TFRecord file You can also read the TFRecord file using the `tf.data.TFRecordDataset` class. More information on consuming TFRecord files using `tf.data` can be found [here](https://www.tensorflow.org/guide/data#consuming_tfrecord_data). Using `TFRecordDataset`s can be useful for standardizing input data and optimizing performance.
###Code
# TODO 2c
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
###Output
_____no_output_____
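###Markdown
For the performance angle mentioned above, `tf.data.TFRecordDataset` accepts a `num_parallel_reads` argument to interleave reads across files. This is a sketch; with only one file here the gain is purely illustrative.
###Code
# Interleave reads across input files (only one file in this lab).
perf_dataset = tf.data.TFRecordDataset(
    filenames, num_parallel_reads=tf.data.experimental.AUTOTUNE)
perf_dataset
###Output
_____no_output_____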
###Markdown
At this point the dataset contains serialized `tf.train.Example` messages. When iterated over, it returns these as scalar string tensors. Use the `.take` method to only show the first 10 records. Note: iterating over a `tf.data.Dataset` only works with eager execution enabled.
###Code
# Use the `.take` method to pull ten examples from the dataset.
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
###Output
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p\xd0\x1b\xbf\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xa6\xbf\xba\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xaa\x05/@'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04C\x96\n?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04^\x06\x96>\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x057\x8c\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xbco\xab\xbe\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p[|\xbd'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nU\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x02\n\x17\n\x08feature2\x12\x0b\n\t\n\x07chicken\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xba.\xb6\xbf'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x96tf?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat'>
###Markdown
These tensors can be parsed using the function below. Note that the `feature_description` is necessary here because datasets use graph-execution, and need this description to build their shape and type signature:
###Code
# Create a description of the features.
feature_description = {
'feature0': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature1': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature2': tf.io.FixedLenFeature([], tf.string, default_value=''),
'feature3': tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
###Output
_____no_output_____
###Markdown
Alternatively, use `tf.parse example` to parse the whole batch at once. Apply this function to each item in the dataset using the `tf.data.Dataset.map` method:
###Code
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
###Output
_____no_output_____
###Markdown
Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a `tf.Tensor`, and the `numpy` element of this tensor displays the value of the feature:
###Code
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
###Output
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.60864925>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.3647434>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=2.7347207>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.5413553>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.29301733>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.27385727>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.33483684>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.06161064>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'chicken'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=2>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-1.423301>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.90021646>}
###Markdown
Here, the `tf.parse_example` function unpacks the `tf.Example` fields into standard tensors. TFRecord files in Python The `tf.io` module also contains pure-Python functions for reading and writing TFRecord files. Writing a TFRecord file Next, write the 10,000 observations to the file `test.tfrecord`. Each observation is converted to a `tf.Example` message, then written to file. You can then verify that the file `test.tfrecord` has been created:
###Code
# Write the `tf.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
# `du` stands for disk usage and is used to estimate the amount of disk space used by a given file or directory.
!du -sh {filename}
###Output
984K test.tfrecord
###Markdown
Reading a TFRecord fileThese serialized tensors can be easily parsed using `tf.train.Example.ParseFromString`:
###Code
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
###Output
features {
feature {
key: "feature0"
value {
int64_list {
value: 0
}
}
}
feature {
key: "feature1"
value {
int64_list {
value: 1
}
}
}
feature {
key: "feature2"
value {
bytes_list {
value: "dog"
}
}
}
feature {
key: "feature3"
value {
float_list {
value: -0.6086492538452148
}
}
}
}
###Markdown
Walkthrough: Reading and writing image data This is an end-to-end example of how to read and write image data using TFRecords. Using an image as input data, you will write the data as a TFRecord file, then read the file back and display the image.This can be useful if, for example, you want to use several models on the same input dataset. Instead of storing the image data raw, it can be preprocessed into the TFRecords format, and that can be used in all further processing and modelling.First, let's download [this image](https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg) of a cat in the snow and [this photo](https://upload.wikimedia.org/wikipedia/commons/f/fe/New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg) of the Williamsburg Bridge, NYC under construction. Fetch the images
###Code
# Downloads a file from a URL if it not already in the cache using `tf.keras.utils.get_file` function.
cat_in_snow = tf.keras.utils.get_file('320px-Felis_catus-cat_on_snow.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg')
williamsburg_bridge = tf.keras.utils.get_file('194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg')
# Check the image file
display.display(display.Image(filename=cat_in_snow))
display.display(display.HTML('Image cc-by: <a "href=https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
display.display(display.Image(filename=williamsburg_bridge))
display.display(display.HTML('<a "href=https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'))
###Output
_____no_output_____
###Markdown
Write the TFRecord file As before, encode the features as types compatible with `tf.Example`. This stores the raw image string feature, as well as the height, width, depth, and arbitrary `label` feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use `0` for the cat image, and `1` for the bridge image:
###Code
image_labels = {
cat_in_snow : 0,
williamsburg_bridge : 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, 'rb').read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.image.decode_jpeg(image_string).shape
feature = {
'height': _int64_feature(image_shape[0]),
'width': _int64_feature(image_shape[1]),
'depth': _int64_feature(image_shape[2]),
'label': _int64_feature(label),
'image_raw': _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split('\n')[:15]:
print(line)
print('...')
###Output
features {
feature {
key: "depth"
value {
int64_list {
value: 3
}
}
}
feature {
key: "height"
value {
int64_list {
value: 213
}
...
###Markdown
Notice that all of the features are now stored in the `tf.Example` message. Next, functionalize the code above and write the example messages to a file named `images.tfrecords`:
###Code
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = 'images.tfrecords'
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, 'rb').read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
# `du` stands for disk usage and is used to estimate the amount of disk space used by a given file or directory.
!du -sh {record_file}
###Output
36K images.tfrecords
###Markdown
Read the TFRecord fileYou now have the file—`images.tfrecords`—and can now iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you will need is the raw image string. Extract it using the getters described above, namely `example.features.feature['image_raw'].bytes_list.value[0]`. You can also use the labels to determine which record is the cat and which one is the bridge:
###Code
raw_image_dataset = tf.data.TFRecordDataset('images.tfrecords')
# Create a dictionary describing the features.
image_feature_description = {
'height': tf.io.FixedLenFeature([], tf.int64),
'width': tf.io.FixedLenFeature([], tf.int64),
'depth': tf.io.FixedLenFeature([], tf.int64),
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
###Output
_____no_output_____
###Markdown
Recover the images from the TFRecord file:
###Code
for image_features in parsed_image_dataset:
image_raw = image_features['image_raw'].numpy()
display.display(display.Image(data=image_raw))
###Output
_____no_output_____
###Markdown
TFRecord and tf.Example

**Learning Objectives**

1. Understand the TFRecord format for storing data
2. Understand the tf.Example message type
3. Read and write a TFRecord file

Introduction

In this notebook, you create, parse, and use the `tf.Example` message, and then serialize, write, and read `tf.Example` messages to and from `.tfrecord` files. To read data efficiently it can be helpful to serialize your data and store it in a set of files (100-200 MB each) that can each be read linearly. This is especially true if the data is being streamed over a network. This can also be useful for caching any data preprocessing.

Each learning objective corresponds to a __TODO__ in the [student lab notebook](../labs/tfrecord-tf.example.ipynb) -- try to complete that notebook first before reviewing this solution notebook.

The TFRecord format

The TFRecord format is a simple format for storing a sequence of binary records. [Protocol buffers](https://developers.google.com/protocol-buffers/) are a cross-platform, cross-language library for efficient serialization of structured data. Protocol messages are defined by `.proto` files, which are often the easiest way to understand a message type.

The `tf.Example` message (or protobuf) is a flexible message type that represents a `{"string": value}` mapping. It is designed for use with TensorFlow and is used throughout higher-level APIs such as [TFX](https://www.tensorflow.org/tfx/).

Note: While useful, these structures are optional. There is no need to convert existing code to use TFRecords unless you are using [`tf.data`](https://www.tensorflow.org/guide/datasets) and reading data is still the bottleneck to training. See [Data Input Pipeline Performance](https://www.tensorflow.org/datasets/performances) for dataset performance tips.

Load necessary libraries

We will start by importing the necessary libraries for this lab.
###Code
# Run the chown command to change the ownership of the repository
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the results of that search
# to a name in the local scope.
#!pip install --upgrade tensorflow==2.5
import tensorflow as tf
import numpy as np
import IPython.display as display
print("TensorFlow version: ",tf.version.VERSION)
###Output
ERROR: tensorflow 2.1.0 has requirement gast==0.2.2, but you'll have gast 0.3.3 which is incompatible.
ERROR: witwidget 1.6.0 has requirement oauth2client>=4.1.3, but you'll have oauth2client 3.0.0 which is incompatible.
ERROR: tensorflow-probability 0.8.0 has requirement cloudpickle==1.1.1, but you'll have cloudpickle 1.3.0 which is incompatible.
ERROR: tensorflow-probability 0.8.0 has requirement gast<0.3,>=0.2, but you'll have gast 0.3.3 which is incompatible.
ERROR: tensorflow-io 0.9.10 has requirement tensorflow==2.1.0rc0, but you'll have tensorflow 2.1.0 which is incompatible.
WARNING: You are using pip version 20.1; however, version 20.1.1 is available.
You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.
TensorFlow version: 2.3.0-dev20200613
###Markdown
Please ignore any incompatibility warnings and errors.

`tf.Example`

Data types for `tf.Example`

Fundamentally, a `tf.Example` is a `{"string": tf.train.Feature}` mapping.

The `tf.train.Feature` message type can accept one of the following three types (see the [`.proto` file](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto) for reference). Most other generic types can be coerced into one of these:

1. `tf.train.BytesList` (the following types can be coerced)
   - `string`
   - `byte`
2. `tf.train.FloatList` (the following types can be coerced)
   - `float` (`float32`)
   - `double` (`float64`)
3. `tf.train.Int64List` (the following types can be coerced)
   - `bool`
   - `enum`
   - `int32`
   - `uint32`
   - `int64`
   - `uint64`

In order to convert a standard TensorFlow type to a `tf.Example`-compatible `tf.train.Feature`, you can use the shortcut functions below. Note that each function takes a scalar input value and returns a `tf.train.Feature` containing one of the three `list` types above:
###Code
# TODO 1a
# The following functions can be used to convert a value to a type compatible
# with tf.Example.
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
###Output
_____no_output_____
###Markdown
Note: To keep things simple, this example only uses scalar inputs. The simplest way to handle non-scalar features is to use `tf.io.serialize_tensor` to convert tensors to binary-strings (strings are scalars in TensorFlow), and `tf.io.parse_tensor` to convert the binary-string back to a tensor.

Below are some examples of how these functions work. Note the varying input types and the standardized output types. If the input type for a function does not match one of the coercible types stated above, the function will raise an exception (e.g. `_int64_feature(1.0)` will error out, since `1.0` is a float; it should be used with the `_float_feature` function instead):
###Code
print(_bytes_feature(b'test_string'))
print(_bytes_feature(u'test_bytes'.encode('utf-8')))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
###Output
bytes_list {
value: "test_string"
}
bytes_list {
value: "test_bytes"
}
float_list {
value: 2.7182817459106445
}
int64_list {
value: 1
}
int64_list {
value: 1
}
###Markdown
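The note above points to `tf.io.serialize_tensor` for non-scalar features; here is a minimal round-trip sketch (not part of the original lab; the tensor value is illustrative):

```python
# Serialize a non-scalar tensor into a scalar binary string, so it can be
# wrapped with the _bytes_feature helper defined above.
tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])
serialized_tensor = tf.io.serialize_tensor(tensor)   # scalar tf.string tensor
feature = _bytes_feature(serialized_tensor)          # tf.train.Feature
# ...and recover the original tensor when reading the data back:
restored = tf.io.parse_tensor(serialized_tensor, out_type=tf.float32)
```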
All proto messages can be serialized to a binary-string using the `.SerializeToString` method:
###Code
# TODO 1b
feature = _float_feature(np.exp(1))
# `SerializeToString()` serializes the message and returns it as a string
feature.SerializeToString()
###Output
_____no_output_____
###Markdown
Creating a `tf.Example` message

Suppose you want to create a `tf.Example` message from existing data. In practice, the dataset may come from anywhere, but the procedure of creating the `tf.Example` message from a single observation will be the same:

1. Within each observation, each value needs to be converted to a `tf.train.Feature` containing one of the 3 compatible types, using one of the functions above.
2. You create a map (dictionary) from the feature name string to the encoded feature value produced in step 1.
3. The map produced in step 2 is converted to a [`Features` message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto#L85).

In this notebook, you will create a dataset using NumPy. This dataset will have 4 features:

* a boolean feature, `False` or `True` with equal probability
* an integer feature uniformly randomly chosen from `[0, 5)`
* a string feature generated from a string table by using the integer feature as an index
* a float feature from a standard normal distribution

Consider a sample consisting of 10,000 independently and identically distributed observations from each of the above distributions:
###Code
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature
strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat'])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution
feature3 = np.random.randn(n_observations)
###Output
_____no_output_____
###Markdown
Each of these features can be coerced into a `tf.Example`-compatible type using one of `_bytes_feature`, `_float_feature`, `_int64_feature`. You can then create a `tf.Example` message from these encoded features:
###Code
def serialize_example(feature0, feature1, feature2, feature3):
"""
Creates a tf.Example message ready to be written to a file.
"""
# Create a dictionary mapping the feature name to the tf.Example-compatible
# data type.
feature = {
'feature0': _int64_feature(feature0),
'feature1': _int64_feature(feature1),
'feature2': _bytes_feature(feature2),
'feature3': _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
###Output
_____no_output_____
###Markdown
For example, suppose you have a single observation from the dataset, `[False, 4, bytes('goat'), 0.9876]`. You can create and print the `tf.Example` message for this observation using `serialize_example()`. Each single observation will be written as a `Features` message as per the above. Note that the `tf.Example` [message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/example.proto#L88) is just a wrapper around the `Features` message:
###Code
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b'goat', 0.9876)
serialized_example
###Output
_____no_output_____
###Markdown
You can parse TFRecords using the standard protocol buffer `.FromString` method. To decode the message, use `tf.train.Example.FromString`:
###Code
# TODO 1c
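# `FromString` deserializes the byte-string back into a tf.train.Example proto.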
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
###Output
_____no_output_____
###Markdown
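Individual fields of the decoded proto can be read with the standard protocol-buffer getters (the same pattern the image walkthrough below uses). A quick sketch, reusing `example_proto` from the cell above; the expected values follow from `serialize_example(False, 4, b'goat', 0.9876)`:

```python
# Drill into the nested Features message by feature name and list type.
example_proto.features.feature['feature2'].bytes_list.value[0]   # b'goat'
example_proto.features.feature['feature3'].float_list.value[0]   # ~0.9876
```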
TFRecords format details

A TFRecord file contains a sequence of records. The file can only be read sequentially. Each record contains a byte-string for the data-payload, plus the data-length and CRC32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking. Each record is stored in the following format:

    uint64 length
    uint32 masked_crc32_of_length
    byte   data[length]
    uint32 masked_crc32_of_data

The records are concatenated together to produce the file. CRCs are [described here](https://en.wikipedia.org/wiki/Cyclic_redundancy_check), and the mask of a CRC is:

    masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul

Note: There is no requirement to use `tf.Example` in TFRecord files. `tf.Example` is just a method of serializing dictionaries to byte-strings. You can also store lines of text, encoded image data, or serialized tensors (using `tf.io.serialize_tensor` when writing and `tf.io.parse_tensor` when loading). See the `tf.io` module for more options.

TFRecord files using `tf.data`

The `tf.data` module also provides tools for reading and writing data in TensorFlow.

Writing a TFRecord file

The easiest way to get the data into a dataset is to use the `from_tensor_slices` method. Applied to an array, it returns a dataset of scalars:
###Code
tf.data.Dataset.from_tensor_slices(feature1)
###Output
_____no_output_____
###Markdown
Applied to a tuple of arrays, it returns a dataset of tuples:
###Code
features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3))
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0,f1,f2,f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
###Output
tf.Tensor(False, shape=(), dtype=bool)
tf.Tensor(1, shape=(), dtype=int64)
tf.Tensor(b'dog', shape=(), dtype=string)
tf.Tensor(-0.6086492521118764, shape=(), dtype=float64)
###Markdown
Use the `tf.data.Dataset.map` method to apply a function to each element of a `Dataset`. The mapped function must operate in TensorFlow graph mode—it must operate on and return `tf.Tensor`s. A non-tensor function, like `serialize_example`, can be wrapped with `tf.py_function` to make it compatible. Using `tf.py_function` requires you to specify the shape and type information that is otherwise unavailable:
###Code
# TODO 2a
def tf_serialize_example(f0,f1,f2,f3):
tf_string = tf.py_function(
serialize_example,
(f0,f1,f2,f3), # pass these args to the above function.
tf.string) # the return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar
tf_serialize_example(f0,f1,f2,f3)
###Output
_____no_output_____
###Markdown
Apply this function to each element in the dataset:
###Code
# TODO 2b
# `.map` function maps across the elements of the dataset.
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
def generator():
for features in features_dataset:
yield serialize_example(*features)
# Create a Dataset whose elements are generated by generator using `.from_generator` function
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=())
serialized_features_dataset
###Output
_____no_output_____
###Markdown
And write them to a TFRecord file:
###Code
filename = 'test.tfrecord'
# `.TFRecordWriter` function writes a dataset to a TFRecord file
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
###Output
_____no_output_____
###Markdown
Reading a TFRecord file

You can also read the TFRecord file using the `tf.data.TFRecordDataset` class. More information on consuming TFRecord files using `tf.data` can be found [here](https://www.tensorflow.org/guide/data#consuming_tfrecord_data). Using `TFRecordDataset`s can be useful for standardizing input data and optimizing performance.
###Code
# TODO 2c
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
###Output
_____no_output_____
###Markdown
At this point the dataset contains serialized `tf.train.Example` messages. When iterated over, it returns these as scalar string tensors. Use the `.take` method to show only the first 10 records.

Note: iterating over a `tf.data.Dataset` only works with eager execution enabled.
###Code
# Use the `.take` method to pull ten examples from the dataset.
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
###Output
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p\xd0\x1b\xbf\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xa6\xbf\xba\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xaa\x05/@'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04C\x96\n?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04^\x06\x96>\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x057\x8c\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xbco\xab\xbe\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p[|\xbd'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nU\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x02\n\x17\n\x08feature2\x12\x0b\n\t\n\x07chicken\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xba.\xb6\xbf'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x96tf?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat'>
###Markdown
These tensors can be parsed using the function below. Note that the `feature_description` is necessary here because datasets use graph-execution, and need this description to build their shape and type signature:
###Code
# Create a description of the features.
feature_description = {
'feature0': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature1': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature2': tf.io.FixedLenFeature([], tf.string, default_value=''),
'feature3': tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
###Output
_____no_output_____
###Markdown
Alternatively, use `tf.io.parse_example` to parse the whole batch at once (a sketch follows after the next cell). Apply this function to each item in the dataset using the `tf.data.Dataset.map` method:
###Code
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
###Output
_____no_output_____
###Markdown
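As mentioned above, `tf.io.parse_example` can also parse a whole batch in one call. A minimal sketch (the batch size and variable name are illustrative, not from the original notebook):

```python
# Batch the serialized protos, then parse each batch in a single call.
batched_parsed = raw_dataset.batch(32).map(
    lambda batch: tf.io.parse_example(batch, feature_description))
```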
Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a `tf.Tensor`, and the `numpy` element of this tensor displays the value of the feature:
###Code
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
###Output
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.60864925>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.3647434>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=2.7347207>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.5413553>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.29301733>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.27385727>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.33483684>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.06161064>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'chicken'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=2>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-1.423301>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.90021646>}
###Markdown
Here, the `tf.io.parse_single_example` function unpacks the `tf.Example` fields into standard tensors.

TFRecord files in Python

The `tf.io` module also contains pure-Python functions for reading and writing TFRecord files.

Writing a TFRecord file

Next, write the 10,000 observations to the file `test.tfrecord`. Each observation is converted to a `tf.Example` message, then written to the file. You can then verify that the file `test.tfrecord` has been created:
###Code
# Write the `tf.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
# `du` stands for disk usage and is used to estimate the amount of disk space used by a given file or directory.
!du -sh {filename}
###Output
984K test.tfrecord
###Markdown
Reading a TFRecord file

These serialized `tf.Example` messages can be easily parsed using `tf.train.Example.ParseFromString`:
###Code
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
###Output
features {
feature {
key: "feature0"
value {
int64_list {
value: 0
}
}
}
feature {
key: "feature1"
value {
int64_list {
value: 1
}
}
}
feature {
key: "feature2"
value {
bytes_list {
value: "dog"
}
}
}
feature {
key: "feature3"
value {
float_list {
value: -0.6086492538452148
}
}
}
}
###Markdown
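Because the record framing described in "TFRecords format details" above is so simple, a TFRecord file can even be walked with pure Python. A minimal sketch (an illustration, not part of the original notebook; it skips CRC verification, but the mask function from the spec is included for completeness):

```python
import struct

def masked_crc(crc):
    # The CRC mask from the spec above; all arithmetic is modulo 2**32.
    rotated = ((crc >> 15) | (crc << 17)) & 0xFFFFFFFF
    return (rotated + 0xA282EAD8) & 0xFFFFFFFF

def tfrecord_payloads(path):
    """Yield each record's raw byte-string payload (CRCs are read but not checked)."""
    with open(path, 'rb') as f:
        while True:
            header = f.read(8)             # uint64 length (little-endian)
            if len(header) < 8:
                break
            length, = struct.unpack('<Q', header)
            f.read(4)                      # uint32 masked_crc32_of_length
            payload = f.read(length)       # byte   data[length]
            f.read(4)                      # uint32 masked_crc32_of_data
            yield payload

# Each payload is a serialized tf.Example, so it can be decoded as shown earlier:
# tf.train.Example.FromString(next(tfrecord_payloads('test.tfrecord')))
```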
Walkthrough: Reading and writing image data

This is an end-to-end example of how to read and write image data using TFRecords. Using an image as input data, you will write the data as a TFRecord file, then read the file back and display the image.

This can be useful if, for example, you want to use several models on the same input dataset. Instead of storing the image data raw, it can be preprocessed into the TFRecords format, and that can be used in all further processing and modelling.

First, let's download [this image](https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg) of a cat in the snow and [this photo](https://upload.wikimedia.org/wikipedia/commons/f/fe/New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg) of the Williamsburg Bridge, NYC under construction.

Fetch the images
###Code
# Download a file from a URL if it is not already in the cache, using the `tf.keras.utils.get_file` function.
cat_in_snow = tf.keras.utils.get_file('320px-Felis_catus-cat_on_snow.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg')
williamsburg_bridge = tf.keras.utils.get_file('194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg')
# Check the image file
display.display(display.Image(filename=cat_in_snow))
display.display(display.HTML('Image cc-by: <a href="https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
display.display(display.Image(filename=williamsburg_bridge))
display.display(display.HTML('<a href="https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'))
###Output
_____no_output_____
###Markdown
Write the TFRecord file

As before, encode the features as types compatible with `tf.Example`. This stores the raw image string feature, as well as the height, width, depth, and an arbitrary `label` feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use `0` for the cat image, and `1` for the bridge image:
###Code
image_labels = {
cat_in_snow : 0,
williamsburg_bridge : 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, 'rb').read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.image.decode_jpeg(image_string).shape
feature = {
'height': _int64_feature(image_shape[0]),
'width': _int64_feature(image_shape[1]),
'depth': _int64_feature(image_shape[2]),
'label': _int64_feature(label),
'image_raw': _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split('\n')[:15]:
print(line)
print('...')
###Output
features {
feature {
key: "depth"
value {
int64_list {
value: 3
}
}
}
feature {
key: "height"
value {
int64_list {
value: 213
}
...
###Markdown
Notice that all of the features are now stored in the `tf.Example` message. Next, functionalize the code above and write the example messages to a file named `images.tfrecords`:
###Code
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = 'images.tfrecords'
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, 'rb').read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
# `du` stands for disk usage and is used to estimate the amount of disk space used by a given file or directory.
!du -sh {record_file}
###Output
36K images.tfrecords
###Markdown
Read the TFRecord file

You now have the file—`images.tfrecords`—and can iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you will need is the raw image string. Extract it using the getters described above, namely `example.features.feature['image_raw'].bytes_list.value[0]`. You can also use the labels to determine which record is the cat and which one is the bridge:
###Code
raw_image_dataset = tf.data.TFRecordDataset('images.tfrecords')
# Create a dictionary describing the features.
image_feature_description = {
'height': tf.io.FixedLenFeature([], tf.int64),
'width': tf.io.FixedLenFeature([], tf.int64),
'depth': tf.io.FixedLenFeature([], tf.int64),
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
###Output
_____no_output_____
###Markdown
Recover the images from the TFRecord file:
###Code
for image_features in parsed_image_dataset:
image_raw = image_features['image_raw'].numpy()
display.display(display.Image(data=image_raw))
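  # The parsed 'label' feature could likewise be read back here, e.g. to tell
  # the cat record (0) from the bridge record (1) as described above:
  # label = image_features['label'].numpy()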
###Output
_____no_output_____
###Markdown
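One closing note: the introduction mentioned storing data in a set of files (100-200 MB each) that can each be read linearly. A minimal sketch of sharding the 10,000 observations from earlier across several `.tfrecord` files (the shard count and file-name pattern are illustrative, not from the original notebook):

```python
# Round-robin the observations across num_shards TFRecord files.
num_shards = 4
for shard in range(num_shards):
    shard_path = 'test-{:05d}-of-{:05d}.tfrecord'.format(shard, num_shards)
    with tf.io.TFRecordWriter(shard_path) as writer:
        for i in range(shard, n_observations, num_shards):
            writer.write(serialize_example(
                feature0[i], feature1[i], feature2[i], feature3[i]))

# tf.data.TFRecordDataset accepts a list of filenames, so the shards can be
# read back together, e.g.:
# tf.data.TFRecordDataset(tf.io.gfile.glob('test-*.tfrecord'))
```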
TFRecord and tf.Example**Learning Objectives**1. Understand the TFRecord format for storing data2. Understand the tf.Example message type3. Read and Write a TFRecord file Introduction In this notebook, you create, parse, and use the `tf.Example` message, and then serialize, write, and read `tf.Example` messages to and from `.tfrecord` files. To read data efficiently it can be helpful to serialize your data and store it in a set of files (100-200MB each) that can each be read linearly. This is especially true if the data is being streamed over a network. This can also be useful for caching any data-preprocessing.Each learning objective will correspond to a __TODO__ in the [student lab notebook](../labs/tfrecord-tf.example.ipynb) -- try to complete that notebook first before reviewing this solution notebook. The TFRecord format The TFRecord format is a simple format for storing a sequence of binary records. [Protocol buffers](https://developers.google.com/protocol-buffers/) are a cross-platform, cross-language library for efficient serialization of structured data. Protocol messages are defined by `.proto` files, these are often the easiest way to understand a message type.The `tf.Example` message (or protobuf) is a flexible message type that represents a `{"string": value}` mapping. It is designed for use with TensorFlow and is used throughout the higher-level APIs such as [TFX](https://www.tensorflow.org/tfx/).Note: While useful, these structures are optional. There is no need to convert existing code to use TFRecords, unless you are using [`tf.data`](https://www.tensorflow.org/guide/datasets) and reading data is still the bottleneck to training. See [Data Input Pipeline Performance](https://www.tensorflow.org/guide/performance/datasets) for dataset performance tips. Load necessary libraries We will start by importing the necessary libraries for this lab.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install -q tf-nightly
import tensorflow as tf
import numpy as np
import IPython.display as display
print("TensorFlow version: ",tf.version.VERSION)
###Output
[31mERROR: tensorflow 2.1.0 has requirement gast==0.2.2, but you'll have gast 0.3.3 which is incompatible.[0m
[31mERROR: witwidget 1.6.0 has requirement oauth2client>=4.1.3, but you'll have oauth2client 3.0.0 which is incompatible.[0m
[31mERROR: tensorflow-probability 0.8.0 has requirement cloudpickle==1.1.1, but you'll have cloudpickle 1.3.0 which is incompatible.[0m
[31mERROR: tensorflow-probability 0.8.0 has requirement gast<0.3,>=0.2, but you'll have gast 0.3.3 which is incompatible.[0m
[31mERROR: tensorflow-io 0.9.10 has requirement tensorflow==2.1.0rc0, but you'll have tensorflow 2.1.0 which is incompatible.[0m
[33mWARNING: You are using pip version 20.1; however, version 20.1.1 is available.
You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.[0m
TensorFlow version: 2.3.0-dev20200613
###Markdown
Please ignore any incompatibility warnings and errors. `tf.Example` Data types for `tf.Example` Fundamentally, a `tf.Example` is a `{"string": tf.train.Feature}` mapping.The `tf.train.Feature` message type can accept one of the following three types (See the [`.proto` file](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto) for reference). Most other generic types can be coerced into one of these:1. `tf.train.BytesList` (the following types can be coerced) - `string` - `byte`1. `tf.train.FloatList` (the following types can be coerced) - `float` (`float32`) - `double` (`float64`)1. `tf.train.Int64List` (the following types can be coerced) - `bool` - `enum` - `int32` - `uint32` - `int64` - `uint64` In order to convert a standard TensorFlow type to a `tf.Example`-compatible `tf.train.Feature`, you can use the shortcut functions below. Note that each function takes a scalar input value and returns a `tf.train.Feature` containing one of the three `list` types above:
###Code
# TODO 1a
# The following functions can be used to convert a value to a type compatible
# with tf.Example.
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
###Output
_____no_output_____
###Markdown
Note: To stay simple, this example only uses scalar inputs. The simplest way to handle non-scalar features is to use `tf.serialize_tensor` to convert tensors to binary-strings. Strings are scalars in tensorflow. Use `tf.parse_tensor` to convert the binary-string back to a tensor. Below are some examples of how these functions work. Note the varying input types and the standardized output types. If the input type for a function does not match one of the coercible types stated above, the function will raise an exception (e.g. `_int64_feature(1.0)` will error out, since `1.0` is a float, so should be used with the `_float_feature` function instead):
###Code
print(_bytes_feature(b'test_string'))
print(_bytes_feature(u'test_bytes'.encode('utf-8')))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
###Output
bytes_list {
value: "test_string"
}
bytes_list {
value: "test_bytes"
}
float_list {
value: 2.7182817459106445
}
int64_list {
value: 1
}
int64_list {
value: 1
}
###Markdown
All proto messages can be serialized to a binary-string using the `.SerializeToString` method:
###Code
# TODO 1b
feature = _float_feature(np.exp(1))
feature.SerializeToString()
###Output
_____no_output_____
###Markdown
Creating a `tf.Example` message Suppose you want to create a `tf.Example` message from existing data. In practice, the dataset may come from anywhere, but the procedure of creating the `tf.Example` message from a single observation will be the same:1. Within each observation, each value needs to be converted to a `tf.train.Feature` containing one of the 3 compatible types, using one of the functions above.1. You create a map (dictionary) from the feature name string to the encoded feature value produced in 1.1. The map produced in step 2 is converted to a [`Features` message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.protoL85). In this notebook, you will create a dataset using NumPy.This dataset will have 4 features:* a boolean feature, `False` or `True` with equal probability* an integer feature uniformly randomly chosen from `[0, 5]`* a string feature generated from a string table by using the integer feature as an index* a float feature from a standard normal distributionConsider a sample consisting of 10,000 independently and identically distributed observations from each of the above distributions:
###Code
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature
strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat'])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution
feature3 = np.random.randn(n_observations)
###Output
_____no_output_____
###Markdown
Each of these features can be coerced into a `tf.Example`-compatible type using one of `_bytes_feature`, `_float_feature`, `_int64_feature`. You can then create a `tf.Example` message from these encoded features:
###Code
def serialize_example(feature0, feature1, feature2, feature3):
"""
Creates a tf.Example message ready to be written to a file.
"""
# Create a dictionary mapping the feature name to the tf.Example-compatible
# data type.
feature = {
'feature0': _int64_feature(feature0),
'feature1': _int64_feature(feature1),
'feature2': _bytes_feature(feature2),
'feature3': _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
###Output
_____no_output_____
###Markdown
For example, suppose you have a single observation from the dataset, `[False, 4, bytes('goat'), 0.9876]`. You can create and print the `tf.Example` message for this observation using `create_message()`. Each single observation will be written as a `Features` message as per the above. Note that the `tf.Example` [message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/example.protoL88) is just a wrapper around the `Features` message:
###Code
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b'goat', 0.9876)
serialized_example
###Output
_____no_output_____
###Markdown
To decode the message use the `tf.train.Example.FromString` method.
###Code
# TODO 1c
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
###Output
_____no_output_____
###Markdown
TFRecords format detailsA TFRecord file contains a sequence of records. The file can only be read sequentially.Each record contains a byte-string, for the data-payload, plus the data-length, and CRC32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking.Each record is stored in the following formats: uint64 length uint32 masked_crc32_of_length byte data[length] uint32 masked_crc32_of_dataThe records are concatenated together to produce the file. CRCs are[described here](https://en.wikipedia.org/wiki/Cyclic_redundancy_check), andthe mask of a CRC is: masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ulNote: There is no requirement to use `tf.Example` in TFRecord files. `tf.Example` is just a method of serializing dictionaries to byte-strings. Lines of text, encoded image data, or serialized tensors (using `tf.io.serialize_tensor`, and`tf.io.parse_tensor` when loading). See the `tf.io` module for more options. TFRecord files using `tf.data` The `tf.data` module also provides tools for reading and writing data in TensorFlow. Writing a TFRecord fileThe easiest way to get the data into a dataset is to use the `from_tensor_slices` method.Applied to an array, it returns a dataset of scalars:
###Code
tf.data.Dataset.from_tensor_slices(feature1)
###Output
_____no_output_____
###Markdown
Applied to a tuple of arrays, it returns a dataset of tuples:
###Code
features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3))
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0,f1,f2,f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
###Output
tf.Tensor(False, shape=(), dtype=bool)
tf.Tensor(1, shape=(), dtype=int64)
tf.Tensor(b'dog', shape=(), dtype=string)
tf.Tensor(-0.6086492521118764, shape=(), dtype=float64)
###Markdown
Use the `tf.data.Dataset.map` method to apply a function to each element of a `Dataset`.The mapped function must operate in TensorFlow graph mode—it must operate on and return `tf.Tensors`. A non-tensor function, like `serialize_example`, can be wrapped with `tf.py_function` to make it compatible.Using `tf.py_function` requires to specify the shape and type information that is otherwise unavailable:
###Code
# TODO 2a
def tf_serialize_example(f0,f1,f2,f3):
tf_string = tf.py_function(
serialize_example,
(f0,f1,f2,f3), # pass these args to the above function.
tf.string) # the return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar
tf_serialize_example(f0,f1,f2,f3)
###Output
_____no_output_____
###Markdown
Apply this function to each element in the dataset:
###Code
# TODO 2b
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
def generator():
for features in features_dataset:
yield serialize_example(*features)
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=())
serialized_features_dataset
###Output
_____no_output_____
###Markdown
And write them to a TFRecord file:
###Code
filename = 'test.tfrecord'
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
###Output
_____no_output_____
###Markdown
Reading a TFRecord file You can also read the TFRecord file using the `tf.data.TFRecordDataset` class.More information on consuming TFRecord files using `tf.data` can be found [here](https://www.tensorflow.org/guide/datasetsconsuming_tfrecord_data).Using `TFRecordDataset`s can be useful for standardizing input data and optimizing performance.
###Code
# TODO 2c
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
###Output
_____no_output_____
###Markdown
At this point the dataset contains serialized `tf.train.Example` messages. When iterated over it returns these as scalar string tensors.Use the `.take` method to only show the first 10 records.Note: iterating over a `tf.data.Dataset` only works with eager execution enabled.
###Code
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
###Output
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p\xd0\x1b\xbf\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xa6\xbf\xba\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xaa\x05/@'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04C\x96\n?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04^\x06\x96>\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x057\x8c\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xbco\xab\xbe\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p[|\xbd'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nU\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x02\n\x17\n\x08feature2\x12\x0b\n\t\n\x07chicken\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xba.\xb6\xbf'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x96tf?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat'>
###Markdown
These tensors can be parsed using the function below. Note that the `feature_description` is necessary here because datasets use graph-execution, and need this description to build their shape and type signature:
###Code
# Create a description of the features.
feature_description = {
'feature0': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature1': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature2': tf.io.FixedLenFeature([], tf.string, default_value=''),
'feature3': tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
###Output
_____no_output_____
###Markdown
Alternatively, use `tf.io.parse_example` to parse a whole batch at once. Apply this function to each item in the dataset using the `tf.data.Dataset.map` method:
###Code
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
###Output
_____no_output_____
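###Markdown
As a sketch of the whole-batch alternative mentioned above (assuming the same `raw_dataset` and `feature_description`), you can batch first and then parse each batch with `tf.io.parse_example`, which is often faster than parsing record-by-record:
###Code
batched_dataset = raw_dataset.batch(32).map(
    lambda batch: tf.io.parse_example(batch, feature_description))
# Each element is now a dict of batched tensors.
for batch in batched_dataset.take(1):
    print(batch['feature2'][:3])
###Output
_____no_output_____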
###Markdown
Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a `tf.Tensor`, and the `numpy` element of this tensor displays the value of the feature:
###Code
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
###Output
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.60864925>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.3647434>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=2.7347207>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.5413553>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.29301733>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.27385727>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.33483684>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.06161064>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'chicken'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=2>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-1.423301>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.90021646>}
###Markdown
Here, the `tf.io.parse_example` function unpacks the `tf.Example` fields into standard tensors. TFRecord files in Python The `tf.io` module also contains pure-Python functions for reading and writing TFRecord files. Writing a TFRecord file Next, write the 10,000 observations to the file `test.tfrecord`. Each observation is converted to a `tf.Example` message, then written to the file. You can then verify that the file `test.tfrecord` has been created:
###Code
# Write the `tf.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
!du -sh {filename}
###Output
984K test.tfrecord
###Markdown
Reading a TFRecord file These serialized tensors can be easily parsed using `tf.train.Example.ParseFromString`:
###Code
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
###Output
features {
feature {
key: "feature0"
value {
int64_list {
value: 0
}
}
}
feature {
key: "feature1"
value {
int64_list {
value: 1
}
}
}
feature {
key: "feature2"
value {
bytes_list {
value: "dog"
}
}
}
feature {
key: "feature3"
value {
float_list {
value: -0.6086492538452148
}
}
}
}
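###Markdown
Individual fields of a parsed proto can also be read with the standard protocol buffer getters (a minimal sketch using the `example` parsed above; the keys and list types follow the printout):
###Code
feature_map = example.features.feature
print(feature_map['feature2'].bytes_list.value[0])  # e.g. b'dog'
print(feature_map['feature3'].float_list.value[0])  # e.g. -0.6086...
###Output
_____no_output_____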
###Markdown
Walkthrough: Reading and writing image data This is an end-to-end example of how to read and write image data using TFRecords. Using an image as input data, you will write the data as a TFRecord file, then read the file back and display the image. This can be useful if, for example, you want to use several models on the same input dataset. Instead of storing the image data raw, it can be preprocessed into the TFRecords format, and that can be used in all further processing and modelling. First, let's download [this image](https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg) of a cat in the snow and [this photo](https://upload.wikimedia.org/wikipedia/commons/f/fe/New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg) of the Williamsburg Bridge, NYC under construction. Fetch the images
###Code
cat_in_snow = tf.keras.utils.get_file('320px-Felis_catus-cat_on_snow.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg')
williamsburg_bridge = tf.keras.utils.get_file('194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg')
display.display(display.Image(filename=cat_in_snow))
display.display(display.HTML('Image cc-by: <a "href=https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
display.display(display.Image(filename=williamsburg_bridge))
display.display(display.HTML('<a "href=https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'))
###Output
_____no_output_____
###Markdown
Write the TFRecord file As before, encode the features as types compatible with `tf.Example`. This stores the raw image string feature, as well as the height, width, depth, and arbitrary `label` feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use `0` for the cat image, and `1` for the bridge image:
###Code
image_labels = {
cat_in_snow : 0,
williamsburg_bridge : 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, 'rb').read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.image.decode_jpeg(image_string).shape
feature = {
'height': _int64_feature(image_shape[0]),
'width': _int64_feature(image_shape[1]),
'depth': _int64_feature(image_shape[2]),
'label': _int64_feature(label),
'image_raw': _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split('\n')[:15]:
print(line)
print('...')
###Output
features {
feature {
key: "depth"
value {
int64_list {
value: 3
}
}
}
feature {
key: "height"
value {
int64_list {
value: 213
}
...
###Markdown
Notice that all of the features are now stored in the `tf.Example` message. Next, functionalize the code above and write the example messages to a file named `images.tfrecords`:
###Code
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = 'images.tfrecords'
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, 'rb').read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
!du -sh {record_file}
###Output
36K images.tfrecords
###Markdown
Read the TFRecord file You now have the file `images.tfrecords` and can iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you will need is the raw image string. Extract it using the getters described above, namely `example.features.feature['image_raw'].bytes_list.value[0]`. You can also use the labels to determine which record is the cat and which one is the bridge:
###Code
raw_image_dataset = tf.data.TFRecordDataset('images.tfrecords')
# Create a dictionary describing the features.
image_feature_description = {
'height': tf.io.FixedLenFeature([], tf.int64),
'width': tf.io.FixedLenFeature([], tf.int64),
'depth': tf.io.FixedLenFeature([], tf.int64),
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
###Output
_____no_output_____
###Markdown
Recover the images from the TFRecord file:
###Code
for image_features in parsed_image_dataset:
image_raw = image_features['image_raw'].numpy()
display.display(display.Image(data=image_raw))
###Output
_____no_output_____
###Markdown
TFRecord and tf.Example **Learning Objectives** 1. Understand the TFRecord format for storing data 2. Understand the tf.Example message type 3. Read and write a TFRecord file Introduction In this notebook, you create, parse, and use the `tf.Example` message, and then serialize, write, and read `tf.Example` messages to and from `.tfrecord` files. To read data efficiently it can be helpful to serialize your data and store it in a set of files (100-200MB each) that can each be read linearly. This is especially true if the data is being streamed over a network. This can also be useful for caching any data preprocessing. Each learning objective will correspond to a __TODO__ in the [student lab notebook](../labs/tfrecord-tf.example.ipynb) -- try to complete that notebook first before reviewing this solution notebook. The TFRecord format The TFRecord format is a simple format for storing a sequence of binary records. [Protocol buffers](https://developers.google.com/protocol-buffers/) are a cross-platform, cross-language library for efficient serialization of structured data. Protocol messages are defined by `.proto` files; these are often the easiest way to understand a message type. The `tf.Example` message (or protobuf) is a flexible message type that represents a `{"string": value}` mapping. It is designed for use with TensorFlow and is used throughout the higher-level APIs such as [TFX](https://www.tensorflow.org/tfx/). Note: While useful, these structures are optional. There is no need to convert existing code to use TFRecords, unless you are using [`tf.data`](https://www.tensorflow.org/guide/datasets) and reading data is still the bottleneck to training. See [Data Input Pipeline Performance](https://www.tensorflow.org/guide/performance/datasets) for dataset performance tips. Load necessary libraries We will start by importing the necessary libraries for this lab.
###Code
# Run the chown command to change the ownership of the repository
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the results of that search
# to a name in the local scope.
!pip install -q tf-nightly
import tensorflow as tf
import numpy as np
import IPython.display as display
print("TensorFlow version: ",tf.version.VERSION)
###Output
[31mERROR: tensorflow 2.1.0 has requirement gast==0.2.2, but you'll have gast 0.3.3 which is incompatible.[0m
[31mERROR: witwidget 1.6.0 has requirement oauth2client>=4.1.3, but you'll have oauth2client 3.0.0 which is incompatible.[0m
[31mERROR: tensorflow-probability 0.8.0 has requirement cloudpickle==1.1.1, but you'll have cloudpickle 1.3.0 which is incompatible.[0m
[31mERROR: tensorflow-probability 0.8.0 has requirement gast<0.3,>=0.2, but you'll have gast 0.3.3 which is incompatible.[0m
[31mERROR: tensorflow-io 0.9.10 has requirement tensorflow==2.1.0rc0, but you'll have tensorflow 2.1.0 which is incompatible.[0m
[33mWARNING: You are using pip version 20.1; however, version 20.1.1 is available.
You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.[0m
TensorFlow version: 2.3.0-dev20200613
###Markdown
Please ignore any incompatibility warnings and errors. `tf.Example` Data types for `tf.Example` Fundamentally, a `tf.Example` is a `{"string": tf.train.Feature}` mapping. The `tf.train.Feature` message type can accept one of the following three types (see the [`.proto` file](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto) for reference); most other generic types can be coerced into one of these: 1. `tf.train.BytesList` (the following types can be coerced): `string`, `byte` 2. `tf.train.FloatList` (the following types can be coerced): `float` (`float32`), `double` (`float64`) 3. `tf.train.Int64List` (the following types can be coerced): `bool`, `enum`, `int32`, `uint32`, `int64`, `uint64` In order to convert a standard TensorFlow type to a `tf.Example`-compatible `tf.train.Feature`, you can use the shortcut functions below. Note that each function takes a scalar input value and returns a `tf.train.Feature` containing one of the three `list` types above:
###Code
# TODO 1a
# The following functions can be used to convert a value to a type compatible
# with tf.Example.
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
###Output
_____no_output_____
###Markdown
Note: To keep things simple, this example only uses scalar inputs. The simplest way to handle non-scalar features is to use `tf.io.serialize_tensor` to convert tensors to binary-strings (strings are scalars in TensorFlow) and `tf.io.parse_tensor` to convert the binary-string back to a tensor. Below are some examples of how these functions work. Note the varying input types and the standardized output types. If the input type for a function does not match one of the coercible types stated above, the function will raise an exception (e.g. `_int64_feature(1.0)` will error out, since `1.0` is a float, so it should be used with the `_float_feature` function instead):
###Code
print(_bytes_feature(b'test_string'))
print(_bytes_feature(u'test_bytes'.encode('utf-8')))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
###Output
bytes_list {
value: "test_string"
}
bytes_list {
value: "test_bytes"
}
float_list {
value: 2.7182817459106445
}
int64_list {
value: 1
}
int64_list {
value: 1
}
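###Markdown
For the non-scalar case mentioned in the note above, a tensor can be round-tripped through a binary string (a minimal sketch; `tf.io.serialize_tensor` and `tf.io.parse_tensor` are the TF2 names):
###Code
t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
serialized_t = tf.io.serialize_tensor(t)  # a scalar tf.string tensor
restored_t = tf.io.parse_tensor(serialized_t, out_type=tf.float32)
print(restored_t)
###Output
_____no_output_____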
###Markdown
All proto messages can be serialized to a binary-string using the `.SerializeToString` method:
###Code
# TODO 1b
feature = _float_feature(np.exp(1))
# `SerializeToString()` serializes the message and returns it as a string
feature.SerializeToString()
###Output
_____no_output_____
###Markdown
Creating a `tf.Example` message Suppose you want to create a `tf.Example` message from existing data. In practice, the dataset may come from anywhere, but the procedure of creating the `tf.Example` message from a single observation will be the same: 1. Within each observation, each value needs to be converted to a `tf.train.Feature` containing one of the 3 compatible types, using one of the functions above. 2. You create a map (dictionary) from the feature name string to the encoded feature value produced in step 1. 3. The map produced in step 2 is converted to a [`Features` message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto#L85). In this notebook, you will create a dataset using NumPy. This dataset will have 4 features: a boolean feature, `False` or `True` with equal probability; an integer feature uniformly randomly chosen from `[0, 5)`; a string feature generated from a string table by using the integer feature as an index; and a float feature from a standard normal distribution. Consider a sample consisting of 10,000 independently and identically distributed observations from each of the above distributions:
###Code
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature
strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat'])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution
feature3 = np.random.randn(n_observations)
###Output
_____no_output_____
###Markdown
Each of these features can be coerced into a `tf.Example`-compatible type using one of `_bytes_feature`, `_float_feature`, `_int64_feature`. You can then create a `tf.Example` message from these encoded features:
###Code
def serialize_example(feature0, feature1, feature2, feature3):
"""
Creates a tf.Example message ready to be written to a file.
"""
# Create a dictionary mapping the feature name to the tf.Example-compatible
# data type.
feature = {
'feature0': _int64_feature(feature0),
'feature1': _int64_feature(feature1),
'feature2': _bytes_feature(feature2),
'feature3': _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
###Output
_____no_output_____
###Markdown
For example, suppose you have a single observation from the dataset, `[False, 4, bytes('goat'), 0.9876]`. You can create and print the `tf.Example` message for this observation using `serialize_example()`. Each single observation will be written as a `Features` message as per the above. Note that the `tf.Example` [message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/example.proto#L88) is just a wrapper around the `Features` message:
###Code
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b'goat', 0.9876)
serialized_example
###Output
_____no_output_____
###Markdown
You can parse TFRecords using the standard protocol buffer `.FromString` method. To decode the message, use `tf.train.Example.FromString`:
###Code
# TODO 1c
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
###Output
_____no_output_____
###Markdown
TFRecords format details A TFRecord file contains a sequence of records. The file can only be read sequentially. Each record contains a byte-string for the data payload, plus the data length and CRC32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking. Each record is stored in the following format: uint64 length, uint32 masked_crc32_of_length, byte data[length], uint32 masked_crc32_of_data. The records are concatenated together to produce the file. CRCs are [described here](https://en.wikipedia.org/wiki/Cyclic_redundancy_check), and the mask of a CRC is: masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul. Note: There is no requirement to use `tf.Example` in TFRecord files; `tf.Example` is just a method of serializing dictionaries to byte-strings. Any byte-string works: lines of text, encoded image data, or serialized tensors (using `tf.io.serialize_tensor` when writing and `tf.io.parse_tensor` when loading). See the `tf.io` module for more options. TFRecord files using `tf.data` The `tf.data` module also provides tools for reading and writing data in TensorFlow. Writing a TFRecord file The easiest way to get the data into a dataset is to use the `from_tensor_slices` method. Applied to an array, it returns a dataset of scalars:
###Code
tf.data.Dataset.from_tensor_slices(feature1)
###Output
_____no_output_____
###Markdown
Applied to a tuple of arrays, it returns a dataset of tuples:
###Code
features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3))
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0,f1,f2,f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
###Output
tf.Tensor(False, shape=(), dtype=bool)
tf.Tensor(1, shape=(), dtype=int64)
tf.Tensor(b'dog', shape=(), dtype=string)
tf.Tensor(-0.6086492521118764, shape=(), dtype=float64)
###Markdown
Use the `tf.data.Dataset.map` method to apply a function to each element of a `Dataset`. The mapped function must operate in TensorFlow graph mode: it must operate on and return `tf.Tensors`. A non-tensor function, like `serialize_example`, can be wrapped with `tf.py_function` to make it compatible. Using `tf.py_function` requires you to specify the shape and type information that is otherwise unavailable:
###Code
# TODO 2a
def tf_serialize_example(f0,f1,f2,f3):
tf_string = tf.py_function(
serialize_example,
(f0,f1,f2,f3), # pass these args to the above function.
tf.string) # the return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar
tf_serialize_example(f0,f1,f2,f3)
###Output
_____no_output_____
###Markdown
Apply this function to each element in the dataset:
###Code
# TODO 2b
# `.map` function maps across the elements of the dataset.
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
def generator():
for features in features_dataset:
yield serialize_example(*features)
# Create a Dataset whose elements are generated by generator using `.from_generator` function
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=())
serialized_features_dataset
###Output
_____no_output_____
###Markdown
And write them to a TFRecord file:
###Code
filename = 'test.tfrecord'
# `.TFRecordWriter` function writes a dataset to a TFRecord file
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
###Output
_____no_output_____
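###Markdown
`tf.io.TFRecordWriter` also accepts compression options. A sketch of writing and reading a GZIP-compressed file (the `test_gzip.tfrecord` filename is arbitrary, chosen for illustration):
###Code
options = tf.io.TFRecordOptions(compression_type='GZIP')
with tf.io.TFRecordWriter('test_gzip.tfrecord', options=options) as gz_writer:
    for i in range(100):
        gz_writer.write(serialize_example(feature0[i], feature1[i], feature2[i], feature3[i]))
# Pass the matching compression_type when reading back.
gz_dataset = tf.data.TFRecordDataset('test_gzip.tfrecord', compression_type='GZIP')
###Output
_____no_output_____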
###Markdown
Reading a TFRecord file You can also read the TFRecord file using the `tf.data.TFRecordDataset` class. More information on consuming TFRecord files using `tf.data` can be found [here](https://www.tensorflow.org/guide/datasets#consuming_tfrecord_data). Using `TFRecordDataset`s can be useful for standardizing input data and optimizing performance.
###Code
# TODO 2c
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
###Output
_____no_output_____
###Markdown
At this point the dataset contains serialized `tf.train.Example` messages. When iterated over it returns these as scalar string tensors. Use the `.take` method to show only the first 10 records. Note: iterating over a `tf.data.Dataset` only works with eager execution enabled.
###Code
# Use the `.take` method to pull ten examples from the dataset.
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
###Output
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p\xd0\x1b\xbf\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xa6\xbf\xba\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xaa\x05/@'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04C\x96\n?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04^\x06\x96>\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x01\n\x13\n\x08feature2\x12\x07\n\x05\n\x03dog\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x057\x8c\xbe'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nS\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x03\n\x15\n\x08feature2\x12\t\n\x07\n\x05horse\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xbco\xab\xbe\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x00\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04p[|\xbd'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nU\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x02\n\x17\n\x08feature2\x12\x0b\n\t\n\x07chicken\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\xba.\xb6\xbf'>
<tf.Tensor: shape=(), dtype=string, numpy=b'\nQ\n\x14\n\x08feature3\x12\x08\x12\x06\n\x04\x96tf?\n\x11\n\x08feature0\x12\x05\x1a\x03\n\x01\x01\n\x11\n\x08feature1\x12\x05\x1a\x03\n\x01\x00\n\x13\n\x08feature2\x12\x07\n\x05\n\x03cat'>
###Markdown
These tensors can be parsed using the function below. Note that the `feature_description` is necessary here because datasets use graph-execution, and need this description to build their shape and type signature:
###Code
# Create a description of the features.
feature_description = {
'feature0': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature1': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature2': tf.io.FixedLenFeature([], tf.string, default_value=''),
'feature3': tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
###Output
_____no_output_____
###Markdown
Alternatively, use `tf.io.parse_example` to parse a whole batch at once. Apply this function to each item in the dataset using the `tf.data.Dataset.map` method:
###Code
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
###Output
_____no_output_____
###Markdown
Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a `tf.Tensor`, and the `numpy` element of this tensor displays the value of the feature:
###Code
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
###Output
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.60864925>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.3647434>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=2.7347207>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.5413553>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.29301733>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'dog'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.27385727>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'horse'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=3>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.33483684>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-0.06161064>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'chicken'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=2>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=-1.423301>}
{'feature0': <tf.Tensor: shape=(), dtype=int64, numpy=1>, 'feature2': <tf.Tensor: shape=(), dtype=string, numpy=b'cat'>, 'feature1': <tf.Tensor: shape=(), dtype=int64, numpy=0>, 'feature3': <tf.Tensor: shape=(), dtype=float32, numpy=0.90021646>}
###Markdown
Here, the `tf.io.parse_example` function unpacks the `tf.Example` fields into standard tensors. TFRecord files in Python The `tf.io` module also contains pure-Python functions for reading and writing TFRecord files. Writing a TFRecord file Next, write the 10,000 observations to the file `test.tfrecord`. Each observation is converted to a `tf.Example` message, then written to the file. You can then verify that the file `test.tfrecord` has been created:
###Code
# Write the `tf.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
# `du` stands for disk usage and is used to estimate the amount of disk space used by a given file or directory.
!du -sh {filename}
###Output
984K test.tfrecord
###Markdown
Reading a TFRecord file These serialized tensors can be easily parsed using `tf.train.Example.ParseFromString`:
###Code
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
###Output
features {
feature {
key: "feature0"
value {
int64_list {
value: 0
}
}
}
feature {
key: "feature1"
value {
int64_list {
value: 1
}
}
}
feature {
key: "feature2"
value {
bytes_list {
value: "dog"
}
}
}
feature {
key: "feature3"
value {
float_list {
value: -0.6086492538452148
}
}
}
}
###Markdown
Walkthrough: Reading and writing image data This is an end-to-end example of how to read and write image data using TFRecords. Using an image as input data, you will write the data as a TFRecord file, then read the file back and display the image. This can be useful if, for example, you want to use several models on the same input dataset. Instead of storing the image data raw, it can be preprocessed into the TFRecords format, and that can be used in all further processing and modelling. First, let's download [this image](https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg) of a cat in the snow and [this photo](https://upload.wikimedia.org/wikipedia/commons/f/fe/New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg) of the Williamsburg Bridge, NYC under construction. Fetch the images
###Code
# Downloads a file from a URL if it not already in the cache using `tf.keras.utils.get_file` function.
cat_in_snow = tf.keras.utils.get_file('320px-Felis_catus-cat_on_snow.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg')
williamsburg_bridge = tf.keras.utils.get_file('194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg')
# Check the image file
display.display(display.Image(filename=cat_in_snow))
display.display(display.HTML('Image cc-by: <a "href=https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
display.display(display.Image(filename=williamsburg_bridge))
display.display(display.HTML('<a "href=https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'))
###Output
_____no_output_____
###Markdown
Write the TFRecord file As before, encode the features as types compatible with `tf.Example`. This stores the raw image string feature, as well as the height, width, depth, and arbitrary `label` feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use `0` for the cat image, and `1` for the bridge image:
###Code
image_labels = {
cat_in_snow : 0,
williamsburg_bridge : 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, 'rb').read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.image.decode_jpeg(image_string).shape
feature = {
'height': _int64_feature(image_shape[0]),
'width': _int64_feature(image_shape[1]),
'depth': _int64_feature(image_shape[2]),
'label': _int64_feature(label),
'image_raw': _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split('\n')[:15]:
print(line)
print('...')
###Output
features {
feature {
key: "depth"
value {
int64_list {
value: 3
}
}
}
feature {
key: "height"
value {
int64_list {
value: 213
}
...
###Markdown
Notice that all of the features are now stored in the `tf.Example` message. Next, functionalize the code above and write the example messages to a file named `images.tfrecords`:
###Code
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = 'images.tfrecords'
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, 'rb').read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
# `du` stands for disk usage and is used to estimate the amount of disk space used by a given file or directory.
!du -sh {record_file}
###Output
36K images.tfrecords
###Markdown
Read the TFRecord file You now have the file `images.tfrecords` and can iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you will need is the raw image string. Extract it using the getters described above, namely `example.features.feature['image_raw'].bytes_list.value[0]`. You can also use the labels to determine which record is the cat and which one is the bridge:
###Code
raw_image_dataset = tf.data.TFRecordDataset('images.tfrecords')
# Create a dictionary describing the features.
image_feature_description = {
'height': tf.io.FixedLenFeature([], tf.int64),
'width': tf.io.FixedLenFeature([], tf.int64),
'depth': tf.io.FixedLenFeature([], tf.int64),
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
###Output
_____no_output_____
###Markdown
Recover the images from the TFRecord file:
###Code
for image_features in parsed_image_dataset:
image_raw = image_features['image_raw'].numpy()
display.display(display.Image(data=image_raw))
###Output
_____no_output_____ |
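###Markdown
If the goal is model input rather than display, the raw bytes can be decoded back into an image tensor (a sketch using the `parsed_image_dataset` above):
###Code
for image_features in parsed_image_dataset.take(1):
    image_tensor = tf.io.decode_jpeg(image_features['image_raw'])
    print(image_tensor.shape, image_features['label'].numpy())
###Output
_____no_output_____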
fmri-02/fmri-02-block.ipynb | ###Markdown
`fMRI-02`: Block designs and detection power Today's demonstration will be in two parts. In the first section, we will show you how to generate the predicted BOLD signal for analysis of a block design experiment. In the second section, we will demonstrate from first principles why the optimal length of a block in a block design task is approximately 16 seconds.
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
sns.set_context('notebook', font_scale=1.5)
%matplotlib inline
###Output
_____no_output_____
###Markdown
Part 1: Generating the predicted BOLD signal In this first section, we will generate the predicted BOLD signal for a simple block design experiment. In fact, it was one of the experiments collected in a previous year of NEU502b. In this experiment, we presented participants with alternating blocks of a [visual checkerboard](https://www.youtube.com/watch?v=xEd1h_lz4rA) (warning: flashing lights) and an empty black background, each presented for 20 seconds at a time. A participant views six total blocks (i.e. 6 checkerboard presentations, 6 background presentations). The paradigm also began with 10s of blank background presentation. Images were collected at a rate of 1 acquisition per second (TR=1). As we demonstrated last Wednesday, this paradigm robustly excited early visual cortex. To generate this experiment's expected neural activity and corresponding BOLD signal, we will perform the following steps: 1. Define the (super-sampled) experiment times. 2. Generate the neural "boxcars". 3. Define the hemodynamic response function (HRF). 4. Convolve the boxcar timeseries with the HRF. 5. Downsample the expected BOLD timeseries. Define (super-sampled) times Here we define the timing of the experiment. Importantly, we first define the experiment in a "super-sampled" space; that is, we treat the experiment as if we had acquired far more data points than we actually did. We do this for several reasons. First, it reduces the noisiness of our convolved regressors (more on that in a minute). Second, it allows us to model events that occur between TRs.
###Code
# Define experiment metadata
n_times = 10 + 6 * 20 + 6 * 20
sfreq = 0.1
# Define (super-sampled) times
sst = np.arange(0, n_times, sfreq)
###Output
_____no_output_____
###Markdown
Generate boxcar time series Here we define a "boxcar" time series. In this step we make a binary time series (composed of 0s and 1s), where 0s represent neuronal silence and 1s represent neuronal activity. Essentially, we initialize a time series as long as the set of times defined above, where the value of the timeseries is 1 if we expect the neurons to be active at that moment, and 0 otherwise. The term boxcar comes from the boxy pattern that results from this process.
###Code
# Define experiment events
events = [(10, 30), (50, 70), (90, 110), (130, 150), (170, 190), (210, 230)]
# Generate boxcars
boxcars = np.zeros_like(sst)
for onset, offset in events:
boxcars[np.logical_and(sst >= onset, sst < offset)] = 1
# Plot the time series
fig, ax = plt.subplots(1, 1, figsize=(12, 3))
ax.plot(sst, boxcars);
ax.set(xlim=(0, n_times), xlabel='time (s)', yticks=[], ylabel='neural activity')
sns.despine()
###Output
_____no_output_____
###Markdown
Define the HRF In this step, we define the expected shape of the HRF. Following convention, we will use the **SPM HRF**.
###Code
from fmritools.hrf import spm_hrf
# Define HRF
hrf = spm_hrf(sfreq)
# Plot the HRF
fig, ax = plt.subplots(1, 1, figsize=(6, 3))
ax.plot(sst[:hrf.size], hrf);
ax.set(xlabel='time (s)', ylabel='AU')
sns.despine()
###Output
_____no_output_____
###Markdown
Convolution Convolution describes a particular mathematical operation where we use two functions to produce a third function that expresses how the shape of one is modified by the other. In this case, we convolve the boxcars with the HRF to model how we expect the BOLD signal to change in response to the neural activity.
###Code
# Convolve boxcars and HRF
bold = np.convolve(boxcars, hrf)[:sst.size]
# Normalize regressor
bold /= bold.max()
# Plot
fig, ax = plt.subplots(1, 1, figsize=(12, 3))
ax.plot(sst, boxcars);
ax.plot(sst, bold);
ax.set(xlim=(0,n_times), xlabel='time (s)', yticks=[], ylabel='neural activity')
sns.despine()
###Output
_____no_output_____
###Markdown
Downsampling In this fifth and final step, we reduce the convolved timeseries to only those observations that we actually acquired.
###Code
# Define observation times
tr = 1
times = np.arange(n_times) * tr
# Define downsampling indices
ix = np.in1d(sst, times)
# Downsampling
boxcars = boxcars[ix]
bold = bold[ix]
# Plot downsampled time series
fig, ax = plt.subplots(1, 1, figsize=(12, 3))
ax.plot(times, boxcars);
ax.plot(times, bold);
ax.set(xlim=(0, n_times), xlabel='time (s)', yticks=[], ylabel='neural Activity')
sns.despine()
###Output
_____no_output_____
###Markdown
Part 1.5: Simple Regression Next, let's use the predicted BOLD timeseries we just made to perform simple linear regression. Load and visualize data
###Code
# Load and extract data
npz = np.load('fmri-02-regression.npz')
times = npz['times']
y = npz['y']
# Plot the data
fig, ax = plt.subplots(1, 1, figsize=(12, 3))
ax.plot(times, y);
ax.set(xlim=(0, n_times), xlabel='time (s)', ylabel='PSC')
sns.despine()
###Output
_____no_output_____
###Markdown
Construct design matrix The design matrix is the collection of timeseries with which we will predict the observed data. Here we use the timeseries we made plus an intercept, i.e. a column of 1s.
###Code
X = np.column_stack([np.ones_like(bold), bold])
###Output
_____no_output_____
###Markdown
Linear regression
###Code
# Perform linear ordinary least squares (OLS) regression
b, _, _, _ = np.linalg.lstsq(X, y, rcond=-1)
print('Mean b1 = %0.3f' %b[1].mean())
# Posterior predictive check
yhat = X @ b
# Plot the actual and predicted time series
fig, ax = plt.subplots(1, 1, figsize=(12, 3))
ax.plot(times, y.mean(axis=1));
ax.plot(times, yhat.mean(axis=1));
ax.set(xlim=(0, n_times), xlabel='time (s)', ylabel='PSC')
sns.despine()
###Output
Mean b1 = 3.056
###Markdown
Part 2: fMRI detection power In the second part of this demonstration, we will explore the measure of fMRI detection power. We will define detection power as the ability to detect nonzero changes in functional activity. Detection power lets us know how well suited our experimental design is for measuring differences in the BOLD signal from baseline or between two conditions. Load and visualize example designs To get started, let's look at four example block design experiments.
###Code
# Load experiment designs
npz = np.load('fmri-02-power.npz')
X1 = npz['X1']; X2 = npz['X2']; X3 = npz['X3']; X4 = npz['X4']
times = npz['times']
# Plot designs
fig, axes = plt.subplots(2, 2, figsize=(12, 6),
sharex=True, sharey=True)
for ax, X in zip(axes.flatten(), [X1, X2, X3, X4]): ax.plot(times, X)
sns.despine()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Defining detection power [Liu & Frank (2004)](https://www.sciencedirect.com/science/article/pii/S1053811903005779) provided a formal definition of detection power: $$ R_{tot} = \frac{1}{ \frac{1}{N} \sum_{i \leq j} \frac{1}{R_{ij}} } $$ Put another way, the total detection power of our experimental design, $R_{tot}$, is the harmonic mean (the inverse of the average inverse) of the detection powers of the contrasts of interest, $R_{ij}$. Here, a contrast refers to a particular statistical difference we might calculate. For example, we might ask whether a particular condition shows a BOLD signal change different than zero; alternately, we might ask whether there is a difference in BOLD signal change between two conditions. We define the detection power of a particular contrast as: $$ R_{ij} = \frac{ \left[ D_{ij}(X^T X)^{-1}D_{ij} \right]^{-1} }{h^Th} $$ Now, this may look daunting but it is actually fairly simple: $X$ is the design matrix, i.e. the matrix of regressors from above; $h$ is the assumed hemodynamic response function; $D$ is a contrast vector, corresponding to the main effects, $[1,0], [0,1]$, and pairwise contrasts, $[1,-1]$. Of these values, the most important is $X^TX$, or the [Fisher information matrix](https://en.wikipedia.org/wiki/Fisher_information) (for a linear model, derivation [here](https://math.stackexchange.com/questions/328835/get-a-fisher-information-matrix-for-linear-model-with-the-normal-distribution-fo/917698)), which has some important properties that we will not go into here. For the present purposes, the important thing to know is that an optimal design will have large values along the diagonal of this matrix, and small values in its off-diagonals. Why is this so? Consider again the design matrix, $X$. It is an $[N,K]$ matrix where $N$ is the number of time points and $K$ is the number of conditions. Therefore, $X^TX$ simply returns a $[K,K]$ matrix. The diagonals of this matrix are the dot product of the regressors with themselves. As such, it is apparent when the diagonals of the Fisher information matrix are large: when the regressors themselves, on average, deviate from zero. When the estimated BOLD response differs greatly from zero, we should expect that nonzero changes in the observed BOLD signal will be easier to detect. Hence, a design matrix with larger diagonal elements in its corresponding Fisher information matrix is more optimal. The converse is true for the off-diagonals. The off-diagonals of the Fisher information matrix are computed from the pairwise dot product of the columns of the design matrix. As such, the off-diagonals are large when, on average, the estimated BOLD signals for two conditions deviate from zero in the same direction at the same time. If we want to be able to resolve differences between conditions, then we want the estimated BOLD signals of two conditions to be orthogonal. Hence, a design matrix with smaller off-diagonal elements in its corresponding Fisher information matrix is more optimal. The Fisher information matrices of our four designs are presented below.
###Code
# Plot Fisher information
fig, axes = plt.subplots(1, 4, figsize=(12, 4),
sharex=True, sharey=True)
for i, (ax, X) in enumerate(zip(axes.flatten(), [X1, X2, X3, X4])):
sns.heatmap(X.T @ X, vmin=0, vmax=70, cmap='binary_r', square=True, cbar=False,
annot=True, fmt='0.2f', annot_kws=dict(fontsize=18), ax=ax)
ax.set(xticks=[], yticks=[], title='X%s' %(i+1))
plt.tight_layout()
###Output
_____no_output_____
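###Markdown
For intuition, here is a literal NumPy transcription of the formulas above (a sketch only; it assumes `X` holds the HRF-convolved regressors, `h` the sampled HRF, and `contrasts` a list of contrast vectors, and it may differ from the `fmritools` implementation):
###Code
def detection_power_sketch(X, h, contrasts):
    F_inv = np.linalg.inv(X.T @ X)  # inverse Fisher information
    # R_ij for each contrast, then the harmonic mean across contrasts
    R = [1.0 / (c @ F_inv @ c) / (h @ h) for c in contrasts]
    return 1.0 / np.mean([1.0 / r for r in R])
###Output
_____no_output_____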
###Markdown
Compute and plot detection power
###Code
from fmritools.design import detection_power
# Initialize plot
fig, ax = plt.subplots(1, 1, figsize=(6, 3))
# Iteratively compute and plot
for i, X in enumerate([X1, X2, X3, X4]):
R = detection_power(X, 1)
ax.bar(i, R, 1, color='#1f77b4')
# Add details to plot
ax.set(xticks=range(4), xticklabels=['X1','X2','X3','X4'],
ylim=(0,300), ylabel=r'$R$')
sns.despine()
###Output
_____no_output_____
###Markdown
Re-estimating detection power assuming drifts Why does the 4th design have such high detection power despite the conventional wisdom? The answer has to do with artifactual noise observed in fMRI data. Unfortunately, fMRI data is often contaminated by low-frequency drifts. By the same logic as above, a design is robust when it is nearly orthogonal to the nuisance terms. If the on-off cycle of stimulus blocks is too slow, then it may overlap with the low-frequency drifts (i.e. off-diagonal terms), thereby reducing overall detection power. We can simulate this effect by including Legendre polynomials (up to order 3) as nuisance regressors in the design matrix.
###Code
from fmritools import legendre
# Generate Legendre polynomials
Z = legendre(times.size, order=3)
# Plot polynomials
fig, ax = plt.subplots(1, 1, figsize=(6, 3))
ax.plot(times, Z);
ax.set(xlabel='time (s)')
sns.despine()
# Initialize plot
fig, ax = plt.subplots(1, 1, figsize=(6, 3))
# Iteratively compute and plot
for i, X in enumerate([X1,X2,X3,X4]):
R = detection_power(np.column_stack([X, Z]), 1)
ax.bar(i, R, 1, color='#1f77b4')
# Add labels to plot
ax.set(xticks=range(4), xticklabels=['X1','X2','X3','X4'],
ylim=(0, 200), ylabel=r'$R$')
sns.despine()
###Output
_____no_output_____ |
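###Markdown
The `fmritools.legendre` helper is not shown here; an equivalent set of drift regressors can be built with NumPy (a sketch, assuming the polynomials are evaluated on [-1, 1] and the constant term is excluded):
###Code
from numpy.polynomial import legendre as npl

def legendre_drifts(n_times, order=3):
    grid = np.linspace(-1, 1, n_times)
    # Column k evaluates the Legendre polynomial of degree k + 1 on the grid.
    return np.column_stack([npl.legval(grid, [0] * (k + 1) + [1])
                            for k in range(order)])
###Output
_____no_output_____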
Python-Programming/Python-3-Bootcamp/07-Errors and Exception Handling/.ipynb_checkpoints/02-Errors and Exceptions Homework-checkpoint.ipynb | ###Markdown
Errors and Exceptions Homework Problem 1 Handle the exception thrown by the code below by using try and except blocks.
###Code
for i in ['a','b','c']:
print(i**2)
###Output
_____no_output_____
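###Markdown
One possible solution (a sketch; any handling of the `TypeError` is acceptable):
###Code
for i in ['a', 'b', 'c']:
    try:
        print(i**2)
    except TypeError:
        print(f"Cannot square {i!r}: ** is not defined for str and int")
###Output
_____no_output_____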
###Markdown
Problem 2 Handle the exception thrown by the code below by using try and except blocks. Then use a finally block to print 'All Done.'
###Code
x = 5
y = 0
z = x/y
###Output
_____no_output_____
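###Markdown
One possible solution (a sketch):
###Code
x = 5
y = 0
try:
    z = x/y
except ZeroDivisionError:
    print('Cannot divide by zero!')
finally:
    print('All Done.')
###Output
_____no_output_____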
###Markdown
Problem 3 Write a function that asks for an integer and prints the square of it. Use a while loop with a try, except, else block to account for incorrect inputs.
###Code
def ask():
pass
ask()
###Output
Input an integer: null
An error occurred! Please try again!
Input an integer: 2
Thank you, your number squared is: 4
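###Markdown
One possible solution matching the sample output above (a sketch):
###Code
def ask():
    while True:
        try:
            n = int(input('Input an integer: '))
        except ValueError:
            print('An error occurred! Please try again!')
        else:
            print('Thank you, your number squared is:', n**2)
            break
ask()
###Output
_____no_output_____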
|
SecondaryScreens/Goujon_Caco2_Cas9Act_SecondaryScreen.ipynb | ###Markdown
Data summary
###Code
reads_nopDNA = pd.read_csv('../../../Data/Reads/Goujon/Caco2/SecondaryScreens/counts-JD_GPP2845_Goujon_Plate3_CP1663.txt', sep='\t')
reads_nopDNA = reads_nopDNA.copy().drop('Construct IDs', axis=1)
CP1663_cols = ['Construct Barcode']+[col for col in reads_nopDNA.columns if 'CP1663' in col]
reads_nopDNA_CP1663 = reads_nopDNA[CP1663_cols]
reads_nopDNA_CP1663
pDNA_reads_all = pd.read_csv('../../../Data/Reads/Goujon/Calu3/Secondary_Library/counts-LS20210325_A01_AAHG03_XPR502_G0_CP1663_M-AM39.txt', sep='\t')
pDNA_reads = pDNA_reads_all[['Construct Barcode','A01_AAHG03_XPR502_G0_CP1663_M-AM39']].copy()
pDNA_reads = pDNA_reads.rename(columns = {'A01_AAHG03_XPR502_G0_CP1663_M-AM39': 'pDNA'})
pDNA_reads
reads_all = pd.merge(pDNA_reads, reads_nopDNA_CP1663, how = 'right', on='Construct Barcode')
empty_cols = [col for col in reads_all.columns if 'EMPTY' in col]
reads = reads_all.copy().drop(empty_cols, axis=1)
reads
# Gene Annotations
chip = pd.read_csv('../../../Data/Interim/Goujon/Secondary_Library/CP1663_GRCh38_NCBI_strict_gene_20210707.chip', sep='\t')
chip = chip.rename(columns={'Barcode Sequence':'Construct Barcode'})
chip_reads = pd.merge(chip[['Construct Barcode', 'Gene Symbol']], reads, on = ['Construct Barcode'], how = 'right')
chip_reads
#Calculate lognorm
cols = chip_reads.columns[2:].to_list() #reads columns = start at 3rd column
lognorms = fns.get_lognorm(chip_reads.dropna(), cols = cols)
col_list = []
for col in lognorms.columns:
if 'intitial'in col:
new_col = col.replace('intitial', 'initial')
col_list.append(new_col)
else:
col_list.append(col)
lognorms.columns = col_list
lognorms
###Output
_____no_output_____
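###Markdown
`fns.get_lognorm` comes from a local helper module not shown here; a common definition (assumed, following the reads-per-million convention) is:
###Code
import numpy as np

def get_lognorm_sketch(df, cols):
    # log2 of reads-per-million with a pseudocount of 1, per sample column
    out = df.copy()
    for col in cols:
        out[col] = np.log2(out[col] / out[col].sum() * 1e6 + 1)
    return out
###Output
_____no_output_____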
###Markdown
Quality Control Population Distributions
###Code
#Calculate log-fold change relative to pDNA
target_cols = list(lognorms.columns[3:])
pDNA_lfc = fns.calculate_lfc(lognorms,target_cols)
pDNA_lfc
pair1 = list(pDNA_lfc.columns[2:4])
pair2 = list(pDNA_lfc.columns[-2:])
paired_cols = (True, [pair1, pair2])
#Plot population distributions of log-fold changes
fns.lfc_dist_plot(pDNA_lfc, paired_cols=paired_cols, filename = 'Caco2_Cas9SecondaryLibraryAct_Goujon')
###Output
_____no_output_____
###Markdown
Distributions of control sets
###Code
fns.control_dist_plot(pDNA_lfc, paired_cols=paired_cols, control_name=['ONE_INTERGENIC_SITE'], filename = 'Caco2_Cas9SecondaryLibraryAct_Goujon')
###Output
_____no_output_____
###Markdown
ROC-AUC Essential gene set: Hart et al., 2015. Non-essential gene set: Hart et al., 2014. The AUC is expected to be ~0.5 because no cutting occurred.
###Code
ess_genes, non_ess_genes = fns.get_gene_sets()
initial_cols = [col for col in pDNA_lfc.columns if 'initial' in col]
tp_genes = ess_genes.loc[:, 'Gene Symbol'].to_list()
fp_genes = non_ess_genes.loc[:, 'Gene Symbol'].to_list()
initial_roc_dict = {}
initial_roc_auc_dict = {}
for col in initial_cols:
    roc_auc, roc_df = pool.get_roc_aucs(pDNA_lfc, tp_genes, fp_genes, gene_col = 'Gene Symbol', score_col=col)
    initial_roc_dict[col] = roc_df
    initial_roc_auc_dict[col] = roc_auc
fig,ax=plt.subplots(figsize=(6,6))
for key, df in initial_roc_dict.items():
    roc_auc = initial_roc_auc_dict[key]
ax=sns.lineplot(data=df, x='fpr',y='tpr', ci=None, label = key+',' + str(round(roc_auc,2)))
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('ROC-AUC')
plt.xlabel('False Positive Rate (non-essential)')
plt.ylabel('True Positive Rate (essential)')
###Output
_____no_output_____
###Markdown
Gene level analysis Residual z-scores
###Code
lfc_df = pDNA_lfc.drop('Gene Symbol', axis = 1)
lfc_df
# run_guide_residuals(lfc_df.drop_duplicates(), cols)
residuals_lfcs, all_model_info, model_fit_plots = run_guide_residuals(lfc_df, paired_lfc_cols=paired_cols[1])
residuals_lfcs
guide_mapping = pool.group_pseudogenes(chip[['Construct Barcode', 'Gene Symbol']], pseudogene_size=10, gene_col='Gene Symbol', control_regex=['ONE_INTERGENIC_SITE'])
guide_mapping
gene_residuals = anchors.get_gene_residuals(residuals_lfcs.drop_duplicates(), guide_mapping)
gene_residuals
gene_residual_sheet = fns.format_gene_residuals(gene_residuals, guide_min = 8, guide_max = 11, ascending=True)
guide_residual_sheet = pd.merge(guide_mapping, residuals_lfcs.drop_duplicates(), on = 'Construct Barcode', how = 'inner')
guide_residual_sheet
with pd.ExcelWriter('../../../Data/Processed/GEO_submission_v2/SecondaryLibrary/Caco2_SecondaryLibraryAct_Goujon.xlsx') as writer:
gene_residual_sheet.to_excel(writer, sheet_name='Caco2_avg_zscore', index =False)
reads.to_excel(writer, sheet_name='Caco2_genomewide_reads', index =False)
guide_mapping.to_excel(writer, sheet_name='Caco2_guide_mapping', index =False)
with pd.ExcelWriter('../../../Data/Processed/Individual_screens_v2/Caco2_SecondaryLibraryAct_Goujon.xlsx') as writer:
gene_residuals.to_excel(writer, sheet_name='condition_genomewide_zscore', index =False)
guide_residual_sheet.to_excel(writer, sheet_name='guide-level_zscore', index =False)
###Output
_____no_output_____
###Markdown
Comparison to Secondary Library KO
###Code
KO_gene_residual_sheet = pd.read_excel('../../../Data/Processed/GEO_submission_v2/SecondaryLibrary/Caco2_Cas9SecondaryLibraryKO_Goujon.xlsx')
secondary_library = pd.merge(KO_gene_residual_sheet, gene_residual_sheet, how = 'outer', on = 'Gene Symbol', suffixes=['_KO', '_Act'])
secondary_library
secondary_library_annot_df = select_top_ranks(secondary_library)
fig, ax = plt.subplots()
ax = gpp.point_densityplot(secondary_library.dropna(), x='residual_zscore_avg_KO', y='residual_zscore_avg_Act')
sns.scatterplot(data=secondary_library_annot_df.dropna(), x='residual_zscore_avg_KO', y='residual_zscore_avg_Act')
texts= []
for j, row in secondary_library_annot_df.dropna().iterrows():
texts.append(ax.text(row['residual_zscore_avg_KO'], row['residual_zscore_avg_Act'], row['Gene Symbol'],
color = 'black'))
# ensures text labels are non-overlapping
# adjust_text(texts)
sns.despine()
plt.title('Secondary Library Caco-2 Act vs KO Screens')
plt.xlabel('KO')
plt.ylabel('Activation')
# fig.savefig('../../../Figures/SecondaryLibrary_Caco2_ActvsKO.png', bbox_inches='tight')
screen1_df = gene_residuals[gene_residuals['condition'].str.contains('#1')]
screen2_df = gene_residuals[gene_residuals['condition'].str.contains('#2')]
screen2_df_avg = screen2_df[['condition','Gene Symbol', 'residual_zscore']].groupby('Gene Symbol', as_index=False).mean()
zscore_df = pd.merge(screen1_df[['condition','Gene Symbol', 'residual_zscore']], screen2_df_avg, on = 'Gene Symbol', how = 'outer', suffixes = ['_screen#1', '_screen#2']).sort_values(by='residual_zscore_screen#1')
# zscore_df = pd.concat([screen1_df[['condition','Gene Symbol', 'residual_zscore']], screen2_df[['condition','Gene Symbol', 'residual_zscore']]])
zscore_df
# Screen 2 vs Screen 1
fig, ax = plt.subplots(figsize = (2, 2))
ax = gpp.point_densityplot(zscore_df, 'residual_zscore_screen#1', 'residual_zscore_screen#2', s=6)
ax = gpp.add_correlation(zscore_df, 'residual_zscore_screen#1', 'residual_zscore_screen#2', fontsize=7)
top_ranked_screen1 = zscore_df.nsmallest(10, 'residual_zscore_screen#1')
top_ranked_screen2 = zscore_df.nsmallest(10, 'residual_zscore_screen#2')
bottom_ranked_screen1 = zscore_df.nlargest(10, 'residual_zscore_screen#1')
bottom_ranked_screen2 = zscore_df.nlargest(10, 'residual_zscore_screen#2')
screen1_ranked = pd.concat([top_ranked_screen1, bottom_ranked_screen1])
screen2_ranked = pd.concat([top_ranked_screen2, bottom_ranked_screen2])
# Annotate common hits
common_ranked = pd.merge(screen1_ranked, screen2_ranked, on = ['Gene Symbol', 'residual_zscore_screen#1', 'residual_zscore_screen#2'], how = 'inner')
common_ranked
sns.scatterplot(data=common_ranked, x='residual_zscore_screen#1', y='residual_zscore_screen#2', color = sns.color_palette('Set2')[0], edgecolor=None, s=6)
texts= []
for j, row in common_ranked.iterrows():
texts.append(ax.text(row['residual_zscore_screen#1']+0.25, row['residual_zscore_screen#2'], row['Gene Symbol'], fontsize=7,
color = 'black'))
# ensures text labels are non-overlapping
# adjust_text(texts)
plt.title('Caco-2 Act SARS-CoV-2 Secondary Library Screen', fontsize=7)
plt.xlabel('Screen #1', fontsize=7)
plt.ylabel('Screen #2', fontsize=7)
plt.xticks(fontsize=7)
plt.yticks(fontsize=7)
sns.despine()
gpp.savefig('../../../Figures/Scatterplots/Caco2_Act_Secondary_Screen1vs2_scatterplot.pdf', dpi=300)
with pd.ExcelWriter('../../../Data/Processed/Individual_screens_v2/Caco2_SecondaryLibraryAct_Goujon_indiv_screens.xlsx') as writer:
zscore_df.to_excel(writer, sheet_name='indiv_screen_zscore', index =False)
gene_residuals.to_excel(writer, sheet_name='condition_genomewide_zscore', index =False)
guide_residual_sheet.to_excel(writer, sheet_name='guide-level_zscore', index =False)
###Output
_____no_output_____ |
experimental/parameter_estimation.ipynb | ###Markdown
This notebook is built to fit the One-Dimension Diffusion-Advection-Reaction model parameters to experimental data imports
###Code
import numpy as np
%matplotlib inline
import pandas as pd
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('seaborn-whitegrid')
from sklearn import linear_model
plt.rcParams["figure.figsize"] = (16,8)
import math
from math import sqrt
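# note: 'exp' imported on the next line is Euler's constant e (used below as exp**x), not a function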
from math import e as exp
import seaborn as sns
import statsmodels.api as sm
import random
from scipy import optimize
sns.set(rc={'figure.figsize':(11.7,8.27),"font.size":50,"axes.titlesize":30,"axes.labelsize":30},style="white", context='paper',font_scale=3)
import numba
from numba.experimental import jitclass
import pymc3 as pm
import theano
import theano.tensor as tt
import arviz as az
###Output
_____no_output_____
###Markdown
model parameters
###Code
B = .10 ### biomass in Kg
D = 1e-3 ### turbid diffusion coeficient
λ = 2e-3 ### degradation rate
u = 1e7/60 ### production rate
V = 0.03 ### river velocity
BV = 1/3 ### boat velocity
T = 10*20 ### sampling time
pf = 5/1000 ### pump flow
H = 50 ### river cross section area
def _solved_river_abv(x):
return -(pf/(BV*H)) *(2*B*u*D) /( 4*D*λ - V*sqrt(V**2 + 4*D * λ)+ V**2)* exp**( (V - sqrt(V**2 + 4*D*λ))/ (2*D) * x )
def _solved_river_bl(x):
return (pf/(BV*H)) *(2*B*u*D) /(4*D*λ + V*sqrt(V**2 + 4*D * λ)+ V**2) * exp**( (V + sqrt(V**2 + 4*D*λ))/ (2*D) * x )
def _solved_river_abv_complete(x, pf, BV, H, B, u, D, λ, V):
return -(pf/(BV*H)) *(2*B*u*D) /( 4*D*λ - V*np.sqrt(V**2 + 4*D * λ)+ V**2)* np.exp( (V - np.sqrt(V**2 + 4*D*λ))/ (2*D) * x )
def _solved_river_abv_complete_tt(x, pf, BV, H, B, u, D, λ, V):
return -(pf/(BV*H)) *(2*B*u*D) /( 4*D*λ - V*tt.sqrt(V**2 + 4*D * λ)+ V**2)* tt.exp( (V - tt.sqrt(V**2 + 4*D*λ))/ (2*D) * x )
def _sld_intermediary(Xi, Xf):
low, high = sorted([Xi, Xf])
if low >= 0:
return abs(_solved_river_abv(Xf) - _solved_river_abv(Xi))
if high <= 0:
return abs(_solved_river_bl(Xf) - _solved_river_bl(Xi))
return _sld_intermediary(low, 0) + _sld_intermediary(0, high)
def sample_eDNA_transect(x0):
ret = _sld_intermediary(x0, x0 + BV*T) # + random.gauss(0, error)
if ret< 0: return 0
else: return ret
def sample_eDNA_transect_dowstream_only(x0, T, pf, BV, H, B, u, D, λ, V):
return _solved_river_abv_complete(x0+BV*T, pf, BV, H, B, u, D, λ, V) - _solved_river_abv_complete(x0, pf, BV, H, B, u, D, λ, V)
def sample_eDNA_transect_dowstream_only_tt(x0, T, pf, BV, H, B, u, D, λ, V):
return _solved_river_abv_complete_tt(x0+BV*T, pf, BV, H, B, u, D, λ, V) - _solved_river_abv_complete_tt(x0, pf, BV, H, B, u, D, λ, V)
sample_eDNA_transect_dowstream_only(np.array([[10,10, 10]]),100 , .005, 1e-5, 1000, np.array([[1,2,3]]), 1e7, 1,2,np.array([[0,0,1]]))
###Output
_____no_output_____
###Markdown
get data from multiple sources
###Code
#pd.read_csv('Caged fish experiment and hydrodynamic bidimensional modeling highlight the importance to consider 2D dispersion in fluvial environmental DNA studies_ data.txt',\
# sep = '\t')
#pd.read_csv('Experimental assessment of optimal lotic eDNA sampling and assay multiplexing for a critically endangered fish data.txt',\
# sep = '\t')
wood = pd.read_csv('wood et all.csv')
wood = wood[(wood['Dist (m)'] != 'Upstream') & (wood['Position'] == 'MidStream')]
wood = wood.dropna(subset=['Dist (m)', 'FishMass (g)','Velocity (m/s)', 'Detect', 'Pg eDNA'])
#wood2 = pd.read_csv('wood et all.csv')
#wood2 = wood2[(wood2['Velocity (m/s)']<= 1)& (wood2['Dist (m)'] != 'Upstream') ]
#wood2 = wood2[(wood2['Dist (m)'].astype(float) <2000)]
#wood2['Dist (m)'] = wood2['Dist (m)'].astype(float)
#wood2
#sns.lmplot(data =wood2, x = 'Dist (m)', y = 'Pg eDNA', hue = 'Position', height = 10, logx=True)
wood['Dist (m)']= wood['Dist (m)'].astype(float)
wood['FishMass (kg)'] = wood['FishMass (g)'].astype(float)/1000
wood['Velocity (m/s)']= wood['Velocity (m/s)'].astype(float)
#wood = wood[(wood.River == 'Waweig River') & (wood['Dist (m)'] >0) & (wood['Velocity (m/s)']>=0)]
wood = wood[(wood['Dist (m)'] <3000) & (wood['Dist (m)'] > 0) & (wood['Velocity (m/s)']>0)]# & (wood['Pg eDNA']>=0) (wood.River == 'Waweig River')&
wood
wood['copies eDNA'] = wood['Pg eDNA']
wood['copies eDNA expanded'] = wood['Pg eDNA']*100
#*100#3705846.15
#wood = wood.query('Detect == 1')
sns.lmplot(data =wood, x = 'Dist (m)', y = 'copies eDNA', hue = 'Position', height = 10, logx=True)
import theano.tensor as tt
from IPython.core.display import display, HTML
wood_dist = wood[['Dist (m)']].values
wood_mass = wood[['FishMass (kg)']].values
wood_vel = wood[['Velocity (m/s)']].values
observed_discrete = wood[['Detect']].values
observed_copies = wood[['copies eDNA']].values
observed_copies_max = wood[['copies eDNA expanded']].values
copies_upper_bound = observed_copies.max()*2
copies_upper_bound_max = observed_copies_max.max()*2
copies_upper_bound_max/1e5
from pymc3.distributions import Continuous, Normal
from theano.tensor.random.basic import RandomVariable, normal
class ZeroInflatedNormalRV(RandomVariable):
name = "zero_inflated_normal"
ndim_supp = 0
ndims_params = [0, 0, 0]
    dtype = "floatX"  # continuous values, not integers
_print_name = ("ZeroInflatedNormal", "\\operatorname{ZeroInflatedNormal}")
@classmethod
    def rng_fn(cls, rng, pi, mu, sigma, size):
        # numpy Generator uses loc/scale; zero out draws with probability pi, matching logp below
        return rng.normal(loc=mu, scale=sigma, size=size) * (rng.random(size=size) >= pi)
class ZeroInflatedNormal(Continuous):
rv_op = ZeroInflatedNormalRV
def __init__(self, mu, sigma, pi, *args, **kwargs):
super(ZeroInflatedNormal, self).__init__(*args, **kwargs)
self.mu = mu
self.sigma = sigma
self.pi = pi = tt.as_tensor_variable(pi)
self.Normal = pm.Normal.dist(mu, sigma)
def logp(self, value):
return tt.switch(value > 0,
tt.log(1 - self.pi) + self.Normal.logp(value),
tt.log(self.pi))
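# In this parameterization pi is the zero-inflation probability, P(value == 0) = pi,
# while nonzero observations follow Normal(mu, sigma).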
def log_clip(X):
return tt.log(tt.clip(X, 1e-10, 1e10))
def log_clipnp(X):
return np.log(np.clip(X, 1e-10, 1e10))
0.0440/60
1/(1+np.exp(-(10-5)))
###Output
_____no_output_____
###Markdown
Continuous eDNA concentration model
###Code
with pm.Model() as continuous_model:
D = pm.Bound(pm.Normal, lower=1e-2, upper = 10)('diffusion', mu=.1, sigma = 10. ) #,shape=(wood_vel.shape[0], 1)
λ = pm.Bound(pm.Normal, lower=7.3e-5, upper = 7.3e-1)('degradation', mu=7.3e-4,sigma = 10.) #,shape=(wood_vel.shape[0], 1)
u = pm.Bound(pm.Normal, lower=3e2, upper = 3e6)('eDNA production rate', mu= 3e4, sigma=10.0) #shape=(wood_vel.shape[0], 1) #pm.Lognormal
#Vadj = pm.Bound(pm.Normal, lower=-.1, upper = .2)('River_scaler', mu= 0, sigma=1, shape=(wood_vel.shape[0], 1))
BV = 1e-5 ### boat velocity
T = 10*20 ### sampling time
pf = 5/1000 ### pump flow
H = 10 ### river cross section area
sigma = pm.HalfNormal("sigma", sigma=1)
Yhat = pm.Deterministic('eDNA transport model',sample_eDNA_transect_dowstream_only_tt(wood_dist , T, pf, BV, H, wood_mass, u, D, λ, wood_vel))
Yhat = tt.clip(Yhat, 1e-2, copies_upper_bound_max) #cut large values
#Yhatlog = log_clip(Yhat)
#a = pm.Normal('logistic base', mu= 0, sigma = 1.)
#c = pm.Bound(pm.Normal, lower=-.1, upper = .1)('logistic distance', mu = 0, sigma=5.0)
#pi = pm.Deterministic('logistic regression', 1/(1+ tt.exp(-(a+log_clip(Yhat)*b+ wood_dist*c)) ) ) # + tt.log(wood_dist)*c
#pi = pm.Deterministic('logistic regression', 1/(1+ tt.exp(-tt.clip(a+Yhat*b+ wood_dist*c, -10, 10 )) ) ) # + tt.log(wood_dist)*c
# Likelihood (sampling distribution) of observations
### non zero inflated
#Y_obs = Normal("Y_obs", mu=Yhat, sigma=sigma,observed=observed_copies_max)
#Y_obs = Normal("Y_obs", mu=Yhatlog, sigma=sigma, observed=log_clipnp(observed_copies))
### zero inflated
    b = pm.Bound(pm.Normal, lower=0, upper = 1)('logistic concentration', mu = .5, sigma=5.0)
pi = pm.Deterministic('logistic regression', 1/(1+ tt.exp(-tt.clip(Yhat*b-5, -5, 30))))
Y_obs = ZeroInflatedNormal("Y_obs", mu=Yhat, sigma=sigma,pi= pi, observed=observed_copies_max)
#Y_obs = ZeroInflatedNormal("Y_obs", mu=Yhatlog, sigma=sigma,pi= pi, observed=log_clipnp(observed_copies))
fitted = pm.fit(method="fullrank_advi",n=100000) #, start = pm.find_MAP()
trace = fitted.sample(5000 )#
#display(pm.model_to_graphviz(continouns_model))
#trace = pm.sample(10000, tune=1000, cores=1, return_inferencedata=True, init = 'advi_map')#, init = 'advi_map'
az.plot_trace(trace,var_names= [ '~eDNA transport model']); #var_names= [ '~eDNA transport model']
display(az.summary(trace, round_to=8, var_names= ['degradation', 'diffusion', 'eDNA production rate', 'sigma']))
#display(az.summary(trace,var_names= [ 'degradation', 'eDNA production rate'] ,round_to=5))
sns.despine()
dataframe_vals = az.summary(trace, round_to=8, var_names= ['degradation', 'diffusion', 'eDNA production rate'])['mean']
dataframe_vals.index = ['λ', 'D', 'u']
dataframe_vals.to_dict()
ppc = pm.sample_posterior_predictive(trace, model=continuous_model, var_names=['Y_obs','eDNA transport model'])
wood['Yhat'] = ppc['Y_obs'].mean(axis= 0)
f, ax = plt.subplots(figsize=(12, 7))
ax.set(xscale="log")# , yscale="log"
#sns.scatterplot(data =wood, x = 'Dist (m)', y = 'Yhat', label = 'yhat', s = 100)
sns.regplot(data =wood, x = 'Dist (m)', y = 'Yhat', label = 'yhat' ,marker = 'o',logx = True,color = 'r')
sns.scatterplot(data =wood, x = 'Dist (m)', y = 'copies eDNA', label = 'y')
sns.despine()
#plt.ylim([0,100])
#wood
#A[:, np.random.randint(A.shape[0], size=2)] sample from trace
f, ax = plt.subplots(figsize=(12, 7))
#ppc = pm.sample_posterior_predictive(trace, model=continuous_model, var_names=['eDNA transport model'])
wood['transport_model'] = np.clip(ppc['eDNA transport model'].mean(axis= 0), 1e-1, copies_upper_bound*10000)
plt.semilogy( np.linspace(0,2000,1000), np.clip(sample_eDNA_transect_dowstream_only(x0 = np.linspace(0,2000,1000) ,BV = 1e-5,T = 10*20,pf = 5/1000, H = 10,
B = wood['FishMass (kg)'].mean(), V = wood['Velocity (m/s)'].mean(),**dataframe_vals), 1e-1, 1e10))
sns.scatterplot(data =wood, x = 'Dist (m)', y = 'transport_model', label = 'transport_model', s = 100)
sns.scatterplot(data =wood, x = 'Dist (m)', y = 'copies eDNA expanded', label = 'y')
sns.despine()
#plt.ylim([0.01, 10000])
ax.set(xscale="log", yscale = "log")# , yscale = "log"
218*1.5
#wood['estimated diffusion'] = trace['diffusion'].mean(axis= 0)
#wood['estimated degradation'] = trace['degradation'].mean(axis= 0)
#wood['estimated production'] = trace['eDNA production rate'].mean(axis= 0)
#sns.boxenplot(x="River", y='estimated diffusion', data=wood)
#sns.stripplot(x="River", y='estimated diffusion', data=wood, color = 'black')
#sns.despine()
#plt.show()
#sns.boxenplot(x="River", y='estimated degradation', data=wood)
#sns.stripplot(x="River", y='estimated degradation', data=wood, color = 'black')
#sns.despine()
#plt.show()
#sns.pairplot(wood[['estimated diffusion', 'estimated degradation', 'Dist (m)', 'River', 'Velocity (m/s)']], hue="River", height =4 )
#sns.despine()
#plt.show()
sptmpd = az.summary(trace, round_to=5, var_names= [ 'logistic concentration'])
sptmpd
def logisreg_concentration(X):
    return 1/(1+np.exp(-(sptmpd.loc['logistic concentration', 'mean']*(X) -5) ))
edna_conc = np.linspace(0, 1e5, 1000)
#p_amp = 1/(1+np.exp(-(sptmpd.loc['logistic base', 'mean'] + sptmpd.loc['logistic concentration', 'mean']*np.log(edna_conc)) ))
p_amp = logisreg_concentration(edna_conc)
sns.set(rc={'figure.figsize':(11.7,8.27),"font.size":50,"axes.titlesize":30,"axes.labelsize":30},style="white", context='paper',font_scale=3)
plt.plot(edna_conc, p_amp)
plt.ylabel('Probability of amplification')
plt.xlabel('eDNA concentration')
sns.despine()
#p_amp = 1/(1+np.exp(-(sptmpd.loc['logistic base', 'mean'] + sptmpd.loc['logistic concetration', 'mean']*np.log(observed_copies) )))
sns.set(rc={'figure.figsize':(11.7,8.27),"font.size":50,"axes.titlesize":30,"axes.labelsize":30},style="white", context='paper',font_scale=3)
plt.scatter(observed_copies_max, logisreg_concentration(observed_copies_max))
plt.ylabel('Probability of amplification')
plt.xlabel('eDNA concentration')
sns.despine()
sptmpd = az.summary(trace, round_to=5, var_names= ['logistic base', 'logistic distance'])
sptmpd
edna_conc = np.linspace(0, 2500, 10000)
def logisreg_distance(X):
return 1/(1+np.exp(-(sptmpd.loc['logistic base', 'mean'] + sptmpd.loc['logistic distance', 'mean']*(X)) ))
#p_amp = 1/(1+np.exp(-(sptmpd.loc['logistic base', 'mean'] + sptmpd.loc['logistic distance', 'mean']*edna_conc) ))
p_amp = logisreg_distance(edna_conc)
sns.set(rc={'figure.figsize':(11.7,8.27),"font.size":50,"axes.titlesize":30,"axes.labelsize":30},style="white", context='paper',font_scale=3)
plt.plot(edna_conc, p_amp)
plt.ylabel('Probability of amplification')
plt.xlabel('eDNA concentration')
sns.despine()
#p_amp = 1/(1+np.exp(-(sptmpd.loc['logistic base', 'mean'] + sptmpd.loc['logistic distance', 'mean']*wood_dist )))
p_amp = logisreg_distance(wood_dist)
sns.set(rc={'figure.figsize':(11.7,8.27),"font.size":50,"axes.titlesize":30,"axes.labelsize":30},style="white", context='paper',font_scale=3)
plt.scatter(wood_dist, p_amp)
plt.ylabel('Probability of amplification')
plt.xlabel('distance')
#plt.xlim([0,150])
sns.despine()
print(sptmpd.loc['logistic base', 'mean'], sptmpd.loc['logistic distance', 'mean'])
def CtoP2(edna_conc):
    return 1/(1+np.exp(-(-.83 + .00781*edna_conc)))
###Output
_____no_output_____
###Markdown
Using the discrete (detect/non-detect) model
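As a sketch of the likelihood coded below, each detection is modeled as $y \sim \mathrm{Bernoulli}(p)$ with $p = \sigma(b\,q - 5)$ (clipped for numerical stability), where $q$ is the transect-integrated eDNA signal from the dispersion model and $\sigma$ is the logistic function.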
###Code
7.3e-4
with pm.Model() as discrete:
D = pm.Bound(pm.Normal, lower=1e-2, upper = .5)('diffusion', mu=.1, sigma = 10. ) #,shape=(wood_vel.shape[0], 1)
λ = pm.Bound(pm.Normal, lower=1e-4, upper = 7.3e-3)('degradation', mu=7.3e-4,sigma = 10.) #,shape=(wood_vel.shape[0], 1)
u = pm.Bound(pm.Normal, lower=3e2/100, upper = 3e6/100)('eDNA production rate', mu= 3e4/100, sigma=10.0) #shape=(wood_vel.shape[0], 1) #pm.Lognormal
#a = pm.Normal('logistic base', mu= 0, sigma = 1.)
#a = 0
b = pm.Bound(pm.Normal, lower=0, upper = 1)('logistic beta', mu= 1, sigma=1.0)
#b = 1e-7
#copy_number_scaler = pm.Bound(pm.Normal, lower=0.0, upper = 1)('copy number scaler', mu= .1, sigma=1)
#Vmultiplier = pm.Bound(pm.Normal, lower=-.5, upper = .5)('River_scaler', mu= 0, sigma=1, shape=(wood_vel.shape[0], 1)) #
#Vest = pm.Bound(pm.Exponential, lower=0.0, upper = 1)('V', lam=.1, shape=(wood_vel.shape[0], 1))
#D = 1e-3
#u = 2e5
BV = 1e-5 ### boat velocity
T = 10*20 ### sampling time
pf = 5/1000 ### pump flow
H = 10 ### river cross section area
q = pm.Deterministic("dispersion model", sample_eDNA_transect_dowstream_only_tt(wood_dist , T, pf, BV, H, wood_mass, u, D, λ, wood_vel))
#r = pm.Deterministic('log conversion and scaling',tt.log(q)*b)
s = pm.Deterministic('logistic regression', 1/(1+ tt.exp(-tt.clip(q*b-5, -20, 20)))) ## logistic fit
# Likelihood (sampling distribution) of observations
Y_obs = pm.Bernoulli("Y_obs", p = s,observed=observed_discrete)
fitted = pm.fit(method="fullrank_advi",n=100000) #, start = pm.find_MAP()
trace = fitted.sample(5000)#
#display(pm.model_to_graphviz(continouns_model))
#trace = pm.sample(10000, tune=1000, cores=1, return_inferencedata=True, init = 'advi_map')#, init = 'advi_map'
az.plot_trace(trace, var_names= [ '~logistic regression', "~dispersion model"]); #
display(az.summary(trace, round_to=8, var_names= ['degradation', 'diffusion', 'eDNA production rate']))
#display(az.summary(trace,var_names= [ 'degradation', 'eDNA production rate'] ,round_to=5))
sns.despine()
dataframe_vals = az.summary(trace, round_to=8, var_names= ['degradation', 'diffusion', 'eDNA production rate'])['mean']
dataframe_vals.index = ['λ', 'D', 'u']
dataframe_vals.to_dict()
ppc = pm.sample_posterior_predictive(trace, model =discrete, var_names=['Y_obs'])
wood['Yhat'] = ppc['Y_obs'].mean(axis= 0)
f, ax = plt.subplots(figsize=(12, 7))
ax.set(xscale="log")# , yscale="log"
sns.regplot(data =wood, x = 'Dist (m)', y = 'Yhat', label = 'yhat', y_jitter = .05, logistic = True, marker = 'o')
sns.regplot(data =wood, x = 'Dist (m)', y = 'Detect', label = 'y', color = 'r', y_jitter = 0.05, fit_reg = False, )
sns.despine()
plt.legend(loc = 'best')
#plt.ylim([0,100])
#wood
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
ConfusionMatrixDisplay(confusion_matrix(wood['Detect'].astype(int), wood['Yhat'].astype(int))).plot()
#A[:, np.random.randint(A.shape[0], size=2)] sample from trace
f, ax = plt.subplots(figsize=(12, 7))
ppc = pm.sample_posterior_predictive(trace, model =discrete, var_names=['dispersion model'])
wood['transport_model'] = np.clip(ppc['dispersion model'].mean(axis= 0), 1e-2, copies_upper_bound*10000)
#wood['transport_model'] = ppc['eDNA transport model'].mean(axis= 0)
sns.scatterplot(data =wood, x = 'Dist (m)', y = 'transport_model', label = 'transport_model', s = 100)
plt.plot( np.linspace(0,2000,1000), np.clip(sample_eDNA_transect_dowstream_only(x0 = np.linspace(0,2000,1000) ,BV = 1e-5,T = 10*20,pf = 5/1000, H = 10,
B = wood['FishMass (kg)'].mean(), V = wood['Velocity (m/s)'].mean(),**dataframe_vals), 1e-2, 1e10))
sns.scatterplot(data =wood, x = 'Dist (m)', y = 'copies eDNA', label = 'y')
sns.despine()
ax.set(xscale="log", yscale = "log")# , yscale = "log"
pm.model_to_graphviz(discrete)
###Output
_____no_output_____
###Markdown
Using scipy to optimize
###Code
from scipy.optimize import minimize
from scipy.stats import trimboth
def to_minimize(list_of_param):
dif, deg, prod = list_of_param
data = trimboth(np.sort(sample_eDNA_transect_dowstream_only(wood_dist , T, pf, BV, 10, wood_mass, prod, dif, deg, wood_vel) - observed_copies), .1)
return abs(data).sum()
diff, degra, production = minimize(to_minimize, [1e-3, 2e-3, 1e7], bounds = [(1e-3,1), (1e-3, 1), (1e4, 1e7)]).x
wood['yhat_scipy'] = sample_eDNA_transect_dowstream_only(wood_dist , T, pf, 1e-5, 10, wood_mass, production, diff, degra, wood_vel)
f, ax = plt.subplots(figsize=(12, 7))
ax.set(xscale="log", yscale="log") #,
sns.scatterplot(data =wood, x = 'Dist (m)', y = 'Yhat', label = 'yhat')
sns.scatterplot(data =wood, x = 'Dist (m)', y = 'copies eDNA', label = 'y')
sns.scatterplot(data =wood, x = 'Dist (m)', y = 'yhat_scipy', label = 'yhat_scipy') #plt.ylim([0,100])
sns.despine()
#probfunction = pd.DataFrame([[-2,.333],[-1, .875],[0,1],[1,1], [-10,0], [-3,0], [0, 1]], columns=['initial eDNA', 'probability of amplification'])
#probfunction['copy number'] = probfunction['initial eDNA'].apply(lambda x: 10**x * 3.65*1e6)
#model2 = sm.Logit(probfunction['probability of amplification'].values, probfunction['copy number'].values)
#result2 = model2.fit()
#def CtoP(x): return (result2.predict(x)-.5)/.5
wood_comp[['Detect']].values
tt.switch()
###Output
_____no_output_____ |
open_close_sentiment_study.ipynb | ###Markdown
After looking into the correlation between Alpha and sentiment on a daily basis, we're also interested in the stock return between the open price and the close price of the previous day.
###Code
# Getting the return between today's open price and yesterday's adjusted close price
return_FB = return_['r(FB)']
return_AMZN = return_['r(AMZN)']
return_GOOGL = return_['r(GOOGL)']
return_NFLX = return_['r(NFLX)']
r_NFLX_table = get_alpha_sent_table(return_NFLX, NFLX)
r_FB_table = get_alpha_sent_table(return_FB, FB)
r_AMZN_table = get_alpha_sent_table(return_AMZN, AMZN)
r_GOOGL_table = get_alpha_sent_table(return_GOOGL, GOOGL)
print('NFLX')
print(alpha_sent_stat_analysis(r_NFLX_table))
plot_alpha_corr_vader(r_NFLX_table)
print('FB')
print(alpha_sent_stat_analysis(r_FB_table))
plot_alpha_corr_vader(r_FB_table)
print('AMZN')
print(alpha_sent_stat_analysis(r_AMZN_table))
plot_alpha_corr_vader(r_AMZN_table)
print('GOOGL')
print(alpha_sent_stat_analysis(r_GOOGL_table))
plot_alpha_corr_vader(r_GOOGL_table)
###Output
GOOGL
{'Vader_corr': 0.11, 'Vader_p_value': 0.32}
|
examples/Column - Freeze-Thaw.ipynb | ###Markdown
Import libraries:
###Code
import sys
sys.path.append('../')
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
from porousmedialab.column import Column
###Output
_____no_output_____
###Markdown
Setting up the time and space properties and creating the domain:
###Code
t = 27 / 365
dx = 0.2
L = 40
phi = 0.8
dt = 1e-5
ftc = Column(L, dx, t, dt)
###Output
_____no_output_____
###Markdown
To make things interesting, let's create non-trivial initial conditions for iron:
###Code
x = np.linspace(0, L, int(L / dx) + 1)  # cast to int so the number of points is an integer
Fe3_init = np.zeros(x.size)
Fe3_init[x > 5] = 75
Fe3_init[x > 15] = 0
Fe3_init[x > 25] = 75
Fe3_init[x > 35] = 0
###Output
_____no_output_____
###Markdown
Adding species with names, diffusion coefficients, initial concentrations and boundary top and bottom conditions:
###Code
ftc.add_species(theta=phi, name='O2', D=368, init_conc=0, bc_top_value=0.231, bc_top_type='dirichlet', bc_bot_value=0, bc_bot_type='flux')
ftc.add_species(theta=phi, name='TIC', D=320, init_conc=0, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux')
ftc.add_species(theta=phi, name='Fe2', D=127, init_conc=0, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux')
ftc.add_species(theta=1-phi, name='OM', D=1e-18, init_conc=15, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux')
ftc.add_species(theta=1-phi, name='FeOH3', D=1e-18, init_conc=Fe3_init, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux')
ftc.add_species(theta=phi, name='CO2g', D=320, init_conc=0, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux')
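# equilibrate dissolved TIC with gaseous CO2 via a Henry's-law constant (0.2*0.83 here)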
ftc.henry_equilibrium('TIC', 'CO2g', 0.2*0.83)
###Output
_____no_output_____
###Markdown
Specify the constants used in the rates:
###Code
ftc.constants['k_OM'] = 1
ftc.constants['Km_O2'] = 1e-3
ftc.constants['Km_FeOH3'] = 2
ftc.constants['k8'] = 1.4e+5
ftc.constants['Q10'] = 4 ### added
ftc.constants['CF'] = (1-phi)/phi ### conversion factor
###Output
_____no_output_____
###Markdown
Simulate temperature with thermal diffusivity coefficient 281000 and an initial and boundary temperature of 5 °C:
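Treated this way, temperature effectively obeys a diffusion (heat) equation, $\partial T/\partial t = \alpha\,\partial^2 T/\partial x^2$, with the thermal diffusivity $\alpha$ passed through the species' `D` argument and no reaction terms attached.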
###Code
ftc.add_species(theta=0.99, name='Temperature', D=281000, init_conc=5, bc_top_value=5., bc_top_type='constant', bc_bot_value=0, bc_bot_type='flux')
###Output
_____no_output_____
###Markdown
Add Q10 factor:
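The Q10 factor scales a rate measured at the reference temperature $T_{ref} = 5$ °C by $Q_{10}^{(T - T_{ref})/10}$, so with $Q_{10} = 4$ a rate quadruples for every 10 °C of warming; this is the `Q10**((Temperature-5)/10)` prefactor in the rate expressions below.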
###Code
ftc.rates['R1'] = 'Q10**((Temperature-5)/10) * k_OM * OM * O2 / (Km_O2 + O2)'
ftc.rates['R2'] = 'Q10**((Temperature-5)/10) * k_OM * OM * FeOH3 / (Km_FeOH3 + FeOH3) * Km_O2 / (Km_O2 + O2)'
ftc.rates['R8'] = 'k8 * O2 * Fe2'
###Output
_____no_output_____
###Markdown
ODEs for specific species:
###Code
ftc.dcdt['OM'] = '-R1-R2'
ftc.dcdt['O2'] = '-R1-R8'
ftc.dcdt['FeOH3'] = '-4*R2+R8/CF'
ftc.dcdt['Fe2'] = '-R8+4*R2*CF'
ftc.dcdt['TIC'] = 'R1+R2*CF'
###Output
_____no_output_____
###Markdown
Because we are changing the boundary conditions for temperature and oxygen over time (when T < 0 there is no oxygen or CO2 flux at the top), we need an explicit time loop:
###Code
# %pdb
for i in range(1, len(ftc.time)):
day_of_bi_week = (ftc.time[i]*365) % 14
if day_of_bi_week < 7:
ftc.Temperature.bc_top_value = 5 + 5 * np.sin(np.pi * 2 * ftc.time[i] * 365)
else:
ftc.Temperature.bc_top_value = -10 + 5 * np.sin(np.pi * 2 * ftc.time[i] * 365)
# when T < 0 => 0 flux of oxygen and CO2 at the top:
if ftc.Temperature.bc_top_value < 0:
ftc.change_boundary_conditions('O2', i, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux')
ftc.change_boundary_conditions('CO2g', i, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux')
else:
ftc.change_boundary_conditions('O2', i, bc_top_value=0.231, bc_top_type='constant', bc_bot_value=0, bc_bot_type='flux')
ftc.change_boundary_conditions('CO2g', i, bc_top_value=0, bc_top_type='constant', bc_bot_value=0, bc_bot_type='flux')
# Integrate one timestep:
ftc.integrate_one_timestep(i)
###Output
Boundary conditions changed for O2 at time 1.000035186423226e-05
Boundary conditions changed for CO2g at time 1.000035186423226e-05
Boundary conditions changed for O2 at time 0.019180674875597475
Boundary conditions changed for CO2g at time 0.019180674875597475
Boundary conditions changed for O2 at time 0.03836134975119495
Boundary conditions changed for CO2g at time 0.03836134975119495
Boundary conditions changed for O2 at time 0.05754202462679243
Boundary conditions changed for CO2g at time 0.05754202462679243
###Markdown
What we did with temperature
###Code
ftc.plot_depths("Temperature",[0,1,3,7,10,40])
###Output
_____no_output_____
###Markdown
Concentrations of different species during the whole period of simulation:
###Code
ftc.plot_contourplots()
###Output
_____no_output_____
###Markdown
The rates of consumption and production of species:
###Code
ftc.reconstruct_rates()
ftc.plot_contourplots_of_rates()
ftc.plot_contourplots_of_deltas()
###Output
_____no_output_____
###Markdown
Profiles at the end of the simulation
###Code
Fx = ftc.estimate_flux_at_top('CO2g')
ftc.custom_plot(ftc.time*365, 1e+3*Fx*1e+4/365/24/60/60,x_lbl='Days, [day]' , y_lbl='$F_{CO_2}$, $[\mu mol$ $m^{-2}$ $s^{-1}]$')
Fxco2 = 1e+3*Fx*1e+4/365/24/60/60
Fxco2nz = (ftc.time*365<7)*Fxco2 + ((ftc.time*365>14) & (ftc.time*365<21))*Fxco2
import seaborn as sns
fig, ax1 = plt.subplots(figsize=(5,3), dpi=200)
ax2 = ax1.twinx()
ax1.plot(ftc.time*365, Fxco2nz, label='$F_{CO_2}$', lw=3)
ax2.plot(ftc.time*365, ftc.Temperature.concentration[0, :], 'k', lw=1, label='T at 0 cm')
ax2.plot(ftc.time*365, ftc.Temperature.concentration[100, :], ls='-', c=sns.color_palette("deep", 10)[3], lw=2, label='T at 20 cm')
# ax1.scatter(NO3_t, NO3, c=sns.color_palette("deep", 10)[0], lw=1)
ax2.grid(False)
ax1.grid(lw=0.2)
ax2.set_ylim(-20, 20)
ax1.set_xlim(0, 27)
ax1.set_xlabel('Time, [days]')
ax1.set_ylabel('$CO_2(g)$ flux, $[\mu mol$ $m^{-2}$ $s^{-1}]$')
ax2.set_ylabel('Temperature, [C]')
ax1.set_ylim(0, 20)
ax1.legend(frameon=1, loc=2)
ax2.legend(frameon=1, loc=1)
import math
from matplotlib.colors import ListedColormap
lab = ftc
element = 'Fe2'
labels=False
days=False
last_year=False
plt.figure(figsize=(5,3), dpi=200)
# plt.title('$Fe(II)$ concentration')
resolution = 100
n = math.ceil(lab.time.size / resolution)
if last_year:
k = n - int(1 / lab.dt)
else:
k = 1
if days:
X, Y = np.meshgrid(lab.time[k::n] * 365, -lab.x)
plt.xlabel('Time')
else:
X, Y = np.meshgrid(lab.time[k::n] * 365, -lab.x)
plt.xlabel('Time, [days]')
z = lab.species[element]['concentration'][:, k - 1:-1:n]
CS = plt.contourf(X, Y, z, 51, cmap=ListedColormap(
sns.color_palette("Blues", 51)), origin='lower')
if labels:
plt.clabel(CS, inline=1, fontsize=10, colors='w')
cbar = plt.colorbar(CS)
plt.ylabel('Depth, [cm]')
ax = plt.gca()
ax.ticklabel_format(useOffset=False)
cbar.ax.set_ylabel('%s, [mM]' % element)
plt.figure(figsize=(5,3), dpi=200)
r='R2'
n = math.ceil(lab.time.size / resolution)
if last_year:
k = n - int(1 / lab.dt)
else:
k = 1
z = lab.estimated_rates[r][:, k - 1:-1:n]
# lim = np.max(np.abs(z))
# lim = np.linspace(-lim - 0.1, +lim + 0.1, 51)
X, Y = np.meshgrid(lab.time[k::n], -lab.x)
plt.xlabel('Time, [days]')
CS = plt.contourf(X*365, Y, z/365, 20, cmap=ListedColormap(
sns.color_palette("Blues", 51)))
if labels:
plt.clabel(CS, inline=1, fontsize=10, colors='w')
cbar = plt.colorbar(CS)
plt.ylabel('Depth, [cm]')
ax = plt.gca()
ax.ticklabel_format(useOffset=False)
cbar.ax.set_ylabel(r'Rate R2, [$mM$ $d^{-1}$]')
plt.figure(figsize=(5,3),dpi=200)
element='FeOH3'
resolution = 100
n = math.ceil(lab.time.size / resolution)
if last_year:
k = n - int(1 / lab.dt)
else:
k = 1
z = lab.species[element]['rates'][:, k - 1:-1:n]/365
lim = np.max(np.abs(z))
lim = np.linspace(-lim, +lim, 51)
X, Y = np.meshgrid(lab.time[k:-1:n], -lab.x)
plt.xlabel('Time, [days]')
CS = plt.contourf(X*365, Y, z, 20, cmap=ListedColormap(sns.color_palette(
"RdBu_r", 101)), origin='lower', levels=lim, extend='both')
if labels:
plt.clabel(CS, inline=1, fontsize=10, colors='w')
cbar = plt.colorbar(CS)
plt.ylabel('Depth, [cm]')
ax = plt.gca()
ax.ticklabel_format(useOffset=False)
cbar.ax.set_ylabel('$\Delta$ $Fe(OH)_3$ [$mM$ $d^{-1}$]')
###Output
_____no_output_____
###Markdown
Import libraries:
###Code
import sys
sys.path.append('../')
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
from porousmedialab.column import Column
###Output
_____no_output_____
###Markdown
Setting up the time and space properties and creating the domain:
###Code
t = 27 / 365
dx = 0.2
L = 40
phi = 0.8
dt = 1e-4
ftc = Column(L, dx, t, dt)
###Output
_____no_output_____
###Markdown
To make things interesting, let's create non-trivial initial conditions for iron:
###Code
x = np.linspace(0, L, int(L / dx) + 1)
Fe3_init = np.zeros(x.size)
Fe3_init[x > 5] = 75
Fe3_init[x > 15] = 0
Fe3_init[x > 25] = 75
Fe3_init[x > 35] = 0
###Output
_____no_output_____
###Markdown
Adding species with names, diffusion coefficients, initial concentrations and boundary top and bottom conditions:
###Code
ftc.add_species(theta=phi, name='O2', D=368, init_conc=0, bc_top_value=0.231, bc_top_type='dirichlet', bc_bot_value=0, bc_bot_type='flux')
ftc.add_species(theta=phi, name='TIC', D=320, init_conc=0, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux')
ftc.add_species(theta=phi, name='Fe2', D=127, init_conc=0, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux')
ftc.add_species(theta=1-phi, name='OM', D=1e-18, init_conc=15, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux')
ftc.add_species(theta=1-phi, name='FeOH3', D=1e-18, init_conc=Fe3_init, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux')
ftc.add_species(theta=phi, name='CO2g', D=320, init_conc=0, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux')
ftc.henry_equilibrium('TIC', 'CO2g', 0.2*0.83)
###Output
_____no_output_____
###Markdown
Specify the constants used in the rates:
###Code
ftc.constants['k_OM'] = 1
ftc.constants['Km_O2'] = 1e-3
ftc.constants['Km_FeOH3'] = 2
ftc.constants['k8'] = 1.4e+5
ftc.constants['Q10'] = 4 ### added
ftc.constants['CF'] = (1-phi)/phi ### conversion factor
###Output
_____no_output_____
###Markdown
Simulate temperature with thermal diffusivity coefficient 281000 and an initial and boundary temperature of 5 °C:
###Code
ftc.add_species(theta=0.99, name='Temperature', D=281000, init_conc=5, bc_top_value=5., bc_top_type='constant', bc_bot_value=0, bc_bot_type='flux')
###Output
_____no_output_____
###Markdown
Add Q10 factor:
###Code
ftc.rates['R1'] = 'Q10**((Temperature-5)/10) * k_OM * OM * O2 / (Km_O2 + O2)'
ftc.rates['R2'] = 'Q10**((Temperature-5)/10) * k_OM * OM * FeOH3 / (Km_FeOH3 + FeOH3) * Km_O2 / (Km_O2 + O2)'
ftc.rates['R8'] = 'k8 * O2 * Fe2'
###Output
_____no_output_____
###Markdown
ODEs for specific species:
###Code
ftc.dcdt['OM'] = '-R1-R2'
ftc.dcdt['O2'] = '-R1-R8'
ftc.dcdt['FeOH3'] = '-4*R2+R8/CF'
ftc.dcdt['Fe2'] = '-R8+4*R2*CF'
ftc.dcdt['TIC'] = 'R1+R2*CF'
###Output
_____no_output_____
###Markdown
Because we are changing the boundary conditions for temperature and oxygen over time (when T < 0 there is no oxygen or CO2 flux at the top), we need an explicit time loop:
###Code
# %pdb
for i in range(1, len(ftc.time)):
day_of_bi_week = (ftc.time[i]*365) % 14
if day_of_bi_week < 7:
ftc.Temperature.bc_top_value = 5 + 5 * np.sin(np.pi * 2 * ftc.time[i] * 365)
else:
ftc.Temperature.bc_top_value = -10 + 5 * np.sin(np.pi * 2 * ftc.time[i] * 365)
# when T < 0 => 0 flux of oxygen and CO2 at the top:
if ftc.Temperature.bc_top_value < 0:
ftc.change_boundary_conditions('O2', i, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux')
ftc.change_boundary_conditions('CO2g', i, bc_top_value=0, bc_top_type='flux', bc_bot_value=0, bc_bot_type='flux')
else:
ftc.change_boundary_conditions('O2', i, bc_top_value=0.231, bc_top_type='constant', bc_bot_value=0, bc_bot_type='flux')
ftc.change_boundary_conditions('CO2g', i, bc_top_value=0, bc_top_type='constant', bc_bot_value=0, bc_bot_type='flux')
# Integrate one timestep:
ftc.integrate_one_timestep(i)
###Output
Boundary conditions changed for O2 at time 9.996297667530544e-05
Boundary conditions changed for CO2g at time 9.996297667530544e-05
Boundary conditions changed for O2 at time 0.019192891521658643
Boundary conditions changed for CO2g at time 0.019192891521658643
Boundary conditions changed for O2 at time 0.03838578304331729
Boundary conditions changed for CO2g at time 0.03838578304331729
Boundary conditions changed for O2 at time 0.05757867456497593
Boundary conditions changed for CO2g at time 0.05757867456497593
###Markdown
What we did with temperature
###Code
ftc.plot_depths("Temperature",[0,1,3,7,10,40])
###Output
_____no_output_____
###Markdown
Concentrations of different species during the whole period of simulation:
###Code
ftc.plot_contourplots()
###Output
_____no_output_____
###Markdown
The rates of consumption and production of species:
###Code
ftc.reconstruct_rates()
ftc.plot_contourplots_of_rates()
ftc.plot_contourplots_of_deltas()
###Output
_____no_output_____
###Markdown
Profiles at the end of the simulation
###Code
Fx = ftc.estimate_flux_at_top('CO2g')
ftc.custom_plot(ftc.time*365, 1e+3*Fx*1e+4/365/24/60/60,x_lbl='Days, [day]' , y_lbl='$F_{CO_2}$, $[\mu mol$ $m^{-2}$ $s^{-1}]$')
Fxco2 = 1e+3*Fx*1e+4/365/24/60/60
Fxco2nz = (ftc.time*365<7)*Fxco2 + ((ftc.time*365>14) & (ftc.time*365<21))*Fxco2
import seaborn as sns
fig, ax1 = plt.subplots(figsize=(5,3), dpi=200)
ax2 = ax1.twinx()
ax1.plot(ftc.time*365, Fxco2nz, label='$F_{CO_2}$', lw=3)
ax2.plot(ftc.time*365, ftc.Temperature.concentration[0, :], 'k', lw=1, label='T at 0 cm')
ax2.plot(ftc.time*365, ftc.Temperature.concentration[100, :], ls='-', c=sns.color_palette("deep", 10)[3], lw=2, label='T at 20 cm')
# ax1.scatter(NO3_t, NO3, c=sns.color_palette("deep", 10)[0], lw=1)
ax2.grid(False)
ax1.grid(lw=0.2)
ax2.set_ylim(-20, 20)
ax1.set_xlim(0, 27)
ax1.set_xlabel('Time, [days]')
ax1.set_ylabel('$CO_2(g)$ flux, $[\mu mol$ $m^{-2}$ $s^{-1}]$')
ax2.set_ylabel('Temperature, [C]')
ax1.set_ylim(0, 20)
ax1.legend(frameon=1, loc=2)
ax2.legend(frameon=1, loc=1)
import math
from matplotlib.colors import ListedColormap
lab = ftc
element = 'Fe2'
labels=False
days=False
last_year=False
plt.figure(figsize=(5,3), dpi=200)
# plt.title('$Fe(II)$ concentration')
resolution = 100
n = math.ceil(lab.time.size / resolution)
if last_year:
k = n - int(1 / lab.dt)
else:
k = 1
if days:
X, Y = np.meshgrid(lab.time[k::n] * 365, -lab.x)
plt.xlabel('Time')
else:
X, Y = np.meshgrid(lab.time[k::n] * 365, -lab.x)
plt.xlabel('Time, [days]')
z = lab.species[element]['concentration'][:, k - 1:-1:n]
CS = plt.contourf(X, Y, z, 51, cmap=ListedColormap(
sns.color_palette("Blues", 51)), origin='lower')
if labels:
plt.clabel(CS, inline=1, fontsize=10, colors='w')
cbar = plt.colorbar(CS)
plt.ylabel('Depth, [cm]')
ax = plt.gca()
ax.ticklabel_format(useOffset=False)
cbar.ax.set_ylabel('%s, [mM]' % element)
plt.figure(figsize=(5,3), dpi=200)
r='R2'
n = math.ceil(lab.time.size / resolution)
if last_year:
k = n - int(1 / lab.dt)
else:
k = 1
z = lab.estimated_rates[r][:, k - 1:-1:n]
# lim = np.max(np.abs(z))
# lim = np.linspace(-lim - 0.1, +lim + 0.1, 51)
X, Y = np.meshgrid(lab.time[k::n], -lab.x)
plt.xlabel('Time, [days]')
CS = plt.contourf(X*365, Y, z/365, 20, cmap=ListedColormap(
sns.color_palette("Blues", 51)))
if labels:
plt.clabel(CS, inline=1, fontsize=10, colors='w')
cbar = plt.colorbar(CS)
plt.ylabel('Depth, [cm]')
ax = plt.gca()
ax.ticklabel_format(useOffset=False)
cbar.ax.set_ylabel(r'Rate R2, [$mM$ $d^{-1}$]')
plt.figure(figsize=(5,3),dpi=200)
element='FeOH3'
resolution = 100
n = math.ceil(lab.time.size / resolution)
if last_year:
k = n - int(1 / lab.dt)
else:
k = 1
z = lab.species[element]['rates'][:, k - 1:-1:n]/365
lim = np.max(np.abs(z))
lim = np.linspace(-lim, +lim, 51)
X, Y = np.meshgrid(lab.time[k:-1:n], -lab.x)
plt.xlabel('Time, [days]')
CS = plt.contourf(X*365, Y, z, 20, cmap=ListedColormap(sns.color_palette(
"RdBu_r", 101)), origin='lower', levels=lim, extend='both')
if labels:
plt.clabel(CS, inline=1, fontsize=10, colors='w')
cbar = plt.colorbar(CS)
plt.ylabel('Depth, [cm]')
ax = plt.gca()
ax.ticklabel_format(useOffset=False)
cbar.ax.set_ylabel('$\Delta$ $Fe(OH)_3$ [$mM$ $d^{-1}$]')
###Output
_____no_output_____ |
4_vadkarok_hr_parser.ipynb | ###Markdown
Fill in missing date from image EXIF data
###Code
!pip install exifread
no_date=df[df['date'].astype(str)=='NaT']
no_date_yes_image=no_date[no_date['image'].astype(str)!='']
no_date_yes_image
import exifread
import urllib.request
for im_link in no_date_yes_image['image'].values:
im_link=im_link.split(' ')[0] #if multiple images, take only first
print(im_link)
try:
urllib.request.urlretrieve(im_link, "temp.png")
with open('temp.png', 'rb') as fh:
tags = exifread.process_file(fh, stop_tag="EXIF DateTimeOriginal")
if tags:
dateTaken = tags["EXIF DateTimeOriginal"]
print(dateTaken)
else:
print('No EXIF')
    except Exception:  # download or EXIF parsing failed
print('No readable image')
###Output
https://doc-0k-6g-mymaps.googleusercontent.com/untrusted/hostedimage/f6u64nodcabo26320jjaagdfpc/j4pcli5pfmmbes7q30tllaba3o/1580233500000/8vpJ8s-AWmMZHl1okKvYY29jERWnD2LC/*/2AF2TALobI_ipqW5ICbMvGNPKbt3arQBei5-scjRLfl-blvv71F7hO4ennVw-6-206Gk7q1cILxsQno7vHU9NcxiAeNTA6rQT7GlT56CMeEn4BhkBoSpBUkiztnF1qxAabdgzPVO1pvxHRjj5H0cIUibIYNxzd3E48eZ2WI5Gflbib3JHZJzMTw7qT2dH8t-QKGFMoq7EdPnw1uvA_Fr3a2glYJFoeDPiMK2_7QPJRVGwg3C6aF2wiQcXRjtiYfJlWJzmbcw8cUbUoyi0hNWIjPyUVPYNjppGtw?fife
No EXIF
https://doc-0g-6g-mymaps.googleusercontent.com/untrusted/hostedimage/f6u64nodcabo26320jjaagdfpc/mte4p4bnuof32j3hm3v4na9kec/1580233500000/8vpJ8s-AWmMZHl1okKvYY29jERWnD2LC/*/2AF2TALrYq8wIwpYzie0YHfjTBAo3sZCFSf2zsyFkZ2uGMD3WsZiG4IzgO1X1gSnGlit7EJVcQ0elBxxsMmsw8oKzhhDbR2it9bfDhGMteq82J6F1C7AU5O6N_XInLyagczjh64WPqXvruiIeDdN7QhKKjH7w8k7nwJAgfsEgYE6LW_BEohmfI6E7Fa4rMWPKpYU6pDPUWLX8dD1QNWukxsEVBenpGehoZCx10MxagPI3SnvPvmmDiB7qCZWZOMe0yyfGjqx-gN777gk4-zbGbJH1yI4XVWlbrA?fife
No EXIF
https://doc-10-6g-mymaps.googleusercontent.com/untrusted/hostedimage/f6u64nodcabo26320jjaagdfpc/gek80ca1cm1acm6720i0pdd62g/1580233500000/8vpJ8s-AWmMZHl1okKvYY29jERWnD2LC/*/2AF2TALoA9Nn3ZB4NxOnU1KZRII21fW34-36rkz5N5ZGE-86XYVwIQ73yz0eaWr--qikcktkeBI-F51G5gE9UiehPVUItDn_TWZ9FyDYLhW67kpYvswbDl_xVgCL4g7y-01vb2kgyI-kQtpHWdjtrcF7owjl_DXgx2rEZg4F_5bozhpyRLzP0mcBSAR4pO9uLC-gztXHJvJC-Dvehc_9WMQaKq4lKisdMpxHoG0M6hQUbIdIWwHAZ6b8xrY9dCMoaKdU19NfqlVs51Pn-75EDOee9Wl2MVvW99Q?fife
No EXIF
https://doc-00-6g-mymaps.googleusercontent.com/untrusted/hostedimage/f6u64nodcabo26320jjaagdfpc/gj34nqtlk97bsdbiq1d0h46i88/1580233500000/8vpJ8s-AWmMZHl1okKvYY29jERWnD2LC/*/2AF2TALqH6MuVQi3e69mAasPC1i5UcqF4qbTmhnSO2vqb3JUpq9hvkx-TTJGMENhrjUXhmvwjXEoyy3y9YED-jagN6-ttAM3IB9QkZLbp74SfSyi57tbxyC_S7J3ac-FxPzVbxRJvqNuSnurQQt2jbNuL1hkeYSNJeiVvCsWn92Q60VEvushsc1573158xbhi3vyemJVTeMfzoKtcQqTmPoqNx9llYQ7T4tykkmJ0O66iKorLAgU8LvbanrezEBxy-s8-iPftlr87dQCQpaWOMj_fCo5tmEX7LA?fife
No EXIF
https://doc-0s-6g-mymaps.googleusercontent.com/untrusted/hostedimage/f6u64nodcabo26320jjaagdfpc/1ijrusdr1dk01fr0504b0q8v40/1580233500000/8vpJ8s-AWmMZHl1okKvYY29jERWnD2LC/*/2AF2TALoOozmeoZLILpxPdeHrmS3exUO9l1bXxTz8cz9qj86PgPcUq92mUA-Vey4F3UZajLz_UzkpuLfOeDigsshQuhRT5lc3i1HAtkLGZo4GP0lOXPvSPlt7Ue6R9hopF2TUqAJJYB92X0FC_tnt8Qh4D1ocKoh01-SikpVv1sYQsi4zYihGdyqnoNqGs2rOrw6mayrYjLa8933LNTBQ0o4G73VKzOp5LZrDmeGQn6ZqlBcuSFbhEnnNtjVzczCTV9_z5cCvXRfGqWs38EqE51CjauGlAATtwQ?fife
No EXIF
https://doc-10-6g-mymaps.googleusercontent.com/untrusted/hostedimage/f6u64nodcabo26320jjaagdfpc/ld2e349ubao0lq2tjdtn3j5p14/1580233500000/8vpJ8s-AWmMZHl1okKvYY29jERWnD2LC/*/2AF2TALqqtblsJpdU5q6eU9wFUfgL9WpYkYvE3DU_IxUC3kA8K9sDBOo27rTEkW3LWkJX4xsCqfbQt3vth4qbAIM2BpuqbCWosejUEatZlnfBaGpJGcs8rAvBAo_vSiJtCFWyN1eCV_iDZL-U_4vKRbVok-F4Q7gG58sOSCbYPJR6Dsu56hkvuqq8FOqlpBRqwpZ-nhvBsh7HuzytJsYRhJxKgGxJi2WXaakyjbYlGjJRwRZ256wVG3B97hrJAR9hDBjjfPiHf0s-iQ9w9nk1L_fQpiLTUqVAHQ?fife
No EXIF
|
concerts_mlr.ipynb | ###Markdown
Multiple lineare Regression Fallbeispiel: Konzert- Ein Club organisiert regelmäßig Konzerte- Um den Umsatz zu optimieren möchten die Konzertveranstalter herausfinden, welche Faktoren zum Erfolg (Anzahl Besucher) eines Konzertes beitragen- Aus ihrer langjährigen Erfahrung wissen sie, dass der Erfolg unter anderem vom Ticketpreis (in €), dem Werbeaufwand (in €), sowie dem Erfolg der Band (Anzahl verkaufter CDs) abhängt- Dies möchte der Club nun statistisch überprüfen, um künftig den Erfolg eines Konzertes im Voraus besser abschätzen zu können
###Code
import pandas as pd
import statsmodels.api as sm
###Output
C:\Users\Dominik\Anaconda3\lib\site-packages\statsmodels\compat\pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.
from pandas.core import datetools
###Markdown
DatasetThe dataset to be analyzed contains, besides an identification number for the event (ID), the number of visitors (Besucher), the ticket price (Preis), the advertising spend (Werbung), and the number of CDs sold (CD_Verkauf)
###Code
concert = pd.read_csv('https://github.com/Wurmloch/MultipleLinearRegression/raw/master/concert_mlr.csv', delimiter=';', index_col=0)
concert
independent_vars = concert[['PREIS', 'WERBUNG', 'CD_VERKAUF']]
dependent_var = concert['BESUCHER']
###Output
_____no_output_____
###Markdown
Regression modelThe regression model is fitted with statsmodels. The estimation is given the dependent variable as well as the independent variables. For this, statsmodels applies ordinary least squares (OLS), which in this case regresses the number of visitors on the price, the advertising spend, and the number of CDs sold.
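Written out, the model being fitted is $$\text{BESUCHER} = \beta_0 + \beta_1\,\text{PREIS} + \beta_2\,\text{WERBUNG} + \beta_3\,\text{CD\_VERKAUF} + \varepsilon.$$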
###Code
independent_vars = sm.add_constant(independent_vars)
estimation = sm.OLS(dependent_var, independent_vars.astype(float)).fit()
estimation.summary()
###Output
_____no_output_____
###Markdown
Alternative
###Code
import statsmodels.formula.api as smf
# formula: response ~ predictor + predictor
estimation_alternative = smf.ols(formula='BESUCHER ~ PREIS + WERBUNG + CD_VERKAUF', data=concert).fit()
estimation_alternative.summary()
###Output
_____no_output_____
###Markdown
Visualization FunctionFirst, the grid for the plot with PREIS, WERBUNG, and CD_VERKAUF is created. The hyperplane is then constructed by substituting the estimated parameter values into the multiple linear regression formula.
###Code
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
# grid for the 3d plot
xx1, xx2, xx3 = np.meshgrid(np.linspace(independent_vars.PREIS.min(), independent_vars.PREIS.max(), 100),
np.linspace(independent_vars.WERBUNG.min(), independent_vars.WERBUNG.max(), 100),
np.linspace(independent_vars.CD_VERKAUF.min(), independent_vars.CD_VERKAUF.max(), 100))
# to produce the 3d plots, fit each two-predictor estimation separately
estimation_1 = smf.ols(formula='BESUCHER ~ PREIS + WERBUNG', data=concert).fit()
estimation_2 = smf.ols(formula='BESUCHER ~ WERBUNG + CD_VERKAUF', data=concert).fit()
estimation_3 = smf.ols(formula='BESUCHER ~ PREIS + CD_VERKAUF', data=concert).fit()
xx1_1, xx2_1 = np.meshgrid(np.linspace(independent_vars.PREIS.min(), independent_vars.PREIS.max(), 100),
np.linspace(independent_vars.WERBUNG.min(), independent_vars.WERBUNG.max(), 100))
xx2_2, xx3_2 = np.meshgrid(np.linspace(independent_vars.WERBUNG.min(), independent_vars.WERBUNG.max(), 100),
np.linspace(independent_vars.CD_VERKAUF.min(), independent_vars.CD_VERKAUF.max(), 100))
xx1_3, xx3_3 = np.meshgrid(np.linspace(independent_vars.PREIS.min(), independent_vars.PREIS.max(), 100),
np.linspace(independent_vars.CD_VERKAUF.min(), independent_vars.CD_VERKAUF.max(), 100))
# hyperplane for the plot, obtained by evaluating the fitted parameters on the grid
Z = estimation.params[0] + estimation.params[1] * xx1 + estimation.params[2] * xx2 + estimation.params[3] * xx3
Z_1 = estimation_1.params[0] + estimation_1.params[1] * xx1_1 + estimation_1.params[2] * xx2_1
Z_2 = estimation_2.params[0] + estimation_2.params[1] * xx2_2 + estimation_2.params[2] * xx3_2
Z_3 = estimation_3.params[0] + estimation_3.params[1] * xx1_3 + estimation_3.params[2] * xx3_3
###Output
_____no_output_____
###Markdown
Plot Visitors as a function of advertising and price
###Code
# create the matplotlib 3d axes
fig_1 = plt.figure(figsize=(12, 8))
ax_1 = Axes3D(fig_1)
# plot the hyperplane
surface_1 = ax_1.plot_surface(xx1_1, xx2_1, Z_1, cmap=plt.cm.RdBu_r, alpha=0.6, linewidth=0)
# plot the data points - points above the hyperplane are white, points below black
# residuals = dependent_var - estimation.predict(independent_vars)
# ax_1.scatter(independent_vars[residuals >= 0].PREIS, independent_vars[residuals >= 0].WERBUNG, dependent_var[residuals >= 0], color='black', alpha=1.0, facecolor='white')
# ax_1.scatter(independent_vars[residuals < 0].PREIS, independent_vars[residuals < 0].WERBUNG, dependent_var[residuals < 0], color='black', alpha=1.0)
ax_1.set_xlabel('PREIS')
ax_1.set_ylabel('WERBUNG')
ax_1.set_zlabel('BESUCHER')
###Output
_____no_output_____
###Markdown
Visitors as a function of advertising and CD sales
###Code
fig_2 = plt.figure(figsize=(12, 8))
ax_2 = Axes3D(fig_2, azim=-115, elev=15)
surface_2 = ax_2.plot_surface(xx2_2, xx3_2, Z_2, cmap=plt.cm.RdBu_r, alpha=0.6, linewidth=0)
# residuals = dependent_var - estimation.predict(independent_vars)
# ax_2.scatter(independent_vars[residuals >= 0].WERBUNG, independent_vars[residuals >= 0].CD_VERKAUF, dependent_var[residuals >= 0], color='black', alpha=1.0, facecolor='white')
# ax_2.scatter(independent_vars[residuals < 0].WERBUNG, independent_vars[residuals < 0].CD_VERKAUF, dependent_var[residuals < 0], color='black', alpha=1.0)
ax_2.set_xlabel('WERBUNG')
ax_2.set_ylabel('CD_VERKAUF')
ax_2.set_zlabel('BESUCHER')
###Output
_____no_output_____
###Markdown
Visitors as a function of price and CD sales
###Code
fig_3 = plt.figure(figsize=(12, 8))
ax_3 = Axes3D(fig_3, azim=-115, elev=15)
surface_3 = ax_3.plot_surface(xx1_3, xx3_3, Z_3, cmap=plt.cm.RdBu_r, alpha=0.6, linewidth=0)
# residuals = dependent_var - estimation.predict(independent_vars)
# ax_3.scatter(independent_vars[residuals >= 0].PREIS, independent_vars[residuals >= 0].CD_VERKAUF, dependent_var[residuals >= 0], color='black', alpha=1.0, facecolor='white')
# ax_3.scatter(independent_vars[residuals < 0].PREIS, independent_vars[residuals < 0].CD_VERKAUF, dependent_var[residuals < 0], color='black', alpha=1.0)
ax_3.set_xlabel('PREIS')
ax_3.set_ylabel('CD_VERKAUF')
ax_3.set_zlabel('BESUCHER')
###Output
_____no_output_____ |
notebooks/spin_orbit_interaction.ipynb | ###Markdown
Spin-Orbit Interaction Imports
###Code
from IPython.display import display
from sympy import init_printing
init_printing(use_latex=True)
from sympy import factor, pi, S, Sum, symbols
from sympy.physics.quantum.spin import (
Jminus, Jx, Jz, J2, J2Op, JzKet, JzKetCoupled, Rotation, WignerD, couple, uncouple
)
from sympy.physics.quantum import (
Dagger, hbar, qapply, represent, TensorProduct
)
###Output
_____no_output_____
###Markdown
Symbolic calculation If we start with a hydrogen-like atom, i.e. a nucleus of charge $Ze$ orbited by a single electron of charge $e$ with reduced mass $\mu$, ignoring energy from center-of-mass motion, we can write the Hamiltonian in terms of the relative momentum, $p$, and position, $r$, as:$$H=\frac{p^2}{2\mu} - \frac{Ze^2}{r}$$The resulting eigenfunctions have separate radial and angular components, $\psi=R_{n,l}(r)Y_{l,m}(\phi,\theta)$. While the radial component is a complicated function involving Laguerre polynomials, the angular part is given by the familiar spherical harmonics with orbital angular momentum $\vec{L}$, where $l$ and $m$ give the orbital angular momentum quantum numbers. We represent this as an angular momentum state:
###Code
l, ml = symbols('l m_l')
orbit = JzKet(l, ml)
orbit
###Output
_____no_output_____
###Markdown
Now, the spin orbit interaction arises from the electron experiencing a magnetic field as it orbits the electrically charged nucleus. This magnetic field is:$$\vec{B} = \frac{1}{c}\frac{Ze\vec{v}\times\vec{r}}{r^3} = \frac{Ze\vec{p}\times\vec{r}}{mcr^3}=\frac{Ze\vec{L}}{mc\hbar r^3}$$Then the spin-orbit Hamiltonian can be written, using the electron's magnetic dipole moment $\mu$, as:$$H_{SO} = -\vec{\mu}\cdot\vec{B} = -\left(-\frac{g\mu_B \vec{S}}{\hbar}\right)\cdot\left(\frac{Ze\vec{L}}{mc\hbar r^3}\right)$$Ignoring the radial term:$$\propto \vec{L}\cdot\vec{S} = J^2 - L^2 - S^2$$for $\vec{J}$, the coupled angular momentum.The electron spin angular momentum is given as $\vec{S}$, where the spin wavefunction is:
###Code
ms = symbols('m_s')
spin = JzKet(S(1)/2, ms)
spin
###Output
_____no_output_____
###Markdown
From this we build our uncoupled state:
###Code
state = TensorProduct(orbit, spin)
state
###Output
_____no_output_____
###Markdown
For clarity we will define $L^2$ and $S^2$ operators. These behave the same as `J2`; they only display differently.
###Code
L2 = J2Op('L')
S2 = J2Op('S')
###Output
_____no_output_____
###Markdown
We also have the spin-orbit Hamiltonian:
###Code
hso = J2 - TensorProduct(L2, 1) - TensorProduct(1, S2)
hso
###Output
_____no_output_____
###Markdown
Now we apply this to our state:
###Code
apply1 = qapply(hso*state)
apply1
###Output
_____no_output_____
###Markdown
Note this has not applied the coupled $J^2$ operator to the states, so we couple the states and apply again:
###Code
apply2 = qapply(couple(apply1))
apply2
###Output
_____no_output_____
###Markdown
We now collect the terms of the sum, since they share the same limits, and factor the result:
###Code
subs = []
for sum_term in apply2.atoms(Sum):
subs.append((sum_term, sum_term.function))
limits = sum_term.limits
final = Sum(factor(apply2.subs(subs)), limits)
final
###Output
_____no_output_____ |
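###Markdown
As a quick numeric check of the factored result: for $l = 1$ the eigenvalue of $J^2 - L^2 - S^2$ acting on a coupled state $|j,m_j\rangle$ is $j(j+1) - l(l+1) - s(s+1)$ in units of $\hbar^2$, i.e. $1$ for $j = 3/2$ and $-2$ for $j = 1/2$.
###Code
from sympy import Rational

# evaluate j(j+1) - l(l+1) - s(s+1) for l = 1, s = 1/2 (in units of hbar**2)
l_val, s_val = 1, Rational(1, 2)
for j in [Rational(3, 2), Rational(1, 2)]:
    eig = j*(j + 1) - l_val*(l_val + 1) - s_val*(s_val + 1)
    print(f"j = {j}: {eig}")
###Output
_____no_output_____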
_build/jupyter_execute/ipynb/04b-plotagem-matplotlib.ipynb | ###Markdown
Plotagem básica com _matplotlib_ Visualização de dados A visualização de dados é um campo do conhecimento bastante antigo que foi trazido à mostra muito recentemente com a expansão do "Big Data". Seu principal objetivo é representar dados e informações graficamente por meio de elementos visuais como tabelas, gráficos, mapas e infográficos. Diversas ferramentas estão disponíveis para tornar a interpretação de dados mais clara, compreensível e acessível. No contexto da análise de dados, a visualização de dados é um componente fundamental para a criação de relatórios de negócios, painéis de instrumentos (_dashboards_) e gráficos multidimensionais que são aplicáveis às mais diversas disciplinas, tais como Economia, Ciência Política e, principalmente, todo o núcleo de ciências exatas (Matemática, Estatística e Computação). Em seu livro _The Visual Display of Quantitative Information_, [[Edward Tufte]](https://www.edwardtufte.com/tufte/), conhecido como o guru do _design_ aplicado à visualização de dados, afirma que, a cada ano, o mundo produz algo entre 900 bilhões e 2 trilhões de imagens impressas de gráficos. Ele destaca que o _design_ de um gráfico estatístico, por exemplo, é uma matéria universal similar à Matemática e não está atrelado a características únicas de uma linguagem particular. Portanto, aprender visualização de dados para comunicar dados com eficiência é tão importante quanto aprender a Língua Portuguesa para escrever melhor. Você pode ver uma lista sugestiva de bons blogues e livros sobre visualização de dados nas páginas de aprendizagem do software Tableau [[TabelauBlogs]](https://www.tableau.com/learn/articles/best-data-visualization-blogs), [[TabelauBooks]](https://www.tableau.com/learn/articles/books-about-data-visualization). _Data storytelling_ _Data Storytelling_ é o processo de "contar histórias através dos dados". [[Cole Knaflic]](http://www.storytellingwithdata.com), uma engenheira de dados do Google, ao perceber como a quantidade de informação produzida no mundo às vezes é muito mal lida e comunicada, escreveu dois *best-sellers* sobre este tema a fim de ajudar pessoas a comunicarem melhor seus dados e produtos quantitativos. Ela argumenta em seu livro *Storytelling with Data: A Data Visualization Guide for Business Professionals* (*Storytelling com Dados: um Guia Sobre Visualização de Dados Para Profissionais de Negócios*, na versão em português) que não somos inerentemente bons para "contar uma história" através dos dados. Cole mostra com poucas lições o que devemos aprender para atingir uma comunicação eficiente por meio da visualização de dados. Plotagem matemática_Plotagem_ é o termo comumente empregado para o esboço de gráficos de funções matemáticas via computador. Plotar gráficos é uma das tarefas que você mais realizará como futuro(a) cientista ou analista de dados. Nesta aula, nós introduziremos você ao universo da plotagem de gráficos em duas dimensões e ensinar como você pode visualizar dados facilmente com a biblioteca *matplotlib*. Daremos uma visão geral principalmente sobre a plotagem de funções matemáticas utilizando *arrays* e recursos de computação vetorizada com *numpy* já aprendidos. Ao longo do curso, você aprenderá a fazer plotagens mais interessantes de cunho estatístico. A biblioteca *matplotlib**Matplotlib* é a biblioteca Python mais conhecida para plotagem 2D (bidimensional) de *arrays*. Sua filosofia é simples: criar plotagens simples com apenas alguns comandos, ou apenas um. 
John Hunter [[History]](https://matplotlib.org/users/history.html), falecido em 2012, foi o autor desta biblioteca. Em 2008, ele escreveu que, enquanto buscava uma solução em Python para plotagem 2D, ele gostaria de ter, entre outras coisas:- gráficos bonitos com pronta qualidade para publicação;- capacidade de incorporação em interfaces gráficas para desenvolvimento de aplicações;- um código fácil de entender e de manusear.O *matplotlib* é um código dividido em três partes: 1. A interface *pylab*: um conjunto de funções predefinidas no submódulo `matplotlib.pyplot`.2. O *frontend*: um conjunto de classes responsáveis pela criação de figuras, textos, linhas, gráficos etc. No *frontend*, todos os elementos gráficos são objetos ainda abstratos.3. O *backend*: um conjunto de renderizadores responsáveis por converter os gráficos para dispositivos onde eles podem ser, de fato, visualizados. A [[renderização]](https://pt.wikipedia.org/wiki/Renderização) é o produto final do processamento digital. Por exemplo, o *backend* PS é responsável pela renderização de [[PostScript]](https://www.adobe.com/br/products/postscript.html). Já o *backend* SVG constroi gráficos vetoriais escaláveis ([[Scalable Vector Graphics]](https://www.w3.org/Graphics/SVG/).Veja o conceito de [[Canvas]](https://en.wikipedia.org/wiki/Canvas_(GUI)). Sessões interativas do *matplotlib*Sessões interativas do *matplotlib* são habilitadas através de um [[comando mágico]](https://ipython.readthedocs.io/en/stable/interactive/magics.html):- Em consoles, use `%matplotlib`;- No Jupyter notebook, use `%matplotlib inline`.Lembre que na aula anterior usamos o comando mágico `%timeit` para temporizar operações.Para usar plenamente o matplotlib nesta aula, vamos usar:```python%matplotlib inlinefrom matplotlib import pyplot as plt```A segunda instrução também pode ser feita como ```pythonimport matplotlib.pyplot as plt```em que `plt` é um *alias* já padronizado.
###Code
# chamada padrão
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Creating simple plotsLet's import *numpy* to use the benefits of vectorized computation and plot our first examples.
###Code
import numpy as np
x = np.linspace(-10,10,50)
y = x
plt.plot(x,y); # line y = x
###Output
_____no_output_____
###Markdown
**Example:** plot the graph of the parabola $f(x) = ax^2 + bx + c$ for arbitrary values of $a,b,c$ in the interval $-20 \leq x \leq 20$.
###Code
x = np.linspace(-20,20,50)
a,b,c = 2,3,4
y = a*x**2 + b*x + c # f(x)
plt.plot(x,y);
###Output
_____no_output_____
###Markdown
We can define a function to plot the parabola:
###Code
def plota_parabola(a,b,c):
    x = np.linspace(-20,20,50)
    y = a*x**2 + b*x + c
    plt.plot(x,y)
###Output
_____no_output_____
###Markdown
Now we can study what each coefficient does:
###Code
# vary a, with b = 2, c = 1 fixed
for a in np.linspace(-2,3,10):
    plota_parabola(a,2,1)
# vary b, with a = 2, c = 1 fixed
for b in np.linspace(-2,3,20):
    plota_parabola(2,b,1)
# vary c, with a = 2, b = 1 fixed
for c in np.linspace(-2,3,10):
    plota_parabola(2,1,c) # why don't you see many changes?
# vary a, b and c together
valores = np.linspace(-2,3,5)
for a in valores:
    for b in valores:
        for c in valores:
            plota_parabola(a,b,c)
###Output
_____no_output_____
###Markdown
**Example:** plot the graph of the function $g(t) = a\cos(bt + \pi)$ for arbitrary values of $a$ and $b$ in the interval $0 \leq t \leq 2\pi$.
###Code
t = np.linspace(0,2*np.pi,50,endpoint=True) # t: angle
a, b = 1, 1
plt.plot(t,a*np.cos(b*t + np.pi));
b = 2
plt.plot(t,a*np.cos(b*t + np.pi));
b = 3
plt.plot(t,a*np.cos(b*t + np.pi));
###Output
_____no_output_____
###Markdown
The colors and markers in the plots above are all defaults. Let's see how to change all of this. Changing line properties and styles Change: - colors with `color` or `c`, - line width with `linewidth` or `lw`- line style with `linestyle` or `ls`- the marker symbol with `marker`- the marker edge width with `markeredgewidth` or `mew`- the marker edge color with `markeredgecolor` or `mec`- the marker face color with `markerfacecolor` or `mfc`- transparency with `alpha`, in the range [0,1]
###Code
g = lambda a,b: a*np.cos(b*t + np.pi) # assumes t from the previous cell
# study each example
# the order of the 3rd argument onward may vary
plt.plot(t,g(1,1),color='c',linewidth=5,linestyle='-.',alpha=.3)
plt.plot(t,g(1,2),c='g',ls='-',lw='.7',marker='s',mfc='y',ms=8)
plt.plot(t,g(1,3),c='#e26d5a',ls=':', marker='d',mec='k',mew=2.0);
###Output
_____no_output_____
###Markdown
Colors and line styles can also be specified in shorthand, in different orders, using a format specifier.
###Code
plt.plot(t,g(1,1),'yv') # yellow; triangle down;
plt.plot(t,g(1,2),':c+') # dotted; cyan; plus;
plt.plot(t,-g(2,2),'>-.r'); # triangle right; dash-dot; red;
###Output
_____no_output_____
###Markdown
Multiple plots The example above could also be produced as a single multiple-plot call with 3 blocks of the form (`x,y,'fmt'`), where `x` and `y` hold the coordinate-axis data and `fmt` is a format string.
###Code
plt.plot(t,g(1,1),'yv', t,g(1,2),':c+', t,-g(2,2),'>-.r'); # 3 plot blocks in sequence
###Output
_____no_output_____
###Markdown
To see all the available line property and style options, check `plt.plot?`. Specifying figures Use `plt.figure` to create a figure environment and change:- the width and height (in inches) with `figsize = (width,height)`. The default is (6.4,4.8).- the resolution (in dots per inch) with `dpi`. The default is 100.- the background color with `facecolor`. The default is `w` (white). **Example:** Plot the graphs of $h_1(x) = a\sqrt{x}$ and $h_2(x) = be^{\frac{x}{c}}$ for free choices of $a,b,c$ and of the properties above.
###Code
x = np.linspace(0,10,50,endpoint=True)
h1, h2 = lambda a: a*np.sqrt(x), lambda b,c: b*np.exp(x/c)
plt.figure(figsize=(8,6), dpi=200, facecolor='#e0eeee')
plt.plot(x,h1(.9),x,h2(1,9));
###Output
_____no_output_____
###Markdown
Changing axis limits and tick marks Change: - the `x`-axis range with `xlim` - the `y`-axis range with `ylim`- the `x`-axis tick marks with `xticks` - the `y`-axis tick marks with `yticks`
###Code
plt.plot(x,h1(.9),x,h2(1,9)); plt.xlim(1.6,9.2); plt.ylim(1.0,2.8);
plt.figure(figsize=(10,8))
plt.plot(t,g(1,3),c=[0.1,0.4,0.5],marker='s',mfc='w',mew=2.0);
plt.plot(t,g(1.2,2),c=[1.0,0.5,0.0],ls='--',marker='>',mfc='c',mew=1.0,ms=10);
plt.xticks([0, np.pi/2,np.pi,3*np.pi/2,2*np.pi]); # list of multiples of pi
plt.yticks([-1, 0, 1]); # 3 values on y
###Output
_____no_output_____
###Markdown
Specifying tick label text We can change the `ticks` labels by passing indicative text. In the previous case, something like this would be better:
###Code
plt.figure(figsize=(10,8))
plt.plot(t,g(1,3),c=[0.1,0.4,0.5],marker='s',mfc='w',mew=2.0);
plt.plot(t,g(1.2,2),c=[1.0,0.5,0.0],ls='--',marker='>',mfc='c',mew=1.0,ms=10);
# the $...$ pair renders the numbers with TeX
plt.xticks([0, np.pi/2,np.pi,3*np.pi/2,2*np.pi], ['$0$','$\pi/2$','$\pi$','$3/2\pi$','$2\pi$']);
plt.yticks([-1, 0, 1], ['$y = -1$', '$y = 0$', '$y = +1$']);
###Output
_____no_output_____
###Markdown
Moving the main axes The main axes can be moved to other, arbitrary positions, and the borders of the plotting area can be turned off, using the `spines`.
###Code
# plot the function
x = np.linspace(-3,3)
plt.plot(x,x**1/2*np.sin(x)-0.5); # f(x) = (x/2)*sin(x) - 1/2  (note: x**1/2 is (x**1)/2, not sqrt(x))
ax = plt.gca()
ax.spines['right'].set_color('none') # remove the right border
ax.spines['top'].set_color('none') # remove the top border
ax.spines['bottom'].set_position(('data',0)) # move the x-axis to y = 0
ax.spines['left'].set_position(('data',0)) # move the y-axis to x = 0
ax.xaxis.set_ticks_position('top') # move the tick marks to the top
ax.yaxis.set_ticks_position('right') # move the tick marks to the right
plt.xticks([-2,0,2]) # change the x ticks
ax.set_xticklabels(['left','zero','right']) # change the x tick labels
plt.yticks([-0.4,0,0.4]) # change the y ticks
ax.set_yticklabels(['upper','zero','lower']); # change the y tick labels
###Output
_____no_output_____
###Markdown
Adding legends To create:- a legend for the plotted lines, use `legend`.- a label for the x-axis, use `xlabel`- a label for the y-axis, use `ylabel`- a title for the plot, use `title` **Example:** plot the graphs of the lines $f_1(x) = x + 1$ and $f_2(x) = 1 - x$ and add a legend with blue and orange colors.
###Code
plt.plot(x, x + 1,'-b', label = 'y = x + 1' )
plt.plot(x, 1-x, c = [1.0,0.5,0.0], label = 'y = 1 - x'); # orange: 100% red, 50% green
plt.legend(loc = 'best') # loc='best': best legend placement
plt.xlabel('x'); plt.ylabel('y'); plt.title('Graph of two lines');
###Output
_____no_output_____
###Markdown
Legend placement Use `loc=value` to specify where to place the legend. Check `plt.legend?` for the available options for `value`. See the table of `Location String` and `Location Code` values.
###Code
plt.plot(np.nan,np.nan,label='upper right'); # nan : not a number
plt.legend(loc=1); # using a number
plt.plot(np.nan,np.nan,label='loc=1');
plt.legend(loc='upper right'); # using the corresponding string
###Output
_____no_output_____
###Markdown
Changing the font size To change the font size of legends and labels, use `fontsize`.
###Code
plt.plot(np.nan,np.nan,label='legend');
FSx, FSy, FSleg, FStit = 10, 20, 30, 40
plt.xlabel('x axis',c='b', fontsize=FSx)
plt.ylabel('y axis',c='g', fontsize=FSy)
plt.legend(loc='center', fontsize=FSleg);
plt.title('Title', c='c', fontsize=FStit);
###Output
_____no_output_____
###Markdown
Simple annotations We can add annotations to plots with the function `annotate(text, xref, yref)`
###Code
plt.plot(np.nan,np.nan);
plt.annotate('P (0.5,0.5)',(0.5,0.5));
plt.annotate('Q (0.1,0.8)',(0.1,0.8));
###Output
_____no_output_____
###Markdown
**Example**: generate a set of 10 random points $(x,y)$ with $0.2 < x,y < 0.8$ and annotate them on the plane.
###Code
# generate a list of 10 points satisfying the condition
P = []
while len(P) != 10:
    xy = np.round(np.random.rand(2),1)
    test = np.all( (xy > 0.2) & (xy < 0.8) )
    if test:
        P.append(tuple(xy))
# plot the plane
plt.figure(figsize=(8,8))
plt.xlim(0,1)
plt.ylim(0,1)
for ponto in P:
    plt.plot(ponto[0],ponto[1],'o')
    plt.annotate(f'({ponto[0]},{ponto[1]})',ponto,fontsize=14)
###Output
_____no_output_____
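A hint for the exercise posed just below: `np.round(..., 1)` can produce the same pair more than once, so `P` may hold duplicate tuples that are plotted on top of each other. A minimal fix (one possibility among several) collects the points in a set so that only distinct pairs are kept:

```python
import numpy as np

P = set()                                # a set keeps only distinct points
while len(P) != 10:
    xy = np.round(np.random.rand(2), 1)
    if np.all((xy > 0.2) & (xy < 0.8)):
        P.add(tuple(xy))                 # duplicates are silently ignored
P = list(P)                              # back to a list for plotting
```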
###Markdown
**Problem:** the code above has an issue. Check that `len(P) = 10`, and yet it does not plot the 10 points we would like to see. Find out what is happening and propose a solution (one possible fix is sketched right after the cell above). Multiple plots and axes In matplotlib, we can use the function `subplot(m,n,p)` to create multiple figures and independent axes, as if each figure were an element of a large "matrix of figures" with `m` rows and `n` columns, while `p` is the figure index (this value is at most the product `m*n`). The function works as follows. - Example 1: suppose you want to create 3 figures arranged in a single row. In this case, `m = 1`, `n = 3` and `p` ranges from 1 to 3, since `m*n = 3`.- Example 2: suppose you want to create 6 figures arranged in 2 rows and 3 columns. In this case, `m = 2`, `n = 3` and `p` ranges from 1 to 6, since `m*n = 6`.- Example 3: suppose you want to create 12 figures arranged in 4 rows and 3 columns. In this case, `m = 4`, `n = 3` and `p` ranges from 1 to 12, since `m*n = 12`. Each plot has its own axes, independently of the others. **Example 1:** graph of 1 straight line, 1 parabola and 1 cubic polynomial side by side.
###Code
x = np.linspace(-5,5,20)
plt.figure(figsize=(15,4))
# here p = 1
plt.subplot(1,3,1) # plt.subplot(131) is also valid
plt.plot(x,2*x-1,c='r',marker='^')
plt.title('$y=2x-1$')
# here p = 2
plt.subplot(1,3,2) # plt.subplot(132) is also valid
plt.plot(x,3*x**2 - 2*x - 1,c='g',marker='o')
plt.title('$y=3x^2 - 2x - 1$')
# here p = 3
plt.subplot(1,3,3) # plt.subplot(133) is also valid
plt.plot(x,1/2*x**3 + 3*x**2 - 2*x - 1,c='b',marker='*')
plt.title('$y=1/2x^3 + 3x^2 - 2x - 1$');
###Output
_____no_output_____
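For reference, the same layout can be produced with matplotlib's object-oriented interface, which returns the figure and an array of axes (an equivalent alternative, not used in the rest of this lesson):

```python
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
axes[0].plot(x, 2*x - 1, c='r', marker='^')
axes[0].set_title('$y=2x-1$')
axes[1].plot(x, 3*x**2 - 2*x - 1, c='g', marker='o')
axes[1].set_title('$y=3x^2 - 2x - 1$')
axes[2].plot(x, 1/2*x**3 + 3*x**2 - 2*x - 1, c='b', marker='*')
axes[2].set_title('$y=1/2x^3 + 3x^2 - 2x - 1$')
```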
###Markdown
**Example 2:** graphs of {$\sin(x)$, $\sin(2x)$, $\sin(3x)$} and {$\cos(x)$, $\cos(2x)$, $\cos(3x)$} arranged in a 2x3 matrix.
###Code
plt.figure(figsize=(15,4))
plt.subplots_adjust(top=2.5,right=1.2) # adjust the spacing between the individual plots
def sencosx(p):
    x = np.linspace(0,2*np.pi,50)
    plt.subplot(2,3,p)
    if p <= 3:
        plt.plot(x,np.sin(p*x),c=[p/4,p/5,p/6],label=f'$\sin({p}x)$')
        plt.title(f'subplot(2,3,{p})');
    else:
        plt.title(f'subplot(2,3,{p})');
        p-=3 # map p in {4,5,6} back to {1,2,3}
        plt.plot(x,np.cos(p*x),c=[p/9,p/7,p/8],label=f'$\cos({p}x)$')
    plt.legend(loc=0,fontsize=8)
    plt.xlabel('x'); plt.ylabel('y');
# plotting
for p in range(1,7):
    sencosx(p)
###Output
_____no_output_____
###Markdown
**Example 3:** graphs of a single point in a 4x3 matrix.
###Code
plt.figure(figsize=(15,4))
m,n = 4,3
def star(p):
plt.subplot(m,n,p)
plt.axis('off') # turn off the axes
plt.plot(0.5,0.5,marker='*',c=list(np.random.rand(3)),ms=p*2)
plt.annotate(f'subplot({m},{n},{p})',(0.5,0.5),c='g',fontsize=10)
for p in range(1,m*n+1):
star(p);
###Output
_____no_output_____
###Markdown
Plots with grid lines We can enable the grid using `grid(b, which, axis)`.To specify the grid: - on both axes, use `b=True` or `b=False`.- major, minor or both, use `which='major'`, `which='minor'` or `which='both'`.- on the x-axis, the y-axis or both, use `axis='x'`, `axis='y'` or `axis='both'`.
###Code
x = np.linspace(-10,10)
plt.plot(x,x)
plt.grid(True)
plt.plot(x,x)
plt.grid(True,which='major',axis='x')
plt.plot(x,x)
plt.grid(True,which='major',axis='y')
###Output
_____no_output_____
###Markdown
**Example:** plotting a grid.In this example, an abstract axes object is added over the figure (created directly), with origin at the point (0.025, 0.025), width 0.95 and height 0.95.
###Code
ax = plt.axes([0.025, 0.025, 0.95, 0.95])
ax.set_xlim(0,4)
ax.set_ylim(0,3)
# MultipleLocator sets reference points for dividing the grid
ax.xaxis.set_major_locator(plt.MultipleLocator(1.0)) # major locator on X
ax.xaxis.set_minor_locator(plt.MultipleLocator(0.2)) # minor locator on X
ax.yaxis.set_major_locator(plt.MultipleLocator(1.0)) # major locator on Y
ax.yaxis.set_minor_locator(plt.MultipleLocator(0.1)) # minor locator on Y
# line properties
ax.grid(which='major', axis='x', linewidth=0.75, linestyle='-', color='r')
ax.grid(which='minor', axis='x', linewidth=0.5, linestyle=':', color='b')
ax.grid(which='major', axis='y', linewidth=0.75, linestyle='-', color='r')
ax.grid(which='minor', axis='y', linewidth=0.5, linestyle=':', color='g')
# to remove the tick labels, uncomment the lines below
#ax.set_xticklabels([])
#ax.set_yticklabels([]);
plt.plot(x,x,'k')
plt.plot(x,-x+4,'k')
###Output
_____no_output_____
###Markdown
Plots with filled areas We can use `fill_between` to create filled areas in plots.
###Code
x = np.linspace(-np.pi, np.pi, 60)
y = np.sin(2*x)*np.cos(x/2)
plt.fill_between(x,y,alpha=0.5);
x = np.linspace(-np.pi, np.pi, 60)
f1 = np.sin(2*x)
f2 = 0.5*np.sin(2*x)
plt.plot(x,f1,c='r');
plt.plot(x,f2,c='k');
plt.fill_between(x,f1,f2,color='g',alpha=0.2);
###Output
_____no_output_____ |
DeepLearningAnisoTomo.ipynb | ###Markdown
A real-time deep learning approach to grain orientation mapping of anisotropic media for improved ultrasonic non-destructive evaluation Jonathan Singh ([email protected]) Cite as: Singh J., Tant K., Mulholland A., Curtis A., (2021), A real-time deep learning approach to grain orientation mapping of anisotropic media for improved ultrasonic non-destructive evaluation. https://arxiv.org/abs/2105.09466[](https://colab.research.google.com/github/jonnyrsingh/DeepLearningAnisoTomo/blob/main/DeepLearningAnisoTomo.ipynb)This notebook demonstrates a deep learning approach to ultrasound tomography for reconstructing maps of grain orientations using deep neural networks (DNNs) and generative adversarial networks (GANs). This notebook accompanies the paper "A real-time deep learning approach to grain orientation mapping of anisotropic media for improved ultrasonic non-destructive evaluation". We recommend using the free GPUs of [Google colab](https://colab.research.google.com/github/jonnyrsingh/DeepLearningAnisoTomo/blob/main/DeepLearningAnisoTomo.ipynb), which allows instant implementation without the need to install any additional packages. In this example, you will use ultrasonic time of flight data for a square sample using full aperture, pitch-catch and pulse echo transducer array configurations to train a DNN for ultrasound tomography and train a GAN to achieve factor-four super resolution. Some example results are shown below: Example output of the deep neural network reconstructing grain orientation Example output of the GAN achieving factor-four upscaling of resolution Import libraries
###Code
import os
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
import tensorflow as tf
import tensorflow.keras as tfk
from multiprocessing import Process, Queue
from scipy.io import loadmat
from scipy import interpolate
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from tqdm import tqdm
import numpy as np
import matplotlib.pyplot as plt
import time
import pylab as pl
from IPython import display
###Output
_____no_output_____
###Markdown
Load data Import data from .mat files. The data consists of 7500 pairs of travel times (inputs) and grain orientation maps (outputs) for training, and 200 model-data pairs for testing. The training data is generated by smoothing a Voronoi tessellation and computing travel times using the [anisotropic multi-stencil fast marching method](https://www.tandfonline.com/doi/full/10.1080/17415977.2020.1762596). Three transducer array configurations are used and pictured below:  Clone from Github
###Code
# clone - need to break up anything over 25mb
!git clone https://github.com/jonnyrsingh/DeepLearningAnisoTomo
training_matlab = loadmat('DeepLearningAnisoTomo/anisoTrainingFastMarching16x16.mat')
inputs_true_fa = training_matlab['inputs']
outputs_true_fa = training_matlab['outputs']
training_matlab = loadmat('DeepLearningAnisoTomo/anisoTrainingFastMarchingPulseReflect16x16.mat')
inputs_true_pr = training_matlab['inputs']
outputs_true_pr = training_matlab['outputs']
training_matlab = loadmat('DeepLearningAnisoTomo/anisoTrainingFastMarchingPulseTrans16x16_input.mat')
inputs_true_pt = training_matlab['inputs']
training_matlab = loadmat('DeepLearningAnisoTomo/anisoTrainingFastMarchingPulseTrans16x16_output.mat')
outputs_true_pt = training_matlab['outputs']
training_matlab = loadmat('DeepLearningAnisoTomo/anisotTestAllAperture.mat')
inputs_fa_ns = training_matlab['inputs_fa'] # full aperture
inputs_pr_ns = training_matlab['inputs_pr'] # pulse reflection
inputs_pt_ns = training_matlab['inputs_pt'] # pulse tranmission
outputs_all_ap = training_matlab['outputs'] # true orientation models
###Output
_____no_output_____
###Markdown
The data is scaled to have a zero mean and unit variance
###Code
#Combine training and testing data so they are scaled by the same factor
inputs_true_fa=np.append(inputs_true_fa,(inputs_fa_ns),axis=0)
inputs_true_pt=np.append(inputs_true_pt,(inputs_pt_ns),axis=0)
inputs_true_pr=np.append(inputs_true_pr,(inputs_pr_ns),axis=0)
outputs_true_fa = np.append(outputs_true_fa,(outputs_all_ap),axis=0)
# reshape images into 1D
outputs_true_fa=np.reshape(outputs_true_fa,(7700,256))
outputs_true_pt=np.reshape(outputs_true_pt,(7500,256))
outputs_true_pr=np.reshape(outputs_true_pr,(7500,256))
inputs_fa = preprocessing.scale(inputs_true_fa)
inputs_pr = preprocessing.scale(inputs_true_pr)
inputs_pt = preprocessing.scale(inputs_true_pt)
outputs_fa = preprocessing.scale(outputs_true_fa)
outputs_pt = preprocessing.scale(outputs_true_pt)
outputs_pr = preprocessing.scale(outputs_true_pr)
input_fa_test = inputs_fa[7500:7700,:]
input_pt_test = inputs_pt[7500:7700,:]
input_pr_test = inputs_pr[7500:7700,:]
output_all_test = outputs_fa[7500:7700,:]
inputs_fa=np.delete(inputs_fa,(np.arange(200)+7500),axis=0)
inputs_pt=np.delete(inputs_pt,(np.arange(200)+7500),axis=0)
inputs_pr=np.delete(inputs_pr,(np.arange(200)+7500),axis=0)
outputs_fa=np.delete(outputs_fa,(np.arange(200)+7500),axis=0)
x_train_fa,x_test_fa,y_train_fa,y_test_fa = train_test_split(inputs_fa,outputs_fa,test_size=0.01)
x_train_pt,x_test_pt,y_train_pt,y_test_pt = train_test_split(inputs_pt,outputs_pt,test_size=0.01)
x_train_pr,x_test_pr,y_train_pr,y_test_pr = train_test_split(inputs_pr,outputs_pr,test_size=0.01)
###Output
_____no_output_____
###Markdown
DNN configurations Configurations were optimised using [Scikit-Optimize](https://scikit-optimize.github.io/stable/) for [hyperparameter optimisation](https://machinelearningmastery.com/scikit-optimize-for-hyperparameter-tuning-in-machine-learning/). Optimal network parameters are described by netParams(adam learning rate, number of hidden layers, number of input nodes, number of nodes for hidden layers, activation function, batch size, adam decay rate). Using a validation dataset (with validation split = 0.2), early stopping is implemented with patience=10, so training stops once the validation loss has not improved for 10 consecutive epochs. The maximum number of epochs is 500. Full aperture transducer array configuration
###Code
netParams = [0.0038824847512218914, 3, 315, 63, 'sigmoid', 80,0.0058864640772035735] #optim for full aperture
callbacks = tfk.callbacks.EarlyStopping(monitor='val_loss', patience=10)
def create_model(outCellInd,learning_rate, num_dense_layers,num_input_nodes,
num_dense_nodes, activation, batch_size, adam_decay):
#start the model making process and create our first layer
model = Sequential()
model.add(Dense(num_input_nodes, input_shape= (96,), activation=activation
))
#create a loop making a new dense layer for the amount passed to this model.
#naming the layers helps avoid tensorflow error deep in the stack trace.
for i in range(num_dense_layers):
name = 'layer_dense_{0}'.format(i+1)
model.add(Dense(num_dense_nodes,
activation=activation,
name=name
))
#add our classification layer.
model.add(Dense(1))
#setup our optimizer and compile
adam = Adam(lr=learning_rate, decay= adam_decay)
model.compile(optimizer=adam, loss='mse')
history=model.fit(x=x_train_fa,
y=y_train_fa[:,outCellInd],
epochs=500,
callbacks=callbacks,
verbose=0,
batch_size=batch_size,
validation_split=0.2
)
return model,history
#train network for each pixel
modelMat_fa=[]
for i in tqdm(range(0,256)):
modelOut=create_model(i,netParams[0],netParams[1],netParams[2],netParams[3],netParams[4],netParams[5],netParams[6])
modelMat_fa.append(modelOut)
###Output
100%|██████████████████████████████████████████████████████████████████████| 256/256 [34:42<00:00, 11.28s/it]
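The optimal `netParams` values above were obtained with Scikit-Optimize, but the search itself is not included in this notebook. A minimal sketch of how such a search might be set up with `skopt.gp_minimize` is shown below; the objective wrapping `create_model` for a single pixel and the search ranges are illustrative assumptions, not the exact setup used for the paper:

```python
from skopt import gp_minimize
from skopt.space import Real, Integer, Categorical

# illustrative search space matching the netParams ordering
space = [Real(1e-4, 1e-2, prior='log-uniform', name='learning_rate'),
         Integer(1, 5,    name='num_dense_layers'),
         Integer(64, 512, name='num_input_nodes'),
         Integer(16, 128, name='num_dense_nodes'),
         Categorical(['relu', 'sigmoid'], name='activation'),
         Integer(32, 128, name='batch_size'),
         Real(1e-4, 1e-2, prior='log-uniform', name='adam_decay')]

def objective(params):
    # train on one representative pixel and score by final validation loss
    model, history = create_model(0, *params)
    return history.history['val_loss'][-1]

# result = gp_minimize(objective, space, n_calls=30)  # expensive: trains a DNN per call
```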
###Markdown
Pulse reflection
###Code
netParams = [0.005, 3, 353, 68, 'sigmoid', 78,0.0068] #optim for fmm pulse reflection
# set up early stopping
callbacks = tfk.callbacks.EarlyStopping(monitor='val_loss', patience=10)
def create_model(outCellInd,learning_rate, num_dense_layers,num_input_nodes,
num_dense_nodes, activation, batch_size, adam_decay):
#start the model making process and create our first layer
model = Sequential()
model.add(Dense(num_input_nodes, input_shape= (136,), activation=activation
))
#create a loop making a new dense layer for the amount passed to this model.
#naming the layers helps avoid tensorflow error deep in the stack trace.
for i in range(num_dense_layers):
name = 'layer_dense_{0}'.format(i+1)
model.add(Dense(num_dense_nodes,
activation=activation,
name=name
))
#add our classification layer.
model.add(Dense(1))
#setup our optimizer and compile
adam = Adam(lr=learning_rate, decay= adam_decay)
model.compile(optimizer=adam, loss='mse')
history=model.fit(x=x_train_pr,
y=y_train_pr[:,outCellInd],
epochs=500,
callbacks=callbacks,
verbose=0,
batch_size=batch_size,
validation_split=0.2
)
return model,history
#train network for each pixel
modelMat_pr=[]
for i in tqdm(range(0,256)):
modelOut=create_model(i,netParams[0],netParams[1],netParams[2],netParams[3],netParams[4],netParams[5],netParams[6])
modelMat_pr.append(modelOut)
###Output
100%|██████████████████████████████████████████████████████████████████████| 256/256 [59:00<00:00, 7.80s/it]
###Markdown
Pulse transmission
###Code
netParams = [0.0025, 3, 345, 55, 'sigmoid', 50,0.00553] #
callbacks = tfk.callbacks.EarlyStopping(monitor='val_loss', patience=10)
def create_model(outCellInd,learning_rate, num_dense_layers,num_input_nodes,
num_dense_nodes, activation, batch_size, adam_decay):
#start the model making process and create our first layer
model = Sequential()
model.add(Dense(num_input_nodes, input_shape= (256,), activation=activation
))
#create a loop making a new dense layer for the amount passed to this model.
#naming the layers helps avoid tensorflow error deep in the stack trace.
for i in range(num_dense_layers):
name = 'layer_dense_{0}'.format(i+1)
model.add(Dense(num_dense_nodes,
activation=activation,
name=name
))
#add our classification layer.
model.add(Dense(1))
#setup our optimizer and compile
adam = Adam(lr=learning_rate, decay= adam_decay)
model.compile(optimizer=adam, loss='mse')
history=model.fit(x=x_train_pt,
y=y_train_pt[:,outCellInd],
epochs=500,
callbacks=callbacks,
verbose=0,
batch_size=batch_size,
validation_split=0.2
)
return model,history
#train network for each pixel
modelMat_pt=[]
for i in tqdm(range(0,256)):
modelOut=create_model(i,netParams[0],netParams[1],netParams[2],netParams[3],netParams[4],netParams[5],netParams[6])
modelMat_pt.append(modelOut)
###Output
100%|████████████████████████████████████████████████████████████████████| 256/256 [3:22:24<00:00, 35.77s/it]
###Markdown
Network Testing DNNs for each transducer configuration have now been trained. Next, take a dataset that has not been used in any part of training (input_fa_test, input_pt_test, input_pr_test). For comparison, the same models (output_all_test) are used for the three transducer configurations.
###Code
bMat_fa =[]
bMat_pr =[]
bMat_pt =[]
for i in range(0,256):
# full aperture
b = modelMat_fa[i][0].predict(input_fa_test)
bMat_fa.append(b)
# pulse reflection
b = modelMat_pr[i][0].predict(input_pr_test)
bMat_pr.append(b)
# pulse transmission
b = modelMat_pt[i][0].predict(input_pt_test)
bMat_pt.append(b)
# convert results to numpy arrays
bMat_pr = np.array(bMat_pr)
bMat_pt = np.array(bMat_pt)
bMat_fa = np.array(bMat_fa)
###Output
_____no_output_____
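Before animating the reconstructions, it can be useful to quantify accuracy. A minimal sketch (not part of the original notebook) computing the mean-squared error of each configuration against the scaled ground truth:

```python
# predictions have shape (256, 200, 1); align them with output_all_test (200, 256)
pred_fa = bMat_fa[:, :, 0].T
pred_pt = bMat_pt[:, :, 0].T
pred_pr = bMat_pr[:, :, 0].T

for name, pred in [('full aperture', pred_fa),
                   ('pulse transmission', pred_pt),
                   ('pulse reflection', pred_pr)]:
    mse = np.mean((pred - output_all_test) ** 2)
    print(f'{name}: MSE = {mse:.4f}')
```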
###Markdown
Animate results
###Code
plt.figure(figsize=(16,8))
for i in range(200):
xInd=i
mod_pred_fa = np.reshape(bMat_fa[:,xInd],[16,16])
mod_pred_pt = np.reshape(bMat_pt[:,xInd],[16,16])
mod_pred_pr = np.reshape(bMat_pr[:,xInd],[16,16])
mod_true = np.reshape(output_all_test[xInd,:],[16,16])
plt.subplot(141)
plt.imshow(mod_true)
plt.title('True Orientation Map')
plt.subplot(142)
plt.imshow(mod_pred_fa)
plt.title('Full aperture prediction')
plt.subplot(143)
plt.imshow(mod_pred_pt)
plt.title('Pulse transmission prediction')
plt.subplot(144)
plt.imshow(mod_pred_pr)
plt.title('Pulse reflection prediction')
display.clear_output(wait=True)
display.display(pl.gcf())
time.sleep(0.5)
pl.clf()
###Output
_____no_output_____
###Markdown
GAN for super resolution and post processing tomography results Generative adversarial networks (GANs) are powerful computer vision tools for image based problems. Here we use a GAN to take the low resolution (16x16) output of the DNN tomography described above and output 64x64 high-resolution images with sharp discontinuous boundaries in grain orientation. A schematic of the GAN is shown below  The GAN methodology presented here is a modification of the [pix2pix](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/pix2pix.ipynbscrollTo=qmkj-80IHxnd) algorithm, to allow changes in image resolution. A copy of the License for this work can be found [here](https://www.apache.org/licenses/LICENSE-2.0). Get training data The training data consists of:* 64x64 resolution true models with discontinuous grain boundaries* Travel times from the AMSFMM methods using a full aperture transducer array configuration* 16x16 tomographic reconstruction from the full aperture DNN The travel times and true models are stored in anisoTrainingFMM64x64_6seed.mat
###Code
training_matlab = loadmat('DeepLearningAnisoTomo/anisoTrainingFMM64x64_6seed.mat')
inputs_trueHighRes = training_matlab['inputs'] # inputs should be scaled with training data
outputs_trueHighRes = training_matlab['outputs'] # doesn't need to be scaled in the same way
inputs_trueHighRes_scaled = preprocessing.scale(inputs_trueHighRes)
outputs_trueHighRes_scaled = (outputs_trueHighRes/22.5)-1
x_trainHR,x_testHR,y_trainHR,y_testHR = train_test_split(inputs_trueHighRes_scaled,outputs_trueHighRes_scaled,test_size=0.5)
# Use trained DNN to generate 16x16 images
bMatHR_test = []
bMatHR_train = []
for i in tqdm(range(0,256)):
b = modelMat_fa[i][0].predict(x_testHR)
bMatHR_test.append(b)
b = modelMat_fa[i][0].predict(x_trainHR)
bMatHR_train.append(b)
bMatHR_test = np.array(bMatHR_test)
bMatHR_train = np.array(bMatHR_train)
bMatHR_train= bMatHR_train.transpose(1,0,2).reshape(1000,16,16)
bMatHR_test= bMatHR_test.transpose(1,0,2).reshape(1000,16,16)
np.shape(bMatHR_train)
y_trainHR_HR = y_trainHR.repeat(4,axis=1).repeat(4,axis=2)
y_testHR_HR = y_testHR.repeat(4,axis=1).repeat(4,axis=2)
# convert arrays to tensors
input_train = tf.cast(bMatHR_train, tf.float32)
output_train = tf.cast(y_trainHR, tf.float32)
input_test = tf.cast(bMatHR_test, tf.float32)
output_test = tf.cast(y_testHR, tf.float32)
input_train = tf.reshape(input_train,(1000,16,16,1))
output_train = tf.reshape(output_train,(1000,64,64,1))
input_test = tf.reshape(input_test,(1000,16,16,1))
output_test = tf.reshape(output_test,(1000,64,64,1))
# create tensorflow datasets
BATCH_SIZE = 10
BUFFER_SIZE = 400
train_dataset = tf.data.Dataset.from_tensor_slices((input_train,output_train))
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.batch(BATCH_SIZE)
test_dataset = tf.data.Dataset.from_tensor_slices((input_test,output_test))
test_dataset = test_dataset.batch(BATCH_SIZE)
test_dataset = test_dataset.shuffle(BUFFER_SIZE)
###Output
_____no_output_____
###Markdown
Plot example 16x16 - 64x64 image pairs
###Code
# randomly sample training data
ind = np.random.randint(1000)
plt.subplot(121)
plt.imshow(input_train[ind,:,:,0])
plt.subplot(122)
plt.imshow(output_train[ind,:,:,0])
###Output
_____no_output_____
###Markdown
Build the Generator Both the generator and discriminator are built from blocks defined with the Keras Sequential API.* The architecture of the generator is a modified U-Net.* Each block in the encoder is (Conv -> Batchnorm -> Leaky ReLU)* Each block in the decoder is (Transposed Conv -> Batchnorm -> Dropout(applied to the first 3 blocks) -> ReLU)* There are skip connections between the encoder and decoder (as in U-Net).
###Code
OUTPUT_CHANNELS = 1
def downsample(filters, size, apply_batchnorm=True):
initializer = tf.random_normal_initializer(0., 0.02)
result = tf.keras.Sequential()
result.add(
tf.keras.layers.Conv2D(filters, size, strides=2, padding='same',
kernel_initializer=initializer, use_bias=False))
if apply_batchnorm:
result.add(tf.keras.layers.BatchNormalization())
result.add(tf.keras.layers.LeakyReLU())
return result
inp = tf.reshape(input_train[0],(16,16,1))
out = tf.reshape(output_train[0],(64,64,1))
down_model = downsample(3, 4)
down_result = down_model(tf.expand_dims(inp, 0))
print (down_result.shape)
def upsampleimage():
initializer = tf.random_normal_initializer(0., 0.02)
result = tf.keras.Sequential()
result.add(
tf.keras.layers.UpSampling2D(size=(4,4), interpolation='nearest'))
return result
up_model = upsampleimage()
up_result = up_model(down_result)
print (up_result.shape)
def upsample(filters, size, apply_dropout=False):
initializer = tf.random_normal_initializer(0., 0.02)
result = tf.keras.Sequential()
result.add(
tf.keras.layers.Conv2DTranspose(filters, size, strides=2,
padding='same',
kernel_initializer=initializer,
use_bias=False))
result.add(tf.keras.layers.BatchNormalization())
if apply_dropout:
result.add(tf.keras.layers.Dropout(0.5))
result.add(tf.keras.layers.ReLU())
return result
up_model = upsample(3, 4)
up_result = up_model(down_result)
print (up_result.shape)
def Generator():
inputs = tf.keras.layers.Input(shape=[16,16,1])
down_stack = [
upsampleimage(),
downsample(64, 4, apply_batchnorm=False), # (bs, 32, 32, 64)
downsample(128, 4), # (bs, 16, 16, 128)
downsample(256, 4), # (bs, 8, 8, 256)
downsample(512, 4), # (bs, 4, 4, 512)
downsample(512, 4), # (bs, 2, 2, 512)
downsample(512, 4), # (bs, 1, 1, 512)
]
up_stack = [
upsample(512, 4, apply_dropout=True), # (bs, 2, 2, 1024)
upsample(512, 4, apply_dropout=True), # (bs, 4, 4, 1024)
upsample(512, 4, apply_dropout=True), # (bs, 8, 8, 1024)
upsample(512, 4), # (bs, 16, 16, 1024)
upsample(256, 4), # (bs, 32, 32, 512)
]
initializer = tf.random_normal_initializer(0., 0.02)
last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS, 4,
strides=2,
padding='same',
kernel_initializer=initializer,
activation='tanh') # (bs, 64, 64, 1)
x = inputs
# Downsampling through the model
skips = []
for down in down_stack:
x = down(x)
skips.append(x)
skips = reversed(skips[:-1])
# Upsampling and establishing the skip connections
for up, skip in zip(up_stack, skips):
x = up(x)
x = tf.keras.layers.Concatenate()([x, skip])
x = last(x)
return tf.keras.Model(inputs=inputs, outputs=x)
generator = Generator()
tf.keras.utils.plot_model(generator, show_shapes=True, dpi=64)
###Output
Failed to import pydot. You must install pydot and graphviz for `pydotprint` to work.
###Markdown
Plot what generater outputs before training
###Code
inp0 = tf.reshape(inp,(1,16,16,1))
gen_output = generator(inp0, training=False)
plt.subplot(121)
plt.imshow(inp0[0,:,:,0])
plt.title('Input')
plt.subplot(122)
gen_output = generator(inp0, training=False)
plt.imshow(gen_output[0,:,:,0])
plt.subplots_adjust(right=1.5)
plt.title('Output')
###Output
_____no_output_____
###Markdown
Generator Loss As in the [pix2pix paper](https://arxiv.org/abs/1611.07004), the generator loss is: * sigmoid cross entropy loss of the generated images and an **array of ones*** it also includes an L1 loss (MAE) between the generated and target image* total generator loss = gan_loss + LAMBDA*L1_loss, where LAMBDA = 100 in the paper (the code below uses LAMBDA = 50)
###Code
LAMBDA = 50 # changed from 100
def generator_loss(disc_generated_output, gen_output, target):
gan_loss = loss_object(tf.ones_like(disc_generated_output), disc_generated_output)
# mean absolute error
l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
total_gen_loss = gan_loss + (LAMBDA * l1_loss)
return total_gen_loss, gan_loss, l1_loss
###Output
_____no_output_____
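A quick sanity check of the loss with dummy tensors (a minimal sketch; note that `loss_object` is defined in the discriminator-loss cell further below, so run that cell before this check):

```python
# dummy discriminator map, generated image and target
fake_disc = tf.random.normal([1, 30, 30, 1])
fake_gen  = tf.random.normal([1, 64, 64, 1])
fake_tar  = tf.random.normal([1, 64, 64, 1])

total, gan, l1 = generator_loss(fake_disc, fake_gen, fake_tar)
print(total.numpy(), gan.numpy(), l1.numpy())  # total should equal gan + 50 * l1
```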
###Markdown
Build the Discriminator * Discriminator is a PatchGAN* Each block in the discriminator is (Conv -> BatchNorm -> Leaky ReLU)* The shape of the output after the last layer is (batch_size, 30, 30, 1). Each 30x30 patch of the output classifies a 70x70 portion of the input image (such an architecture is called a PatchGAN).Discriminator receives 2 inputs.* Input image and the target image, which it should classify as real* Input image and the generated image (output of generator), which it should classify as fake.
###Code
def Discriminator():
initializer = tf.random_normal_initializer(0., 0.02)
inp = tf.keras.layers.Input(shape=[16, 16, 1], name='input_image')
inp2 = tf.keras.layers.UpSampling2D(size=(4,4),interpolation='nearest')(inp)
tar = tf.keras.layers.Input(shape=[64, 64, 1], name='target_image')
x = tf.keras.layers.concatenate([inp2, tar]) # (bs, 64, 64, channels*2)
down1 = downsample(64, 4, False)(x) # (bs, 32, 32, 64)
# down2 = downsample(128, 4)(down1) # (bs, 64, 64, 128)
zero_pad1 = tf.keras.layers.ZeroPadding2D()(down1) # (bs, 34, 34, 256)
conv = tf.keras.layers.Conv2D(512, 4, strides=1,
kernel_initializer=initializer,
use_bias=False)(zero_pad1) # (bs, 31, 31, 512)
batchnorm1 = tf.keras.layers.BatchNormalization()(conv)
leaky_relu = tf.keras.layers.LeakyReLU()(batchnorm1)
zero_pad2 = tf.keras.layers.ZeroPadding2D()(leaky_relu) # (bs, 33, 33, 512)
last = tf.keras.layers.Conv2D(1, 4, strides=1,
kernel_initializer=initializer)(zero_pad2) # (bs, 30, 30, 1)
return tf.keras.Model(inputs=[inp, tar], outputs=last)
discriminator = Discriminator()
tf.keras.utils.plot_model(discriminator, show_shapes=True, dpi=64)
disc_out = discriminator( [inp0,gen_output], training=False)
plt.imshow(np.squeeze(disc_out))
plt.colorbar()
###Output
_____no_output_____
###Markdown
Discriminator loss* takes 2 inputs: **real image** and **generated image*** real_loss is sigmoid cross entropy loss with the real image and an array of ones* gen_loss is sigmoid cross entropy loss with the generated image and an array of zeros* total loss = real_loss + gen_loss
###Code
loss_object = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def discriminator_loss(disc_real_output, disc_generated_output):
real_loss = loss_object(tf.ones_like(disc_real_output), disc_real_output)
generated_loss = loss_object(tf.zeros_like(disc_generated_output), disc_generated_output)
total_disc_loss = real_loss + generated_loss
return total_disc_loss
generator_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
###Output
_____no_output_____
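To resume training or reload weights later, the saved checkpoint can be restored with the standard tf.train API:

```python
# restore the latest checkpoint (if one has been saved)
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
```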
###Markdown
GAN Training Prediction and Plotting Functions
###Code
def generate_images(model, test_input, tar,epoch):
prediction = model(test_input, training=True)
plt.figure(figsize=(15,15))
display_list = [test_input[0,:,:,0], tar[0,:,:,0], prediction[0,:,:,0]]
title = ['Input Image', 'Ground Truth', 'Predicted Image']
for i in range(3):
plt.subplot(1, 3, i+1)
plt.title(title[i])
    # the generator's tanh output lies in [-1, 1]; rescaling to [0, 1] for display is optional here
    # plt.imshow(display_list[i] * 0.5 + 0.5)
im = plt.imshow(display_list[i] )
if i==0:
im.set_clim(-2,2)
else:
im.set_clim(-1,1)
plt.colorbar(fraction=0.046, pad=0.04)
plt.axis()
plt.show()
for example_input, example_target in test_dataset.take(1):
generate_images(generator, example_input, example_target,1)
###Output
_____no_output_____
###Markdown
Training Calculate the gradients of the loss with respect to both the generator and the discriminator variables (inputs) and apply them to the optimizers.
###Code
import datetime
log_dir="logs/"
summary_writer = tf.summary.create_file_writer(
log_dir + "fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
def train_step(input_image, target, epoch):
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
gen_output = generator(input_image, training=True)
disc_real_output = discriminator([input_image, target], training=True)
disc_generated_output = discriminator([input_image, gen_output], training=True)
gen_total_loss, gen_gan_loss, gen_l1_loss = generator_loss(disc_generated_output, gen_output, target)
disc_loss = discriminator_loss(disc_real_output, disc_generated_output)
generator_gradients = gen_tape.gradient(gen_total_loss,
generator.trainable_variables)
discriminator_gradients = disc_tape.gradient(disc_loss,
discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(generator_gradients,
generator.trainable_variables))
discriminator_optimizer.apply_gradients(zip(discriminator_gradients,
discriminator.trainable_variables))
with summary_writer.as_default():
tf.summary.scalar('gen_total_loss', gen_total_loss, step=epoch)
tf.summary.scalar('gen_gan_loss', gen_gan_loss, step=epoch)
tf.summary.scalar('gen_l1_loss', gen_l1_loss, step=epoch)
tf.summary.scalar('disc_loss', disc_loss, step=epoch)
def fit(train_ds, epochs, test_ds):
for epoch in range(epochs):
start = time.time()
display.clear_output(wait=True)
for example_input, example_target in test_ds.take(1):
generate_images(generator, example_input, example_target,0)
print("Epoch: ", epoch)
# Train
for n, (input_image, target) in train_ds.enumerate():
print('.', end='')
if (n+1) % 5 ==0: ############################ Change this number for how frequently you want figures
display.clear_output(wait=True)
for example_input, example_target in test_ds.take(1):
generate_images(generator, example_input, example_target,epoch+n+1)
if (n+1) % 100 == 0:
print()
train_step(input_image, target, epoch)
print()
# saving (checkpoint) the model every 20 epochs
if (epoch + 1) % 20 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Time taken for epoch {} is {} sec\n'.format(epoch + 1,
time.time()-start))
checkpoint.save(file_prefix = checkpoint_prefix)
EPOCHS = 1 # Each epoch can take several minutes. 40 epochs were used for example results image
fit(train_dataset, EPOCHS, test_dataset)
###Output
_____no_output_____ |
tasks/time-related-features/Experiment.ipynb | ###Markdown
Creating time-related features Creation of time-related features by aggregating over periods (day, week, month, quarter or year). For each categorical attribute in the dataset, new features are generated considering rolling windows of three, six and nine consecutive periods. **If you have questions, check the [PlatIAgro tutorials](https://platiagro.github.io/tutorials/).** Declaring parameters and hyperparameters Declare parameters with the button on the toolbar. The `dataset` parameter identifies the datasets. You can import dataset files with the button on the toolbar.
###Code
dataset = "/tmp/data/hotel_bookings.csv" #@param {type:"string"}
group_col = "hotel" #@param {type:"feature",label:"Grouping attribute",description:"Grouping attribute used to generate the time-related features."}
period = "mês" #@param ["dia","semana","mês","trimestre","ano"] {type:"string",multiple:false,label:"Period",description:"Period considered when generating the time-related features (dia=day, semana=week, mês=month, trimestre=quarter, ano=year)."}
date_col = "reservation_status_date" #@param {type:"feature",label:"Reference date",description:"Attribute that provides the reference date for creating the time-related features"}
target_col = "reservation_status" #@param {type:"feature",label:"Target attribute",description:"The target attribute must not be used when creating the new features."}
###Output
_____no_output_____
###Markdown
Accessing the dataset The dataset used in this step is the same one loaded through the platform. The type of the returned variable depends on the source file:- [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) for CSV and compressed CSV: .csv .csv.zip .csv.gz .csv.bz2 .csv.xz- [Binary IO stream](https://docs.python.org/3/library/io.htmlbinary-i-o) for other file types: .jpg .wav .zip .h5 .parquet etc
###Code
import pandas as pd
data = pd.read_csv(dataset)
###Output
_____no_output_____
###Markdown
Validating the input data
###Code
import platiagro
from platiagro import stat_dataset
metadata = stat_dataset(name=dataset)
numerical_cols = [
metadata['columns'][i]
for i, ft in enumerate(metadata['featuretypes'])
if ft == platiagro.NUMERICAL]
numerical_cols = [col for col in numerical_cols if col != target_col]
categorical_cols = [
metadata['columns'][i]
for i, ft in enumerate(metadata['featuretypes'])
if ft == platiagro.CATEGORICAL
]
categorical_cols = [col for col in categorical_cols if col != target_col]
datetime_cols = [
metadata['columns'][i]
for i, ft in enumerate(metadata['featuretypes'])
if ft == platiagro.DATETIME
]
if len(numerical_cols) == 0:
    raise ValueError('The dataset must have at least one numerical attribute')
if group_col not in categorical_cols:
    raise ValueError('The grouping attribute must be of categorical type')
if date_col not in datetime_cols:
    raise ValueError('The date attribute must be of datetime type')
###Output
_____no_output_____
###Markdown
Converting the date attribute to the selected period The transformation is performed by the helper function `generate_new_index`. The resulting period is then used as the index to simplify the next steps.
###Code
period_abbr = {
'dia': 'D',
'semana': 'W',
'mês': 'M',
'ano': 'Y',
'trimestre': 'Q'
}
def generate_new_index(df: pd.DataFrame, date_col: str, period: str = 'mês'):
if period not in period_abbr:
        raise KeyError(f'Input parameter \'period\' must be one of the following: {list(period_abbr.keys())}.')
return pd.DatetimeIndex(df[date_col]).to_period(period_abbr[period])
data[date_col] = pd.to_datetime(data[date_col])
data.index = generate_new_index(data, date_col, period)  # use the selected period instead of a hard-coded value
###Output
_____no_output_____
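For intuition, `to_period` maps each timestamp to the period that contains it; a small illustration (not part of the pipeline):

```python
sample = pd.DatetimeIndex(['2020-01-15', '2020-01-31', '2020-02-01'])
print(sample.to_period('M'))  # -> ['2020-01', '2020-01', '2020-02']
```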
###Markdown
Squaring the numerical attributes The squared values will be used to compute the rolling standard deviation (`rolling std`).
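The identity behind this trick (why sums and sums of squares suffice for a rolling standard deviation): the sample variance can be written purely in terms of running sums,

$$s^2 = \frac{\sum_i x_i^2 - \left(\sum_i x_i\right)^2 / n}{n-1},$$

so keeping $\sum x_i^2$, $\sum x_i$ and $n$ per window is enough. (Note that `calculate_rolling_std` further below applies the $n-1$ divisor after taking the square root, which differs from the usual sample formula.)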
###Code
# for each target column calculates the square
data = pd.concat([data, data[numerical_cols].pow(2).rename(columns=lambda x: 'SQR_' + x )], axis=1)
###Output
_____no_output_____
###Markdown
Aggregation and start of the time-feature creation For the groups determined by the period and the grouping attribute, the following measures are computed for each numerical attribute: `min`, `max`, `count` and `sum`.
###Code
# aggregates the data by [date_col, group_col]
agg_functions = ['min', 'max', 'count', 'sum']
agg_df = data.groupby([data.index, group_col]).agg(
{col : agg_functions for col in numerical_cols + ['SQR_' + col for col in numerical_cols]}
)
# fill missing (date_col, group_col) values
agg_df = agg_df.reindex(
pd.MultiIndex.from_product(
[agg_df.index.levels[0], agg_df.index.levels[1]],
names=[date_col, group_col]
)
)
agg_df = agg_df.reset_index().sort_values([group_col, date_col],ignore_index=True)
###Output
_____no_output_____
###Markdown
Helper functions for computing the time-related features* mean: `calculate_rolling_mean`* min, max: `calculate_rolling_extrema`* standard deviation: `calculate_rolling_std`
###Code
def calculate_rolling_mean(df: pd.DataFrame, target_cols, k: int = 3):
agg_cols = [(col, f) for col in target_cols for f in ['count', 'sum']]
res_df = df.groupby(group_col)[agg_cols].rolling(k, min_periods=1).sum()
for col in target_cols:
res_df[f'MEAN_{group_col}_{col}_{k}'] = res_df[(col, 'sum')] / res_df[(col, 'count')]
res_df = res_df.drop(agg_cols, axis=1)
res_df.columns = res_df.columns.droplevel(1)
return res_df
def calculate_rolling_extrema(df: pd.DataFrame, target_cols, extrema: str = 'min', k: int = 3):
agg_cols = [(col, extrema) for col in target_cols]
if extrema == 'min':
res_df = df.groupby(group_col)[agg_cols].rolling(k, min_periods=1).min().shift(1)
else:
res_df = df.groupby(group_col)[agg_cols].rolling(k, min_periods=1).max().shift(1)
res_df.columns = res_df.columns.droplevel(1)
return res_df.rename(columns={col: f'{extrema.upper()}_{group_col}_{col}_{k}' for col in target_cols})
import numpy as np
def calculate_rolling_std(df: pd.DataFrame, target_cols, k:int = 3):
agg_cols = [(col, f) for col in target_cols + ['SQR_' + col for col in target_cols] for f in ['count', 'sum']]
res_df = df.groupby(group_col)[agg_cols].rolling(k, min_periods=1).sum().shift(1)
for col in target_cols:
new_name = f'STD_{group_col}_{col}_{k}'
res_df[new_name] = res_df[('SQR_' + col, 'sum')] - (np.power(res_df[(col, 'sum')], 2) / res_df[(col, 'count')])
res_df[new_name] = np.sqrt(res_df[new_name]) / (res_df[(col, 'count')] - 1)
res_df = res_df.drop(agg_cols, axis=1)
res_df.columns = res_df.columns.droplevel(1)
return res_df
###Output
_____no_output_____
###Markdown
Computing the time-related features
###Code
new_features = pd.concat([
pd.concat([calculate_rolling_mean(agg_df, numerical_cols, k) for k in [3, 6, 9]], axis=1),
pd.concat([calculate_rolling_std(agg_df, numerical_cols, k) for k in [3, 6, 9]], axis=1),
pd.concat([calculate_rolling_extrema(agg_df, numerical_cols, 'min', k) for k in [3, 6, 9]], axis=1),
pd.concat([calculate_rolling_extrema(agg_df, numerical_cols, 'max', k) for k in [3, 6, 9]], axis=1)
], axis=1)
new_features[date_col] = agg_df[date_col].to_list()
new_features[group_col] = agg_df[group_col].to_list()
#remove unnecessary columns
data.drop(['SQR_' + col for col in numerical_cols], axis=1, inplace=True)
#merge the generated features with the original data
data.set_index(pd.Index(data[group_col]), append=True, inplace=True)
new_features.set_index([date_col, group_col], inplace=True)
data = pd.merge(data, new_features, left_index=True, right_index=True)
#reset index and sort values
data.set_index(pd.RangeIndex(start=0, stop=data.shape[0], step=1), inplace=True)
###Output
_____no_output_____
###Markdown
Saving changes to the dataset The dataset will be saved (and overwritten with the respective changes) locally, in the experiment container, using the `pandas.DataFrame.to_csv` function.
###Code
data.to_csv(dataset, index=False)
from joblib import dump
artifacts = {
"group_col": group_col,
"period": period,
"date_col": date_col,
"target_col": target_col,
}
dump(artifacts, "/tmp/data/model.joblib")
###Output
_____no_output_____ |
Gradient_Explorer.ipynb | ###Markdown
Gradient Explorer
###Code
#Load Packages
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from matplotlib.colors import ListedColormap
from scipy import linalg
from IPython.display import display, clear_output
import time
from ipywidgets import interact, fixed, FloatSlider, IntSlider
%matplotlib inline
fys=[]
def plot_function(xdims, ydims, f, title):
global fys
#Prepare grid for plotting decision surface
gx1, gx2 = np.meshgrid(
np.arange(xdims[0], xdims[1], (xdims[1] - xdims[0]) / 100.0),
np.arange(ydims[0], ydims[1], (ydims[1] - ydims[0]) / 100.0)
)
gx1l = gx1.flatten()
gx2l = gx2.flatten()
gx = np.vstack((gx1l, gx2l)).T
#Compute a prediction for every point in the grid
#Cache for fast redisplay
if (len(fys) == 0):
y = f(gx)
y = np.reshape(y, gx1.shape)
fys = y
else:
y = fys
#Plot a contour map of the function
plt.contourf(gx1, gx2, y,
levels=np.unique(np.round(np.linspace(0, (np.max(y)), 10))))
plt.colorbar()
plt.contour(gx1, gx2, y, colors='k',
levels=np.unique(np.round(np.linspace(0, (np.max(y)), 10))))
plt.xlabel('x1')
plt.ylabel('x2')
plt.grid(False)
plt.title(title)
def plot_grad(q,x1=0,x2=0,steps=1,stepsize=1):
fig = plt.figure(figsize=(10, 8))
plot_function([-10, 10.1], [-10, 10.1], q.f, "Objective Function")
plt.plot(x1, x2, 'wo')
for s in range(steps):
d = -q.g(np.array([x1,x2]))
if(np.abs(x1-d[0])>0.5 or np.abs(x2-d[1])>0.5):
plt.arrow( x1, x2, stepsize*d[0], stepsize*d[1], color='w', head_length=0.5, head_width=0.5,length_includes_head=True)
x1 = x1 + stepsize*d[0]
x2 = x2 + stepsize*d[1]
plt.xlim(-10,10)
plt.ylim(-10,10)
plt.show()
###Output
_____no_output_____
###Markdown
Create a Function===
###Code
class quad_func():
def __init__(self, A, b, c):
self.A = A
self.b = b
self.c = c
self.fcount = 0
self.gcount = 0
def f(self, x):
self.fcount += 1
if (len(x.shape) == 1):
fx = x.T.dot(self.A).dot(x) + x.dot(self.b.T) + self.c
return fx[0]
else:
return np.sum(x.dot(self.A) * x, axis=1, keepdims=True) + x.dot(self.b.T) + self.c
    def g(self, x):
        self.gcount += 1
        # gradient of x^T A x + b.x + c is (A + A^T) x + b, i.e. 2 A x + b for symmetric A
        return np.reshape(2 * x.dot(self.A) + self.b, x.shape)
def reset_counts(self):
self.fcount = 0
self.gcount = 0
###Output
_____no_output_____
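###Markdown
The interactive widget below animates the plain gradient-descent update x <- x - stepsize * g(x). A minimal non-interactive sketch using the `quad_func` class above (the starting point and step size are arbitrary choices):
###Code
q_demo = quad_func(np.array([[0.1, 0.05], [0.05, 0.1]]), np.array([[0, 0]]), 7)
x = np.array([9.0, -8.0])  # arbitrary starting point
stepsize = 2.5
for step in range(20):
    x = x - stepsize * q_demo.g(x)  # move against the gradient
print("final x:", x, "f(x):", q_demo.f(x))
###Output
_____no_output_____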
###Markdown
Gradient Explorer
###Code
q = quad_func(np.array([[0.1, 0.05], [0.05, 0.1]]), np.array([[0, 0]]), 7)
x1s=FloatSlider(min=-10, max=10, step=0.5, continuous_update=False)
x2s=FloatSlider(min=-10, max=10, step=0.5, continuous_update=False)
ss =IntSlider(min=1, max=20, continuous_update=False)
interact(plot_grad, q=fixed(q), x1=x1s,x2=x2s,steps=ss,stepsize=[0.1,1,2.5,5,10]);
###Output
_____no_output_____ |
src/structured_data/preprocessing_layers.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Classify structured data using Keras Preprocessing Layers. This tutorial demonstrates how to classify structured data (e.g. tabular data in a CSV). You will use [Keras](https://www.tensorflow.org/guide/keras) to define the model, and [preprocessing layers](https://www.tensorflow.org/guide/keras/preprocessing_layers) as a bridge to map from columns in a CSV to features used to train the model. This tutorial contains complete code to:
* Load a CSV file using [Pandas](https://pandas.pydata.org/).
* Build an input pipeline to batch and shuffle the rows using [tf.data](https://www.tensorflow.org/guide/datasets).
* Map from columns in the CSV to features used to train the model using Keras Preprocessing layers.
* Build, train, and evaluate a model using Keras.

Note: This tutorial is similar to [Classify structured data with feature columns](https://www.tensorflow.org/tutorials/structured_data/feature_columns). This version uses new experimental Keras [Preprocessing Layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing) instead of `tf.feature_column`. Keras Preprocessing Layers are more intuitive, and can be easily included inside your model to simplify deployment.

The Dataset. You will use a simplified version of the PetFinder [dataset](https://www.kaggle.com/c/petfinder-adoption-prediction). There are several thousand rows in the CSV. Each row describes a pet, and each column describes an attribute. You will use this information to predict if the pet will be adopted. Following is a description of this dataset. Notice there are both numeric and categorical columns. There is a free text column which you will not use in this tutorial.

Column | Description | Feature Type | Data Type
------------|--------------------|----------------------|-----------------
Type | Type of animal (Dog, Cat) | Categorical | string
Age | Age of the pet | Numerical | integer
Breed1 | Primary breed of the pet | Categorical | string
Color1 | Color 1 of pet | Categorical | string
Color2 | Color 2 of pet | Categorical | string
MaturitySize | Size at maturity | Categorical | string
FurLength | Fur length | Categorical | string
Vaccinated | Pet has been vaccinated | Categorical | string
Sterilized | Pet has been sterilized | Categorical | string
Health | Health Condition | Categorical | string
Fee | Adoption Fee | Numerical | integer
Description | Profile write-up for this pet | Text | string
PhotoAmt | Total uploaded photos for this pet | Numerical | integer
AdoptionSpeed | Speed of adoption | Classification | integer

Import TensorFlow and other libraries
###Code
!pip install -q sklearn
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
tf.__version__
###Output
_____no_output_____
###Markdown
Use Pandas to create a dataframe. [Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. You will use Pandas to download the dataset from a URL, and load it into a dataframe.
###Code
import pathlib
dataset_url = 'http://storage.googleapis.com/download.tensorflow.org/data/petfinder-mini.zip'
csv_file = 'datasets/petfinder-mini/petfinder-mini.csv'
tf.keras.utils.get_file('petfinder_mini.zip', dataset_url,
extract=True, cache_dir='.')
dataframe = pd.read_csv(csv_file)
dataframe.head()
###Output
_____no_output_____
###Markdown
Create target variable. The task in the Kaggle competition is to predict the speed at which a pet will be adopted (e.g., in the first week, the first month, the first three months, and so on). Let's simplify this for our tutorial. Here, you will transform this into a binary classification problem, and simply predict whether the pet was adopted, or not. After modifying the label column, 0 will indicate the pet was not adopted, and 1 will indicate it was.
###Code
# In the original dataset "4" indicates the pet was not adopted.
dataframe['target'] = np.where(dataframe['AdoptionSpeed']==4, 0, 1)
# Drop un-used columns.
dataframe = dataframe.drop(columns=['AdoptionSpeed', 'Description'])
###Output
_____no_output_____
###Markdown
Split the dataframe into train, validation, and test. The dataset you downloaded was a single CSV file. You will split this into train, validation, and test sets.
###Code
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
7383 train examples
1846 validation examples
2308 test examples
###Markdown
Create an input pipeline using tf.data. Next, you will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets), in order to shuffle and batch the data. If you were working with a very large CSV file (so large that it does not fit into memory), you would use tf.data to read it from disk directly. That is not covered in this tutorial.
###Code
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
ds = ds.prefetch(batch_size)
return ds
###Output
_____no_output_____
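###Markdown
For reference, if the CSV really were too large for memory, tf.data can stream it from disk instead of going through a dataframe; a minimal sketch (the raw file still carries the original `AdoptionSpeed` label; this pipeline is not used in the rest of the tutorial):
###Code
streamed_ds = tf.data.experimental.make_csv_dataset(
    csv_file, batch_size=32, label_name='AdoptionSpeed', num_epochs=1)
###Output
_____no_output_____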
###Markdown
Now that you have created the input pipeline, let's call it to see the format of the data it returns. You have used a small batch size to keep the output readable.
###Code
batch_size = 5
train_ds = df_to_dataset(train, batch_size=batch_size)
[(train_features, label_batch)] = train_ds.take(1)
print('Every feature:', list(train_features.keys()))
print('A batch of ages:', train_features['Age'])
print('A batch of targets:', label_batch )
###Output
Every feature: ['Type', 'Age', 'Breed1', 'Gender', 'Color1', 'Color2', 'MaturitySize', 'FurLength', 'Vaccinated', 'Sterilized', 'Health', 'Fee', 'PhotoAmt']
A batch of ages: tf.Tensor([1 2 3 1 4], shape=(5,), dtype=int64)
A batch of targets: tf.Tensor([0 1 1 0 0], shape=(5,), dtype=int64)
###Markdown
You can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe. Demonstrate the use of preprocessing layers. The Keras preprocessing layers API allows you to build Keras-native input processing pipelines. You will use the following preprocessing layers to demonstrate the feature preprocessing code.
* [`Normalization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization) - Feature-wise normalization of the data.
* [`CategoryEncoding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/CategoryEncoding) - Category encoding layer.
* [`StringLookup`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/StringLookup) - Maps strings from a vocabulary to integer indices.
* [`IntegerLookup`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/IntegerLookup) - Maps integers from a vocabulary to integer indices.

You can find a list of available preprocessing layers [here](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing). Numeric columns. For each numeric feature, you will use a Normalization() layer to make sure the mean of each feature is 0 and its standard deviation is 1. The `get_normalization_layer` function returns a layer which applies featurewise normalization to numerical features.
###Code
def get_normalization_layer(name, dataset):
# Create a Normalization layer for our feature.
normalizer = preprocessing.Normalization(axis=None)
# Prepare a Dataset that only yields our feature.
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the statistics of the data.
normalizer.adapt(feature_ds)
return normalizer
photo_count_col = train_features['PhotoAmt']
layer = get_normalization_layer('PhotoAmt', train_ds)
layer(photo_count_col)
###Output
2021-07-29 12:20:43.619106: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:176] None of the MLIR Optimization Passes are enabled (registered 2)
2021-07-29 12:20:43.638488: I tensorflow/core/platform/profile_utils/cpu_utils.cc:114] CPU Frequency: 1800000000 Hz
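###Markdown
The note below suggests concatenating many numeric features and normalizing them together; a minimal sketch of that pattern, using the two numeric columns in this dataset (PhotoAmt and Fee):
###Code
stacked_numeric = np.stack(
    [train['PhotoAmt'].values, train['Fee'].values], axis=1).astype('float32')
joint_normalizer = preprocessing.Normalization(axis=-1)
joint_normalizer.adapt(stacked_numeric)
joint_normalizer(stacked_numeric[:3])
###Output
_____no_output_____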
###Markdown
Note: If you have many numeric features (hundreds, or more), it is more efficient to concatenate them first and use a single [normalization](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Normalization) layer. Categorical columns. In this dataset, Type is represented as a string (e.g. 'Dog', or 'Cat'). You cannot feed strings directly to a model. The preprocessing layer takes care of representing strings as a one-hot vector. The `get_category_encoding_layer` function returns a layer which maps values from a vocabulary to integer indices and one-hot encodes the features.
###Code
def get_category_encoding_layer(name, dataset, dtype, max_tokens=None):
# Create a StringLookup layer which will turn strings into integer indices
if dtype == 'string':
index = preprocessing.StringLookup(max_tokens=max_tokens)
else:
index = preprocessing.IntegerLookup(max_tokens=max_tokens)
# Prepare a Dataset that only yields our feature
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the set of possible values and assign them a fixed integer index.
index.adapt(feature_ds)
  # Create a CategoryEncoding layer to one-hot encode the integer indices.
  encoder = preprocessing.CategoryEncoding(num_tokens=index.vocabulary_size())
  # Apply one-hot encoding to our indices. The lambda function captures the
  # layers so we can reuse them, or include them in the functional model later.
return lambda feature: encoder(index(feature))
type_col = train_features['Type']
layer = get_category_encoding_layer('Type', train_ds, 'string')
layer(type_col)
###Output
_____no_output_____
###Markdown
Often, you don't want to feed a number directly into the model, but instead use a one-hot encoding of those inputs. Consider raw data that represents a pet's age.
###Code
type_col = train_features['Age']
category_encoding_layer = get_category_encoding_layer('Age', train_ds,
'int64', 5)
category_encoding_layer(type_col)
###Output
_____no_output_____
###Markdown
Choose which columns to use. You have seen how to use several types of preprocessing layers. Now you will use them to train a model. You will be using [Keras-functional API](https://www.tensorflow.org/guide/keras/functional) to build the model. The Keras functional API is a way to create models that are more flexible than the [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) API. The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with preprocessing layers. A few columns have been selected arbitrarily to train our model. Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented. Earlier, you used a small batch size to demonstrate the input pipeline. Let's now create a new input pipeline with a larger batch size.
###Code
batch_size = 256
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
all_inputs = []
encoded_features = []
# Numeric features.
for header in ['PhotoAmt', 'Fee']:
numeric_col = tf.keras.Input(shape=(1,), name=header)
normalization_layer = get_normalization_layer(header, train_ds)
encoded_numeric_col = normalization_layer(numeric_col)
all_inputs.append(numeric_col)
encoded_features.append(encoded_numeric_col)
# Categorical features encoded as integers.
age_col = tf.keras.Input(shape=(1,), name='Age', dtype='int64')
encoding_layer = get_category_encoding_layer('Age', train_ds, dtype='int64',
max_tokens=5)
encoded_age_col = encoding_layer(age_col)
all_inputs.append(age_col)
encoded_features.append(encoded_age_col)
# Categorical features encoded as string.
categorical_cols = ['Type', 'Color1', 'Color2', 'Gender', 'MaturitySize',
'FurLength', 'Vaccinated', 'Sterilized', 'Health', 'Breed1']
for header in categorical_cols:
categorical_col = tf.keras.Input(shape=(1,), name=header, dtype='string')
encoding_layer = get_category_encoding_layer(header, train_ds, dtype='string',
max_tokens=5)
encoded_categorical_col = encoding_layer(categorical_col)
all_inputs.append(categorical_col)
encoded_features.append(encoded_categorical_col)
###Output
_____no_output_____
###Markdown
Create, compile, and train the model. Now you can create our end-to-end model.
###Code
all_features = tf.keras.layers.concatenate(encoded_features)
x = tf.keras.layers.Dense(32, activation="relu")(all_features)
x = tf.keras.layers.Dropout(0.5)(x)
output = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(all_inputs, output)
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's visualize our connectivity graph:
###Code
# rankdir='LR' is used to make the graph horizontal.
tf.keras.utils.plot_model(model, show_shapes=True, rankdir="LR")
###Output
('You must install pydot (`pip install pydot`) and install graphviz (see instructions at https://graphviz.gitlab.io/download/) ', 'for plot_model/model_to_dot to work.')
###Markdown
Train the model
###Code
model.fit(train_ds, epochs=10, validation_data=val_ds)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
10/10 [==============================] - 0s 2ms/step - loss: 0.5222 - accuracy: 0.7292
Accuracy 0.7292027473449707
###Markdown
Inference on new data. Key point: The model you have developed can now classify a row from a CSV file directly, because the preprocessing code is included inside the model itself. You can now save and reload the Keras model. Follow the tutorial [here](https://www.tensorflow.org/tutorials/keras/save_and_load) for more information on TensorFlow models.
###Code
model.save('my_pet_classifier')
reloaded_model = tf.keras.models.load_model('my_pet_classifier')
###Output
2021-07-29 12:20:51.519633: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:absl:Function `_wrapped_model` contains input name(s) PhotoAmt, Fee, Age, Type, Color1, Color2, Gender, MaturitySize, FurLength, Vaccinated, Sterilized, Health, Breed1 with unsupported characters which will be renamed to photoamt, fee, age, type, color1, color2, gender, maturitysize, furlength, vaccinated, sterilized, health, breed1 in the SavedModel.
###Markdown
To get a prediction for a new sample, you can simply call `model.predict()`. There are just two things you need to do:
1. Wrap scalars into a list so as to have a batch dimension (models only process batches of data, not single samples)
2. Call `convert_to_tensor` on each feature
###Code
sample = {
'Type': 'Cat',
'Age': 3,
'Breed1': 'Tabby',
'Gender': 'Male',
'Color1': 'Black',
'Color2': 'White',
'MaturitySize': 'Small',
'FurLength': 'Short',
'Vaccinated': 'No',
'Sterilized': 'No',
'Health': 'Healthy',
'Fee': 100,
'PhotoAmt': 2,
}
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = reloaded_model.predict(input_dict)
prob = tf.nn.sigmoid(predictions[0])
print(
"This particular pet had a %.1f percent probability "
"of getting adopted." % (100 * prob)
)
###Output
This particular pet had a 84.8 percent probability of getting adopted.
|
DA Lab/5_assignment.ipynb | ###Markdown
1806554 Ganesh Bhandarkar Lab 5
###Code
height <- c(132,151,162,139,166,147,122)
weight <- c(48,49,66,53,67,52,40)
gender<- c("male","male","female","female","male","female","male")
input_data <- data.frame(height,weight,gender)
print(input_data)
print(is.factor(input_data$gender))
print(input_data$gender)
###Output
height weight gender
1 132 48 male
2 151 49 male
3 162 66 female
4 139 53 female
5 166 67 male
6 147 52 female
7 122 40 male
[1] TRUE
[1] male male female female male female male
Levels: female male
###Markdown
Questions
###Code
Players = data.frame(Name=c("Tom","Jerry","Spiderman", "Batman","Aquaman"),
Num=c("1","2","3","4","5"),
Age=c(23,22,25,26,32),
        Profession=c("Golfer","Cricketer","Chess Player","Tennis Player","HandBall Player"),
Grade=c("A","A","B","O","C")
)
print("Details of the Players:")
print(Players)
print(Players$Name)
print(Players$Grade)
print(head(Players,3))
print(Players[c(2,5),c(1,3)])
###Output
Name Age
2 Jerry 22
5 Aquaman 32
|
notebooks/lastfm-pytorch-dataset.ipynb | ###Markdown
Data
###Code
variant = 'lastfm-50'
%%time
lastfm_dataset = dset.LastFMUserItemDataset(variant)
print(f'Number of Total Listens: {len(lastfm_dataset.df):,}')
print(f'Number of Unique Artists: {lastfm_dataset.df.artist_name.nunique():,}')
lastfm_dataset.df.head(5)
lastfm_dataset.df.groupby('user_id').size().describe()
lastfm_dataloader = DataLoader(lastfm_dataset, batch_size=2)
%%time
for batch_num, (anchors, targets) in enumerate(lastfm_dataloader, start=0):
print('batch', batch_num + 1, '| anchors:', len(anchors), anchors, ' | targets:', len(targets), targets)
if batch_num == 4:
break
model = SGNS(num_embeddings=lastfm_dataset.num_items, embedding_dim=10, nn_embedding_kwargs={'sparse': True})
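# nn_embedding_kwargs={'sparse': True} is presumably forwarded to torch.nn.Embedding;
# sparse gradients keep updates cheap when only a few of the many item rows are
# touched per batch (note that sparse embeddings need an optimizer that supports
# sparse gradients, such as torch.optim.SparseAdam).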
anchor_embeddings, target_embeddings, negative_embeddings = model.forward(anchors, targets)
anchor_embeddings.size()
model.as_embedding(anchors[0].item()) == anchor_embeddings[0, :]
###Output
_____no_output_____ |
ipynb/bac_genome/OTU-level_variability/.ipynb_checkpoints/p4_NCBI_comp-gen_sim-ampFrags-checkpoint.ipynb | ###Markdown
Goal: simulating amplicon fragments for genomes in non-singleton OTUs. Setting variables
###Code
import os
workDir = '/var/seq_data/ncbi_db/genome/Jan2016/ampFrags/'
genomeDir = '/var/seq_data/ncbi_db/genome/Jan2016/bac_complete_rn/'
ampliconFile = '/var/seq_data/ncbi_db/genome/Jan2016/rnammer_aln/otusn_map_nonSingle.txt'
###Output
_____no_output_____
###Markdown
Init
###Code
%load_ext rpy2.ipython
%load_ext pushnote
%%R
library(dplyr)
library(tidyr)
library(ggplot2)
if not os.path.isdir(workDir):
os.makedirs(workDir)
%cd $workDir
# simlink amplicon OTU map file
tmp = os.path.join(workDir, '../', ampliconFile)
!ln -s -f $tmp .
###Output
_____no_output_____
###Markdown
Indexing genomes
###Code
!head -n 3 $ampliconFile
!cut -f 13 $ampliconFile | head
###Output
Planococcus_sp_PAMC_21323.fna
Planococcus_sp_PAMC_21323.fna
Planococcus_sp_PAMC_21323.fna
Planococcus_sp_PAMC_21323.fna
Planococcus_sp_PAMC_21323.fna
Plautia_stali_symbiont.fna
Pluralibacter_gergoviae.fna
Planococcus_sp_PAMC_21323.fna
Plautia_stali_symbiont.fna
Plautia_stali_symbiont.fna
cut: write error: Broken pipe
###Markdown
symlinking all genomes of interest
###Code
!cut -f 13 $ampliconFile | \
sort -u | \
perl -pe 's|^|../bac_complete_rn/|' | \
xargs -I % ln -s -f % .
!cut -f 13 $ampliconFile | sort -u | wc -l
!find . -name "*.fna" | wc -l
!ls -thlc 2>/dev/null | head -n 4
###Output
total 3.1M
lrwxrwxrwx 1 nick seq_data_users 67 Feb 14 10:26 Zymomonas_mobilis_subsp_mobilis_NRRL_B-12526.fna -> ../bac_complete_rn/Zymomonas_mobilis_subsp_mobilis_NRRL_B-12526.fna
lrwxrwxrwx 1 nick seq_data_users 75 Feb 14 10:26 Zymomonas_mobilis_subsp_mobilis_str_CP4_NRRL_B-14023.fna -> ../bac_complete_rn/Zymomonas_mobilis_subsp_mobilis_str_CP4_NRRL_B-14023.fna
lrwxrwxrwx 1 nick seq_data_users 69 Feb 14 10:26 Zymomonas_mobilis_subsp_mobilis_ZM4_ATCC_31821.fna -> ../bac_complete_rn/Zymomonas_mobilis_subsp_mobilis_ZM4_ATCC_31821.fna
###Markdown
Making genome -> genome_file index
###Code
!cut -f 13 $ampliconFile | perl -pe 's/(.+)\.fna/$1\t$1.fna/' | sort -u > genome_index.txt
!wc -l genome_index.txt
!head genome_index.txt
###Output
3217 genome_index.txt
Acaryochloris_marina_MBIC11017 Acaryochloris_marina_MBIC11017.fna
Acetobacterium_woodii_DSM_1030 Acetobacterium_woodii_DSM_1030.fna
Acetobacter_pasteurianus_386B Acetobacter_pasteurianus_386B.fna
Acetobacter_pasteurianus Acetobacter_pasteurianus.fna
Acetobacter_pasteurianus_IFO_3283-01-42C Acetobacter_pasteurianus_IFO_3283-01-42C.fna
Acetobacter_pasteurianus_IFO_3283-01 Acetobacter_pasteurianus_IFO_3283-01.fna
Acetobacter_pasteurianus_IFO_3283-03 Acetobacter_pasteurianus_IFO_3283-03.fna
Acetobacter_pasteurianus_IFO_3283-07 Acetobacter_pasteurianus_IFO_3283-07.fna
Acetobacter_pasteurianus_IFO_3283-12 Acetobacter_pasteurianus_IFO_3283-12.fna
Acetobacter_pasteurianus_IFO_3283-22 Acetobacter_pasteurianus_IFO_3283-22.fna
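###Markdown
The same index can be built in plain Python for readers less comfortable with perl one-liners; a sketch that assumes the `.fna` symlinks created above are in the working directory:
###Code
import glob
with open('genome_index.txt', 'w') as fh:
    for fna in sorted(glob.glob('*.fna')):
        # genome name (file name minus extension), tab, file name
        fh.write('%s\t%s\n' % (os.path.splitext(fna)[0], fna))
###Output
_____no_output_____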
###Markdown
Index genomes
###Code
!SIPSim genome_index \
genome_index.txt \
--fp . --np 26 \
> index_log.txt \
2> index_log_err.txt
!find . -name "*sqlite3.db" | wc -l
###Output
3217
###Markdown
Simulating fragments
###Code
# copy primer file
!cp /home/nick/notebook/SIPSim/dev/515F-806R.fna ../
!SIPSim fragments \
genome_index.txt \
--fp $workDir \
--fr ../515F-806R.fna \
--fld skewed-normal,9000,2500,-5 \
--flr None,None \
--nf 10000 \
--np 20 \
2> ../ampFrags.log \
> ../ampFrags.pkl
%pushnote SIPSim fragments complete
###Output
_____no_output_____ |
notebook/.ipynb_checkpoints/demo-checkpoint.ipynb | ###Markdown
hyperparameters
###Code
valid_size = 0.33
random_seed = 42
max_iterations = 1000
kp = 1
lr = 0.01
###Output
_____no_output_____
###Markdown
load data
###Code
train = pd.read_csv("../data/water/csv/train2017.csv")
test = pd.read_csv("../data/water/csv/test2017.csv")
X_train = train.values[:, 0:-1]
y_train = train.values[:, -1]
X_test = test.values[:, 0:-1]
y_test = test.values[:, -1]
###Output
_____no_output_____
###Markdown
train valid split
###Code
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size = valid_size,
stratify = y_train, random_state = random_seed)
print("1 in train:", sum(y_train == 1) / len(y_train))
print("1 in valid", sum(y_valid == 1) / len(y_valid))
###Output
1 in train: 0.0157049727924142
1 in valid 0.01569678407350689
###Markdown
oversampling
###Code
X_train_oversampled, y_train_oversampled = Smoter(X_train, y_train, is_random=True)
print("============ SMOTE ============")
print("train: %d, contains %.4f of 0 , after SMOTE: train: %d contains %.4f of 1"
%(X_train.shape[0],
(y_train == 0).sum()/y_train.shape[0],
X_train_oversampled.shape[0],
(y_train_oversampled == 0).sum()/y_train_oversampled.shape[0]))
###Output
============ SMOTE ============
train: 74244, contains 0.9843 of 0 , after SMOTE: train: 146156 contains 0.5000 of 1
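###Markdown
`Smoter` is a project-local helper; for reference, the widely used imbalanced-learn package provides the same SMOTE oversampling (a sketch; assumes imbalanced-learn is installed):
###Code
from imblearn.over_sampling import SMOTE  # requires the imbalanced-learn package
X_res, y_res = SMOTE(random_state=random_seed).fit_resample(X_train, y_train)
###Output
_____no_output_____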
###Markdown
normalize
###Code
clean_pipeline = Pipeline([('imputer', preprocessing.Imputer(missing_values='NaN',strategy="median")),
('std_scaler', preprocessing.StandardScaler()),])
X_train_oversampled = clean_pipeline.fit_transform(X_train_oversampled)
# reuse the imputation/scaling statistics fitted on the training data for valid/test
X_valid = clean_pipeline.transform(X_valid)
X_test = clean_pipeline.transform(X_test)
###Output
_____no_output_____
###Markdown
transfer y into probability vector
###Code
y_train_oversampled_pro = np.zeros([y_train_oversampled.shape[0], 2])
for i in range(len(y_train_oversampled)):
if y_train_oversampled[i] == 1:
y_train_oversampled_pro[i] = np.array([0, 1])
else:
y_train_oversampled_pro[i] = np.array([1, 0])
y_train_oversampled = y_train_oversampled_pro
y_valid_pro = np.zeros([y_valid.shape[0], 2])
for i in range(len(y_valid)):
if y_valid[i] == 1:
y_valid_pro[i] = np.array([0, 1])
else:
y_valid_pro[i] = np.array([1, 0])
y_valid = y_valid_pro
y_test_pro = np.zeros([y_test.shape[0], 2])
for i in range(len(y_test)):
if y_test[i] == 1:
y_test_pro[i] = np.array([0, 1])
else:
y_test_pro[i] = np.array([1, 0])
y_test = y_test_pro
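# The three loops above can be written as one vectorized one-hot encoding each;
# an equivalent sketch (kept commented out because the arrays above are already
# encoded; assumes the raw labels are 0/1 integers):
# y_train_oversampled = np.eye(2)[y_train_oversampled.astype(int)]
# y_valid = np.eye(2)[y_valid.astype(int)]
# y_test = np.eye(2)[y_test.astype(int)]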
# y_valid_nonezero = np.count_nonzero(np.argmax(y_valid, 1))
# print("y_valid: 1 contains: ", y_valid_nonezero, "/",len(y_valid))
keep_prob = tf.placeholder(tf.float32)
xs = tf.placeholder(tf.float32, [None, 9])
ys = tf.placeholder(tf.float32, [None, 2])
learning_rate = tf.placeholder(tf.float32)
X_input = tf.reshape(xs,[-1,9,1]) # [n_samples, 9, 1]; -1 is inferred from the number of input samples
def MLP():
    ## func1 layer ##
    W_fc1 = weight_variable([9,2],name="W_fc1")
    b_fc1 = bias_variable([2],name="b_fc1")
    X_input_flat = tf.reshape(X_input, [-1,9])
    # raw scores (logits); dropout is applied before the final softmax
    h_fc1 = tf.matmul(X_input_flat, W_fc1)+b_fc1
    logits = tf.nn.dropout(h_fc1, keep_prob)
    # softmax probabilities, used for prediction/evaluation
    prediction = tf.nn.softmax(logits)
    var_dict = {'W_fc1': W_fc1,
                'b_fc1': b_fc1}
    return logits, prediction, var_dict
# output
logits, prediction, var_dict = MLP()
# the error between prediction and real data; softmax_cross_entropy_with_logits_v2
# applies softmax internally, so it takes the raw logits rather than the softmax output
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=ys))
train_op = tf.train.AdamOptimizer(learning_rate).minimize(cost)
valid_acc = []
valid_f1 = []
valid_loss = []
train_acc = []
train_f1 = []
train_loss = []
X_train_oversampled = np.array(X_train_oversampled, dtype=np.float32)
y_train_oversampled = np.array(y_train_oversampled, dtype=np.float32)
X_valid = np.array(X_valid, dtype=np.float32)
y_valid = np.array(y_valid, dtype=np.float32)
saver = tf.train.Saver(var_dict)
###Output
_____no_output_____
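###Markdown
The `evaluate` helper used in the training loop below is defined elsewhere in the project; a purely hypothetical sketch of what such a helper might compute from the global `prediction`, `xs`, `ys` and `keep_prob` tensors (kept commented out so it does not shadow the real implementation):
###Code
# def evaluate(X, y, sess):
#     probs = sess.run(prediction, feed_dict={xs: X, ys: y, keep_prob: 1})
#     y_true, y_pred = np.argmax(y, 1), np.argmax(probs, 1)
#     tp = np.sum((y_pred == 1) & (y_true == 1))
#     precision = tp / max(np.sum(y_pred == 1), 1)
#     recall = tp / max(np.sum(y_true == 1), 1)
#     f1 = 2 * precision * recall / max(precision + recall, 1e-12)
#     return precision, recall, f1, np.mean(y_pred == y_true)
###Output
_____no_output_____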
###Markdown
train
###Code
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
true_iteration = 1
for i in range(max_iterations):
feed = {xs : X_train_oversampled, ys : y_train_oversampled, keep_prob : kp, learning_rate : lr}
# train
# print("++++++++++++++++++ train ++++++++++++++++++")
loss, _ = sess.run([cost, train_op],feed_dict=feed)
precision, recall, f1_score, accuracy = evaluate(X_train_oversampled, y_train_oversampled, sess)
train_loss.append(loss)
train_f1.append(f1_score)
train_acc.append(accuracy)
# valid cost
loss_v = sess.run(cost, feed_dict={xs: X_valid, ys: y_valid, keep_prob: 1})
valid_loss.append(loss_v)
        # valid evaluation
# print("++++++++++++++++++ valid ++++++++++++++++++")
valid_precision, valid_recall, valid_f1_score, valid_accuracy = evaluate(X_valid, y_valid, sess)
valid_f1.append(valid_f1_score)
valid_acc.append(valid_accuracy)
if valid_f1_score >= 0.98:
break
print("Iteration: {}/{}\n".format(true_iteration, max_iterations),
"Train loss: {:6f}".format(loss),
"Train acc: {:.6f}".format(accuracy),
"Train f1: {:.6f}\n".format(f1_score),
"Valid loss: {:6f}".format(loss_v),
"Valid acc: {:.6f}".format(valid_accuracy),
"Valid f1: {:.6f}".format(valid_f1_score))
true_iteration += 1
# save_path = saver.save(sess,"mlp_2017/2017_save_net.ckpt")
# print("Save to path:", save_path)
# Plot training and valid loss
    t = np.arange(len(train_loss))  # one x value per recorded iteration (robust to early stopping)
plt.figure(figsize = (6,6))
plt.plot(t, np.array(train_loss), 'r-', t, np.array(valid_loss), 'b-')
plt.xlabel("iteration")
plt.ylabel("Loss")
plt.legend(['train', 'valid'], loc='upper right')
plt.savefig("../img/loss_before_ensemble.png")
plt.show()
# Plot Accuracies
plt.figure(figsize = (6,6))
plt.plot(t, np.array(train_acc), 'r-', t, valid_acc, 'b-')
plt.xlabel("iteration")
plt.ylabel("Accuray")
plt.legend(['train', 'valid'], loc='upper right')
plt.savefig("../img/acc_before_ensemble.png")
plt.show()
# Plot F1
plt.figure(figsize = (6,6))
plt.plot(t, np.array(train_f1), 'r-', t, valid_f1, 'b-')
plt.xlabel("iteration")
plt.ylabel("F1")
plt.legend(['train', 'valid'], loc='upper right')
plt.savefig("../img/f1_before_ensemble.png")
plt.show()
###Output
_____no_output_____ |
Facial Key Point Masks.ipynb | ###Markdown
Keypoint Detection: This notebook detects facial keypoints via webcam input.
###Code
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import cv2 as cv
%matplotlib inline
import torch
#from models import Net
from models import mod_cnn
#net = Net()
net =mod_cnn()
## TODO: load the best saved model parameters (by your path name)
## You'll need to un-comment the line below and add the correct name for *your* saved model
#net.load_state_dict(torch.load('saved_models/keypoints_model_1.pt'))
net.load_state_dict(torch.load('saved_models/keypoints_model_with5x5_1x1_final.pt'))
## print out your net and prepare it for testing (uncomment the line below)
net.eval()
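# net.eval() switches the model to inference mode: dropout layers stop dropping
# units and any batch-norm layers use their running statistics, which keeps the
# keypoint predictions stable from frame to frame.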
def draw_moustache(img,key_pts):
image_copy = np.copy(img)
    # top-left location for the moustache to go
    # 3 = point on the left side of the jaw line (standard 68-keypoint scheme)
    x = int(key_pts[3, 0])
    y = int(key_pts[3, 1])
    # height and width of the moustache
    # h = height of the mouth opening (keypoints 57 and 62)
    h = int(abs(key_pts[62,1] - key_pts[57,1]))
    # w = width of the jaw (keypoints 3 to 13)
    w = int(abs(key_pts[3,0] - key_pts[13,0]))
    # read in the moustache image, keeping its alpha channel
    moustache = cv.imread('images/moustache.png', cv.IMREAD_UNCHANGED)
    # resize the moustache
    new_moustache = cv.resize(moustache, (w, h), interpolation = cv.INTER_CUBIC)
    # get region of interest on the face to change
    roi_color = image_copy[y:y+h,x:x+w]
    # find all non-transparent pts
    ind = np.argwhere(new_moustache[:,:,3] > 0)
    # for each non-transparent point, replace the original image pixel with that of the moustache
    for i in range(3):
        roi_color[ind[:,0],ind[:,1],i] = new_moustache[ind[:,0],ind[:,1],i]
    # set the area of the image to the changed region with the moustache
    image_copy[y:y+h,x:x+w] = roi_color
    return image_copy
def draw_sunglasses(img,key_pts):
image_copy = np.copy(img)
# top-left location for sunglasses to go
    # 17 = edge of the left eyebrow (0-indexed)
x = int(key_pts[17, 0])
y = int(key_pts[17, 1])
# height and width of sunglasses
# h = length of nose
h = int(abs(key_pts[27,1] - key_pts[34,1]))
# w = left to right eyebrow edges
w = int(abs(key_pts[17,0] - key_pts[26,0]))
# read in sunglasses
sunglasses = cv.imread('images/sunglasses.png', cv.IMREAD_UNCHANGED)
# resize sunglasses
new_sunglasses = cv.resize(sunglasses, (w, h), interpolation = cv.INTER_CUBIC)
# get region of interest on the face to change
roi_color = image_copy[y:y+h,x:x+w]
# find all non-transparent pts
ind = np.argwhere(new_sunglasses[:,:,3] > 0)
# for each non-transparent point, replace the original image pixel with that of the new_sunglasses
for i in range(3):
roi_color[ind[:,0],ind[:,1],i] = new_sunglasses[ind[:,0],ind[:,1],i]
# set the area of the image to the changed region with sunglasses
image_copy[y:y+h,x:x+w] = roi_color
return image_copy
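# Both overlay helpers rely on the PNG's 4th (alpha) channel, which
# cv.IMREAD_UNCHANGED preserves: only pixels with alpha > 0 are copied,
# so the sticker's transparent background never overwrites the face.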
capture=cv.VideoCapture(0)
face_detector=cv.CascadeClassifier('Haarcascades/haarcascade_frontalface_default.xml')
def drawKeyPts(im,keyp,col,th):
for curKey in keyp:
x=np.int(curKey[0])
y=np.int(curKey[1])
size = np.int(curKey.size)
cv.circle(im,(x,y),size, col,thickness=th, lineType=8, shift=0)
#plt.imshow(im)
return im
while (True):
ret, img=capture.read()
#Haar Cascade Classifier
faces=face_detector.detectMultiScale(img,1.3,5)
padding=100
for (x,y,w,h) in faces:
#if w >100:
#Draw bounding rectangle.
cv.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
#Crop the image.
        # clamp the padded crop so negative indices don't wrap to the far side of the frame
        face_detect=img[max(int(y)-padding,0):int(y+h)+padding,max(int(x)-padding,0):int(x+w)+padding]
#Resize to VGGFace dimensions.
face_detect=cv.resize(face_detect,(224,224))
#face_detect_rgb=face_detect.copy()
#PreProcess image.
face_detect_gray=cv.cvtColor(face_detect,cv.COLOR_BGR2GRAY)
face_detect_gray=face_detect_gray/255.0
roi=torch.from_numpy(face_detect_gray)
roi.resize_(1,1,224,224)
#roi.unsqueeze_(0)
roi = roi.type(torch.FloatTensor)
#print('torch shape',roi.shape)
## TODO: Make facial keypoint predictions using your loaded, trained network
output=net(roi)
## TODO: Display each detected face and the corresponding keypoints
output_pts = output.view(output.size()[0], 68, -1)
predicted_key_pts = output_pts[0].data
predicted_key_pts = predicted_key_pts.numpy()
# undo normalization of keypoints
predicted_key_pts = predicted_key_pts*50.0+100
zero_out=np.zeros((1,1))
#print(type(predicted_key_pts),predicted_key_pts.shape)
outimg_keypts = drawKeyPts(img,predicted_key_pts,(0,255,0),2)
outimg=draw_sunglasses(face_detect,predicted_key_pts)
#outimg=draw_moustache(outimg,predicted_key_pts)
# blob=cv.drawKeypoints(img,predicted_key_pts,zero_out,(255,0,255),cv.DRAW_MATCHES_FLAGS_DEFAULT)
cv.imshow('Image',outimg)
cv.imshow('Orig',face_detect)
cv.imshow('keypts',outimg_keypts)
if cv.waitKey(1) == 13: #13 is the Enter Key
break
capture.release()
cv.destroyAllWindows()
###Output
_____no_output_____ |