# Assignment: Global average budgets in the CESM pre-industrial control simulation
This notebook is part of [The Climate Laboratory](https://brian-rose.github.io/ClimateLaboratoryBook) by [Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany.
## Learning goals
Students completing this assignment will gain the following skills and concepts:
- Continued practice working with the Jupyter notebook
- Familiarity with atmospheric output from the CESM simulation
- More complete comparison of the global energy budget in the CESM control simulation to the observations
- Validation of the annual cycle of surface temperature against observations
- Opportunity to formulate a hypothesis about these global temperature variations
- Python programming skills: basic xarray usage: opening gridded dataset and taking averages
## Instructions
- In a local copy of this notebook (on the JupyterHub or your own device) **add your answers in additional cells**.
- **Complete the required problems** below.
- Remember to set your cell types to `Markdown` for text, and `Code` for Python code!
- **Include comments** in your code to explain your method as necessary.
- Remember to actually answer the questions. **Written answers are required** (not just code and figures!)
- Submit your solutions in **a single Jupyter notebook** that contains your text, your code, and your figures.
- *Make sure that your notebook* ***runs cleanly without errors:***
- Save your notebook
- From the `Kernel` menu, select `Restart & Run All`
- Did the notebook run from start to finish without error and produce the expected output?
- If yes, save again and submit your notebook file
- If no, fix the errors and try again.
## Problem 1: The global energy budget in the CESM control simulation
Compute the **global, time average** of each of the following quantities, and compare them to the observed values from the Trenberth and Fasullo (2012) figure in the course notes:
- Solar Radiation budget:
- Incoming Solar Radiation, or Insolation
- Reflected Solar Radiation at the top of atmosphere
- Solar Radiation Reflected by Surface
- Solar Radiation Absorbed by Surface
- Solar Radiation Reflected by Clouds and Atmosphere *(you can calculate this as the difference between the reflected radiation at the top of atmosphere and reflected radiation at the surface)*
- Total Absorbed Solar Radiation (ASR) at the top of atmosphere
- Solar Radiation Absorbed by Atmosphere *(you can calculate this as the residual of your budget, i.e. what's left over after accounting for all other absorption and reflection)*
- Longwave Radiation budget:
- Outgoing Longwave Radiation
- Upward emission from the surface
- Downwelling radiation at the surface
- Other surface fluxes:
- "Thermals", or *sensible heat flux*. *You will find this in the field called `SHFLX` in your dataset.*
- "Evapotranspiration", or *latent heat flux*. *You will find this in the field called `LHFLX` in your dataset.*
*Note: we will look more carefully at atmospheric absorption and emission processes later. You do not need to try to calculate terms such as "Emitted by Atmosphere" or "Atmospheric Window".*
**Based on your results above, answer the following questions:**
- Is the CESM control simulation at (or near) **energy balance**?
- Do you think this simulation is near equilibrium?
- Summarize in your own words what you think are the most important similarities and differences of the global energy budgets in the CESM simulation and the observations.
## Problem 2: Verifying the annual cycle in global mean surface temperature against observations
In the class notes we plotted the **timeseries of global mean surface temperature** in the CESM control simulation, and found an **annual cycle**. The purpose of this exercise is to verify that this phenomenon is also found in the observed temperature record. If so, then we can conclude that it is a real feature of Earth's climate and not an artifact of the numerical model.
For observations, we will use the **NCEP Reanalysis data**.
*Reanalysis data is really a blend of observations and output from numerical weather prediction models. It represents our “best guess” at conditions over the whole globe, including regions where observations are very sparse.*
The necessary data are all served up over the internet. We will look at monthly climatologies averaged over the 30 year period 1981 - 2010.
You can browse the available data here:
https://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis.derived.html
**Surface air temperature** is contained in a file called `air.2m.mon.ltm.nc`, which is found in the collection called `Surface Fluxes`.
Here's a link directly to the catalog page for this data file:
https://www.esrl.noaa.gov/psd/thredds/catalog/Datasets/ncep.reanalysis.derived/surface_gauss/catalog.html?dataset=Datasets/ncep.reanalysis.derived/surface_gauss/air.2m.day.ltm.nc
Now click on the `OPeNDAP` link. A page opens up with lots of information about the contents of the file. The `Data URL` is what we need to read the data into our Python session. For example, this code opens the file and displays a list of the variables it contains:
```
import xarray as xr
url = "http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis.derived/surface_gauss/air.2m.mon.ltm.nc"
ncep_air2m = xr.open_dataset(url)
ncep_air2m
```
The temperature data is called `air`. Take a look at the details:
```
ncep_air2m.air
```
Notice that the dimensions are `(time: 12, lat: 94, lon: 192)`. The time dimension is calendar months. But note that the lat/lon grid is not the same as our model output!
*Think about how you will handle calculating the global average of these data.*
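*One possible approach (a sketch only, not the only valid method): weight each grid cell by the cosine of its latitude before averaging, since cells near the poles cover less area. This assumes a reasonably recent version of xarray that provides the `weighted` method.*
```
# Sketch: area-weighted global mean using cosine-of-latitude weights
import numpy as np
weights = np.cos(np.deg2rad(ncep_air2m.lat))
global_mean_climatology = ncep_air2m.air.weighted(weights).mean(dim=("lat", "lon"))
```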
### Your task:
- Make a well-labeled timeseries graph of the global-averaged observed average surface air temperature climatology.
- Verify that the annual cycle we found in the CESM simulation also exists in the observations.
- In your own words, suggest a plausible physical explanation for why this annual cycle exists.
____________
## Credits
This notebook is part of [The Climate Laboratory](https://brian-rose.github.io/ClimateLaboratoryBook), an open-source textbook developed and maintained by [Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany.
It is licensed for free and open consumption under the
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.
Development of these notes and the [climlab software](https://github.com/brian-rose/climlab) is partially supported by the National Science Foundation under award AGS-1455071 to Brian Rose. Any opinions, findings, conclusions or recommendations expressed here are mine and do not necessarily reflect the views of the National Science Foundation.
____________
## Preliminaries
```
# Load libraries
import numpy as np
from keras.datasets import imdb
from keras.preprocessing.text import Tokenizer
from keras import models
from keras import layers
# Set random seed
np.random.seed(0)
```
## Load Movie Review Data
```
# Set the number of features we want
number_of_features = 1000
# Load data and target vector from movie review data
(train_data, train_target), (test_data, test_target) = imdb.load_data(num_words=number_of_features)
# Convert movie review data to one-hot encoded feature matrix
tokenizer = Tokenizer(num_words=number_of_features)
train_features = tokenizer.sequences_to_matrix(train_data, mode='binary')
test_features = tokenizer.sequences_to_matrix(test_data, mode='binary')
```
## Construct Neural Network Architecture
Because this is a binary classification problem, one common choice is to use the sigmoid activation function in a one-unit output layer.
```
# Start neural network
network = models.Sequential()
# Add fully connected layer with a ReLU activation function
network.add(layers.Dense(units=16, activation='relu', input_shape=(number_of_features,)))
# Add fully connected layer with a ReLU activation function
network.add(layers.Dense(units=16, activation='relu'))
# Add fully connected layer with a sigmoid activation function
network.add(layers.Dense(units=1, activation='sigmoid'))
```
## Compile Feedforward Neural Network
```
# Compile neural network
network.compile(loss='binary_crossentropy', # Cross-entropy
optimizer='rmsprop', # Root Mean Square Propagation
metrics=['accuracy']) # Accuracy performance metric
```
## Train Feedforward Neural Network
In Keras, we train our neural network using the `fit` method. There are six significant parameters to define. The first two parameters are the features and target vector of the training data.
The `epochs` parameter defines how many epochs to use when training the data. `verbose` determines how much information is output during training: `0` prints nothing, `1` shows a progress bar, and `2` prints one log line per epoch. `batch_size` sets the number of observations to propagate through the network before updating the parameters.
Finally, we held out a test set of data to use for evaluating the model. The test features and test target vector can be passed as the `validation_data` argument, and Keras will evaluate on them at the end of each epoch. Alternatively, we could have used `validation_split` to define what fraction of the training data to hold out for evaluation.
In scikit-learn, the `fit` method returns a trained model; in Keras, `fit` returns a `History` object containing the loss values and performance metrics at each epoch.
```
# Train neural network
history = network.fit(train_features, # Features
train_target, # Target vector
epochs=3, # Number of epochs
verbose=1, # Print description after each epoch
batch_size=100, # Number of observations per batch
validation_data=(test_features, test_target)) # Data for evaluation
```
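The returned `History` object stores these values in its `history` dictionary, so the learning curves can be inspected or plotted afterwards. A minimal sketch (the exact key names depend on the Keras version and the metrics passed to `compile`):
```
# Per-epoch values recorded by fit(); typical keys: loss, accuracy, val_loss, val_accuracy
print(history.history.keys())
print(history.history['loss'])
```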
## Self Consistent Field Procedure (SCF)
<p>
<br />
The self-consistent field (SCF) procedure in the Hartree-Fock method generates the molecular orbitals (MOs) and their eigenvalues from an initial guess. The guess in this program is the simplest possible one: the core electronic Hamiltonian is used as the starting Fock matrix. Once the MOs and their energies have been computed, they are used to generate the density matrix $P$, which in turn is used to construct the electronic portion of the Fock matrix, $G$. A new Fock matrix is then computed as $F=H+G$ and fed back into the beginning of the SCF procedure to compute another set of MOs and energies. The electronic expectation energy of the current iteration is then compared to that of the previous iteration; if the two values lie within a chosen tolerance of each other, they are said to have converged and the procedure ends. Otherwise, the procedure continues with further iterations in the same fashion until convergence is reached, at which point the molecular orbitals and the expectation energy of the system are as accurate as the theory can provide.
<br />
</p>
<br>
## Density Matrix
<p>
<br />
The density matrix describes how the electronic density is distributed over the occupied MOs. It is computed from the MO coefficients (obtained by diagonalizing the Fock matrix $F$) as follows:
$$ P_{\mu v}=2\sum_{a}^{N/2}{C_{\mu a}C^{*}_{va}} $$
Here $C$ is the MO coefficient matrix of the molecular system, obtained from the eigenvectors of the MO-basis Fock matrix and transformed back into the AO basis. $N$ is the total number of electrons in the molecular system; the sum over $N/2$ occupied orbitals restricts this program to closed-shell systems containing an even number of electrons.
Located on Szabo Pg. 139 & 163.
</p>
```
import numpy as np
def densityMatrix(C, N, size):
P = np.zeros([size, size])
#iterate through all indexes of the density matrix
for u in range(size):
for v in range(size):
for a in range(int(N/2)):
P[u, v] += 2 * C[u,a] * C[v, a]
return P
```
## Two Electron Term
<p>
<br />
The two-electron term combines the electron density with the electron-electron repulsion integrals and is used to generate the next Fock matrix, making the Fock matrix a function of the electron density. This is what gives the SCF procedure its iterative nature: each Fock matrix is built from the density obtained with the previous one. The two-electron term is referred to as the $G$ matrix and is computed as follows:
$$ G_{\mu v} = \sum_{\lambda\sigma}{P_{\lambda\sigma}[ (\mu v|\sigma\lambda) - \dfrac{1}{2}(\mu\lambda|\sigma v) ]} $$
Here $(\mu v|\sigma\lambda)$ is the element of the electron-electron repulsion tensor at the corresponding indexes.
</p>
```
def G(electronRepulsion, P, size):
#init G matrix
G = np.zeros([size, size])
#loop over all the required indexes to generate the G matrix
for u in range(size):
for v in range(size):
for y in range(size):
for s in range(size):
G[u, v] += P[y, s] * (electronRepulsion[u][v][s][y] - ( 0.5 * electronRepulsion[u][y][s][v] ) )
return G
```
## Transformation Matrix
<p>
<br/>
The transformation matrix $X$ is computed from the atomic orbital overlap matrix $S$ and is used to orthogonalize the basis, allowing quantities to be transformed between the atomic orbital and molecular orbital bases; the overlap matrix measures how strongly the atomic orbitals overlap and thus how molecular orbitals form from them. The transformation matrix is obtained by orthogonalization of the basis through the <i>Canonical Orthogonalization</i> method, which uses the following equation:
$$X_{ij} = \frac{U_{ij}}{s^{1/2}_j} $$
$U_{ij}$ refers to the eigenvector matrix of $S$, while $s_j$ refers to the eigenvalues of the overlap matrix. NumPy is used for the diagonalization of $S$.
Located on Szabo Pg. 16 & 173
</p>
```
def X(S, size):
#init transformation matrix
X = np.zeros([size, size])
    #diagonalize S to obtain eigenvalues and eigenvectors
eigenValues, eigenVectors = np.linalg.eigh(S)
X = eigenVectors * (eigenValues ** -0.5)
return X
```
## Expectation Energy
<p>
<br />
The expectation energy is the electronic energy of the system, computed at each SCF iteration from the density, Fock, and core Hamiltonian matrices. It is computed as follows:
$$ \dfrac{1}{2}\sum_{\mu}{\sum_{v}{P_{v\mu}(H^{core}_{\mu v} + F_{\mu v})}} $$
</p>
```
def expectationEnergy(H, F, P):
#get size and init E to 0
size = len(H)
E = 0
#iterate through all indexes needed
for u in range(size):
for v in range(size):
E += P[v, u] * (H[u, v] + F[u, v] )
return E * 0.5
```
## Nuclear-Nuclear Repulsion
<p>
<br />
The amount of Coulombic repulsion two nuclei experience due to their positive charges. Equation on page 165 of Szabo.
$$ V_{ij} = \dfrac{Z_{i}Z_{j}}{|r_i - r_j|} $$
</p>
```
def nuclearRepulsion(molecule):
repulsion = 0
#iterate through all atoms present
for atom1 in molecule.atomData:
for atom2 in molecule.atomData:
if(atom1 == atom2):
continue
repulsion += (atom1.Z * atom2.Z) / (atom1.coord - atom2.coord).magnitude()
return repulsion * 0.5
```
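Putting the pieces together, here is a minimal sketch of the SCF driver loop described above. It assumes the core Hamiltonian `H`, the overlap matrix `S`, and the electron-repulsion integrals `electronRepulsion` are already available as NumPy arrays, along with the electron count `N` and basis size `size`; it illustrates the flow of the procedure rather than reproducing the program's actual driver. The total energy would additionally include `nuclearRepulsion(molecule)`.
```
def scf(H, S, electronRepulsion, N, size, tol=1e-6, max_iterations=100):
    # Orthogonalizing transformation built from the overlap matrix
    Xmat = X(S, size)
    # Initial guess: use the core Hamiltonian as the first Fock matrix
    F = H
    E_old = 0.0
    for iteration in range(max_iterations):
        # Transform the Fock matrix into the orthogonal basis and diagonalize it
        F_prime = Xmat.T @ F @ Xmat
        eigenvalues, C_prime = np.linalg.eigh(F_prime)
        # Transform the MO coefficients back into the AO basis
        C = Xmat @ C_prime
        # Build the density matrix and the next Fock matrix F = H + G
        P = densityMatrix(C, N, size)
        F = H + G(electronRepulsion, P, size)
        # Compare the expectation energy with the previous iteration
        E = expectationEnergy(H, F, P)
        if abs(E - E_old) < tol:
            break
        E_old = E
    return E, C, eigenvalues
```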
### PySpark RDD API
https://www.kaggle.com/divyansh22/flight-delay-prediction
(Note: flights.parquet is not included in the datasets folder)
* Exercises
```
from pyspark.sql import *
from pyspark.sql.functions import *
from pyspark import SparkContext
import pandas as pd
from pyspark.sql import SQLContext, SparkSession
# create the Spark Session
spark = SparkSession.builder.getOrCreate()
# create the Spark Context
sc = spark.sparkContext
sqlContext = SQLContext(sc)
rdd = sqlContext.read.parquet('flights.parquet').rdd.repartition(8).cache() # Cache it, otherwise the repartition is redone every time we use the rdd
rdd.count()
rdd.takeSample(False, 2) # Takes the requested number of rows and brings them to the driver
rdd.sample(False, 0.1) # Takes a fraction of the rdd and returns an RDD
# DISTANCE, DEP_TIME, ARR_TIME, DAY_OF_WEEK, ORIGIN, DEST, TAIL_NUM, DAY_OF_MONTH, DAY_OF_WEEK
# Example: rdd.map(lambda x: (x.ORIGIN, x.DEST))
```
**Exercise 1:** Compute the number of flights per airline (use OP_UNIQUE_CARRIER). Find the ten (10) airlines with the most flights. Return a Python list with the codes of these 10 airlines.
```
carriers = rdd.map(lambda x: (x.OP_UNIQUE_CARRIER, 1))\
.reduceByKey(lambda x,y: (x + y)).cache()
top10carriers = carriers.takeOrdered(10, lambda x: -x[1])
top10carriers
top10carriers = [x[0] for x in top10carriers]
top10carriers
```
**Exercise 2:** Compute the average rate of flights arriving 15 minutes late or more (ARR_DEL15 == 1) for the 10 airlines with the most flights (use the previous exercise), and report the three best and the three worst among them.
```
delays = rdd.filter(lambda x: x.OP_UNIQUE_CARRIER in top10carriers)\
.map(lambda x: (x.OP_UNIQUE_CARRIER, (float(x.ARR_DEL15), 1)))\
.reduceByKey(lambda x,y: ( x[0] + y[0], x[1] + y[1]))\
.map(lambda x: (x[0], x[1][0] / x[1][1])).collect()
sorted(delays, key = lambda x:x[1], reverse = True)[:3]
sorted(delays, key = lambda x:x[1], reverse = False)[:3]
```
**Exercise 3:** Compute the number of flights per route, using ORIGIN and DEST to define the route. Return an RDD with the structure (ROUTE, #Flights). Also report the 10 most frequent routes and their flight counts.
```
routes = rdd.map(lambda x: ((x.ORIGIN + x.DEST),1))\
.reduceByKey(lambda x,y : x + y)\
.cache()
routes.takeOrdered(10, lambda x:-x[1])
```
**Exercise 4:** Now consider the number of airlines that operate each route. We want to know the ten routes served by the largest number of airlines, and the ten airlines with the largest number of routes.
Return: a list of 10 tuples of the form (ROUTE, #CARRIERS), and a list of 10 tuples of the form (CARRIER, #ROUTES)
```
routes_by_lines = rdd.map(lambda x: ((x.ORIGIN + x.DEST, x.OP_UNIQUE_CARRIER), 1))\
.reduceByKey(lambda x,y : x + y).cache()
routes_by_lines.take(1)
routes_by_lines.map(lambda x: (x[0][0],1))\
.reduceByKey(lambda x,y: x + y)\
.takeOrdered(10, lambda x: -x[1])
routes_by_lines.map(lambda x: (x[0][1],1))\
.reduceByKey(lambda x,y: x + y).takeOrdered(10, lambda x: -x[1])
```
**Exercise 5:** For each air route, compute the average flight time, calculating ARR_TIME - DEP_TIME with the provided function. The flight time in minutes must be adjusted by adding TIMEDIFF, the time-zone difference between the cities (in hours). The calculation is therefore:
```
hhmmtimediff(x.DEP_TIME,x.ARR_TIME) + (x.TIMEDIFF * 60)
```
Bonus points: in addition to the average flight time, compute the standard deviation of the flight time for each route.
Return:
- A list of 10 tuples of the form (ROUTE, average_time)
- A list of 10 tuples of the form (ROUTE, time_std) (only for routes with more than 50 flights)
```
from numpy import sqrt
# Computes time diff in format HHMM (in minutes)
def hhmmtimediff(t1, t2):
m2 = (t2 // 100) * 60 + (t2 % 100)
m1 = (t1 // 100) * 60 + (t1 % 100)
return m2 - m1
routes_duration = rdd.map(lambda x: ((x.ORIGIN + x.DEST), hhmmtimediff(x.DEP_TIME, x.ARR_TIME) + (x.TIMEDIFF * 60)))\
.filter(lambda x:x[0] != 'GUMHNL')\
.filter(lambda x:x[1]>0)\
.cache()
routes_duration.take(10)
routes_duration.map(lambda x: (x[0], (x[1], 1))).reduceByKey(lambda x, y: (x[0] + y[0], x[1] + y[1]))\
.map(lambda x: (x[0], x[1][0] / x[1][1])).take(10)
```
For the standard deviation we use the one-pass formula:
stdev = sqrt((sum_x2 / n) - (mean * mean))
```
routes_duration.map(lambda x: (x[0], (x[1], 1, x[1]**2))).reduceByKey(lambda x, y: (x[0] + y[0], x[1] + y[1], x[2] + y[2]))\
.map(lambda x: (x[0], sqrt(x[1][2]/x[1][1] - (x[1][0] / x[1][1])**2))).take(10)
# Same solution, but converted to Row objects
routes_stats = routes_duration.map(lambda x: (x[0],(x[1],x[1] ** 2,1)))\
.reduceByKey(lambda x, y : (x[0] + y[0], x[1] + y[1], x[2] + y[2]))\
.map(lambda x: (x[0], x[1][0] / x[1][2], x[1][2], sqrt((x[1][1] / x[1][2])- ((x[1][0]/x[1][2])**2))))\
.map(lambda x: Row(ROUTE=x[0],AVERAGE_DURATION=x[1],NUM_FLIGHTS=x[2],DURATION_STD=x[3] )).cache()
routes_stats.takeOrdered(10, lambda x: -x.AVERAGE_DURATION)
routes_stats.takeOrdered(10, lambda x: x.AVERAGE_DURATION)
routes_stats.takeOrdered(10, lambda x: -x.DURATION_STD)
```
**Exercise 6:** For each airline, count how many of its flights had a duration exceeding the route's average duration (across all airlines) by 15 minutes or more. Report the 10 best airlines and the 10 worst according to this metric.
```
rdd_1 = routes_stats.map(lambda x: (x.ROUTE, x.AVERAGE_DURATION))
rdd_2 = rdd.map(lambda x: (x.ORIGIN + x.DEST,(x.OP_UNIQUE_CARRIER, hhmmtimediff(x.DEP_TIME, x.ARR_TIME) + (x.TIMEDIFF * 60))))
carrier_delays = rdd_1.join(rdd_2).cache()
carrier_delays.take(1)
delays_by_carrier = carrier_delays.filter(lambda x: (x[1][1][1] - x[1][0]) > 15)\
.map(lambda x: (x[1][1][0], 1))\
.reduceByKey(lambda x, y: x + y).cache()
# 10 best
delays_by_carrier.takeOrdered(10, lambda x: x[1])
# 10 worst
delays_by_carrier.takeOrdered(10, lambda x: -x[1])
carrier_delays = carrier_delays.filter(lambda x: (x[1][1][1] - x[1][0]) > 15)\
.map(lambda x: (x[1][1][0],1))\
.reduceByKey(lambda x,y: x + y)\
.cache()
carrier_delays.takeOrdered(10, lambda x: -x[1])
carrier_delays.takeOrdered(10, lambda x: x[1])
```
#### Appendix: extra notes
* CombineByKey
```
r1 = sc.parallelize([('A',1),('B',2),('C',3),('A',2),('C',28),('A',2)],4)
r1.reduceByKey(lambda x,y : x + y).collect()
r1.combineByKey(lambda x: (x,1), lambda x,y: (x[0] + y[0] ,x[1] + y[1]), lambda x,y: (x[0] + y[0], x[1] + y[1])).collect()
```
* SQL tables
```
sqlContext.read.parquet('flights.parquet').registerTempTable('flights')
sqlContext.sql("SELECT distinct(OP_UNIQUE_CARRIER) from flights WHERE ORIGIN = 'ORD'").show()
```
```
from rdkit import Chem
from rdkit.Chem import Draw
drugbank_input = Chem.SDMolSupplier('../data/drugbank.sdf')
drugbank = [m for m in drugbank_input if m]
```
# Structural keys, MACCS
We already know how to do a simple substructure search.
But we can also describe, categorize and compare structures based on what substructures they contain.
The most straightforward way to do this is a structural key - a predefined set of structural patterns.
http://www.daylight.com/dayhtml/doc/theory/theory.finger.html
Let's make our own structural key!
## Custom structural key
a 7-bit key telling if the structure:
1. has a COO group
2. has a benzene core
3. has a nitrogen atom
4. has some halogen atom
5. has a triple bond
6. has aliphatic carbon
7. has sulphur in it
```
# defining the key substructures
patterns = {
'coo': Chem.MolFromSmarts('C(=O)O'),
'benzene': Chem.MolFromSmarts('c1ccccc1'),
'n': Chem.MolFromSmarts('[#7]'), # N is for aliphatic, n for aromatic, #7 (atom. number 7) is for both
'halogen': Chem.MolFromSmarts('[F,Cl,Br,I]'),
'triple_bond': Chem.MolFromSmarts('*#*'),
'aliphatic_c': Chem.MolFromSmarts('C'),
's': Chem.MolFromSmarts('[#16]'),
}
patternorder = ('coo', 'benzene', 'n', 'halogen', 'triple_bond', 'aliphatic_c', 's')
def customkey(mol):
return tuple([mol.HasSubstructMatch(patterns[patternname]) for patternname in patternorder])
```
Let's test our new great structural key!
```
customkey(drugbank[666])
Draw.MolToImage(drugbank[666])
```
COO is there, benzene core too, as well as nitrogen.
No triple bonds or halogens, aliphatic carbons are present, but sulphur isn't.
The computed structural key seems correct. Let's try one more?
```
customkey(drugbank[33])
Draw.MolToImage(drugbank[33])
```
... seems about right.
Let's calculate our custom keys for the entire DrugBank database:
```
drugbank_fps = [customkey(mol) for mol in drugbank]
len(drugbank), len(drugbank_fps) # same length
```
Is there any compound in Drugbank that contains all the patterns in our fingerprints?
```
all_pattern_compounds = [m for m, fp in zip(drugbank, drugbank_fps) if all(fp)]
len(all_pattern_compounds)
Draw.MolsToGridImage(all_pattern_compounds, subImgSize=(300, 300))
```
This way, we can define any structural key tailored for our particular needs or problems. However, making a new structural key means that
1. it often takes a lot of work
2. nobody else will probably use it
3. is often just not necessary
There are some predefined keys:
## Predefined keys, MACCS
http://rdkit.org/Python_Docs/rdkit.Chem.MACCSkeys-pysrc.html
```
from rdkit.Chem import MACCSkeys
maccs_fps = [MACCSkeys.GenMACCSKeys(mol) for mol in drugbank]
```
RDKit represents fingerprints with its own datatype that provides additional functionality, much like `mol` for molecules:
```
maccs_fps[0]
list(maccs_fps[33].GetOnBits()) # a convenient way to get the bits that were set
```
# Similarity
So, we can now search for molecules with specific substructures, but how can we compare them? Or, even better, how do we quantify their similarity?
Probably the most used method to compare binary vectors (structural keys, fingerprints...) is the Tanimoto similarity, aka Jaccard index:
https://en.wikipedia.org/wiki/Jaccard_index
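In terms of the on-bits of two fingerprints $A$ and $B$, the similarity is the number of shared on-bits divided by the total number of distinct on-bits:
$$ T(A, B) = \frac{|A \cap B|}{|A \cup B|} $$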
let's implement it on our amazing fingerprint:
```
def tanimoto_similarity(fp1, fp2):
fp1_on_bits = set([i for i, bit in enumerate(fp1) if bit]) # indices of True values
fp2_on_bits = set([i for i, bit in enumerate(fp2) if bit]) # indices of True values
all_bits = fp1_on_bits.union(fp2_on_bits)
shared_bits = fp1_on_bits.intersection(fp2_on_bits)
if not all_bits:
return 0 # avoid division by zero for empty fingerprint
else:
return len(shared_bits) / len(all_bits)
```
let's test out our similarity metric implementation:
```
tanimoto_similarity((True, True, False, False), (True, True, False, False))
tanimoto_similarity((True, True, False, False), (False, False, False, False))
tanimoto_similarity((True, True, False, False), (False, False, True, False))
tanimoto_similarity((True, True, False, False), (False, True, False, False))
tanimoto_similarity((True, True, False, False), (False, True, True, True))
```
... seems legit. Now, let's try similarity searching drugbank with a new query structure:
```
salicylica = Chem.MolFromSmiles("c1ccc(c(c1)C(=O)O)O")
salicylica_key = customkey(salicylica)
salicylica_key
salicylica_similarities = [tanimoto_similarity(salicylica_key, mol_fp) for mol_fp in drugbank_fps]
len(salicylica_similarities), min(salicylica_similarities), max(salicylica_similarities)
perfect_match_indexes = [i for i, similarity in enumerate(salicylica_similarities) if similarity == 1]
len(perfect_match_indexes)
```
Some examples with the perfect key match:
```
Draw.MolsToGridImage([drugbank[i] for i in perfect_match_indexes[:9]], subImgSize=(300, 300))
```
Now the same query with salicylic acid, this time using MACCS keys:
```
salicylica_maccs = MACCSkeys.GenMACCSKeys(salicylica)
salicylica_maccs
from rdkit import DataStructs
salicylica_maccs_similarities = [DataStructs.FingerprintSimilarity(salicylica_maccs, mol_fp) for mol_fp in maccs_fps]
len(salicylica_maccs_similarities), min(salicylica_maccs_similarities), max(salicylica_maccs_similarities)
perfect_match_indexes = [i for i, similarity in enumerate(salicylica_maccs_similarities) if similarity == 1]
perfect_match_indexes
Draw.MolToImage(drugbank[815])
```
TODO: top 9 matches with their similarity values
TODO: hashed fingerprints
# V-Type Three-Level: Weak CW, √4π Coupling: Double Optical Surfer
```
import numpy as np
sech_fwhm_conv = 1./2.6339157938 # = 1/(2*arccosh(2)): converts a FWHM into the sech width parameter τ
t_width = 1.0*sech_fwhm_conv # [τ]
print('t_width', t_width)
n = 4.0 # For a pulse area of nπ
ampl = n/t_width/(2*np.pi) # Pulse amplitude [2π Γ]
print('ampl', ampl)
mb_solve_json = """
{
"atom": {
"fields": [
{
"coupled_levels": [[0, 1]],
"detuning": 0.0,
"detuning_positive": true,
"label": "probe",
"rabi_freq": 1.0e-3,
"rabi_freq_t_args":
{
"ampl": 1.0,
"on": -1.0,
"fwhm": 0.3796628587572578
},
"rabi_freq_t_func": "ramp_on"
},
{
"coupled_levels": [[0, 2]],
"detuning": 0.0,
"detuning_positive": true,
"label": "coupling",
"rabi_freq": 1.6768028730843334,
"rabi_freq_t_args":
{
"ampl": 1.0,
"centre": 0.0,
"width": 0.3796628587572578
},
"rabi_freq_t_func": "sech"
}
],
"num_states": 3
},
"t_min": -2.0,
"t_max": 10.0,
"t_steps": 120,
"z_min": -0.2,
"z_max": 1.2,
"z_steps": 140,
"z_steps_inner": 2,
"interaction_strengths": [10.0, 10.0],
"savefile": "mbs-vee-weak-cw-sech-4pi"
}
"""
from maxwellbloch import mb_solve
mb_solve_00 = mb_solve.MBSolve().from_json_str(mb_solve_json)
%time Omegas_zt, states_zt = mb_solve_00.mbsolve(recalc=False)
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style('darkgrid')
fig = plt.figure(1, figsize=(16, 12))
# Probe
ax = fig.add_subplot(211)
cmap_range = np.linspace(0.0, 2.5e-3, 11)
cf = ax.contourf(mb_solve_00.tlist, mb_solve_00.zlist,
np.abs(mb_solve_00.Omegas_zt[0]/(2*np.pi)),
cmap_range, cmap=plt.cm.Blues)
ax.set_title('Rabi Frequency ($\Gamma / 2\pi $)')
ax.set_ylabel('Distance ($L$)')
ax.text(0.02, 0.95, 'Probe',
verticalalignment='top', horizontalalignment='left',
transform=ax.transAxes,
color='k', fontsize=16, alpha=0.5)
plt.colorbar(cf)
# Coupling
ax = fig.add_subplot(212)
cmap_range = np.linspace(0.0, 2.5, 11)
cf = ax.contourf(mb_solve_00.tlist, mb_solve_00.zlist,
np.abs(mb_solve_00.Omegas_zt[1]/(2*np.pi)),
cmap_range, cmap=plt.cm.Greens)
ax.set_xlabel('Time ($1/\Gamma$)')
ax.set_ylabel('Distance ($L$)')
ax.text(0.02, 0.95, 'Coupling',
verticalalignment='top', horizontalalignment='left',
transform=ax.transAxes,
color='k', fontsize=15, alpha=0.5)
plt.colorbar(cf)
# Both
for ax in fig.axes:
for y in [0.0, 1.0]:
ax.axhline(y, c='grey', lw=1.0, ls='dotted')
plt.tight_layout();
```
## Field Area
```
total_area = np.sqrt(mb_solve_00.fields_area()[0]**2 + mb_solve_00.fields_area()[1]**2)
fig, ax = plt.subplots(figsize=(16, 4))
ax.plot(mb_solve_00.zlist, mb_solve_00.fields_area()[0]/np.pi, label='Probe', clip_on=False)
ax.plot(mb_solve_00.zlist, mb_solve_00.fields_area()[1]/np.pi, label='Coupling', clip_on=False)
ax.plot(mb_solve_00.zlist, total_area/np.pi, label='Total', ls='dashed', clip_on=False)
ax.legend()
ax.set_ylim([0.0, 4.0])
ax.set_xlabel('Distance ($L$)')
ax.set_ylabel('Pulse Area ($\pi$)');
```
# Using Fuzzingbook Code in your own Programs
This notebook has instructions on how to use the `fuzzingbook` code in your own programs.
In short, there are three ways:
1. Simply run the notebooks in your browser, using the "mybinder" environment. Choose "Resources→Edit as Notebook" in any of the `fuzzingbook.org` pages; this will lead you to a preconfigured Jupyter Notebook environment where you can toy around at your leisure.
2. Import the code for your own Python programs. Using `pip install fuzzingbook`, you can install all code and start using it from your own code. See "Can I import the code for my own Python projects?", below.
3. Download or check out the code and/or the notebooks from the project site. This allows you to edit and run all things locally. However, be sure to also install the required packages; see below for details.
```
import bookutils
from bookutils import YouTubeVideo
YouTubeVideo("fGu3uwHcTRc")
```
## Can I import the code for my own Python projects?
Yes, you can! (If you like Python, that is.) We provide a `fuzzingbook` Python package that you can install using the `pip` package manager:
```shell
$ pip install fuzzingbook
```
As of `fuzzingbook 1.0`, this is set up such that almost all additional required packages are also installed. For a full installation, also follow the steps in "Which other Packages do I need to use the Python Modules?" below.
Once `pip` is complete, you can import individual classes, constants, or functions from each notebook using
```python
>>> from fuzzingbook.<notebook> import <identifier>
```
where `<identifier>` is the name of the class, constant, or function to use, and `<notebook>` is the name of the respective notebook. (If you read this at fuzzingbook.org, then the notebook name is the identifier preceding `".html"` in the URL).
Here is an example importing `RandomFuzzer` from [the chapter on fuzzers](Fuzzer.ipynb), whose notebook name is `Fuzzer`:
```python
>>> from fuzzingbook.Fuzzer import RandomFuzzer
>>> f = RandomFuzzer()
>>> f.fuzz()
'!7#%"*#0=)$;%6*;>638:*>80"=</>(/*:-(2<4 !:5*6856&?""11<7+%<%7,4.8,*+&,,$,."5%<%76< -5'
```
The "Synopsis" section at the beginning of a chapter gives a short survey on useful code features you can use.
## Which OS and Python versions are required?
As of `fuzzingbook 1.0`, Python 3.9 and later is required. Specifically, we use Python 3.9.7 for development and testing. This is also the version to be used if you check out the code from git, and the version you get if you use the fuzzing book within the "mybinder" environment.
To use the `fuzzingbook` code with earlier Python versions, use
```shell
$ pip install 'fuzzingbook==0.95'
```
Our notebooks generally assume a Unix-like environment; the code is tested on Linux and macOS. System-independent code may also run on Windows.
## Can I use the code from within a Jupyter notebook?
Yes, you can! You would first install the `fuzzingbook` package (as above); you can then access all code right from your notebook.
Another way to use the code is to _import the notebooks directly_. Download the notebooks from the menu. Then, add your own notebooks into the same folder. After importing `bookutils`, you can then simply import the code from other notebooks, just as our own notebooks do.
Here is again the above example, importing `RandomFuzzer` from [the chapter on fuzzers](Fuzzer.ipynb) – but now from a notebook:
```
import bookutils
from Fuzzer import RandomFuzzer
f = RandomFuzzer()
f.fuzz()
```
If you'd like to share your notebook, let us know; we can integrate it in the repository or even in the book.
## Can I check out the code from git and get the latest and greatest?
Yes, you can! We have a few continuous integration (CI) workflows running which do exactly that. After cloning the repository from [the project page](https://github.com/uds-se/fuzzingbook/) and installing the additional packages (see below), you can `cd` into `notebooks` and start `jupyter` right away!
There also is a `Makefile` provided with literally hundreds of targets; most important are the ones we also use in continuous integration:
* `make check-imports` checks whether your code is free of syntax errors
* `make check-style` checks whether your code is free of type errors
* `make check-code` runs all derived code, testing it
* `make check-notebooks` runs all notebooks, testing them
If you want to contribute to the project, ensure that the above tests run through.
The `Makefile` has many more, often experimental, targets. `make markdown` creates a `.md` variant in `markdown/`, and there's also `make word` and `make epub`, which are set to create Word and EPUB variants (with mixed results). Try `make help` for commonly used targets.
## Can I just run the Python code? I mean, without notebooks?
Yes, you can! You can download the code as Python programs; simply select "Resources $\rightarrow$ Download Code" for one chapter or "Resources $\rightarrow$ All Code" for all chapters. These code files can be executed, yielding (hopefully) the same results as the notebooks.
The code files can also be edited if you wish, but (a) they are very obviously generated from notebooks, (b) therefore not much fun to work with, and (c) if you fix any errors, you'll have to back-propagate them to the notebook before you can make a pull request. Use code files only under severely constrained circumstances.
If you only want to **use** the Python code, install the code package (see above).
## Which other Packages do I need to use the Python Modules?
After downloading the `fuzzingbook` code, installing the `fuzzingbook` package, or checking out `fuzzingbook` from the repository, here's what to do to obtain a complete set of packages.
### Step 1: Install Required Python Packages
The [`requirements.txt` file within the project root folder](https://github.com/uds-se/fuzzingbook/tree/master/) lists all _Python packages required_.
You can do
```sh
$ pip install -r requirements.txt
```
to install all required packages (but using `pipenv` is preferred; see below).
### Step 2: Install Additional Non-Python Packages
The [`apt.txt` file in the `binder/` folder](https://github.com/uds-se/fuzzingbook/tree/master/binder) lists all _Linux_ packages required.
In most cases, however, it suffices to install the `dot` graph drawing program (part of the `graphviz` package). Here are some instructions:
#### Installing Graphviz on Linux
```sh
$ sudo apt-get install graphviz
```
to install it.
#### Installing Graphviz on macOS
On macOS, if you use `conda`, run
```sh
$ conda install graphviz
```
If you use HomeBrew, run
```sh
$ brew install graphviz
```
## Installing Fuzzingbook Code in an Isolated Environment
If you wish to install the `fuzzingbook` code in an environment that is isolated from your system interpreter,
we recommend using [Pipenv](https://pipenv.pypa.io/), which can automatically create a so called *virtual environment* hosting all required packages.
To accomplish this, please follow these steps:
### Step 1: Install PyEnv
Optionally install `pyenv` following the [official instructions](https://github.com/pyenv/pyenv#installation) if you are on a Unix operating system.
If you are on Windows, consider using [pyenv-win](https://github.com/pyenv-win/pyenv-win) instead.
This will allow you to seamlessly install any version of Python.
### Step 2: Install PipEnv
Install Pipenv following the official [installation instructions](https://pypi.org/project/pipenv/).
If you have `pyenv` installed, Pipenv can automatically download and install the appropriate version of the Python distribution.
Otherwise, Pipenv will use your system interpreter, which may or may not be the right version.
### Step 3: Install Python Packages
Run
```sh
$ pipenv install -r requirements.txt
```
in the `fuzzingbook` root directory.
### Step 4: Install Additional Non-Python Packages
See above for instructions on how to install additional non-python packages.
### Step 5: Enter the Environment
Enter the environment with
```sh
$ pipenv shell
```
where you can now execute
```sh
$ make -k check-code
```
to run the tests.
|
github_jupyter
|
import bookutils
from bookutils import YouTubeVideo
YouTubeVideo("fGu3uwHcTRc")
$ pip install fuzzingbook
>>> from fuzzingbook.<notebook> import <identifier>
>>> from fuzzingbook.Fuzzer import RandomFuzzer
>>> f = RandomFuzzer()
>>> f.fuzz()
'!7#%"*#0=)$;%6*;>638:*>80"=</>(/*:-(2<4 !:5*6856&?""11<7+%<%7,4.8,*+&,,$,."5%<%76< -5'
$ pip install 'fuzzingbook=0.95'
import bookutils
from Fuzzer import RandomFuzzer
f = RandomFuzzer()
f.fuzz()
$ pip install -r requirements.txt
$ sudo apt-get install graphviz
$ conda install graphviz
$ brew install graphviz
$ pipenv install -r requirements.txt
$ pipenv shell
$ make -k check-code
| 0.169028 | 0.956836 |
## Imports and loading data
```
import tensorflow as tf
import numpy as np
import tensorflow_datasets as tfds
from tensorflow.keras.utils import to_categorical
(train_ds, train_labels), (test_ds, test_labels) = tfds.load(
"tf_flowers",
    split=["train[:70%]", "train[70%:]"],  # first 70% for training, remaining 30% for testing
batch_size=-1,
as_supervised=True, # Include labels
)
train_ds.shape
```
## Preprocessing data
```
size = (150, 150)
train_ds = tf.image.resize(train_ds, (150, 150))
test_ds = tf.image.resize(test_ds, (150, 150))
train_labels = to_categorical(train_labels, num_classes=5)
test_labels = to_categorical(test_labels, num_classes=5)
train_ds.shape
```
## Loading VGG16 model
```
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
train_ds = preprocess_input(train_ds)
test_ds = preprocess_input(test_ds)
base_model = VGG16(weights="imagenet", include_top=False, input_shape=train_ds[0].shape)
base_model.trainable = False  # Freeze the pretrained convolutional base so its ImageNet weights are not updated
base_model.summary()
```
## Adding Layers
```
from tensorflow.keras import layers, models
flatten_layer = layers.Flatten()
dense_layer_1 = layers.Dense(50, activation='relu')
dense_layer_2 = layers.Dense(20, activation='relu')
prediction_layer = layers.Dense(5, activation='softmax')
model = models.Sequential([
base_model,
flatten_layer,
dense_layer_1,
dense_layer_2,
prediction_layer
])
model.summary()
```
## Training model
```
from tensorflow.keras.callbacks import EarlyStopping
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'],
)
es = EarlyStopping(monitor='val_accuracy', mode='max', patience=5, restore_best_weights=True)
model.fit(train_ds, train_labels, epochs=50, validation_split=0.2, batch_size=32, callbacks=[es])
model.evaluate(test_ds, test_labels)
```
## Hand Made Model
```
from tensorflow.keras import Sequential, layers
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.layers.experimental.preprocessing import Rescaling
hand_made_model = Sequential()
hand_made_model.add(Rescaling(1./255, input_shape=(150,150,3)))
hand_made_model.add(layers.Conv2D(16, kernel_size=10, activation='relu'))
hand_made_model.add(layers.MaxPooling2D(3))
hand_made_model.add(layers.Conv2D(32, kernel_size=8, activation="relu"))
hand_made_model.add(layers.MaxPooling2D(2))
hand_made_model.add(layers.Conv2D(32, kernel_size=6, activation="relu"))
hand_made_model.add(layers.MaxPooling2D(2))
hand_made_model.add(layers.Flatten())
hand_made_model.add(layers.Dense(50, activation='relu'))
hand_made_model.add(layers.Dense(20, activation='relu'))
hand_made_model.add(layers.Dense(5, activation='softmax'))
hand_made_model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'],
)
es = EarlyStopping(monitor='val_accuracy', mode='max', patience=5, restore_best_weights=True)
hand_made_model.fit(train_ds, train_labels, epochs=50, validation_split=0.2, batch_size=32, callbacks=[es])
hand_made_model.evaluate(test_ds, test_labels)
```
<a href="https://colab.research.google.com/github/Enrico-Call/DL/blob/main/_downloads/17a7c7cb80916fcdf921097825a0f562/cifar10_tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%matplotlib inline
```
Training a Classifier
=====================
This is it. You have seen how to define neural networks, compute loss and make
updates to the weights of the network.
Now you might be thinking,
What about data?
----------------
Generally, when you have to deal with image, text, audio or video data,
you can use standard Python packages that load the data into a numpy array,
and then convert this array into a ``torch.*Tensor`` (see the short sketch after this list).
- For images, packages such as Pillow and OpenCV are useful
- For audio, packages such as scipy and librosa
- For text, either raw Python or Cython-based loading, or NLTK and
  SpaCy are useful
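As a minimal illustration of that flow (a sketch added here, not part of the original tutorial; the file name is a placeholder), an image can be loaded with Pillow, converted to a numpy array, and wrapped in a tensor:
```
import numpy as np
import torch
from PIL import Image

# Load an image file into a numpy array (the path is a placeholder)
img = Image.open("some_image.jpg")
arr = np.array(img)  # shape (H, W, C), dtype uint8

# Convert to a float tensor in channels-first layout, scaled to [0, 1]
tensor = torch.from_numpy(arr).permute(2, 0, 1).float() / 255.0
print(tensor.shape)
```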
Specifically for vision, we have created a package called
``torchvision``, that has data loaders for common datasets such as
ImageNet, CIFAR10, MNIST, etc. and data transformers for images, viz.,
``torchvision.datasets`` and ``torch.utils.data.DataLoader``.
This provides a huge convenience and avoids writing boilerplate code.
For this tutorial, we will use the CIFAR10 dataset.
It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’,
‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of
size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.
*(Figure: sample CIFAR-10 images.)*
Training an image classifier
----------------------------
We will do the following steps in order:
1. Load and normalize the CIFAR10 training and test datasets using
``torchvision``
2. Define a Convolutional Neural Network
3. Define a loss function
4. Train the network on the training data
5. Test the network on the test data
### 1. Load and normalize CIFAR10
Using ``torchvision``, it’s extremely easy to load CIFAR10.
```
import torch
import torchvision
import torchvision.transforms as transforms
```
The outputs of torchvision datasets are PILImage images in the range [0, 1].
We transform them to Tensors with values normalized to the range [-1, 1].
<div class="alert alert-info"><h4>Note</h4><p>If running on Windows and you get a BrokenPipeError, try setting
the num_worker of torch.utils.data.DataLoader() to 0.</p></div>
```
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
batch_size = 4
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
Let us show some of the training images, for fun.
```
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(batch_size)))
```
### 2. Define a Convolutional Neural Network
Copy the neural network from the Neural Networks section before and modify it to
take 3-channel images (instead of 1-channel images as it was defined).
```
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = torch.flatten(x, 1) # flatten all dimensions except batch
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
```
### 3. Define a Loss function and optimizer
Let's use a Classification Cross-Entropy loss and SGD with momentum.
```
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
```
### 4. Train the network
This is when things start to get interesting.
We simply have to loop over our data iterator, and feed the inputs to the
network and optimize.
```
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
```
Let's quickly save our trained model:
```
PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)
```
See `here <https://pytorch.org/docs/stable/notes/serialization.html>`_
for more details on saving PyTorch models.
### 5. Test the network on the test data
We have trained the network for 2 passes over the training dataset.
But we need to check if the network has learnt anything at all.
We will check this by predicting the class label that the neural network
outputs, and checking it against the ground-truth. If the prediction is
correct, we add the sample to the list of correct predictions.
Okay, first step. Let us display an image from the test set to get familiar.
```
dataiter = iter(testloader)
images, labels = next(dataiter)
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
Next, let's load back in our saved model (note: saving and re-loading the model
wasn't necessary here, we only did it to illustrate how to do so):
```
net = Net()
net.load_state_dict(torch.load(PATH))
```
Okay, now let us see what the neural network thinks these examples above are:
```
outputs = net(images)
```
The outputs are energies for the 10 classes.
The higher the energy for a class, the more the network
thinks that the image is of the particular class.
So, let's get the index of the highest energy:
```
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
```
The results seem pretty good.
Let us look at how the network performs on the whole dataset.
```
correct = 0
total = 0
# since we're not training, we don't need to calculate the gradients for our outputs
with torch.no_grad():
for data in testloader:
images, labels = data
# calculate outputs by running images through the network
outputs = net(images)
# the class with the highest energy is what we choose as prediction
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
```
That looks way better than chance, which is 10% accuracy (randomly picking
a class out of 10 classes).
Seems like the network learnt something.
Hmmm, what are the classes that performed well, and the classes that did
not perform well:
```
# prepare to count predictions for each class
correct_pred = {classname: 0 for classname in classes}
total_pred = {classname: 0 for classname in classes}
# again no gradients needed
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predictions = torch.max(outputs, 1)
# collect the correct predictions for each class
for label, prediction in zip(labels, predictions):
if label == prediction:
correct_pred[classes[label]] += 1
total_pred[classes[label]] += 1
# print accuracy for each class
for classname, correct_count in correct_pred.items():
accuracy = 100 * float(correct_count) / total_pred[classname]
print("Accuracy for class {:5s} is: {:.1f} %".format(classname,
accuracy))
```
Okay, so what next?
How do we run these neural networks on the GPU?
Training on GPU
----------------
Just like how you transfer a Tensor onto the GPU, you transfer the neural
net onto the GPU.
Let's first define our device as the first visible cuda device if we have
CUDA available:
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)
```
The rest of this section assumes that ``device`` is a CUDA device.
Then these methods will recursively go over all modules and convert their
parameters and buffers to CUDA tensors:
```
net.to(device)
```
Remember that you will have to send the inputs and targets at every step
to the GPU too:
```
inputs, labels = data[0].to(device), data[1].to(device)
```
Why don't I notice MASSIVE speedup compared to CPU? Because your network
is really small.
**Exercise:** Try increasing the width of your network (argument 2 of
the first ``nn.Conv2d``, and argument 1 of the second ``nn.Conv2d`` –
they need to be the same number), see what kind of speedup you get.
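One possible way to approach the exercise (a sketch, not the tutorial's reference solution) is shown below; it reuses the imports and the ``device`` defined above and widens both convolutions to 12 channels at their interface:
```
class WideNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Widened: conv1 now outputs 12 channels, and conv2 takes 12 input channels
        self.conv1 = nn.Conv2d(3, 12, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(12, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = WideNet().to(device)  # move the widened network to the GPU
```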
**Goals achieved**:
- Understanding PyTorch's Tensor library and neural networks at a high level.
- Train a small neural network to classify images
Training on multiple GPUs
-------------------------
If you want to see even more MASSIVE speedup using all of your GPUs,
please check out :doc:`data_parallel_tutorial`.
Where do I go next?
-------------------
- :doc:`Train neural nets to play video games </intermediate/reinforcement_q_learning>`
- `Train a state-of-the-art ResNet network on imagenet`_
- `Train a face generator using Generative Adversarial Networks`_
- `Train a word-level language model using Recurrent LSTM networks`_
- `More examples`_
- `More tutorials`_
- `Discuss PyTorch on the Forums`_
- `Chat with other users on Slack`_
# Training the LSTM model
```
import os
import sys
from google.colab import drive
drive.mount('/content/drive')
%tensorflow_version 2.x
os.chdir('/content/drive/Shared drives/Кредитные риски')
!pip install category_encoders catboost
sys.path.append(os.path.abspath(os.path.join('.', 'CreditRisks/metrics_library')))
sys.path.append(os.path.abspath(os.path.join('.', 'CreditRisks/PythonBackend')))
import pandas as pd
import numpy as np
import os
import io
import pickle
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score
import category_encoders
import matplotlib.pyplot as plt
from calc_model.lr_model import Winsorizator
from feature_generation import add_features
import profits
DIR_IN = 'Датасеты/revision_006/'
```
## Reading the data
```
df_train = pd.read_pickle(f'{DIR_IN}companies_ready_train.pkl')
y_train = df_train['target']
x_train = df_train.drop(columns=['target'])
df_test = pd.read_pickle(f'{DIR_IN}companies_ready_test.pkl')
y_test = df_test['target']
x_test = df_test.drop(columns=['target'])
df_prod = pd.read_pickle(f'{DIR_IN}companies_ready_prod.pkl')
y_prod = df_prod['target']
x_prod = df_prod.drop(columns=['target'])
```
## Data preprocessing
### Adding new features
```
add_features(x_train)
add_features(x_test)
add_features(x_prod)
features_all = ['region', 'year_-1_1100', 'year_-1_1150', 'year_-1_1200',
'year_-1_1210', 'year_-1_1300', 'year_-1_1310', 'year_-1_1500',
'year_-1_1520', 'year_-1_2110', 'year_-1_2120',
'year_-1_AssetTurnover',
'year_-1_CoverageDebtWithAccumulatedProfit',
'year_-1_CreditLeverage', 'year_-1_CurrentLiquidity',
'year_-1_DebtBurden', 'year_-1_LevelOfOperatingAssets',
'year_-1_LiabilityCoverageOperatingProfit',
'year_-1_NetProfitMargin',
'year_-1_OperatingProfitFinancialDebtRatio',
'year_-1_QuickLiquidity', 'year_-1_ReturnAssetsNetProfit',
'year_-1_okved', 'year_-1_okved1', 'year_-1_okved2',
'year_-2_1150', 'year_-2_1200', 'year_-2_1230', 'year_-2_1310',
'year_-2_1500', 'year_-2_1520', 'year_-2_1600', 'year_-2_2100',
'year_-2_2110', 'year_-2_2120', 'year_-2_2300', 'year_-2_2400',
'year_0_1100', 'year_0_1150', 'year_0_1200', 'year_0_1210',
'year_0_1230', 'year_0_1250', 'year_0_1300', 'year_0_1310',
'year_0_1500', 'year_0_1520', 'year_0_2300', 'year_0_2400',
'year_0_AssetTurnover', 'year_0_CoverageDebtWithAccumulatedProfit',
'year_0_CreditLeverage', 'year_0_DebtBurden',
'year_0_FinancialCycle', 'year_0_InstantLiquidity',
'year_0_LevelOfOperatingAssets',
'year_0_LiabilityCoverageOperatingProfit',
'year_0_NetProfitMargin',
'year_0_OperatingProfitFinancialDebtRatio',
'year_0_ReturnAssetsNetProfit', 'year_0_financialDebt',
'year_0_okved', 'year_0_okved1', 'year_0_okved2',
'year_0_turnoverCreditDebt', 'year_0_turnoverDebtorDebt',
'year_0_turnoverReserves']
features_all = np.array(features_all)
del features_all
# x_train = x_train[features_all]
# x_test = x_test[features_all]
# x_prod = x_prod[features_all]
```
### Standardization, winsorization, and encoding of categorical features
```
features_all = x_train.columns.values
features_cat = ['region', ]
for year in ['-1', '0']:
for col in ['okved', 'okved2', 'okved1', ]:
features_cat.append(f'year_{year}_{col}')
assert set(features_cat) & set(features_all) == set(features_cat)
features_float = np.array(sorted(list(set(features_all) - set(features_cat))))
catboost_encoder = category_encoders.CatBoostEncoder(cols=features_cat, random_state=42)
__x_train_cat = catboost_encoder.fit_transform(x_train[features_cat], y_train)
__x_test_cat = catboost_encoder.transform(x_test[features_cat])
catboost_encoder_prod = category_encoders.CatBoostEncoder(cols=features_cat, random_state=42)
__x_prod_cat = catboost_encoder_prod.fit_transform(x_prod[features_cat], y_prod)
sc = StandardScaler()
winz = Winsorizator(0.3, 0.7)
__x_train_float = x_train[features_float].copy()
winz.fit_transform(__x_train_float)
__x_train = pd.concat([__x_train_float, __x_train_cat], axis=1)
__x_train = pd.DataFrame(sc.fit_transform(__x_train), columns=__x_train.columns, index=__x_train.index)
__x_test_float = x_test[features_float].copy()
winz.transform(__x_test_float)
__x_test = pd.concat([__x_test_float, __x_test_cat], axis=1)
__x_test = pd.DataFrame(sc.transform(__x_test), columns=__x_test.columns, index=__x_test.index)
sc_prod = StandardScaler()
winz_prod = Winsorizator(0.3, 0.7)
__x_prod_float = x_prod[features_float].copy()
winz_prod.fit_transform(__x_prod_float)
__x_prod = pd.concat([__x_prod_float, __x_prod_cat], axis=1)
__x_prod = pd.DataFrame(sc_prod.fit_transform(__x_prod), columns=__x_prod.columns, index=__x_prod.index)
```
## Training
```
def plt_to_bytes():
b = io.BytesIO()
plt.savefig(b, format="png")
b.seek(0)
return b.read()
def measure_quality(y_true: np.array, y_predict: np.array, name=None):
print("ROC AUC: ", roc_auc_score(y_true, y_predict))
plts = {}
profits.plt_profit(y_true, y_predict, percent_space=[0.10, 0.15, 0.20, 0.25, 0.35], title=name)
plts['plt_profit'] = plt_to_bytes()
profits.plt_profit_recall(y_true, y_predict, percent_space=[0.10, 0.15, 0.20, 0.25, 0.35], title=name)
plts['plt_profit_recall'] = plt_to_bytes()
profits.plt_popularity(y_predict, title=name)
plts['plt_popularity'] = plt_to_bytes()
return plts
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation, Bidirectional, GlobalMaxPool1D, GlobalAveragePooling1D, concatenate
from tensorflow.keras.models import Model
from tensorflow.keras import initializers, regularizers, constraints, optimizers, layers
from tensorflow.keras.optimizers import Adamax, Adam
lag_cols = []
for year in [-2, -1, 0]:
cols = []
for col in [1150, 1200, 1310, 1500, 1520, 1230, 1600, 2110, 2400]:
cols.append(f'year_{year}_{col}')
lag_cols.append(cols)
lag_cols2 = []
for year in [-1, 0]:
cols = []
for col in ['AssetTurnover', 'CoverageDebtWithAccumulatedProfit', 'CreditLeverage',
'DebtBurden', 'LevelOfOperatingAssets', 'LiabilityCoverageOperatingProfit',
'NetProfitMargin', 'OperatingProfitFinancialDebtRatio', 'ReturnAssetsNetProfit',
'CurrentLiquidity', 'FinancialCycle', 'FinancialDebtRevenueRatio', 'FinancialIndependence',
'InstantLiquidity', 'OperatingMargin', 'QuickLiquidity', 'ReturnAssetsOperatingProfit',
'financialDebt', 'turnoverCreditDebt', 'turnoverDebtorDebt', 'turnoverReserves',
'okved', 'okved1', 'okved2', '1100', '1210', '1300', '1400', '1250', '2200', ]:
cols.append(f'year_{year}_{col}')
lag_cols2.append(cols)
ordinal_cols = np.array(sorted(list(set(__x_train.columns) - set(lag_cols[0] + lag_cols[1] + lag_cols[2]) - set(lag_cols2[0] + lag_cols2[1]))))
ordinal_cols
lag_cols
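# Stack the lagged columns and move the sample axis first: the result has shape
# (n_samples, n_years, n_features), the layout expected by the LSTM inputs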
data_train_1 = np.stack([__x_train[cols].values for cols in lag_cols])
data_train_1 = np.moveaxis(data_train_1, 1, 0)
__x_train.shape, data_train_1.shape
data_train_2 = np.stack([__x_train[cols].values for cols in lag_cols2])
data_train_2 = np.moveaxis(data_train_2, 1, 0)
__x_train.shape, data_train_2.shape
data_train_3 = __x_train[ordinal_cols]
data_test_1 = np.stack([__x_test[cols].values for cols in lag_cols])
data_test_1 = np.moveaxis(data_test_1, 1, 0)
__x_test.shape, data_test_1.shape
data_test_2 = np.stack([__x_test[cols].values for cols in lag_cols2])
data_test_2 = np.moveaxis(data_test_2, 1, 0)
__x_test.shape, data_test_2.shape
data_test_3 = __x_test[ordinal_cols]
def make_lstm(inp, size):
x = Bidirectional(LSTM(size, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))(inp)
x = GlobalMaxPool1D()(x)
x = Dropout(0.1)(x)
return x
inp1 = Input(shape=data_train_1.shape[1:], name='3_years')
inp2 = Input(shape=data_train_2.shape[1:], name='2_years')
inp3 = Input(shape=data_train_3.shape[1:], name='other')
x1 = make_lstm(inp1, 9)
x2 = make_lstm(inp2, 30)
x3 = Dense(20, activation="relu")(inp3)
x = concatenate([x1, x2, x3])
x = Dense(50, activation="relu")(x)
x = tf.keras.layers.BatchNormalization()(x)
x = Dense(1, activation="sigmoid")(x)
model = Model(inputs=[inp1, inp2, inp3], outputs=x)
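# Clip gradients (clipvalue/clipnorm) to keep recurrent-layer training stable; track AUC during training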
model.compile(loss='binary_crossentropy', optimizer=Adam(clipvalue=2, clipnorm=2), metrics=['AUC'])
model.summary()
model.fit([data_train_1, data_train_2, data_train_3], y_train, batch_size=256, epochs=5, validation_data=([data_test_1, data_test_2, data_test_3], y_test))
def make_lstm(inp, size):
x = Bidirectional(LSTM(size, return_sequences=False, dropout=0.1, recurrent_dropout=0.1))(inp)
return x
inp1 = Input(shape=data_train_1.shape[1:], name='3_years')
inp2 = Input(shape=data_train_2.shape[1:], name='2_years')
inp3 = Input(shape=data_train_3.shape[1:], name='other')
x1 = make_lstm(inp1, 9)
x2 = make_lstm(inp2, 30)
x3 = Dense(20, activation="relu")(inp3)
x = concatenate([x1, x2, x3])
x = Dense(50, activation="relu")(x)
x = tf.keras.layers.BatchNormalization()(x)
x = Dense(1, activation="sigmoid")(x)
model = Model(inputs=[inp1, inp2, inp3], outputs=x)
model.compile(loss='binary_crossentropy', optimizer=Adam(clipvalue=2, clipnorm=2), metrics=['AUC'])
model.summary()
model.fit([data_train_1, data_train_2, data_train_3], y_train, batch_size=256, epochs=3, validation_data=([data_test_1, data_test_2, data_test_3], y_test))
proba = model.predict([data_test_1, data_test_2, data_test_3], batch_size=1024, verbose=1)[:, 0]
plts = measure_quality(y_test, proba, name='Алгоритм "LSTM"')
```
<a href="https://colab.research.google.com/github/keirwilliamsxyz/keirxyz/blob/main/stylegan_nada.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Welcome to StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators!
# Step 1: Setup required libraries and models.
This may take a few minutes.
You may optionally enable downloads with pydrive in order to authenticate and avoid drive download limits when fetching pre-trained ReStyle and StyleGAN2 models.
```
#@title Setup
%tensorflow_version 1.x
import os
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
pretrained_model_dir = os.path.join("/content", "models")
os.makedirs(pretrained_model_dir, exist_ok=True)
restyle_dir = os.path.join("/content", "restyle")
stylegan_ada_dir = os.path.join("/content", "stylegan_ada")
stylegan_nada_dir = os.path.join("/content", "stylegan_nada")
output_dir = os.path.join("/content", "output")
output_model_dir = os.path.join(output_dir, "models")
output_image_dir = os.path.join(output_dir, "images")
download_with_pydrive = True #@param {type:"boolean"}
class Downloader(object):
def __init__(self, use_pydrive):
self.use_pydrive = use_pydrive
if self.use_pydrive:
self.authenticate()
def authenticate(self):
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
self.drive = GoogleDrive(gauth)
def download_file(self, file_id, file_dst):
if self.use_pydrive:
downloaded = self.drive.CreateFile({'id':file_id})
downloaded.FetchMetadata(fetch_all=True)
downloaded.GetContentFile(file_dst)
else:
!gdown --id $file_id -O $file_dst
downloader = Downloader(download_with_pydrive)
# install requirements
!git clone https://github.com/yuval-alaluf/restyle-encoder.git $restyle_dir
!wget https://github.com/ninja-build/ninja/releases/download/v1.8.2/ninja-linux.zip
!sudo unzip ninja-linux.zip -d /usr/local/bin/
!sudo update-alternatives --install /usr/bin/ninja ninja /usr/local/bin/ninja 1 --force
!pip install ftfy regex tqdm
!pip install git+https://github.com/openai/CLIP.git
!git clone https://github.com/NVlabs/stylegan2-ada/ $stylegan_ada_dir
!git clone https://github.com/rinongal/stylegan-nada.git $stylegan_nada_dir
from argparse import Namespace
import sys
import numpy as np
from PIL import Image
import torch
import torchvision.transforms as transforms
sys.path.append(restyle_dir)
sys.path.append(stylegan_nada_dir)
sys.path.append(os.path.join(stylegan_nada_dir, "ZSSGAN"))
device = 'cuda'
%load_ext autoreload
%autoreload 2
```
# Step 2: Choose a model type.
The model will be downloaded and converted to a PyTorch-compatible version.
Re-runs of the cell with the same model will re-use the previously downloaded version. Feel free to experiment and come back to previous models :)
```
source_model_type = 'ffhq' #@param['ffhq', 'cat', 'dog', 'church', 'horse', 'car']
source_model_download_path = {"ffhq": "1EM87UquaoQmk17Q8d5kYIAHqu0dkYqdT",
"cat": "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/afhqcat.pkl",
"dog": "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/afhqdog.pkl",
"church": "1iDo5cUgbwsJEt2uwfgDy_iPlaT-lLZmi",
"car": "1i-39ztut-VdUVUiFuUrwdsItR--HF81w",
"horse": "1irwWI291DolZhnQeW-ZyNWqZBjlWyJUn"}
model_names = {"ffhq": "ffhq.pt",
"cat": "afhqcat.pkl",
"dog": "afhqdog.pkl",
"church": "stylegan2-church-config-f.pkl",
"car": "stylegan2-car-config-f.pkl",
"horse": "stylegan2-horse-config-f.pkl"}
download_string = source_model_download_path[source_model_type]
file_name = model_names[source_model_type]
pt_file_name = file_name.split(".")[0] + ".pt"
dataset_sizes = {
"ffhq": 1024,
"cat": 512,
"dog": 512,
"church": 256,
"horse": 256,
"car": 512,
}
if not os.path.isfile(os.path.join(pretrained_model_dir, file_name)):
print("Downloading chosen model...")
if download_string.endswith(".pkl"):
!wget $download_string -O $pretrained_model_dir/$file_name
else:
downloader.download_file(download_string, os.path.join(pretrained_model_dir, file_name))
if not os.path.isfile(os.path.join(pretrained_model_dir, pt_file_name)):
print("Converting sg2 model. This may take a few minutes...")
tf_path = next(filter(lambda x: "tensorflow" in x, sys.path), None)
py_path = tf_path + f":{stylegan_nada_dir}/ZSSGAN"
convert_script = os.path.join(stylegan_nada_dir, "convert_weight.py")
!PYTHONPATH=$py_path python $convert_script --repo $stylegan_ada_dir --gen $pretrained_model_dir/$file_name
```
# Step 3: Train the model.
Describe your source and target class. These describe the direction of change you're trying to apply (e.g. "photo" to "sketch", "dog" to "the joker" or "dog" to "avocado dog").
Alternatively, upload a directory with a small (~3) set of target style images (there is no need to preprocess them in any way) and set `style_image_dir` to point at them. This will use the images as a target rather than the source/class texts.
We recommend leaving the 'improve shape' button unticked at first, as it will increase running times and is often not needed.
For more drastic changes, turn it on and increase the number of iterations.
As a rule of thumb:
- Style and minor domain changes ('photo' -> 'sketch') require ~200-400 iterations.
- Identity changes ('person' -> 'taylor swift') require ~150-200 iterations.
- Simple in-domain changes ('face' -> 'smiling face') may require as few as 50.
- The `style_image_dir` option often requires ~400-600 iterations.
> Updates: <br>
> 03/10 - Added support for style image targets. <br>
> 03/08 - Added support for saving model checkpoints. If you want to save, set save_interval > 0.
```
from ZSSGAN.model.ZSSGAN import ZSSGAN
import numpy as np
import torch
from tqdm import notebook
from ZSSGAN.utils.file_utils import save_images, get_dir_img_list
from ZSSGAN.utils.training_utils import mixing_noise
from IPython.display import display
source_class = "Photo" #@param {"type": "string"}
target_class = "Sketch" #@param {"type": "string"}
style_image_dir = "" #@param {'type': 'string'}
target_img_list = get_dir_img_list(style_image_dir) if style_image_dir else None
improve_shape = False #@param{type:"boolean"}
model_choice = ["ViT-B/32", "ViT-B/16"]
model_weights = [1.0, 0.0]
if improve_shape or style_image_dir:
model_weights[1] = 1.0
mixing = 0.9 if improve_shape else 0.0
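# With improve_shape, adapt roughly two thirds of the generator's style layers
# (a StyleGAN2 generator at this resolution has 2*log2(size) - 2 such layers)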
auto_layers_k = int(2 * (2 * np.log2(dataset_sizes[source_model_type]) - 2) / 3) if improve_shape else 0
auto_layer_iters = 1 if improve_shape else 0
training_iterations = 151 #@param {type: "integer"}
output_interval = 50 #@param {type: "integer"}
save_interval = 0 #@param {type: "integer"}
training_args = {
"size": dataset_sizes[source_model_type],
"batch": 2,
"n_sample": 4,
"output_dir": output_dir,
"lr": 0.002,
"frozen_gen_ckpt": os.path.join(pretrained_model_dir, pt_file_name),
"train_gen_ckpt": os.path.join(pretrained_model_dir, pt_file_name),
"iter": training_iterations,
"source_class": source_class,
"target_class": target_class,
"lambda_direction": 1.0,
"lambda_patch": 0.0,
"lambda_global": 0.0,
"lambda_texture": 0.0,
"lambda_manifold": 0.0,
"auto_layer_k": auto_layers_k,
"auto_layer_iters": auto_layer_iters,
"auto_layer_batch": 8,
"output_interval": 50,
"clip_models": model_choice,
"clip_model_weights": model_weights,
"mixing": mixing,
"phase": None,
"sample_truncation": 0.7,
"save_interval": save_interval,
"target_img_list": target_img_list,
"img2img_batch": 16,
"channel_multiplier": 2,
}
args = Namespace(**training_args)
print("Loading base models...")
net = ZSSGAN(args)
print("Models loaded! Starting training...")
g_reg_ratio = 4 / 5
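# Scale the learning rate and Adam betas by the generator regularization ratio,
# following the StyleGAN2 lazy-regularization convention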
g_optim = torch.optim.Adam(
net.generator_trainable.parameters(),
lr=args.lr * g_reg_ratio,
betas=(0 ** g_reg_ratio, 0.99 ** g_reg_ratio),
)
# Set up output directories.
sample_dir = os.path.join(args.output_dir, "sample")
ckpt_dir = os.path.join(args.output_dir, "checkpoint")
os.makedirs(sample_dir, exist_ok=True)
os.makedirs(ckpt_dir, exist_ok=True)
seed = 3 #@param {"type": "integer"}
torch.manual_seed(seed)
np.random.seed(seed)
# Training loop
fixed_z = torch.randn(args.n_sample, 512, device=device)
for i in notebook.tqdm(range(args.iter)):
net.train()
sample_z = mixing_noise(args.batch, 512, args.mixing, device)
[sampled_src, sampled_dst], clip_loss = net(sample_z)
net.zero_grad()
clip_loss.backward()
g_optim.step()
if i % output_interval == 0:
net.eval()
with torch.no_grad():
[sampled_src, sampled_dst], loss = net([fixed_z], truncation=args.sample_truncation)
if source_model_type == 'car':
sampled_dst = sampled_dst[:, :, 64:448, :]
grid_rows = 4
save_images(sampled_dst, sample_dir, "dst", grid_rows, i)
img = Image.open(os.path.join(sample_dir, f"dst_{str(i).zfill(6)}.jpg")).resize((1024, 256))
display(img)
if (args.save_interval > 0) and (i > 0) and (i % args.save_interval == 0):
torch.save(
{
"g_ema": net.generator_trainable.generator.state_dict(),
"g_optim": g_optim.state_dict(),
},
f"{ckpt_dir}/{str(i).zfill(6)}.pt",
)
```
# Step 4: Generate samples with the new model
```
truncation = 0.7 #@param {type:"slider", min:0, max:1, step:0.05}
samples = 9
with torch.no_grad():
net.eval()
sample_z = torch.randn(samples, 512, device=device)
[sampled_src, sampled_dst], loss = net([sample_z], truncation=truncation)
if source_model_type == 'car':
sampled_dst = sampled_dst[:, :, 64:448, :]
grid_rows = int(samples ** 0.5)
save_images(sampled_dst, sample_dir, "sampled", grid_rows, 0)
display(Image.open(os.path.join(sample_dir, f"sampled_{str(0).zfill(6)}.jpg")).resize((768, 768)))
```
## Editing a real image with Re-Style inversion (currently only FFHQ inversion is supported):
Step 1: Set up Re-Style.
This may take a few minutes
```
from restyle.utils.common import tensor2im
from restyle.models.psp import pSp
from restyle.models.e4e import e4e
downloader.download_file("1sw6I2lRIB0MpuJkpc8F5BJiSZrc0hjfE", os.path.join(pretrained_model_dir, "restyle_psp_ffhq_encode.pt"))
downloader.download_file("1e2oXVeBPXMQoUoC_4TNwAWpOPpSEhE_e", os.path.join(pretrained_model_dir, "restyle_e4e_ffhq_encode.pt"))
```
Step 2: Choose a re-style model
We recommend choosing the e4e model, as it performs better under domain translations. Choose pSp for better reconstructions on minor domain changes (typically those that require fewer than 150 training steps).
```
encoder_type = 'e4e' #@param['psp', 'e4e']
restyle_experiment_args = {
"model_path": os.path.join(pretrained_model_dir, f"restyle_{encoder_type}_ffhq_encode.pt"),
"transform": transforms.Compose([
transforms.Resize((256, 256)),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
}
model_path = restyle_experiment_args['model_path']
ckpt = torch.load(model_path, map_location='cpu')
opts = ckpt['opts']
opts['checkpoint_path'] = model_path
opts = Namespace(**opts)
restyle_net = (pSp if encoder_type == 'psp' else e4e)(opts)
restyle_net.eval()
restyle_net.cuda()
print('Model successfully loaded!')
```
Step 3: Align and invert an image
```
def run_alignment(image_path):
import dlib
from scripts.align_faces_parallel import align_face
if not os.path.exists("shape_predictor_68_face_landmarks.dat"):
print('Downloading files for aligning face image...')
os.system('wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2')
os.system('bzip2 -dk shape_predictor_68_face_landmarks.dat.bz2')
print('Done.')
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
aligned_image = align_face(filepath=image_path, predictor=predictor)
print("Aligned image has shape: {}".format(aligned_image.size))
return aligned_image
image_path = "/content/ariana.jpg" #@param {'type': 'string'}
original_image = Image.open(image_path).convert("RGB")
input_image = run_alignment(image_path)
display(input_image)
img_transforms = restyle_experiment_args['transform']
transformed_image = img_transforms(input_image)
def get_avg_image(net):
avg_image = net(net.latent_avg.unsqueeze(0),
input_code=True,
randomize_noise=False,
return_latents=False,
average_code=True)[0]
avg_image = avg_image.to('cuda').float().detach()
return avg_image
opts.n_iters_per_batch = 5
opts.resize_outputs = False # generate outputs at full resolution
from restyle.utils.inference_utils import run_on_batch
with torch.no_grad():
avg_image = get_avg_image(restyle_net)
result_batch, result_latents = run_on_batch(transformed_image.unsqueeze(0).cuda(), restyle_net, opts, avg_image)
```
Step 4: Convert the image to the new domain
```
#@title Convert inverted image.
inverted_latent = torch.Tensor(result_latents[0][4]).cuda().unsqueeze(0).unsqueeze(1)
with torch.no_grad():
net.eval()
[sampled_src, sampled_dst] = net(inverted_latent, input_is_latent=True)[0]
joined_img = torch.cat([sampled_src, sampled_dst], dim=0)
save_images(joined_img, sample_dir, "joined", 2, 0)
display(Image.open(os.path.join(sample_dir, f"joined_{str(0).zfill(6)}.jpg")).resize((512, 256)))
```
|
github_jupyter
|
#@title Setup
%tensorflow_version 1.x
import os
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
pretrained_model_dir = os.path.join("/content", "models")
os.makedirs(pretrained_model_dir, exist_ok=True)
restyle_dir = os.path.join("/content", "restyle")
stylegan_ada_dir = os.path.join("/content", "stylegan_ada")
stylegan_nada_dir = os.path.join("/content", "stylegan_nada")
output_dir = os.path.join("/content", "output")
output_model_dir = os.path.join(output_dir, "models")
output_image_dir = os.path.join(output_dir, "images")
download_with_pydrive = True #@param {type:"boolean"}
class Downloader(object):
def __init__(self, use_pydrive):
self.use_pydrive = use_pydrive
if self.use_pydrive:
self.authenticate()
def authenticate(self):
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
self.drive = GoogleDrive(gauth)
def download_file(self, file_id, file_dst):
if self.use_pydrive:
downloaded = self.drive.CreateFile({'id':file_id})
downloaded.FetchMetadata(fetch_all=True)
downloaded.GetContentFile(file_dst)
else:
!gdown --id $file_id -O $file_dst
downloader = Downloader(download_with_pydrive)
# install requirements
!git clone https://github.com/yuval-alaluf/restyle-encoder.git $restyle_dir
!wget https://github.com/ninja-build/ninja/releases/download/v1.8.2/ninja-linux.zip
!sudo unzip ninja-linux.zip -d /usr/local/bin/
!sudo update-alternatives --install /usr/bin/ninja ninja /usr/local/bin/ninja 1 --force
!pip install ftfy regex tqdm
!pip install git+https://github.com/openai/CLIP.git
!git clone https://github.com/NVlabs/stylegan2-ada/ $stylegan_ada_dir
!git clone https://github.com/rinongal/stylegan-nada.git $stylegan_nada_dir
from argparse import Namespace
import sys
import numpy as np
from PIL import Image
import torch
import torchvision.transforms as transforms
sys.path.append(restyle_dir)
sys.path.append(stylegan_nada_dir)
sys.path.append(os.path.join(stylegan_nada_dir, "ZSSGAN"))
device = 'cuda'
%load_ext autoreload
%autoreload 2
source_model_type = 'ffhq' #@param['ffhq', 'cat', 'dog', 'church', 'horse', 'car']
source_model_download_path = {"ffhq": "1EM87UquaoQmk17Q8d5kYIAHqu0dkYqdT",
"cat": "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/afhqcat.pkl",
"dog": "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/afhqdog.pkl",
"church": "1iDo5cUgbwsJEt2uwfgDy_iPlaT-lLZmi",
"car": "1i-39ztut-VdUVUiFuUrwdsItR--HF81w",
"horse": "1irwWI291DolZhnQeW-ZyNWqZBjlWyJUn"}
model_names = {"ffhq": "ffhq.pt",
"cat": "afhqcat.pkl",
"dog": "afhqdog.pkl",
"church": "stylegan2-church-config-f.pkl",
"car": "stylegan2-car-config-f.pkl",
"horse": "stylegan2-horse-config-f.pkl"}
download_string = source_model_download_path[source_model_type]
file_name = model_names[source_model_type]
pt_file_name = file_name.split(".")[0] + ".pt"
dataset_sizes = {
"ffhq": 1024,
"cat": 512,
"dog": 512,
"church": 256,
"horse": 256,
"car": 512,
}
if not os.path.isfile(os.path.join(pretrained_model_dir, file_name)):
print("Downloading chosen model...")
if download_string.endswith(".pkl"):
!wget $download_string -O $pretrained_model_dir/$file_name
else:
downloader.download_file(download_string, os.path.join(pretrained_model_dir, file_name))
if not os.path.isfile(os.path.join(pretrained_model_dir, pt_file_name)):
print("Converting sg2 model. This may take a few minutes...")
tf_path = next(filter(lambda x: "tensorflow" in x, sys.path), None)
py_path = tf_path + f":{stylegan_nada_dir}/ZSSGAN"
convert_script = os.path.join(stylegan_nada_dir, "convert_weight.py")
!PYTHONPATH=$py_path python $convert_script --repo $stylegan_ada_dir --gen $pretrained_model_dir/$file_name
from ZSSGAN.model.ZSSGAN import ZSSGAN
import numpy as np
import torch
from tqdm import notebook
from ZSSGAN.utils.file_utils import save_images, get_dir_img_list
from ZSSGAN.utils.training_utils import mixing_noise
from IPython.display import display
source_class = "Photo" #@param {"type": "string"}
target_class = "Sketch" #@param {"type": "string"}
style_image_dir = "" #@param {'type': 'string'}
target_img_list = get_dir_img_list(style_image_dir) if style_image_dir else None
improve_shape = False #@param{type:"boolean"}
model_choice = ["ViT-B/32", "ViT-B/16"]
model_weights = [1.0, 0.0]
if improve_shape or style_image_dir:
model_weights[1] = 1.0
mixing = 0.9 if improve_shape else 0.0
auto_layers_k = int(2 * (2 * np.log2(dataset_sizes[source_model_type]) - 2) / 3) if improve_shape else 0
auto_layer_iters = 1 if improve_shape else 0
training_iterations = 151 #@param {type: "integer"}
output_interval = 50 #@param {type: "integer"}
save_interval = 0 #@param {type: "integer"}
training_args = {
"size": dataset_sizes[source_model_type],
"batch": 2,
"n_sample": 4,
"output_dir": output_dir,
"lr": 0.002,
"frozen_gen_ckpt": os.path.join(pretrained_model_dir, pt_file_name),
"train_gen_ckpt": os.path.join(pretrained_model_dir, pt_file_name),
"iter": training_iterations,
"source_class": source_class,
"target_class": target_class,
"lambda_direction": 1.0,
"lambda_patch": 0.0,
"lambda_global": 0.0,
"lambda_texture": 0.0,
"lambda_manifold": 0.0,
"auto_layer_k": auto_layers_k,
"auto_layer_iters": auto_layer_iters,
"auto_layer_batch": 8,
"output_interval": 50,
"clip_models": model_choice,
"clip_model_weights": model_weights,
"mixing": mixing,
"phase": None,
"sample_truncation": 0.7,
"save_interval": save_interval,
"target_img_list": target_img_list,
"img2img_batch": 16,
"channel_multiplier": 2,
}
args = Namespace(**training_args)
print("Loading base models...")
net = ZSSGAN(args)
print("Models loaded! Starting training...")
g_reg_ratio = 4 / 5
g_optim = torch.optim.Adam(
net.generator_trainable.parameters(),
lr=args.lr * g_reg_ratio,
betas=(0 ** g_reg_ratio, 0.99 ** g_reg_ratio),
)
# Set up output directories.
sample_dir = os.path.join(args.output_dir, "sample")
ckpt_dir = os.path.join(args.output_dir, "checkpoint")
os.makedirs(sample_dir, exist_ok=True)
os.makedirs(ckpt_dir, exist_ok=True)
seed = 3 #@param {"type": "integer"}
torch.manual_seed(seed)
np.random.seed(seed)
# Training loop
fixed_z = torch.randn(args.n_sample, 512, device=device)
for i in notebook.tqdm(range(args.iter)):
net.train()
sample_z = mixing_noise(args.batch, 512, args.mixing, device)
[sampled_src, sampled_dst], clip_loss = net(sample_z)
net.zero_grad()
clip_loss.backward()
g_optim.step()
if i % output_interval == 0:
net.eval()
with torch.no_grad():
[sampled_src, sampled_dst], loss = net([fixed_z], truncation=args.sample_truncation)
if source_model_type == 'car':
sampled_dst = sampled_dst[:, :, 64:448, :]
grid_rows = 4
save_images(sampled_dst, sample_dir, "dst", grid_rows, i)
img = Image.open(os.path.join(sample_dir, f"dst_{str(i).zfill(6)}.jpg")).resize((1024, 256))
display(img)
if (args.save_interval > 0) and (i > 0) and (i % args.save_interval == 0):
torch.save(
{
"g_ema": net.generator_trainable.generator.state_dict(),
"g_optim": g_optim.state_dict(),
},
f"{ckpt_dir}/{str(i).zfill(6)}.pt",
)
truncation = 0.7 #@param {type:"slider", min:0, max:1, step:0.05}
samples = 9
with torch.no_grad():
net.eval()
sample_z = torch.randn(samples, 512, device=device)
[sampled_src, sampled_dst], loss = net([sample_z], truncation=truncation)
if source_model_type == 'car':
sampled_dst = sampled_dst[:, :, 64:448, :]
grid_rows = int(samples ** 0.5)
save_images(sampled_dst, sample_dir, "sampled", grid_rows, 0)
display(Image.open(os.path.join(sample_dir, f"sampled_{str(0).zfill(6)}.jpg")).resize((768, 768)))
from restyle.utils.common import tensor2im
from restyle.models.psp import pSp
from restyle.models.e4e import e4e
downloader.download_file("1sw6I2lRIB0MpuJkpc8F5BJiSZrc0hjfE", os.path.join(pretrained_model_dir, "restyle_psp_ffhq_encode.pt"))
downloader.download_file("1e2oXVeBPXMQoUoC_4TNwAWpOPpSEhE_e", os.path.join(pretrained_model_dir, "restyle_e4e_ffhq_encode.pt"))
encoder_type = 'e4e' #@param['psp', 'e4e']
restyle_experiment_args = {
"model_path": os.path.join(pretrained_model_dir, f"restyle_{encoder_type}_ffhq_encode.pt"),
"transform": transforms.Compose([
transforms.Resize((256, 256)),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
}
model_path = restyle_experiment_args['model_path']
ckpt = torch.load(model_path, map_location='cpu')
opts = ckpt['opts']
opts['checkpoint_path'] = model_path
opts = Namespace(**opts)
restyle_net = (pSp if encoder_type == 'psp' else e4e)(opts)
restyle_net.eval()
restyle_net.cuda()
print('Model successfully loaded!')
def run_alignment(image_path):
import dlib
from scripts.align_faces_parallel import align_face
if not os.path.exists("shape_predictor_68_face_landmarks.dat"):
print('Downloading files for aligning face image...')
os.system('wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2')
os.system('bzip2 -dk shape_predictor_68_face_landmarks.dat.bz2')
print('Done.')
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
aligned_image = align_face(filepath=image_path, predictor=predictor)
print("Aligned image has shape: {}".format(aligned_image.size))
return aligned_image
image_path = "/content/ariana.jpg" #@param {'type': 'string'}
original_image = Image.open(image_path).convert("RGB")
input_image = run_alignment(image_path)
display(input_image)
img_transforms = restyle_experiment_args['transform']
transformed_image = img_transforms(input_image)
def get_avg_image(net):
avg_image = net(net.latent_avg.unsqueeze(0),
input_code=True,
randomize_noise=False,
return_latents=False,
average_code=True)[0]
avg_image = avg_image.to('cuda').float().detach()
return avg_image
opts.n_iters_per_batch = 5
opts.resize_outputs = False # generate outputs at full resolution
from restyle.utils.inference_utils import run_on_batch
with torch.no_grad():
avg_image = get_avg_image(restyle_net)
result_batch, result_latents = run_on_batch(transformed_image.unsqueeze(0).cuda(), restyle_net, opts, avg_image)
#@title Convert inverted image.
inverted_latent = torch.Tensor(result_latents[0][4]).cuda().unsqueeze(0).unsqueeze(1)
with torch.no_grad():
net.eval()
[sampled_src, sampled_dst] = net(inverted_latent, input_is_latent=True)[0]
joined_img = torch.cat([sampled_src, sampled_dst], dim=0)
save_images(joined_img, sample_dir, "joined", 2, 0)
display(Image.open(os.path.join(sample_dir, f"joined_{str(0).zfill(6)}.jpg")).resize((512, 256)))
```
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell.
## Install dependencies
!pip install wget
!apt-get install sox libsndfile1 ffmpeg
!pip install unidecode
# ## Install NeMo
BRANCH = 'r1.0.0rc1'
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[asr]
## Install TorchAudio
!pip install torchaudio -f https://download.pytorch.org/whl/torch_stable.html
```
## Introduction
Who Speaks When? Speaker diarization is the task of segmenting audio recordings by speaker label.
A diarization system consists of a Voice Activity Detection (VAD) model, which produces time stamps for the parts of the audio where speech is present (ignoring background), and a speaker embeddings model, which computes speaker embeddings on those time-stamped segments. The speaker embeddings are then clustered according to the number of speakers present in the recording.
In NeMo we support both **oracle VAD** and **non-oracle VAD** diarization.
In this tutorial, we first demonstrate how to perform diarization with oracle VAD time stamps (we assume the speech time stamps are already known) and a pretrained speaker verification model, which is covered in the tutorial on [Speaker Recognition and Verification in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/speaker_recognition/Speaker_Recognition_Verification.ipynb).
In the [second part](#ORACLE-VAD-DIARIZATION) we show how to perform VAD and then diarization when ground-truth speech time stamps are not available (non-oracle VAD). We also have tutorials on [VAD training in NeMo](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/06_Voice_Activiy_Detection.ipynb) and [online/offline microphone inference](https://github.com/NVIDIA/NeMo/blob/main/tutorials/asr/07_Online_Offline_Microphone_VAD_Demo.ipynb), where you can customize the model and train or fine-tune it on your own data.
For demonstration purposes we use simulated audio from the [an4 dataset](http://www.speech.cs.cmu.edu/databases/an4/).
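To make the final clustering step concrete, here is a tiny standalone illustration (toy data and scikit-learn, not NeMo code) of grouping segment embeddings into two speakers:
```
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy "speaker embeddings" for 6 speech segments: two well-separated groups.
rng = np.random.RandomState(0)
embeddings = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(3, 8)),   # segments from speaker A
    rng.normal(loc=1.0, scale=0.1, size=(3, 8)),   # segments from speaker B
])

labels = SpectralClustering(n_clusters=2, random_state=0).fit_predict(embeddings)
print(labels)  # e.g. [0 0 0 1 1 1] (cluster ids may be swapped)
```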
```
import os
import wget
ROOT = os.getcwd()
data_dir = os.path.join(ROOT,'data')
os.makedirs(data_dir, exist_ok=True)
an4_audio_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.wav"
an4_rttm_url = "https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.rttm"
an4_audio = wget.download(an4_audio_url, data_dir)
an4_rttm = wget.download(an4_rttm_url, data_dir)
```
Let's plot and listen to the audio and visualize the RTTM speaker labels
```
import IPython
import matplotlib.pyplot as plt
import numpy as np
import librosa
sr = 16000
signal, sr = librosa.load(an4_audio,sr=sr)
fig,ax = plt.subplots(1,1)
fig.set_figwidth(20)
fig.set_figheight(2)
plt.plot(np.arange(len(signal)),signal,'gray')
fig.suptitle('Reference merged an4 audio', fontsize=16)
plt.xlabel('time (secs)', fontsize=18)
ax.margins(x=0)
plt.ylabel('signal strength', fontsize=16);
a,_ = plt.xticks();plt.xticks(a,a/sr);
IPython.display.Audio(an4_audio)
```
We use [pyannote_metrics](https://pyannote.github.io/pyannote-metrics/) for visualization and score calculation. All labels in RTTM format are therefore eventually converted to pyannote objects; we provide two helper functions for this: `rttm_to_labels` (for NeMo's intermediate processing) and `labels_to_pyannote_object` (for scoring and visualization).
```
from nemo.collections.asr.parts.speaker_utils import rttm_to_labels, labels_to_pyannote_object
```
Let's load ground truth RTTM labels and view the reference Annotation timestamps visually
```
# view the sample rttm file
!cat {an4_rttm}
labels = rttm_to_labels(an4_rttm)
reference = labels_to_pyannote_object(labels)
print(labels)
reference
```
Speaker diarization scripts commonly expect two inputs:
1. `paths2audio_files`: either a list of audio file paths or a file containing the paths to the audio files on which we need to perform diarization.
2. `path2groundtruth_rttm_files` (optional): either a list of RTTM file paths or a file containing the paths to RTTM files (pass this if you want to compute the DER against ground-truth RTTMs).
**Note:** we expect the audio file and its corresponding RTTM to have the **same base name**, and that name should be **unique**.
For example, if the audio file is named **test_an4**.wav, the corresponding RTTM file (if provided) is expected to be named **test_an4**.rttm (note the matching **test_an4** base name).
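As a quick sanity check of this naming convention, here is a minimal sketch (plain Python, not part of NeMo) that derives the expected RTTM path from an audio path:
```
import os

def expected_rttm_path(audio_path, rttm_dir):
    # Same base name, different extension and (possibly) directory.
    base = os.path.splitext(os.path.basename(audio_path))[0]
    return os.path.join(rttm_dir, base + ".rttm")

print(expected_rttm_path("/content/data/test_an4.wav", "/content/data"))
# -> /content/data/test_an4.rttm
```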
Now let's create the `paths2audio_files` list (or file) for which we need to perform diarization.
```
paths2audio_files = [an4_audio]
print(paths2audio_files)
```
Similarly, create the `path2groundtruth_rttm_files` list (this is optional, and only needed for score calculation).
```
path2groundtruth_rttm_files = [an4_rttm]
print(path2groundtruth_rttm_files)
```
# ORACLE-VAD DIARIZATION
Oracle-VAD diarization computes speaker embeddings from known speech time stamps rather than depending on the output of a VAD model. This step can also be used to run speaker diarization with RTTMs generated by any external VAD, not just NeMo's VAD model.
The first step is to convert the reference RTTM (VAD) time stamps into an oracle manifest file. This manifest file is passed to the speaker diarizer to extract embeddings.
For that, let's use the `write_rttm2manifest` function, which takes `paths2audio_files` and `paths2rttm_files` as arguments.
```
from nemo.collections.asr.parts.speaker_utils import write_rttm2manifest
output_dir = os.path.join(ROOT, 'oracle_vad')
os.makedirs(output_dir,exist_ok=True)
oracle_manifest = os.path.join(output_dir,'oracle_manifest.json')
write_rttm2manifest(paths2audio_files=paths2audio_files,
paths2rttm_files=path2groundtruth_rttm_files,
manifest_file=oracle_manifest)
!cat {oracle_manifest}
```
Our config file is based on [hydra](https://hydra.cc/docs/intro/).
With a hydra config, users must provide values for the variables marked with **???**; these are mandatory fields and the scripts expect them for a successful run. Variables filled with **null** are optional: they can be provided if needed but are not mandatory.
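As a small illustration (a generic OmegaConf example, not the diarization config itself), fields set to **???** are mandatory and raise an error if accessed before being filled in, while `null` fields simply read back as `None`:
```
from omegaconf import OmegaConf
from omegaconf.errors import MissingMandatoryValue

cfg = OmegaConf.create({"model_path": "???", "threshold": None})
print(OmegaConf.is_missing(cfg, "model_path"))  # True: mandatory field not yet provided
print(cfg.threshold)                            # None: optional field

try:
    cfg.model_path                              # accessing a ??? field raises
except MissingMandatoryValue as e:
    print("missing:", e)

cfg.model_path = "my_model.nemo"                # fill it in before running
print(OmegaConf.is_missing(cfg, "model_path"))  # False
```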
```
from omegaconf import OmegaConf
config_url = "https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/speaker_recognition/conf/speaker_diarization.yaml"
MODEL_CONFIG = wget.download(config_url,data_dir)
config = OmegaConf.load(MODEL_CONFIG)
print(OmegaConf.to_yaml(config))
```
Now we can perform speaker diarization based on time stamps generated from the ground-truth RTTMs rather than generated by a VAD model.
```
pretrained_speaker_model='SpeakerNet_verification'
config.diarizer.paths2audio_files = paths2audio_files
config.diarizer.path2groundtruth_rttm_files = path2groundtruth_rttm_files
config.diarizer.out_dir = output_dir #Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
# Ignoring vad we just need to pass the manifest file we created
config.diarizer.speaker_embeddings.oracle_vad_manifest = oracle_manifest
from nemo.collections.asr.models import ClusteringDiarizer
oracle_model = ClusteringDiarizer(cfg=config)
# And lets diarize
oracle_model.diarize()
```
A DER of 0 means the speaker embeddings were clustered correctly. Let's view the result.
```
!cat {output_dir}/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels(output_dir+'/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
```
# VAD DIARIZATION
In this method we compute VAD time stamps using a NeMo VAD model on `paths2audio_files`, then use these speech time stamps to compute speaker embeddings, and finally cluster the embeddings into the number of speakers.
Before we proceed, let's look at the speaker diarization config, which we depend on for VAD computation and speaker embedding extraction.
```
print(OmegaConf.to_yaml(config))
```
As can be seen, most of the variables in the config are self-explanatory,
with VAD variables under the `vad` section and speaker-related variables under the `speaker_embeddings` section.
To perform VAD-based diarization we can ignore `oracle_vad_manifest` in the `speaker_embeddings` section for now, but we need to fill in the rest. We also need to provide pretrained `model_path` values for the VAD and speaker embeddings `.nemo` models.
```
pretrained_vad = 'MarbleNet-3x2x64'
pretrained_speaker_model = 'SpeakerNet_verification'
```
Note that in this tutorial we use the MarbleNet-3x2 VAD model introduced and published in [ICASSP MarbleNet](https://arxiv.org/pdf/2010.13886.pdf). You might need to tune it on a dev set similar to your dataset if you would like to improve performance.
The speakerNet-M-Diarization model achieves a 7.3% confusion error rate on the CH109 set with oracle VAD. This model is trained on the VoxCeleb1, VoxCeleb2, Fisher, and Switchboard datasets. For better performance on your specific data, fine-tune the speaker verification model on a dev set similar to your test set.
```
output_dir = os.path.join(ROOT,'outputs')
config.diarizer.paths2audio_files = paths2audio_files
config.diarizer.path2groundtruth_rttm_files = path2groundtruth_rttm_files
config.diarizer.out_dir = output_dir # Directory to store intermediate files and prediction outputs
config.diarizer.speaker_embeddings.model_path = pretrained_speaker_model
#Here we use our inhouse pretrained NeMo VAD
config.diarizer.vad.model_path = pretrained_vad
config.diarizer.vad.window_length_in_sec = 0.15
config.diarizer.vad.shift_length_in_sec = 0.01
config.diarizer.vad.threshold = 0.8
```
Now that we have set all the required variables, let's initialize the clustering model with the above config.
```
from nemo.collections.asr.models import ClusteringDiarizer
sd_model = ClusteringDiarizer(cfg=config)
```
And diarize with a single line of code:
```
sd_model.diarize()
```
As can be seen, we first perform VAD; then, using the time stamps that the VAD writes to `{output_dir}/vad_outputs`, we compute speaker embeddings (`{output_dir}/speaker_outputs/embeddings/`), which are then grouped using spectral clustering.
To generate the VAD-predicted time stamps, we run VAD inference to obtain frame-level predictions → (optionally) apply decision smoothing → and, given `threshold`, write the speech segments to an RTTM-like time stamp manifest.
We use VAD decision smoothing (87.5% overlap median here) as described [here](https://github.com/NVIDIA/NeMo/blob/speaker_diarization/nemo/collections/asr/parts/vad_utils.py#L169).
You can also tune the threshold on your dev set using the provided [script](https://github.com/NVIDIA/NeMo/blob/speaker_diarization/scripts/vad_tune_threshold.py).
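To make the thresholding step concrete, here is a minimal NumPy sketch (an illustration only, not NeMo's implementation) that converts frame-level speech probabilities into speech segments given a `threshold` and a frame shift:
```
import numpy as np

def frames_to_segments(speech_probs, threshold, shift_sec):
    """Convert frame-level speech probabilities into (start, end) segments in seconds."""
    is_speech = speech_probs >= threshold
    # Find rising/falling edges of the boolean speech mask.
    edges = np.diff(is_speech.astype(int), prepend=0, append=0)
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    return [(round(s * shift_sec, 3), round(e * shift_sec, 3)) for s, e in zip(starts, ends)]

probs = np.array([0.1, 0.2, 0.9, 0.95, 0.85, 0.3, 0.1, 0.92, 0.9, 0.2])
print(frames_to_segments(probs, threshold=0.8, shift_sec=0.01))
# [(0.02, 0.05), (0.07, 0.09)]
```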
```
# VAD predicted time stamps
from nemo.collections.asr.parts.vad_utils import extract_labels, plot
plot(paths2audio_files[0],
'outputs/vad_outputs/overlap_smoothing_output_median_0.875/an4_diarize_test.median',
path2groundtruth_rttm_files[0],
threshold=config.diarizer.vad.threshold)
print(f"threshold: {config.diarizer.vad.threshold}")
```
Predicted outputs are written to `output_dir/pred_rttms`. Let's see what we predicted alongside the VAD prediction.
```
!cat outputs/pred_rttms/an4_diarize_test.rttm
pred_labels = rttm_to_labels('outputs/pred_rttms/an4_diarize_test.rttm')
hypothesis = labels_to_pyannote_object(pred_labels)
hypothesis
reference
```
# Storing and Restoring models
Now we can save the whole config and model parameters into a single `.nemo` file and restore from it at any time.
```
oracle_model.save_to(os.path.join(output_dir,'diarize.nemo'))
```
Restore from saved model
```
del oracle_model
import nemo.collections.asr as nemo_asr
restored_model = nemo_asr.models.ClusteringDiarizer.restore_from(os.path.join(output_dir,'diarize.nemo'))
```
# ADD ON - ASR
```
IPython.display.Audio(an4_audio)
quartznet = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")
for fname, transcription in zip(paths2audio_files, quartznet.transcribe(paths2audio_files=paths2audio_files)):
print(f"Audio in {fname} was recognized as: {transcription}")
```
Lambda School Data Science, Unit 2: Predictive Modeling
# Regression & Classification, Module 2
## Assignment
You'll continue to **predict how much it costs to rent an apartment in NYC,** using the dataset from renthop.com.
- [ ] Do train/test split. Use data from April & May 2016 to train. Use data from June 2016 to test.
- [ ] Engineer at least two new features. (See below for explanation & ideas.)
- [ ] Fit a linear regression model with at least two features.
- [ ] Get the model's coefficients and intercept.
- [ ] Get regression metrics RMSE, MAE, and $R^2$, for both the train and test data.
- [ ] What's the best test MAE you can get? Share your score and features used with your cohort on Slack!
- [ ] As always, commit your notebook to your fork of the GitHub repo.
#### [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)
> "Some machine learning projects succeed and some fail. What makes the difference? Easily the most important factor is the features used." — Pedro Domingos, ["A Few Useful Things to Know about Machine Learning"](https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf)
> "Coming up with features is difficult, time-consuming, requires expert knowledge. 'Applied machine learning' is basically feature engineering." — Andrew Ng, [Machine Learning and AI via Brain simulations](https://forum.stanford.edu/events/2011/2011slides/plenary/2011plenaryNg.pdf)
> Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work.
#### Feature Ideas
- Does the apartment have a description?
- How long is the description?
- How many total perks does each apartment have?
- Are cats _or_ dogs allowed?
- Are cats _and_ dogs allowed?
- Total number of rooms (beds + baths)
- Ratio of beds to baths
- What's the neighborhood, based on address or latitude & longitude?
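Below is a minimal sketch of two of the ideas above, using a toy DataFrame (the actual renthop column names, e.g. `description`, may differ, so treat these as assumptions):
```
import pandas as pd

# Toy data standing in for the renthop listings (column names are assumptions).
toy = pd.DataFrame({
    'description': ['Sunny 2BR near the park', '', 'Cozy studio'],
    'bedrooms': [2, 1, 0],
    'bathrooms': [1, 1, 1],
})

# Does the apartment have a description? How long is it?
toy['has_description'] = (toy['description'].str.strip() != '').astype(int)
toy['description_length'] = toy['description'].str.len()

# Total number of rooms and the beds-to-baths ratio.
toy['total_rooms'] = toy['bedrooms'] + toy['bathrooms']
toy['beds_per_bath'] = toy['bedrooms'] / toy['bathrooms']

print(toy)
```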
## Stretch Goals
- [ ] If you want more math, skim [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 3.1, Simple Linear Regression, & Chapter 3.2, Multiple Linear Regression
- [ ] If you want more introduction, watch [Brandon Foltz, Statistics 101: Simple Linear Regression](https://www.youtube.com/watch?v=ZkjP5RJLQF4)
(20 minutes, over 1 million views)
- [ ] Do the [Plotly Dash](https://dash.plot.ly/) Tutorial, Parts 1 & 2.
- [ ] Add your own stretch goal(s) !
```
# If you're in Colab...
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install required python packages:
# pandas-profiling, version >= 2.0
# plotly, version >= 4.0
!pip install --upgrade pandas-profiling plotly
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git
!git pull origin master
# Change into directory for module
os.chdir('module1')
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
import numpy as np
import pandas as pd
pd.set_option('display.max_columns', None)
# Read New York City apartment rental listing data
df = pd.read_csv('../data/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
# Import block
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
# Despite filtering the most extreme prices, we still have the apartment with 10 bathrooms.
df[df['bathrooms'] >= 7]
# Let's add a little more filtering, on bedrooms and bathrooms.
df = df.query('bedrooms <= 7 and bathrooms <= 5')
print(df.shape)
df.head()
# Making a month feature to split data on
df['created'] = pd.to_datetime(df['created'], infer_datetime_format=True)
df['month'] = df['created'].dt.month
df['interest_level'].value_counts()
# Mapping interest level to digits
interest_dict = {
'low': 1,
'medium': 2,
'high': 3
}
df['interest_level'] = df['interest_level'].map(interest_dict)
df.head()
# Adding features - first, the amenities feature we used last time, a sum of the amenities
boolfeatures = df.columns.tolist()
del boolfeatures[:10]
df['amenities'] = df[boolfeatures].sum(axis=1)
df.head()
# Second feature - An actual Manhattan norm! Distance from the Empire State Building.
#df['Manhattan_norm'] = (((df['latitude']-40.7484)**2)+((df['longitude']-(-73.9857))**2))**0.5
df['Manhattan_norm'] = (((df['latitude']-40.7484)**2)+((df['longitude']-(-73.9857))**2))**0.5
df.head()
# Train-test split.
train = df[(df['month'] == 4) | (df['month'] == 5)]
test = df[df['month'] == 6]
# Define our features and target.
features = ['bathrooms','bedrooms','interest_level','amenities','Manhattan_norm']
target = 'price'
# Instantiate X and y for train and test.
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
# Instantiate and fit model
model = LinearRegression()
model.fit(X_train,y_train)
model.predict([[1,2,2,5,0.050]])
# Get coefficients and intercept
print(model.coef_)
print(model.intercept_)
# Predictions for train and test
y_pred_train = model.predict(X_train)
y_pred_test = model.predict(X_test)
# Error metrics - MAE, RMSE, R^2
# MAE
print('MAE')
print('train',mean_absolute_error(y_train, y_pred_train))
print('test',mean_absolute_error(y_test, y_pred_test))
print('_________________')
# RMSE
print('RMSE')
print('train',np.sqrt(mean_squared_error(y_train, y_pred_train)))
print('test',np.sqrt(mean_squared_error(y_test, y_pred_test)))
print('__________________')
#R^2
print('R^2')
print('train',r2_score(y_train, y_pred_train))
print('test',r2_score(y_test, y_pred_test))
# Some feature optimization? Why not.
# Two bisection searches over the reference point: one for latitude, then one for longitude (not true gradient descent).
df2 = df.copy()
latmin = df2['latitude'].min()
latmax = df2['latitude'].max()
bounddict = {
'lowbound': latmin,
'midbound': ((latmin+latmax)/2),
'upbound': latmax
}
for i in range(1,100):
bounderrors={}
# Set the central value according to the bounds
bounddict['midbound'] = (bounddict['lowbound']+bounddict['upbound'])/2
for key,value in bounddict.items():
# Set the feature's parameter to the bound we're testing
df2['Manhattan_norm'] = (((df2['latitude']-value)**2)+((df2['longitude']-(-73.9857))**2))**0.5
# Split the data
train2 = df2[(df2['month'] == 4) | (df2['month'] == 5)]
test2 = df2[df2['month'] == 6]
# Instantiate X and y for train and test.
X_train2 = train2[features]
y_train2 = train2[target]
X_test2 = test2[features]
y_test2 = test2[target]
# Instantiate and fit model
model2 = LinearRegression()
model2.fit(X_train2,y_train2)
# Predictions for train and test
y_pred_train2 = model2.predict(X_train2)
y_pred_test2 = model2.predict(X_test2)
# Get the error for the value
bounderrors[key] = mean_absolute_error(y_test2, y_pred_test2)
#Eliminate whichever extremal bound is worse
if bounderrors['lowbound'] > bounderrors['upbound']:
bounddict['lowbound'] = bounddict['midbound']
else:
bounddict['upbound'] = bounddict['midbound']
print(bounddict)
print(bounderrors)
# Our top bound is the best one, we'll set the parameter in Manhattan_norm accordingly.
df2['Manhattan_norm'] = (((df2['latitude']-40.7308375)**2)+((df2['longitude']-(-73.9857))**2))**0.5
# Same story for the longitude.
longmin = df2['longitude'].min()
longmax = df2['longitude'].max()
bounddict = {
'lowbound': longmin,
'midbound': ((longmin+longmax)/2),
'upbound': longmax
}
for i in range(1,100):
bounderrors={}
# Set the central value according to the bounds
bounddict['midbound'] = (bounddict['lowbound']+bounddict['upbound'])/2
for key,value in bounddict.items():
# Set the feature's parameter to the bound we're testing
df2['Manhattan_norm'] = (((df2['latitude']-40.7308375)**2)+((df2['longitude']-(value))**2))**0.5
# Split the data
train2 = df2[(df2['month'] == 4) | (df2['month'] == 5)]
test2 = df2[df2['month'] == 6]
# Instantiate X and y for train and test.
X_train2 = train2[features]
y_train2 = train2[target]
X_test2 = test2[features]
y_test2 = test2[target]
# Instantiate and fit model
model2 = LinearRegression()
model2.fit(X_train2,y_train2)
# Predictions for train and test
y_pred_train2 = model2.predict(X_train2)
y_pred_test2 = model2.predict(X_test2)
# Get the error for the value
bounderrors[key] = mean_absolute_error(y_test2, y_pred_test2)
#Eliminate whichever extremal bound is worse
if bounderrors['lowbound'] > bounderrors['upbound']:
bounddict['lowbound'] = bounddict['midbound']
else:
bounddict['upbound'] = bounddict['midbound']
print(bounddict)
print(bounderrors)
# Nice convergence.
# Take it from the top with our original dataframe.
# Set the feature's parameter to the bound we're testing
df['Manhattan_norm'] = (((df['latitude']-40.7308375)**2)+((df['longitude']-(-74.0147))**2))**0.5
# Split the data
train = df[(df['month'] == 4) | (df['month'] == 5)]
test = df[df['month'] == 6]
# Instantiate X and y for train and test.
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
# Instantiate and fit model
model = LinearRegression()
model.fit(X_train,y_train)
# Predictions for train and test
y_pred_train = model.predict(X_train)
y_pred_test = model.predict(X_test)
# Error metrics - MAE, RMSE, R^2
# MAE
print('MAE')
print('train',mean_absolute_error(y_train, y_pred_train))
print('test',mean_absolute_error(y_test, y_pred_test))
print('_________________')
# RMSE
print('RMSE')
print('train',np.sqrt(mean_squared_error(y_train, y_pred_train)))
print('test',np.sqrt(mean_squared_error(y_test, y_pred_test)))
print('__________________')
#R^2
print('R^2')
print('train',r2_score(y_train, y_pred_train))
print('test',r2_score(y_test, y_pred_test))
```
# Advanced Altair: Multiple Coordinated Views
```
import altair as alt
import pandas as pd
import numpy as np
flu = pd.read_csv('flunet2010_11countries.csv', header=[0,1])
cols = flu.columns.tolist()
normed = pd.melt(flu, id_vars=[cols[0]], value_vars=cols[1:], var_name=['continent','country'])
normed = normed.rename(columns={normed.columns[0]: 'week'})
print(normed.shape)
normed.head()
# setup renderer for Jupyter Notebook (not needed for JupyterLab)
alt.renderers.enable('notebook')
```
## Visualization 1
#### Create Linked Plots Showing Flu Cases per Country and Total Flu Cases per Week
#### Selections:
* Click to select individual countries.
* Hold shift and click to select multiple countries.
* Brush barchart to narrow top view.
```
click = alt.selection_multi(encodings=['color'])
brush = alt.selection_interval(encodings=['x'])
line = alt.Chart(normed).mark_line(point=alt.MarkConfig(shape='circle',size=20)).encode(
y='value:Q',
x='week:N',
color=alt.Color('country:N',legend=None),
tooltip=['week','value']
).properties(
height=250,
width=750,
title="Number of Flu Cases per Week, per Country",
selection=click
).transform_filter(
brush
).transform_filter(
click
)
bar = alt.Chart(normed).mark_bar().encode(
alt.X('week:N'),
alt.Y('sum(value):Q',title=None),
color = alt.value('pink')
).properties(
height=250,
width=750,
title="Number of Flu Cases per Week, per Country"
).add_selection(
brush
)
legend = alt.Chart(normed).mark_circle().encode(
y = alt.Y('country:N',title=None),
color = alt.condition(click, alt.Color('country:N', legend=None), alt.value('gray'))
).properties(
selection=click
)
legend | line & bar
```
## Visualization 2
#### Create an Overview+Detail Plot Showing Flu Cases per Country
```
click = alt.selection_multi(encodings=['y'])
brush = alt.selection_interval(encodings=['x'])
bar = alt.Chart(normed).mark_bar(point=alt.MarkConfig(shape='circle',size=20)).encode(
y='value:Q',
x='week:N',
color=alt.Color('country:N',legend=None),
tooltip=['country','week','value']
).properties(
height=250,
width=750,
title="Number of Flu Cases per Week, per Country"
).transform_filter(
click
).transform_filter(
brush
)
bar_overview = alt.Chart(normed).mark_bar(point=alt.MarkConfig(shape='circle',size=20)).encode(
y='value:Q',
x='week:N',
color=alt.Color('country:N',legend=None),
tooltip=['week','value']
).properties(
height=100,
width=750,
selection=brush
)
legend = alt.Chart(normed).mark_circle().encode(
y = alt.Y('continent:N',title=None),
color = alt.condition(click, alt.value('black'), alt.value('lightgray'))
).properties(
selection=click
)
legend2 = alt.Chart(normed).mark_circle().encode(
y = alt.Y('country:N',title=None),
color = alt.condition(click, 'country:N', alt.value('lightgray'))
).properties(
selection=click
)
legend | legend2 | bar & bar_overview
```
## Visualization 3
#### Create Linked Plots Showing Flu Cases per Country per Week and Total Flu Cases per Country
For this visualization we create two linked plots: one showing flu cases per country per week, and a second showing the total flu cases per country.
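The original notebook ends before the code for this visualization, so the following is just one possible sketch of the intended design (an assumption), reusing the selection pattern from the earlier visualizations: clicking a country's total bar filters the weekly chart.
```
click = alt.selection_multi(fields=['country'])

# Weekly flu cases per country, filtered by the selected countries.
weekly = alt.Chart(normed).mark_bar().encode(
    x='week:N',
    y='value:Q',
    color=alt.Color('country:N', legend=None),
    tooltip=['country', 'week', 'value']
).properties(
    height=250,
    width=750,
    title="Number of Flu Cases per Week, per Country"
).transform_filter(
    click
)

# Total flu cases per country; click (shift-click for multiple) to select.
totals = alt.Chart(normed).mark_bar().encode(
    y=alt.Y('country:N', title=None),
    x=alt.X('sum(value):Q', title='Total Flu Cases'),
    color=alt.condition(click, alt.Color('country:N', legend=None), alt.value('lightgray'))
).properties(
    height=250,
    width=200,
    selection=click
)

totals | weekly
```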
Things to do here:
- show the difference between kappa=1 and kappa=2
- look at the error estimates from one of the relevant papers. how does the estimate vary with distance from a singularity? with the order of the singularity? what if only the derivatives are singular?
- maybe stage2 refinement should be modified near a singularity?
Fault tips:
- Identify or specify singularities and then make sure that the QBX and quadrature account for the singularities. This would be helpful for avoiding the need to have the sigmoid transition.
- *Would it be useful to use an interpolation that includes the end points so that I can easily make sure that slip goes to zero at a fault tip?* --> I should test this!
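On the last point: Clenshaw-Curtis nodes include the interval endpoints while Gauss-Legendre nodes do not, which is what would make it easy to pin slip to zero at a fault tip. A quick check (using `numpy` and `quadpy` directly, independent of the `common` helpers):
```
import numpy as np
import quadpy

nq = 6
gauss_x, gauss_w = np.polynomial.legendre.leggauss(nq)
cc = quadpy.c1.clenshaw_curtis(nq)

print("Gauss-Legendre endpoints: ", gauss_x[0], gauss_x[-1])      # strictly inside (-1, 1)
print("Clenshaw-Curtis endpoints:", cc.points[0], cc.points[-1])  # exactly -1 and 1
```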
```
from config import setup, import_and_display_fnc
setup()
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
from common import (
gauss_rule,
qbx_matrix2,
single_layer_matrix,
double_layer_matrix,
adjoint_double_layer_matrix,
hypersingular_matrix,
stage1_refine,
qbx_panel_setup,
stage2_refine,
pts_grid,
)
import quadpy
def clencurt(n1):
"""Computes the Clenshaw Curtis quadrature nodes and weights"""
C = quadpy.c1.clenshaw_curtis(n1)
return (C.points, C.weights)
np.log(np.sqrt(2) * 0.001) / np.log(0.03125)
panel_width = 0.125
nq = 6
t = sp.var("t")
fault, = stage1_refine([(t, t * 0, t)], gauss_rule(nq), control_points=[(0, 0, 1.0, panel_width)])
fault_expansions, = qbx_panel_setup(
[fault], directions=[1], mult=0.5, singularities=np.array([[0,-1], [0,1]])
)
print(fault_expansions.pts[:,0])
print(fault.n_panels, fault.n_pts)
K = hypersingular_matrix
#K = double_layer_matrix
#K = single_layer_matrix
M = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=4)
M2 = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=5)
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.imshow(np.log10(np.abs((M - M2) / M))[:,0,:])
plt.colorbar()
plt.subplot(1,2,2)
slip = np.cos(np.pi * 0.5 * fault.pts[:,1])
slip_err = M.dot(slip) - M2.dot(slip)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'b-', label='cos')
y = fault.pts[:,1]
slip = np.ones_like(fault.pts[:,1])
slip_err = M.dot(slip) - M2.dot(slip)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'r-', label='one')
slip = 1 + np.cos(np.pi * fault.pts[:,1])
slip_err = M.dot(slip) - M2.dot(slip)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'k-', label='1+cos')
plt.legend()
plt.tight_layout()
plt.show()
panel_width = 0.125
nq = 6
t = sp.var("t")
fault, = stage1_refine([(t, t * 0, t)], gauss_rule(nq), control_points=[(0, 0, 1.0, panel_width)])
fault_expansions, = qbx_panel_setup(
[fault], directions=[1], mult=0.5, singularities=np.array([[0,-1], [0,1]])
)
print(fault_expansions.pts[:,0])
print(fault.n_panels, fault.n_pts)
K = hypersingular_matrix
#K = double_layer_matrix
#K = single_layer_matrix
M = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=4, kappa=10)
M2 = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=5, kappa=10)
plt.figure(figsize=(10,10))
plt.subplot(2,2,1)
plt.imshow(np.log10(np.abs((M - M2) / M))[:,0,:])
plt.colorbar()
plt.subplot(2,2,2)
slip_cos = np.cos(np.pi * 0.5 * fault.pts[:,1])
slip_err = M.dot(slip_cos) - M2.dot(slip_cos)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'b-', label='cos')
y = fault.pts[:,1]
slip_ones = np.ones_like(fault.pts[:,1])
slip_err = M.dot(slip_ones) - M2.dot(slip_ones)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'r-', label='one')
slip_1cos = 1 + np.cos(np.pi * fault.pts[:,1])
slip_err = M.dot(slip_1cos) - M2.dot(slip_1cos)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'k-', label='1+cos')
plt.legend()
plt.subplot(2,2,3)
plt.plot(y, M2.dot(slip_ones)[:,0], 'r-', label='one')
plt.plot(y, M2.dot(slip_cos)[:,0], 'b-', label='cos')
plt.plot(y, M2.dot(slip_1cos)[:,0], 'k-', label='1+cos')
plt.ylim([-1, 2])
plt.legend()
plt.tight_layout()
plt.show()
```
## Convergence with r
```
panel_width = 0.75
nq = 16
t = sp.var("t")
fault, = stage1_refine([(t, t * 0, t)], gauss_rule(nq), control_points=[(0, 0, 1.0, panel_width)])
fault_expansions, = qbx_panel_setup(
[fault], directions=[1], mult=0.5, singularities=np.array([[0,-1], [0,1]])
)
#print(fault_expansions.pts[:,0])
print(fault.n_panels, fault.n_pts)
K = hypersingular_matrix
#K = double_layer_matrix
#K = single_layer_matrix
M2 = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=20, kappa=3)
Ms = []
for p in range(4, 20, 2):
M = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=p, kappa=3)
Ms.append(M)
slip_errs = []
svs = []
for i in range(len(Ms)):
slip = np.ones_like(fault.pts[:,1])
#slip = np.cos(0.5 * np.pi * fault.pts[:,1])
#slip = 0.5 + 0.5 * np.cos(np.pi * fault.pts[:,1])
slip_err = Ms[i].dot(slip) - M2.dot(slip)
svs.append(Ms[i].dot(slip))
slip_errs.append(slip_err)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), label=str(4 + 2 * i))
#plt.xlim([-1.1, -0.7])
plt.legend(loc='right')
plt.tight_layout()
plt.show()
np.array(svs)[:,-1,0]
np.array(slip_errs)[:,-1,0]
np.array(svs)[:,-1,0]
np.array(slip_errs)[:,-1,0]
```
## What if I use clenshaw-curtis and just set the endpoints to zero?
```
panel_width = 0.125
nq = 6
t = sp.var("t")
qx, qw = clencurt(nq)
fault, = stage1_refine([(t, t * 0, t)], (qx, qw), control_points=[(0, 0, 1.0, panel_width)])
fault_expansions, = qbx_panel_setup(
[fault], directions=[1], mult=0.5, singularities=np.array([[0,-1], [0,1]])
)
print(fault_expansions.pts[:,0])
print(fault.n_panels, fault.n_pts)
K = hypersingular_matrix
#K = double_layer_matrix
#K = single_layer_matrix
M = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=8, kappa=10)
M2 = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=9, kappa=10)
fault.panel_bounds
plt.figure(figsize=(10,10))
plt.subplot(2,2,1)
plt.imshow(np.log10(np.abs((M - M2) / M))[:,0,:])
plt.colorbar()
plt.subplot(2,2,2)
slip_cos = np.cos(np.pi * 0.5 * fault.pts[:,1])
slip_err = M.dot(slip_cos) - M2.dot(slip_cos)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'b-', label='cos')
y = fault.pts[:,1]
slip_ones = np.ones_like(fault.pts[:,1])
slip_ones[:nq] = 1 + (fault.pts[:nq,1] - fault.panel_bounds[0,1]) / (fault.panel_bounds[0, 1] - fault.panel_bounds[0,0])
slip_ones[-nq:] = 1 - (fault.pts[-nq:,1] - fault.panel_bounds[-1,0]) / (fault.panel_bounds[-1, 1] - fault.panel_bounds[-1,0])
slip_err = M.dot(slip_ones) - M2.dot(slip_ones)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'r-', label='one')
def sigmoid(x0, W):
return 1.0 / (1 + np.exp((fault.pts[:, 1] - x0) / W))
#slip_1cos = sigmoid(0.5, 0.05) - sigmoid(-0.5, 0.05)
slip_1cos = 0.5 + 0.5 * np.cos(np.pi * fault.pts[:,1])
slip_err = M.dot(slip_1cos) - M2.dot(slip_1cos)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'k-', label='1+cos')
plt.legend()
plt.subplot(2,2,3)
plt.plot(y, M2.dot(slip_ones)[:,0], 'r-', label='one')
plt.plot(y, M2.dot(slip_cos)[:,0], 'b-', label='cos')
plt.plot(y, M2.dot(slip_1cos)[:,0], 'k-', label='1+cos')
plt.ylim([-1, 2])
plt.legend()
plt.subplot(2,2,4)
plt.plot(y, slip_ones, 'r-o', markersize=4.0, label='one')
plt.plot(y, slip_cos, 'b-o', markersize=4.0, label='cos')
plt.plot(y, slip_1cos, 'k-o', markersize=4.0, label='1+cos')
plt.legend()
plt.tight_layout()
plt.show()
nq = 256
panel_width = 4.0
qx, qw = gauss_rule(nq)
#qx, qw = clencurt(nq)
def trial(qx, qw, panel_width, f):
t = sp.var("t")
cp = [(0, 0, 1.0, panel_width)]
fault, = stage1_refine([(t, t * 0, t)], (qx, qw), control_points=cp)
fault_expansions, = qbx_panel_setup([fault], directions=[0], p=10)
fault_slip_to_fault_stress = qbx_matrix2(
hypersingular_matrix, fault, fault.pts, fault_expansions
)
# from common import build_interpolator, interpolate_fnc
# slip = 1 - np.abs(qx)
# #slip[0] = 0
# #slip[-1] = 0
# evalx = np.linspace(-1, 1, 1000)
# evalslip = interpolate_fnc(build_interpolator(qx), slip, evalx)
# plt.plot(evalx, evalslip, 'k-')
# plt.show()
fy = fault.pts[:,1]
slip = f(fault.pts[:,1])#np.ones(fault.n_pts)
# slip[0] = 0
# slip[-1] = 0
# plt.plot(fy, slip)
# plt.show()
stress = fault_slip_to_fault_stress.dot(slip)
plt.plot(fy, stress[:,0], 'r-')
plt.plot(fy, stress[:,1], 'b-')
plt.show()
def f(y):
return np.cos(y * np.pi * 0.5)
trial(*gauss_rule(256), 4.0, f)
trial(*gauss_rule(64), 4.0, f)
trial(*gauss_rule(128), 1.0, f)
trial(*gauss_rule(8), 1.0 / 8.0, f)
trial(*gauss_rule(8), 1.0 / 16.0, f)
def approach_test(obs_pts, slip):
panel_width = 0.24
nq = 16
fault, = stage1_refine([(t, t * 0, t)], gauss_rule(nq), control_points=[(0, 0, 1.0, panel_width)])
obs_pts.shape
V1 = hypersingular_matrix(fault, obs_pts).dot(slip(fault.pts[:,1]))[:,0]
panel_width = 0.12
nq = 32
fault, = stage1_refine([(t, t * 0, t)], gauss_rule(nq), control_points=[(0, 0, 1.0, panel_width)])
obs_pts.shape
V2 = hypersingular_matrix(fault, obs_pts).dot(slip(fault.pts[:,1]))[:,0]
# plt.plot(fault.pts[:,1], slip(fault.pts[:,1]))
# plt.show()
return V1 - V2
seq1 = []
yvs = np.linspace(-1.1, 1.1, 23)
for yv in yvs:
dist = 2.0 ** -np.arange(10)
obs_pts = np.stack((dist, np.full_like(dist, yv)), axis=1)
#print(approach_test(obs_pts, lambda x: np.cos(x * np.pi * 0.5)))
err = approach_test(obs_pts, lambda x: np.ones_like(x))
seq1.append(err[6])
plt.plot(yvs, np.log10(np.abs(seq1)))
plt.show()
seq = []
yvs = np.linspace(-1.1, 1.1, 23)
for yv in yvs:
dist = 2.0 ** -np.arange(10)
obs_pts = np.stack((dist, np.full_like(dist, yv)), axis=1)
#print(approach_test(obs_pts, lambda x: np.cos(x * np.pi * 0.5)))
err = approach_test(obs_pts, lambda x: np.cos(x * np.pi * 0.5))
seq.append(err[6])
plt.plot(yvs, np.log10(np.abs(seq)), 'r-')
plt.plot(yvs, np.log10(np.abs(seq1)), 'k-')
plt.show()
```
|
github_jupyter
|
from config import setup, import_and_display_fnc
setup()
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
from common import (
gauss_rule,
qbx_matrix2,
single_layer_matrix,
double_layer_matrix,
adjoint_double_layer_matrix,
hypersingular_matrix,
stage1_refine,
qbx_panel_setup,
stage2_refine,
pts_grid,
)
import quadpy
def clencurt(n1):
"""Computes the Clenshaw Curtis quadrature nodes and weights"""
C = quadpy.c1.clenshaw_curtis(n1)
return (C.points, C.weights)
log(np.sqrt(2) * 0.001) / log(0.03125)
panel_width = 0.125
nq = 6
t = sp.var("t")
fault, = stage1_refine([(t, t * 0, t)], gauss_rule(nq), control_points=[(0, 0, 1.0, panel_width)])
fault_expansions, = qbx_panel_setup(
[fault], directions=[1], mult=0.5, singularities=np.array([[0,-1], [0,1]])
)
print(fault_expansions.pts[:,0])
print(fault.n_panels, fault.n_pts)
K = hypersingular_matrix
#K = double_layer_matrix
#K = single_layer_matrix
M = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=4)
M2 = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=5)
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.imshow(np.log10(np.abs((M - M2) / M))[:,0,:])
plt.colorbar()
plt.subplot(1,2,2)
slip = np.cos(np.pi * 0.5 * fault.pts[:,1])
slip_err = M.dot(slip) - M2.dot(slip)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'b-', label='cos')
y = fault.pts[:,1]
slip = np.ones_like(fault.pts[:,1])
slip_err = M.dot(slip) - M2.dot(slip)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'r-', label='one')
slip = 1 + np.cos(np.pi * fault.pts[:,1])
slip_err = M.dot(slip) - M2.dot(slip)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'k-', label='1+cos')
plt.legend()
plt.tight_layout()
plt.show()
panel_width = 0.125
nq = 6
t = sp.var("t")
fault, = stage1_refine([(t, t * 0, t)], gauss_rule(nq), control_points=[(0, 0, 1.0, panel_width)])
fault_expansions, = qbx_panel_setup(
[fault], directions=[1], mult=0.5, singularities=np.array([[0,-1], [0,1]])
)
print(fault_expansions.pts[:,0])
print(fault.n_panels, fault.n_pts)
K = hypersingular_matrix
#K = double_layer_matrix
#K = single_layer_matrix
M = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=4, kappa=10)
M2 = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=5, kappa=10)
plt.figure(figsize=(10,10))
plt.subplot(2,2,1)
plt.imshow(np.log10(np.abs((M - M2) / M))[:,0,:])
plt.colorbar()
plt.subplot(2,2,2)
slip_cos = np.cos(np.pi * 0.5 * fault.pts[:,1])
slip_err = M.dot(slip_cos) - M2.dot(slip_cos)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'b-', label='cos')
y = fault.pts[:,1]
slip_ones = np.ones_like(fault.pts[:,1])
slip_err = M.dot(slip_ones) - M2.dot(slip_ones)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'r-', label='one')
slip_1cos = 1 + np.cos(np.pi * fault.pts[:,1])
slip_err = M.dot(slip_1cos) - M2.dot(slip_1cos)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'k-', label='1+cos')
plt.legend()
plt.subplot(2,2,3)
plt.plot(y, M2.dot(slip_ones)[:,0], 'r-', label='one')
plt.plot(y, M2.dot(slip_cos)[:,0], 'b-', label='cos')
plt.plot(y, M2.dot(slip_1cos)[:,0], 'k-', label='1+cos')
plt.ylim([-1, 2])
plt.legend()
plt.tight_layout()
plt.show()
panel_width = 0.75
nq = 16
t = sp.var("t")
fault, = stage1_refine([(t, t * 0, t)], gauss_rule(nq), control_points=[(0, 0, 1.0, panel_width)])
fault_expansions, = qbx_panel_setup(
[fault], directions=[1], mult=0.5, singularities=np.array([[0,-1], [0,1]])
)
#print(fault_expansions.pts[:,0])
print(fault.n_panels, fault.n_pts)
K = hypersingular_matrix
#K = double_layer_matrix
#K = single_layer_matrix
Ms = []
M2 = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=20, kappa=3)
Ms = []
for p in range(4, 20, 2):
M = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=p, kappa=3)
Ms.append(M)
slip_errs = []
svs = []
for i in range(len(Ms)):
slip = np.ones_like(fault.pts[:,1])
#slip = np.cos(0.5 * np.pi * fault.pts[:,1])
#slip = 0.5 + 0.5 * np.cos(np.pi * fault.pts[:,1])
slip_err = Ms[i].dot(slip) - M2.dot(slip)
svs.append(Ms[i].dot(slip))
slip_errs.append(slip_err)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), label=str(4 + 2 * i))
#plt.xlim([-1.1, -0.7])
plt.legend(loc='right')
plt.tight_layout()
plt.show()
np.array(svs)[:,-1,0]
np.array(slip_errs)[:,-1,0]
np.array(svs)[:,-1,0]
np.array(slip_errs)[:,-1,0]
panel_width = 0.125
nq = 6
t = sp.var("t")
qx, qw = clencurt(nq)
fault, = stage1_refine([(t, t * 0, t)], (qx, qw), control_points=[(0, 0, 1.0, panel_width)])
fault_expansions, = qbx_panel_setup(
[fault], directions=[1], mult=0.5, singularities=np.array([[0,-1], [0,1]])
)
print(fault_expansions.pts[:,0])
print(fault.n_panels, fault.n_pts)
K = hypersingular_matrix
#K = double_layer_matrix
#K = single_layer_matrix
M = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=8, kappa=10)
M2 = qbx_matrix2(K, fault, fault.pts, fault_expansions, p=9, kappa=10)
fault.panel_bounds
plt.figure(figsize=(10,10))
plt.subplot(2,2,1)
plt.imshow(np.log10(np.abs((M - M2) / M))[:,0,:])
plt.colorbar()
plt.subplot(2,2,2)
slip_cos = np.cos(np.pi * 0.5 * fault.pts[:,1])
slip_err = M.dot(slip_cos) - M2.dot(slip_cos)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'b-', label='cos')
y = fault.pts[:,1]
slip_ones = np.ones_like(fault.pts[:,1])
slip_ones[:nq] = 1 + (fault.pts[:nq,1] - fault.panel_bounds[0,1]) / (fault.panel_bounds[0, 1] - fault.panel_bounds[0,0])
slip_ones[-nq:] = 1 - (fault.pts[-nq:,1] - fault.panel_bounds[-1,0]) / (fault.panel_bounds[-1, 1] - fault.panel_bounds[-1,0])
slip_err = M.dot(slip_ones) - M2.dot(slip_ones)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'r-', label='one')
def sigmoid(x0, W):
return 1.0 / (1 + np.exp((fault.pts[:, 1] - x0) / W))
#slip_1cos = sigmoid(0.5, 0.05) - sigmoid(-0.5, 0.05)
slip_1cos = 0.5 + 0.5 * np.cos(np.pi * fault.pts[:,1])
slip_err = M.dot(slip_1cos) - M2.dot(slip_1cos)
plt.plot(fault.pts[:,1], np.log10(np.abs(slip_err[:,0])), 'k-', label='1+cos')
plt.legend()
plt.subplot(2,2,3)
plt.plot(y, M2.dot(slip_ones)[:,0], 'r-', label='one')
plt.plot(y, M2.dot(slip_cos)[:,0], 'b-', label='cos')
plt.plot(y, M2.dot(slip_1cos)[:,0], 'k-', label='1+cos')
plt.ylim([-1, 2])
plt.legend()
plt.subplot(2,2,4)
plt.plot(y, slip_ones, 'r-o', markersize=4.0, label='one')
plt.plot(y, slip_cos, 'b-o', markersize=4.0, label='cos')
plt.plot(y, slip_1cos, 'k-o', markersize=4.0, label='1+cos')
plt.legend()
plt.tight_layout()
plt.show()
nq = 256
panel_width = 4.0
qx, qw = gauss_rule(nq)
#qx, qw = clencurt(nq)
# Build a straight fault with the given quadrature rule and panel width, assemble
# the slip -> stress operator via QBX, and plot both stress components for slip = f(y).
def trial(qx, qw, panel_width, f):
t = sp.var("t")
cp = [(0, 0, 1.0, panel_width)]
fault, = stage1_refine([(t, t * 0, t)], (qx, qw), control_points=cp)
fault_expansions, = qbx_panel_setup([fault], directions=[0], p=10)
fault_slip_to_fault_stress = qbx_matrix2(
hypersingular_matrix, fault, fault.pts, fault_expansions
)
# from common import build_interpolator, interpolate_fnc
# slip = 1 - np.abs(qx)
# #slip[0] = 0
# #slip[-1] = 0
# evalx = np.linspace(-1, 1, 1000)
# evalslip = interpolate_fnc(build_interpolator(qx), slip, evalx)
# plt.plot(evalx, evalslip, 'k-')
# plt.show()
fy = fault.pts[:,1]
slip = f(fault.pts[:,1])#np.ones(fault.n_pts)
# slip[0] = 0
# slip[-1] = 0
# plt.plot(fy, slip)
# plt.show()
stress = fault_slip_to_fault_stress.dot(slip)
plt.plot(fy, stress[:,0], 'r-')
plt.plot(fy, stress[:,1], 'b-')
plt.show()
def f(y):
return np.cos(y * np.pi * 0.5)
trial(*gauss_rule(256), 4.0, f)
trial(*gauss_rule(64), 4.0, f)
trial(*gauss_rule(128), 1.0, f)
trial(*gauss_rule(8), 1.0 / 8.0, f)
trial(*gauss_rule(8), 1.0 / 16.0, f)
# Evaluate the hypersingular operator at obs_pts with two different fault
# discretizations and return the difference as a rough accuracy indicator.
def approach_test(obs_pts, slip):
panel_width = 0.24
nq = 16
fault, = stage1_refine([(t, t * 0, t)], gauss_rule(nq), control_points=[(0, 0, 1.0, panel_width)])
obs_pts.shape
V1 = hypersingular_matrix(fault, obs_pts).dot(slip(fault.pts[:,1]))[:,0]
panel_width = 0.12
nq = 32
fault, = stage1_refine([(t, t * 0, t)], gauss_rule(nq), control_points=[(0, 0, 1.0, panel_width)])
obs_pts.shape
V2 = hypersingular_matrix(fault, obs_pts).dot(slip(fault.pts[:,1]))[:,0]
# plt.plot(fault.pts[:,1], slip(fault.pts[:,1]))
# plt.show()
return V1 - V2
seq1 = []
yvs = np.linspace(-1.1, 1.1, 23)
for yv in yvs:
dist = 2.0 ** -np.arange(10)
obs_pts = np.stack((dist, np.full_like(dist, yv)), axis=1)
#print(approach_test(obs_pts, lambda x: np.cos(x * np.pi * 0.5)))
err = approach_test(obs_pts, lambda x: np.ones_like(x))
seq1.append(err[6])
plt.plot(yvs, np.log10(np.abs(seq1)))
plt.show()
seq = []
yvs = np.linspace(-1.1, 1.1, 23)
for yv in yvs:
dist = 2.0 ** -np.arange(10)
obs_pts = np.stack((dist, np.full_like(dist, yv)), axis=1)
#print(approach_test(obs_pts, lambda x: np.cos(x * np.pi * 0.5)))
err = approach_test(obs_pts, lambda x: np.cos(x * np.pi * 0.5))
seq.append(err[6])
plt.plot(yvs, np.log10(np.abs(seq)), 'r-')
plt.plot(yvs, np.log10(np.abs(seq1)), 'k-')
plt.show()
### Import Data and Packages
```
import pandas as pd
aisles = pd.read_csv('aisles.csv')
departments = pd.read_csv('departments.csv')
order_products__prior = pd.read_csv('order_products__prior.csv')
order_products__train = pd.read_csv('order_products__train.csv')
orders = pd.read_csv('orders.csv')
products = pd.read_csv('products.csv')
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import seaborn as sns
plt.style.use('ggplot')
from sklearn.metrics import roc_auc_score,accuracy_score,f1_score
from sklearn.metrics import confusion_matrix
import lightgbm as lgb
from sklearn.model_selection import GridSearchCV
```
### Data Info
```
print(aisles.info())
aisles.head()
print(departments.info())
departments.head()
print(order_products__prior.info())
order_products__prior.head()
print(order_products__train.info())
order_products__train.head()
print(orders.info())
orders.head()
print(products.info())
products.head()
```
### Analysis of Orders Data
```
plt.hist('order_dow',data=orders, bins=[0,1,2,3,4,5,6,7])
plt.xlabel('Day of Week')
plt.ylabel('Count')
plt.title('Orders by Day of Week')
```
Day 0 and Day 1 have the most orders, so they are likely Saturday and Sunday.
```
plt.hist('order_hour_of_day',data=orders, bins=np.arange(0,24))
plt.xlabel('Hour of Day')
plt.ylabel('Count')
plt.title('Orders by Hour of Day')
```
The peak hours are between 9 AM and 5 PM, but this plot does not show which day-and-hour combinations are busiest; the heatmap below does.
```
grouped = orders.groupby(["order_dow", "order_hour_of_day"])["order_number"].count().reset_index()
grouped = grouped.pivot('order_hour_of_day', 'order_dow', 'order_number')
sns.heatmap(grouped)
```
From the heatmap, the peak day-and-hour combinations occur on days 0 and 1 between 9 AM and 5 PM.
```
plt.hist('days_since_prior_order',data=orders.dropna(),bins=np.arange(0,31)) #NaNs are dropped
plt.xlabel('Days Since Prior Order')
plt.ylabel('Count')
plt.title('Days Since Prior Order')
```
Many customers place another order after 7 days or at the very end of the range (29–30 days). The spike in the last bin most likely appears because `days_since_prior_order` is capped at 30 days in this dataset, so all longer gaps accumulate there.
### Analysis of Products Data
```
#merge orders_products__prior with products, aisles, and departments
temp_merged = pd.merge(products,aisles,on='aisle_id')
products_merged = pd.merge(temp_merged,departments,on='department_id')
merged_order_products__prior = pd.merge(order_products__prior, products_merged, on='product_id',how='left')
merged_order_products__prior.head()
#Top 20 products
product_counts = merged_order_products__prior['product_name'].value_counts()
product_counts.head(20)
```
The top 20 products are mostly fruits and vegetables (except Whole Milk).
```
#Top 20 products aisles
aisle_counts = merged_order_products__prior['aisle'].value_counts().head(20)
sns.barplot(aisle_counts.index,aisle_counts.values,color='green')
plt.xticks(rotation=90)
```
The bar plot of top aisles confirms that the top products and aisles are those of fruits and vegetables.
```
#Top product departments
department_counts = merged_order_products__prior['department'].value_counts()
sns.barplot(department_counts.index,department_counts.values,color='blue')
plt.xticks(rotation=90)
```
The produce department dominates orders, which is consistent with fruits and vegetables being the top products.
```
products_in_order = merged_order_products__prior.groupby('order_id')['add_to_cart_order'].max()
plt.hist(products_in_order,bins=np.arange(1,50))
```
Most orders contain roughly 4 to 7 products.
```
#products with highest reorder rate
products_reorder = merged_order_products__prior.groupby('product_name')['reordered'].mean().sort_values(ascending=False)
products_reorder = products_reorder.head(20)
sns.barplot(products_reorder.index,products_reorder.values,color='aqua')
plt.xticks(rotation='vertical')
plt.ylabel('Reorder Rate')
```
The barplot above shows various products with the highest reorder rate (>85%).
### Analysis of Users Data
```
#how many orders does each user place
prev_orders = orders.groupby('user_id')['order_number'].max().value_counts()
plt.figure(figsize=(13,6))
sns.barplot(prev_orders.index,prev_orders.values)
plt.xticks(rotation='vertical')
plt.xlabel('Number of Orders')
plt.ylabel('Number of Customers')
```
From the plot above, the number of orders per customer ranges from 4 to 100, and in general the number of customers decreases as the number of orders increases.
```
merged_order_products__prior.head()
```
## Extracting Additional Features
In this part, features are extracted and aggregated at the user, order, product, and user-product levels.
```
merged = pd.merge(orders,merged_order_products__prior,on='order_id',how='right')
merged.head(2)
#user level variables
users = pd.DataFrame()
users['average_days_in_between'] = orders.groupby('user_id')['days_since_prior_order'].mean()
users['number_of_orders_users'] = orders.groupby('user_id').size()
users['total_items'] = merged.groupby('user_id').size()
users['all_products'] = merged.groupby('user_id')['product_id'].apply(set)
users['total_distinct_items'] = users.all_products.map(len)
users['average_basket'] = users.total_items / users.number_of_orders_users
users=users.reset_index()
users=users.set_index('user_id',drop=False)
print(users.shape)
users.head()
#product level variables
products_temp = pd.DataFrame()
products_temp['orders'] = merged.groupby('product_id').size()
products_temp['total_reorders'] = merged['reordered'].groupby(merged.product_id).sum()
products_temp['reorder_rate'] = products_temp['total_reorders'] / products_temp['orders']
products = products.join(products_temp, on='product_id')
products.set_index('product_id', drop=False, inplace=True)
del products_temp
print(products.shape)
products.head()
#user x product level variables
userproduct = merged.copy()
userproduct['user_product_id'] = userproduct.product_id + userproduct.user_id * 100000
userproduct = userproduct.sort_values('order_number')
userproduct = userproduct.groupby('user_product_id',sort=False).agg({'order_id': ['size', 'last'], 'add_to_cart_order': 'sum'})
userproduct.columns = ['number_of_orders_userproduct','last_order_id','sum_pos_in_cart']
userproduct=userproduct.reset_index()
userproduct=userproduct.set_index('user_product_id',drop=False)
print(userproduct.shape)
del merged
userproduct.head()
#order level variables
orders=orders.set_index('order_id',drop=False)
orders.head()
```
## Train-Test Split
Because the data comes from Kaggle and Kaggle does not publish labels for its test set, the labeled data is split into training (80%) and testing (20%) sets so that the model's performance can be measured.
```
order_products__train = pd.read_csv('order_products__train.csv')
from sklearn.model_selection import train_test_split
big_train_orders = orders[orders.eval_set == 'train']
train_orders,test_orders = train_test_split(big_train_orders,test_size=0.2)
print(train_orders.shape)
print(test_orders.shape)
train_order_id = train_orders['order_id'].tolist()
test_order_id = test_orders['order_id'].tolist()
train=order_products__train[order_products__train['order_id'].isin(train_order_id)]
test=order_products__train[order_products__train['order_id'].isin(test_order_id)]
train.set_index(['order_id', 'product_id'], inplace=True, drop=False)
test.set_index(['order_id', 'product_id'], inplace=True, drop=False)
print(train.shape)
print(test.shape)
```
## Building Features Dataframe
The function below builds a single final dataframe containing all of the features (user-level, order-level, product-level, and user-product-level).
```
def build_features_df(str_train_or_test):
if str_train_or_test=='train':
train_or_test = train_orders
elif str_train_or_test=='test':
train_or_test = test_orders
order_list = []
product_list = []
labels = []
for row in train_or_test.itertuples():
order_id = row.order_id
user_id = row.user_id
user_products = users.all_products[user_id]
product_list += user_products
order_list += [order_id] * len(user_products)
if str_train_or_test=='train':
labels += [(order_id, product) in train.index for product in user_products]
elif str_train_or_test=='test':
labels += [(order_id, product) in test.index for product in user_products]
df = pd.DataFrame({'order_id':order_list, 'product_id':product_list}, dtype=np.int32)
labels = np.array(labels, dtype=np.int8)
del order_list
del product_list
df['user_id'] = df.order_id.map(orders.user_id)
df['user_total_orders'] = df.user_id.map(users.number_of_orders_users)
df['user_total_items'] = df.user_id.map(users.total_items)
df['total_distinct_items'] = df.user_id.map(users.total_distinct_items)
df['user_average_days_between_orders'] = df.user_id.map(users.average_days_in_between)
df['user_average_basket'] = df.user_id.map(users.average_basket)
df['order_hour_of_day'] = df.order_id.map(orders.order_hour_of_day)
df['days_since_prior_order'] = df.order_id.map(orders.days_since_prior_order)
df['days_since_ratio'] = df.days_since_prior_order / df.user_average_days_between_orders
df['aisle_id'] = df.product_id.map(products.aisle_id)
df['department_id'] = df.product_id.map(products.department_id)
df['product_orders'] = df.product_id.map(products.orders).astype(np.int32)
df['product_reorders'] = df.product_id.map(products.total_reorders)
df['product_reorder_rate'] = df.product_id.map(products.reorder_rate)
df['z'] = df.user_id * 100000 + df.product_id
df['userproduct_orders'] = df.z.map(userproduct.number_of_orders_userproduct)
df['userproduct_orders_ratio'] = (df.userproduct_orders / df.user_total_orders).astype(np.float32)
df['userproduct_last_order_id'] = df.z.map(userproduct.last_order_id)
df['userproduct_average_pos_in_cart'] = (df.z.map(userproduct.sum_pos_in_cart) / df.userproduct_orders).astype(np.float32)
df['userproduct_reorder_rate'] = (df.userproduct_orders / df.user_total_orders).astype(np.float32)
df['userproduct_orders_since_last'] = df.user_total_orders - df.userproduct_last_order_id.map(orders.order_number)
df['userproduct_delta_hour_vs_last'] = abs(df.order_hour_of_day - df.userproduct_last_order_id.map(orders.order_hour_of_day)).map(lambda x: min(x, 24-x)).astype(np.int8)
df.drop(['userproduct_last_order_id', 'z'], axis=1, inplace=True)
return (df,labels)
#build final training dataframe
train_df,labels = build_features_df('train')
train_df.head()
#content of labels, first 100 entries
labels[0:100]
```
## Model: Light Gradient Boosting
- First, I built the baseline LGB model using all of the features and standard parameters.
- Then, I plotted the feature importances to determine which features have no importance.
- Features that are not important (~0 importance) were removed.
- Then, I ran GridSearchCV on the training data to find the best model parameters.
- Then, using the best parameters, I predicted probabilities on the test data.
- Finally, I fine-tuned the probability threshold by comparing the performance measures (AUC, accuracy, f1_score) for each threshold.
```
#Baseline LGB using all features
features = ['user_total_orders', 'user_total_items', 'total_distinct_items',
'user_average_days_between_orders', 'user_average_basket',
'order_hour_of_day', 'days_since_prior_order', 'days_since_ratio',
'aisle_id', 'department_id', 'product_orders', 'product_reorders',
'product_reorder_rate', 'userproduct_orders', 'userproduct_orders_ratio',
'userproduct_average_pos_in_cart', 'userproduct_reorder_rate', 'userproduct_orders_since_last',
'userproduct_delta_hour_vs_last']
#reformat train dataset
lgb_train_df = lgb.Dataset(train_df[features],
label=labels,
categorical_feature=['aisle_id','department_id'])
#LGB classifier (binary) with standard parameters
mdl = lgb.LGBMClassifier(boosting_type= 'gbdt',
objective = 'binary',num_leaves=50)
_ = mdl.fit(train_df[features], labels)
#plot feature importances
sns.barplot(features,mdl.feature_importances_,color='green')
plt.xticks(rotation='vertical')
#Grid Search CV using reduced number of features
features = ['days_since_prior_order', 'days_since_ratio',
'product_reorders',
'product_reorder_rate', 'userproduct_orders', 'userproduct_orders_ratio',
'userproduct_orders_since_last',
'userproduct_delta_hour_vs_last']
lgb_train_df = lgb.Dataset(train_df[features],
label=labels)
#Create parameters to search
gridParams = {
'learning_rate': [0.05,0.1],
'num_leaves': [40,60,80],
'max_depth':[4,6,8,10]
}
#Create classifier to use
mdl = lgb.LGBMClassifier(boosting_type= 'gbdt',
objective = 'binary')
#Create the grid
grid = GridSearchCV(mdl, gridParams, verbose=1, cv=5, n_jobs=-1)
#Run the grid
grid.fit(train_df[features], labels)
#best parameters found
print(grid.best_params_)
print(grid.best_score_)
#build test dataset
df_test, labels_test = build_features_df('test')
#fit the best model
final_mdl = lgb.LGBMClassifier(boosting_type= 'gbdt',
objective = 'binary',
learning_rate = grid.best_params_['learning_rate'],
num_leaves = grid.best_params_['num_leaves'],
max_depth = grid.best_params_['max_depth'])
final_mdl.fit(train_df[features], labels)
#predict probability
df_test['pred']=final_mdl.predict_proba(df_test[features])[:,1]
df_test['actual']=labels_test
#choose threshold based on performance measures
TRESHOLD = [0.2,0.22,0.24,0.26,0.28,0.3,0.32,0.34,0.36,0.38,0.4]
for tre in TRESHOLD:
df_test['prediction']=df_test['pred'].apply(lambda x: x>tre).astype(np.int8)
print('treshold:',tre,'| auc:',roc_auc_score(df_test.actual, df_test.prediction),
'| accuracy:',accuracy_score(df_test.actual, df_test.prediction),
'| f1_score:',f1_score(df_test.actual, df_test.prediction))
#pick the threshold with the best f1_score (0.32; see below)
df_test.head()
```
The best threshold based on f1_score is 0.32; the confusion matrix for that threshold is computed below.
```
#draw confusion matrix
treshold = 0.32
df_test['prediction']=df_test['pred'].apply(lambda x: x>treshold).astype(np.int8)
confusion_matrix(df_test.actual, df_test.prediction)
```
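For reference, a small sketch that selects the threshold programmatically by maximizing f1_score over the same grid, instead of reading it off the printed output. It reuses `df_test` from above; the helper itself is illustrative and was not part of the original workflow.
```
import numpy as np
from sklearn.metrics import f1_score
#sketch: pick the threshold that maximizes f1_score over the same grid as above
thresholds = np.arange(0.20, 0.41, 0.02)
f1_scores = [f1_score(df_test.actual, (df_test['pred'] > t).astype(np.int8)) for t in thresholds]
best_threshold = thresholds[int(np.argmax(f1_scores))]
print('best threshold:', round(float(best_threshold), 2), '| best f1_score:', max(f1_scores))
```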
```
#notebook setup: enable inline plotting and import the libraries
%matplotlib inline
import torch
import numpy as np
#initialize a tensor directly from data
data = [[1,1], [3,4]]
x_data = torch.tensor(data)
print(x_data)
```
From a NumPy array <br>
Tensors can be created from NumPy arrays and vice versa.
```
#initialize from Numpy array
np_array = np.array(data)
x_np = torch.from_numpy(np_array)
print(f"Numpy np)array value: \n {np_array} \n")
print(f"Tesor x_np value: \n {x_np} \n")
np.multiply(np_array, 2, out = np_array)
print(f"Numpy np_array after *2 operation: \n {np_array} \n")
print(f"Tesor x_np value after modifying numpy array: \n {x_np} \n")
```
From another tensor: <br>
The new tensor retains the properties (shape, data type) of the argument tensor, unless explicitly overridden.
```
x_ones = torch.ones_like(x_data) # retains the properties of x_data
print(f"Ones Tensor: \n {x_ones} \n")
x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data
print(f"Random Tensor: \n {x_rand} \n")
```
With random or constant values: <br>
"shape" is a tuple of tensor dimensions; for a 2-D tensor it is (number of rows, number of columns), e.g. shape = (2, 3).
```
shape = (2,3)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)
print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor}")
```
Attributes of a Tensor <br>
Tensor attributes describe their shape, data type and the device on which they are stored.
```
tensor = torch.rand(3,4)
print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")
```
Tensors on the GPU
```
# we move our tensor to the GPU if available
if torch.cuda.is_available():
tensor = tensor.to('cuda')
```
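A quick way to confirm where the tensor now lives (a small addition, not part of the original cell):
```
# prints 'cuda:0' if a GPU was available, otherwise 'cpu'
print(tensor.device)
```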
Standard numpy-like indexing and slicing:
```
tensor = torch.ones(4, 4)
print('First row: ',tensor[0])
print('First column: ', tensor[:, 0])
print('Last column:', tensor[..., -1])
tensor[:,1] = 0
print(tensor)
```
Joining tensors<br>
You can use torch.cat to concatenate a sequence of tensors along a given dimension. torch.stack is another tensor joining option that is subtly different from torch.cat.
```
t1 = torch.cat([tensor, tensor, tensor], dim=1)
print(t1)
```
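For contrast, a minimal sketch of torch.stack (not in the original cell): it joins tensors along a *new* dimension, so stacking three 4×4 tensors gives shape (3, 4, 4), whereas concatenating them along dim=1 gives (4, 12).
```
# torch.cat joins along an existing dimension; torch.stack creates a new one
t_stack = torch.stack([tensor, tensor, tensor], dim=0)
print(t1.shape)       # torch.Size([4, 12])
print(t_stack.shape)  # torch.Size([3, 4, 4])
```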
Arithmetic Operations
```
# This computes the matrix multiplication between two tensors. y1, y2, y3 will have the same value
y1 = tensor @ tensor.T
y2 = tensor.matmul(tensor.T)
y3 = torch.rand_like(tensor)
torch.matmul(tensor, tensor.T, out=y3)
# This computes the element-wise product. z1, z2, z3 will have the same value
z1 = tensor * tensor
z2 = tensor.mul(tensor)
z3 = torch.rand_like(tensor)
torch.mul(tensor, tensor, out=z3)
#In-place operations
print(tensor, "\n")
tensor.add_(5)
print(tensor)
#Bridge with NumPy
t = torch.ones(5)
print(f"t: {t}")
n = t.numpy()
print(f"n: {n}")
t.add_(1)
print(f"t: {t}")
print(f"n: {n}")
n = np.ones(5)
t = torch.from_numpy(n)
np.add(n, 1, out=n)
print(f"t: {t}")
print(f"n: {n}")
```
# Test notebook Meteorites
```
from pathlib import Path
import numpy as np
import pandas as pd
import requests
from IPython.display import display
from IPython.utils.capture import capture_output
import pandas_profiling
from pandas_profiling.utils.cache import cache_file
file_name = cache_file(
"meteorites.csv",
"https://data.nasa.gov/api/views/gh4g-9sfh/rows.csv?accessType=DOWNLOAD",
)
df = pd.read_csv(file_name)
# Note: Pandas does not support dates before 1880, so we ignore these for this analysis
df["year"] = pd.to_datetime(df["year"], errors="coerce")
# Example: Constant variable
df["source"] = "NASA"
# Example: Boolean variable
df["boolean"] = np.random.choice([True, False], df.shape[0])
# Example: Mixed with base types
df["mixed"] = np.random.choice([1, "A"], df.shape[0])
# Example: Highly correlated variables
df["reclat_city"] = df["reclat"] + np.random.normal(scale=5, size=(len(df)))
# Example: Duplicate observations
duplicates_to_add = pd.DataFrame(df.iloc[0:10])
duplicates_to_add["name"] = duplicates_to_add["name"] + " copy"
df = df.append(duplicates_to_add, ignore_index=True)
# Inline report without saving
with capture_output() as out:
pr = df.profile_report(
sort=None,
html={"style": {"full_width": True}},
progress_bar=False,
minimal=True,
)
display(pr)
assert len(out.outputs) == 2
assert out.outputs[0].data["text/plain"] == "<IPython.core.display.HTML object>"
assert all(
s in out.outputs[0].data["text/html"]
for s in ["<iframe", "Profile report generated with the `pandas-profiling`"]
)
assert out.outputs[1].data["text/plain"] == ""
# There should also be 2 progress bars in minimal mode
with capture_output() as out:
pfr = df.profile_report(
html={"style": {"full_width": True}},
minimal=True,
progress_bar=True,
lazy=False,
)
assert all(
any(v in s.data["text/plain"] for v in ["%|", "FloatProgress"]) for s in out.outputs
)
assert len(out.outputs) == 2
# Write to a file
with capture_output() as out:
pfr.to_file("/tmp/example.html")
assert all(
any(v in s.data["text/plain"] for v in ["%|", "FloatProgress"]) for s in out.outputs
)
assert len(out.outputs) == 2
# Print existing ProfileReport object inline
with capture_output() as out:
display(pfr)
assert len(out.outputs) == 2
assert out.outputs[0].data["text/plain"] == "<IPython.core.display.HTML object>"
assert all(
s in out.outputs[0].data["text/html"]
for s in ["<iframe", "Profile report generated with the `pandas-profiling`"]
)
assert out.outputs[1].data["text/plain"] == ""
```
# SuperGradients Walkthrough Notebook

*Hi there and welcome to SuperGradients, a free open-source training library for PyTorch-based deep learning models. Let's have a quick look at the SuperGradients library features. The library lets you train models for any computer vision task or import pre-trained SOTA models, such as object detection, image classification, and semantic segmentation models for video or image use cases.*
*Whether you are a beginner or an expert, it is likely that you already have your own training script, model, loss function implementation, etc.
In this notebook we present the modifications needed in order to launch your training so you can benefit from the various tools SuperGradients has to offer.*
## "Wait, but what's in it for me?"
Great question! Our short answer is - an easy-to-use SOTA DL training library.
Our long answer -
* Train models for any computer vision task or import [pre-trained SOTA models](https://github.com/Deci-AI/super-gradients#pretrained-classification-pytorch-checkpoints) (detection, segmentation, and classification - YOLOv5, DDRNet, EfficientNet, RegNet, ResNet, MobileNet, etc.)
* Shorten the training process using tested and proven [recipes](https://github.com/Deci-AI/super-gradients/tree/master/recipes) & [code examples](https://github.com/Deci-AI/super-gradients/tree/master/examples)
* Easily configure your own or use plug&play training, dataset , and architecture parameters.
* Save time and easily integrate it into your codebase.
## Walkthrough Steps:
1. Installations
2. Integrating your dataset
3. Integrating your neural network architecture
4. Integrating your loss function
5. Putting it all together
6. Defining our metrics of evaluation
7. Defining training parameters
8. Training execution
> **NOTE:** The default hardware is CPU. If you want to use Google Colab's GPU, follow these steps (a quick check that the GPU is visible is sketched just below):
- Press "Runtime" in the menu bar
- Choose "Change runtime type"
- Hardware accelerator - choose "GPU"
- Press "Save"
- Restart the runtime
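A small sanity check (assuming PyTorch is already installed, as it is on Colab; not part of the original walkthrough):
```
import torch
print("CUDA available:", torch.cuda.is_available())
```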
## Installations
```
# SuperGradients installation
# !pip install super-gradients
! pip install https://deci-build-essentials-development.s3.amazonaws.com/super_gradients-0.1.0rc666-py3-none-any.whl gwpy &> /dev/null
# To install from source instead of the last release, comment the command above and uncomment the following one.
# !pip install git+https://github.com/Deci-AI/super_gradients.git
```
## **Getting Started With Training a Model**
> **NOTE:** All code examples presented in the documentation are in PyTorch framework.
### Integrating Your Dataset
In order to integrate your own dataset with our training scheme, we introduce the *dataset_interface* concept, which wraps the *torch dataloaders* used for training.
The specified dataset interface class must inherit from DatasetInterface (imported below from super_gradients.training.datasets.dataset_interfaces), which is where data augmentation and data loader configurations are defined.
For instance, a dataset interface for Cifar10:
```
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from super_gradients.training import utils as core_utils
from super_gradients.training.datasets.dataset_interfaces import DatasetInterface
class UserDataset(DatasetInterface):
def __init__(self, name="cifar10", dataset_params={}):
super(UserDataset, self).__init__(dataset_params)
self.dataset_name = name
self.lib_dataset_params = {'mean': (0.4914, 0.4822, 0.4465), 'std': (0.2023, 0.1994, 0.2010)}
crop_size = core_utils.get_param(self.dataset_params, 'crop_size', default_val=32)
transform_train = transforms.Compose([
transforms.RandomCrop(crop_size, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(self.lib_dataset_params['mean'], self.lib_dataset_params['std']),
])
transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(self.lib_dataset_params['mean'], self.lib_dataset_params['std']),
])
self.trainset = datasets.CIFAR10(root=self.dataset_params.dataset_dir, train=True, download=True,
transform=transform_train)
self.valset = datasets.CIFAR10(root=self.dataset_params.dataset_dir, train=False, download=True,
transform=transform_test)
```
Required parameters can be passed using the python dataset_params argument. When implementing a dataset interface, the *trainset* and *valset* attributes are required and must be initialized with a torch.utils.data.Dataset type.
These fields are what the SgModel instance uses during training, validation, and so on.
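For example, a minimal usage sketch of the class above (the dataset_dir value is a placeholder, not a path from the original notebook):
```
# Placeholder values for illustration; replace dataset_dir with your local path.
dataset_params = {"batch_size": 256, "dataset_dir": "./data/cifar10/"}
dataset = UserDataset(name="cifar10", dataset_params=dataset_params)
```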
### Integrating Your Neural Network Architecture
This is rather straightforward- the only requirement is that the model must be of torch.nn.Module type. In our case, a simple Lenet implementation (taken from https://github.com/icpm/pytorch-cifar10/blob/master/models/LeNet.py).
```
import torch.nn as nn
import torch.nn.functional as func
class LeNet(nn.Module):
def __init__(self):
super(LeNet, self).__init__()
self.conv1 = nn.Conv2d(3, 6, kernel_size=5)
self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
self.fc1 = nn.Linear(16*5*5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = func.relu(self.conv1(x))
x = func.max_pool2d(x, 2)
x = func.relu(self.conv2(x))
x = func.max_pool2d(x, 2)
x = x.view(x.size(0), -1)
x = func.relu(self.fc1(x))
x = func.relu(self.fc2(x))
x = self.fc3(x)
return x
```
### Integrating Your Loss Function
The loss function class must be of torch.nn.modules.loss._Loss type. For example, our LabelSmoothingCrossEntropyLoss implementation:
```
import torch.nn as nn
from super_gradients.training.losses.label_smoothing_cross_entropy_loss import cross_entropy
class LabelSmoothingCrossEntropyLoss(nn.CrossEntropyLoss):
def __init__(self, weight=None, ignore_index=-100, reduction='mean', smooth_eps=None, smooth_dist=None,
from_logits=True):
super(LabelSmoothingCrossEntropyLoss, self).__init__(weight=weight,
ignore_index=ignore_index, reduction=reduction)
self.smooth_eps = smooth_eps
self.smooth_dist = smooth_dist
self.from_logits = from_logits
def forward(self, input, target, smooth_dist=None):
if smooth_dist is None:
smooth_dist = self.smooth_dist
loss = cross_entropy(input, target, weight=self.weight, ignore_index=self.ignore_index,
reduction=self.reduction, smooth_eps=self.smooth_eps,
smooth_dist=smooth_dist, from_logits=self.from_logits)
return loss
```
### Putting It All Together
We instantiate an SgModel and a UserDataset, then call *connect_dataset_interface*, which will initialize the dataloaders and pass additional dataset parameters to the SgModel instance.
```
from super_gradients.training import SgModel
sg_model = SgModel(experiment_name='LeNet_cifar10_example')
dataset_params = {"batch_size": 256}
dataset = UserDataset(dataset_params=dataset_params)
sg_model.connect_dataset_interface(dataset)
```
Now, we pass a LeNet instance we defined above to the SgModel:
```
network = LeNet()
sg_model.build_model(network)
```
Next, we define metrics in order to evaluate our model.
The metric objects to be logged during training must be of torchmetrics.Metric type. For more information on how to use torchmetrics.Metric objects and implement your own metrics, see https://torchmetrics.readthedocs.io/en/latest/pages/overview.html.
During training, the metric's update is called with the model's raw outputs and raw targets. Therefore, any processing of the two must be taken into account and applied in the update.
For most of the familiar cases, an existing torchmetrics.Metric implementation exists in super_gradients.training.metrics. Here we simply use the SuperGradients Top1 and Top5 accuracy metrics in order to define the metrics for evaluation on the train set and the validation set.
```
from super_gradients.training.metrics import Accuracy, Top5
train_metrics_list = [Accuracy(), Top5()]
valid_metrics_list = [Accuracy(), Top5()]
```
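For reference, here is a minimal sketch of a custom torchmetrics.Metric following the pattern described above, where update receives the model's raw outputs and integer targets. The class below is a hypothetical example, not a SuperGradients class:
```
import torch
import torchmetrics

class TopKAccuracy(torchmetrics.Metric):
    # Hypothetical top-k accuracy computed from raw logits and integer class targets.
    def __init__(self, k: int = 1):
        super().__init__()
        self.k = k
        self.add_state("correct", default=torch.tensor(0), dist_reduce_fx="sum")
        self.add_state("total", default=torch.tensor(0), dist_reduce_fx="sum")

    def update(self, preds: torch.Tensor, target: torch.Tensor):
        # preds: raw model outputs (logits) of shape [batch, num_classes]
        topk = preds.topk(self.k, dim=1).indices
        self.correct += (topk == target.unsqueeze(1)).any(dim=1).sum()
        self.total += target.numel()

    def compute(self):
        return self.correct.float() / self.total
```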
### Defining Your Training Parameters
Finally, we can define the training parameters:
```
train_params = {"max_epochs": 10,
"lr_updates": [100, 150, 200],
"lr_decay_factor": 0.1,
"lr_mode": "step",
"lr_warmup_epochs": 0,
"initial_lr": 0.1,
"loss": LabelSmoothingCrossEntropyLoss(),
"criterion_params": {},
"optimizer": "SGD",
"optimizer_params": {"weight_decay": 1e-4, "momentum": 0.9},
"launch_tensorboard": False,
"train_metrics_list": train_metrics_list,
"valid_metrics_list": valid_metrics_list,
"loss_logging_items_names": ["Loss"],
"metric_to_watch": "Accuracy",
"greater_metric_to_watch_is_better": True}
```
### Training Execution
Now that all of the parameters and integrations are done we can simply call *train*:
```
sg_model.train(train_params)
```
> **Training Parameter Notes:**
\
loss_logging_items_names parameter – Refers to the single item returned by our loss function described above.
*metric_to_watch* – The metric that determines which checkpoint is saved. In our example this parameter is set to Accuracy; it can be set to a metric name (str) of one of the metric objects in the *valid_metrics_list*, or to "Loss" (which refers to the validation loss).
*greater_metric_to_watch_is_better* flag – Determines whether a larger or smaller value of metric_to_watch counts as an improvement when saving a model's checkpoint.
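For instance, to checkpoint on the validation loss instead of Accuracy, only these two fields change (a small sketch based on the note above):
```
# Watch the validation loss; a lower loss is better, so the flag is False.
train_params["metric_to_watch"] = "Loss"
train_params["greater_metric_to_watch_is_better"] = False
```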
## Conclusion
Great job! You have finished a full walkthrough of SuperGradients components for deep learning models' training. You can now try out our [pre-trained models fine tune notebook](https://colab.research.google.com/drive/1FsMo11hFw6OS1e1x1-LFda23vrihspdz#scrollTo=0kufTIGMff9y), or train your own models using our SOTA models' [recipes](https://github.com/Deci-AI/super-gradients/tree/master/recipes).
```
#dependencies
import pandas as pd
import joblib
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
```
# Read the CSV and Perform Basic Data Cleaning
```
df = pd.read_csv("exoplanet_data.csv")
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
df.head()
```
# Select your features (columns)
```
# Set features. This will also be used as your x values.
selected_features = df[['koi_disposition', 'koi_period', 'koi_duration', 'koi_srad', 'koi_prad']]
humanlegible = selected_features.rename(columns={"koi_disposition": "KOI Disposition", "koi_period": "KOI Period (days)", "koi_duration": "KOI Duration (hrs)", "koi_srad": "KOI SRad (solar radii)", "koi_prad": "KOI Prad (earth radii)"})
humanlegible.head()
selected_features.head()
```
# Create a Train Test Split
Use `koi_disposition` for the y values
```
from sklearn.model_selection import train_test_split
# use `koi_disposition` for the y values
y = selected_features["koi_disposition"]
#X values are all other values besides `koi_disposition`
X = selected_features.drop("koi_disposition", axis=1)
#set up the train test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=50, stratify=y)
X_train
```
# Pre-processing
Scale the data using the MinMaxScaler and perform some feature selection
```
# Scale the data: fit the scaler on the training set only, then apply it to
# both the training and test sets (all variables come from the train test split
# of the selected features)
scaler = MinMaxScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
```
# Train the Model
```
#run support vector classifier
trained_model = SVC(kernel='linear')
trained_model.fit(X_train_scaled, y_train)
print(f"Training Data Score: {trained_model.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {trained_model.score(X_test_scaled, y_test)}")
```
# Hyperparameter Tuning
Use `GridSearchCV` to tune the model's parameters
```
# Create the GridSearchCV model
hyperparams = {#C values
'C': [1, 5, 10, 100],
#gamma values
'gamma': [0.0001, 0.0005, 0.001, 0.01]}
#run GridSearchCV over the C and gamma grids (gamma has no effect for the linear kernel)
final_model = GridSearchCV(trained_model, hyperparams, verbose=3)
# Train the model with GridSearch
final_model.fit(X_train_scaled, y_train)
print(final_model.best_params_)
print(final_model.best_score_)
from sklearn.metrics import classification_report
print(classification_report(y_test, final_model.predict(X_test_scaled)))
```
# Save the Model
```
filename = 'culhane_sup_vec_model_(svm).sav'
joblib.dump(final_model, filename)
```
# Testing
```
#test to make sure the dump/load doesn't corrupt the model
loaded_model = joblib.load(filename)
#compare scores of the pre-save and post-save models on the same scaled test set
comparison1 = loaded_model.score(X_test_scaled, y_test)
comparison2 = final_model.score(X_test_scaled, y_test)
if comparison1 == comparison2:
print("Test Successful")
else:
print("Test Failed")
```
```
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import MatplotlibDeprecationWarning
# this in general not advised, but for this specific notebook
# I'd like to ignore some deprecation warnings by matplotlib
import warnings
warnings.filterwarnings(
"ignore", category=MatplotlibDeprecationWarning
)
import astropy.units as u
from astropy.coordinates import SkyCoord
import gammapy
from gammapy.data import EventList
```
##### Let's create one by reading the Fermi-LAT 3FHL event list:
```
events_3fh1 = EventList.read("/Users/dhruvkumar/Desktop/fermi-3fhl-gc-events.fits.gz")
# let's have a look at the data
events_3fh1.table
len(events_3fh1.table)
events_3fh1.table.colnames # name of the columns
# Unit conversions
x = events_3fh1.energy.to("GeV") # energy is in GeV
y = events_3fh1.energy.to("TeV") # energy is in TeV
x,y
events_3fh1.galactic
events_3fh1.time
events_3fh1.plot_image()
# select all events within a radius of 1 deg around center
from regions import CircleSkyRegion
center = SkyCoord("0d", "0d", frame="galactic") # Centre
region = CircleSkyRegion(center, radius=1 * u.deg) # specify the region
events_gc_3fhl = events_3fh1.select_region(region) # select the region
# sort events by energy
events_gc_3fhl.table.sort("ENERGY")
# and show highest energy photon
events_gc_3fhl.energy[-1].to("TeV") # highest-energy photon (last entry of the energy-sorted table)
```
### Maps
```
from gammapy.maps import Map
gc_3fh1 = Map.create(
width=(30 * u.degree , 30*u.degree),
skydir = center,
proj="CAR",
binsz=0.05 * u.degree,
map_type="wcs",
frame="galactic"
)
gc_3fh1
print(gc_3fh1.geom) # to get more idea about the geometry
gc_3fh1.fill_events(events_3fh1)  # fill the map once; repeated fills would accumulate the counts again
gc_3fh1.plot(stretch="sqrt",cmap='inferno');
gc_3fh1.plot(stretch="log",cmap='inferno'); # the log stretch is more sensitive to faint features
gc_3fh1.plot(stretch="linear",cmap='inferno');
gc_3fh1.data
print(f"Total number of counts in the image: {gc_3fh1.data.sum():.0f}")
from gammapy.maps import MapAxis
energy_axis = MapAxis.from_energy_bounds(
energy_min = "10 GeV", energy_max="3 TeV", nbin=5
)
print(energy_axis)
# 3D data Cube
gc_3fh1_cube = Map.create(
width = (30*u.degree,30*u.degree),
skydir = center,
proj="CAR",
binsz=0.05 * u.degree,
map_type="wcs",
frame = "galactic",
axes =[energy_axis]
)
print(gc_3fh1_cube)
gc_3fh1_cube.fill_events(events_3fh1)
gc_3fh1_cube_smoothed = gc_3fh1_cube.smooth(
kernel ="gauss",width=0.1 *u.degree
)
gc_3fh1_cube_smoothed.plot_interactive(cmap='inferno')
gc_3fh1_cube_smoothed.plot_grid(
ncols=3 ,figsize=(16,12),cmap='inferno',stretch="sqrt"
);
gc_3fh1_cube_smoothed.plot_grid(
ncols=3 ,figsize=(16,12),cmap='inferno',stretch="linear"
);
gc_3fh1_cube_smoothed.plot_grid(
ncols=3 ,figsize=(16,12),cmap='inferno',stretch="log"
);
```
__*We can also do a rectangular cutout of a certain region in the image:*__
```
# define centre and cutout of the rectangular region
center = SkyCoord(0,0,unit ='deg',frame ='galactic')
gc_3fh1_cutout = gc_3fh1_cube_smoothed.cutout(center,9*u.deg) # width of the cutout = 9 degrees
gc_3fh1_cutout.plot_interactive(stretch='sqrt',cmap='inferno'); # Great!!!
```
### Source Catalogs
```
from gammapy.catalog import SourceCatalog3FHL
fermi_3fh1 = SourceCatalog3FHL("/Users/dhruvkumar/Desktop/gll_psch_v13.fit.gz")
fermi_3fh1.table
```
--------
__*Important*__
```
# lets perform some operation on it
# sort table by signifience
fermi_3fh1.table.sort("Signif_Avg")
# invert the order to find the highest value and take top ten
Top_10_TS_3fh1 = fermi_3fh1.table[::-1][:10]
# print the ten most significant sources with association and source class
Top_10_TS_3fh1[["Source_Name","ASSOC1","ASSOC2","CLASS","Signif_Avg"]]
# Here, we get the top ten sources according to their Signif_Avg
# To Access an Individual Source
PG_1553_3fh1 = fermi_3fh1["3FHL J1555.7+1111"]
type(PG_1553_3fh1)
print(PG_1553_3fh1) # all the information
PG_1553_3fh1.data["RAJ2000"],PG_1553_3fh1.data["DEJ2000"] # individual attributes can be accessed like this
```
------
```
# To get the value of a particular attribute in the catalog
Crab_Nebula_3fh1 = fermi_3fh1["Crab Nebula"]
print(Crab_Nebula_3fh1.data["Signif_Avg"])
# Let's plot the sources on the image
ax = gc_3fh1.smooth("0.1 deg").plot(
stretch="sqrt",cmap="inferno");
positions = fermi_3fh1.positions
ax.scatter(positions.data.lon.deg,
positions.data.lat.deg,
transform=ax.get_transform("icrs"),
color ="w",
marker ="x")
```
## __Spectral Models and Flux Points__
__In the previous section we learned how to access basic data from individual sources in the catalog. Now we will go one step further and explore the full spectral information of sources.__
__As a first example we will start with the Crab Nebula:__
```
crab_3fh1 = fermi_3fh1["Crab Nebula"]
crab_3fh1_model = crab_3fh1.sky_model()
print(crab_3fh1_model)
type(crab_3fh1_model) # it is now a SkyModel object
crab_3fh1_spec = crab_3fh1_model.spectral_model
```
__The crab_3fhl.spectral_model is an instance of the PowerLaw2SpectralModel model, with the parameter values and errors taken from the 3FHL catalog.__
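To see those catalog values directly, the parameters can be listed one by one (a sketch; `name`, `value` and `unit` are attributes of gammapy's `Parameter` objects):
```
# Sketch: print the spectral model's parameters
for par in crab_3fh1_spec.parameters:
    print(par.name, par.value, par.unit)
```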
*Let's plot the spectral model in the energy range between 10 GeV and 2000 GeV:*
```
ax_crab_3fh1 = crab_3fh1_spec.plot(energy_bounds= [10, 2000] * u.GeV) # energy_bounds is used instead of energy_range
plt.ylabel("Flux[1/$cm^2$ s TeV]")
```
__To compute the differential flux at 100 GeV we can simply call the model like normal Python function and convert to the desired units:__
```
crab_3fh1_spec(100 * u.GeV).to("cm-2 s-1 GeV-1") # Flux at 100 GeV energy
crab_3fh1_spec([100,200,300] * u.GeV).to("cm-2 s-1 TeV-1") # flux at 100, 200, 300 GeV, expressed per TeV
# Flux
crab_3fh1_spec.integral(
energy_min = 10 * u.GeV, energy_max = 2 * u.TeV
).to("cm-2 s-1")
# Flux
crab_3fh1.data["Flux"]
# Energy Flux
crab_3fh1_spec.energy_flux(
energy_min= 10*u.GeV,energy_max= 2*u.TeV
).to("J m-2 s-1")
# Flux point data of Crab
print(crab_3fh1.flux_points)
crab_3fh1.flux_points.table
ax = crab_3fh1_spec.plot(energy_bounds=[10, 2000] * u.GeV, energy_power=2)
ax = crab_3fh1_spec.plot_error(energy_bounds=[10, 2000] * u.GeV,energy_power=2, facecolor="tab:blue")
fp = crab_3fh1.flux_points.to_sed_type("dnde")
fp.plot(ax=ax, energy_power=2);
help(type(fp))  # inspect the FluxPoints class via the object, without a separate import
```
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_3_keras_l1_l2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 5: Regularization and Dropout**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 5 Material
* Part 5.1: Introduction to Regularization: Ridge and Lasso [[Video]](https://www.youtube.com/watch?v=jfgRtCYjoBs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_1_reg_ridge_lasso.ipynb)
* Part 5.2: Using K-Fold Cross Validation with Keras [[Video]](https://www.youtube.com/watch?v=maiQf8ray_s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_2_kfold.ipynb)
* **Part 5.3: Using L1 and L2 Regularization with Keras to Decrease Overfitting** [[Video]](https://www.youtube.com/watch?v=JEWzWv1fBFQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_3_keras_l1_l2.ipynb)
* Part 5.4: Drop Out for Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=bRyOi0L6Rs8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_4_dropout.ipynb)
* Part 5.5: Benchmarking Keras Deep Learning Regularization Techniques [[Video]](https://www.youtube.com/watch?v=1NLBwPumUAs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_5_bootstrap.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
```
# Part 5.3: L1 and L2 Regularization to Decrease Overfitting
L1 and L2 regularization are two common regularization techniques that can reduce the effects of overfitting [[Cite:ng2004feature]](http://cseweb.ucsd.edu/~elkan/254spring05/Hammon.pdf). These algorithms can either work with an objective function or as a part of the backpropagation algorithm. In both cases, the regularization algorithm is attached to the training algorithm by adding an objective.
These algorithms work by adding a weight penalty to the neural network training. This penalty encourages the neural network to keep the weights to small values. Both L1 and L2 calculate this penalty differently. You can add this penalty calculation to the calculated gradients for gradient-descent-based algorithms, such as backpropagation. The penalty is negatively combined with the objective score for objective-function-based training, such as simulated annealing.
L1 and L2 differ in how they penalize the size of a weight. L2 will force the weights toward a pattern similar to a Gaussian distribution, while L1 will force them toward a pattern similar to a Laplace distribution, as demonstrated in Figure 5.L1L2.
**Figure 5.L1L2: L1 vs L2**

As you can see, the L1 algorithm is more tolerant of weights further from 0, whereas the L2 algorithm is less tolerant. We will highlight other important differences between L1 and L2 in the following sections. You also need to note that both L1 and L2 count their penalties based only on weights; they do not count penalties on bias values. Keras allows [l1/l2 to be directly added to your network](http://tensorlayer.readthedocs.io/en/stable/modules/cost.html).
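Concretely, L1 adds a penalty proportional to the sum of the absolute weight values, while L2 adds one proportional to the sum of the squared weight values. As a small sketch (not part of the original notebook), either penalty can be attached to a layer's weights through `kernel_regularizer`:
```
from tensorflow.keras import layers, regularizers

# Sketch: the same Dense layer with an L1, an L2, or a combined weight penalty.
# l1(1e-4) adds 1e-4 * sum(|w|) to the loss; l2(1e-4) adds 1e-4 * sum(w**2).
l1_layer = layers.Dense(50, activation='relu',
                        kernel_regularizer=regularizers.l1(1e-4))
l2_layer = layers.Dense(50, activation='relu',
                        kernel_regularizer=regularizers.l2(1e-4))
l1_l2_layer = layers.Dense(50, activation='relu',
                           kernel_regularizer=regularizers.l1_l2(l1=1e-4, l2=1e-4))
```
The cross-validated example below uses `activity_regularizer` instead, which penalizes the layer's output activations rather than its weights.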
```
import pandas as pd
from scipy.stats import zscore
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['age'] = zscore(df['age'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('product').drop('id')
x = df[x_columns].values
dummies = pd.get_dummies(df['product']) # Classification
products = dummies.columns
y = dummies.values
```
We now create a Keras network with L1 regularization.
```
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras import regularizers
# Cross-validate
kf = KFold(5, shuffle=True, random_state=42)
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
#kernel_regularizer=regularizers.l2(0.01),
model = Sequential()
# Hidden 1
model.add(Dense(50, input_dim=x.shape[1],
activation='relu',
activity_regularizer=regularizers.l1(1e-4)))
# Hidden 2
model.add(Dense(25, activation='relu',
activity_regularizer=regularizers.l1(1e-4)))
# Output
model.add(Dense(y.shape[1],activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),
verbose=0,epochs=500)
pred = model.predict(x_test)
oos_y.append(y_test)
# raw probabilities to chosen class (highest probability)
pred = np.argmax(pred,axis=1)
oos_pred.append(pred)
# Measure this fold's accuracy
y_compare = np.argmax(y_test,axis=1) # For accuracy calculation
score = metrics.accuracy_score(y_compare, pred)
print(f"Fold score (accuracy): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
oos_y_compare = np.argmax(oos_y,axis=1) # For accuracy calculation
score = metrics.accuracy_score(oos_y_compare, oos_pred)
print(f"Final score (accuracy): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
```
# Exercise 1
### Step 1. Go to https://www.kaggle.com/openfoodfacts/world-food-facts/data
### Step 2. Download the dataset to your computer and unzip it.
### Step 3. Use the tsv file and assign it to a dataframe called food
```
import pandas as pd
import numpy as np
food = pd.read_table('./01_Getting_&_Knowing_Your_Data/World Food Facts/en.openfoodfacts.org.products.tsv', sep='\t', low_memory=False)
food.info()
food.dtypes
food.head()[:5]
def reduce_mem_usage(df):
"""
iterate through all the columns of a dataframe and
modify the data type to reduce memory usage.
"""
start_mem = df.memory_usage().sum() / 1024**2
print(('Memory usage of dataframe is {:.2f}'
'MB').format(start_mem))
for col in df.columns:
col_type = df[col].dtype
col_type_name = df[col].dtype.name
if col_type != object and col_type_name != 'category':
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max <\
np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max <\
np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max <\
np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max <\
np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max <\
np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max <\
np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
else:
if col_type_name != 'category':
df[col] = df[col].astype('category')
end_mem = df.memory_usage().sum() / 1024**2
print(('Memory usage after optimization is: {:.2f}'
'MB').format(end_mem))
print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem)
/ start_mem))
return df
food = reduce_mem_usage(food)
```
### Step 4. See the first 5 entries
```
food.head()
```
### Step 5. What is the number of observations in the dataset?
```
food.shape
food.shape[0]
len(food.index)
```
### Step 6. What is the number of columns in the dataset?
```
food.shape[1]
len(food.columns)
```
### Step 7. Print the name of all the columns.
```
food.columns
for c in food.columns:
print(c)
```
### Step 8. What is the name of 105th column?
```
food.columns[104]
```
### Step 9. What is the type of the observations of the 105th column?
```
food.iloc[:,104].dtype
food.dtypes['-glucose_100g']
```
### Step 10. How is the dataset indexed?
```
food.index
```
### Step 11. What is the product name of the 19th observation?
```
food.loc[:,'product_name'][18]
food.values[18][7]
```
```
%matplotlib inline
%pylab inline
matplotlib.rcParams['font.family'] = 'serif'
matplotlib.rcParams['font.serif'] = ['Arial']
matplotlib.rcParams['font.sans-serif'] = ['System Font', 'Verdana', 'Arial']
matplotlib.rcParams['figure.figsize'] = (7, 3) # Change the size of plots
matplotlib.rcParams['figure.dpi'] = 108
import numpy as np
import numpy.matlib
# Time interval between readings
d = 1.0 / 5000.0
# Stride size of the running window
M = 10000
N = 10000
# Time-series data usually come in chunks
chunk_size = 16
# A scaling constant of the clean reference
tics_per_second = 1000000
# Have the time-series to be at least X seconds
K = int(floor(max(10 * N, round(20.0 / d)) / chunk_size / 2) * chunk_size * 2)
# Emulate staggered PRT: 2, 3, 2, 3, ...
y = np.cumsum(np.reshape(np.ones((int(K / 2), 1)) * np.array([2, 3]), (K, )) * (0.5 * d))
# Clean reference of tic count
u = (y + 19760520) * tics_per_second
# Time with some arbitrary offset
y = y + 7000.0
# Add noise to x, on the orders of milliseconds
# x = y + 10.0e-3 * (np.random.random(y.shape) - 0.5)
noise = 5.0e-3 * (np.random.random(round(K / chunk_size, )) - 0.5)
x = y + np.kron(noise, np.ones(chunk_size, ))
diff(u)
h_x0 = np.ones(x.shape) * np.nan
h_u0 = np.ones(x.shape) * np.nan
h_dx = np.ones(x.shape) * np.nan
t = np.ones(x.shape) * np.nan
b = 1.0 / M
a = 1.0 - b
# Block size
L = 16
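# The loop below keeps exponentially weighted running estimates (smoothing
# factor b = 1/M): x0 and u0 track the recent arrival time and reference tic
# count, dx tracks the slope dx/du measured over a window of N samples, and
# t[i] = x0 + dx * (u[i] - u0) is the de-jittered time prediction.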
for i in range(2, len(x)):
if i < N:
dx_du = (x[i] - x[0]) / (u[i] - u[0])
else:
dx_du = (x[i] - x[i - N]) / (u[i] - u[i - N])
if i <= L:
x0 = x[i]
u0 = (u[i] - u[0]) / i + u[0]
dx = a * (x[i - 1] - x[0]) / (u[i - 1] - u[0]) + b * dx_du
else:
x0 = a * x0 + b * x[i]
u0 = a * u0 + b * u[i]
dx = a * dx + b * dx_du
t[i] = x0 + dx * (u[i] - u0)
# Keep a history
h_dx[i] = dx
h_x0[i] = x0
h_u0[i] = u0
# For plotting, set the beginning t values to be nice
t[1] = t[2] - d
t[0] = t[1] - d
fig = matplotlib.pyplot.figure(figsize=(11, 6))
ax1 = fig.add_subplot(221)
h1 = ax1.plot(y - y[0], 1.0e3 * (t - y), '-m')
ax1.grid()
ax1.text(y[-1] - y[0], 1.3, 'M = {}, N = {}'.format(M, N), ha='right')
matplotlib.pyplot.ylabel('Time Error (ms)')
matplotlib.pyplot.ylim([-3.0, 3.0])
ax1b = ax1.twinx()
h2 = ax1b.plot(y - y[0], (u - h_u0) * 1.0e-6 / M, 'g')
# matplotlib.pyplot.ylim([0, 1.0e9])
ax1b.legend(h1 + h2, ['Time', 'Reference'], loc=4)
matplotlib.pyplot.title('Accuracy of Time & Reference')
ax2 = fig.add_subplot(222)
ax2.plot(y - y[0], h_dx / d * 1.0e6, label='Estimate')
ax2.plot([0, len(x) * d], np.array([1.0, 1.0]), '--', label='True dx / du')
ax2.grid()
ax2.legend()
matplotlib.pyplot.ylim(np.array([0.99, 1.01]))
matplotlib.pyplot.title('Time History of dx / du')
ax3 = fig.add_subplot(223)
h31 = ax3.plot(y - y[0], (h_u0 + tics_per_second * M))
h32 = ax3.plot([0, K * d], np.array([u[0], u[-1]]), '--')
ax3.grid()
ax3.legend(h31 + h32, ['Estimate', 'True u0'])
ax4 = fig.add_subplot(224)
h41 = ax4.plot(y - y[0], h_x0 - 7000.0 + d * M)
h42 = ax4.plot([0, K * d], [0, K * d], '--')
ax4.grid()
ax4.legend(h41 + h42, ['Estimate', 'True x0'])
fig.savefig('/Users/boonleng/Desktop/M{0:02.0f}-N{1:02.0f}.png'.format(1.0e-3 * M, 1.0e-3 * N))
fig = matplotlib.pyplot.figure(figsize=(11, 3))
h1 = matplotlib.pyplot.plot(y[1:] - y[0], 1.0e3 * np.diff(y), '-o', label='Noisy Time')
h2 = matplotlib.pyplot.plot(y[2:] - y[0], 1.0e3 * np.diff(t)[1:], '-o', label='Pred. Time')
matplotlib.pyplot.grid()
matplotlib.pyplot.legend()
matplotlib.pyplot.title('Time History of Noisy Arrival Time and Predicted Time')
matplotlib.pyplot.xlim(np.array([0.001, 0.01]) + 1)
matplotlib.pyplot.ylim(np.array([-0.1, 1]))
print('Original jitter =', np.std(np.diff(x[-N:])) * 1.0e3, ' ms')
print('Smoothed jitter =', np.std(np.diff(t[-N:])) * 1.0e3, ' ms')
```
# Loading the Data
## Download
```
import os
import tarfile
from six.moves import urllib
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml/master/"
HOUSING_PATH = os.path.join("datasets","housing")
HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz"
def fetch_housing_data(housing_url=HOUSING_URL,housing_path=HOUSING_PATH):
if not os.path.isdir(housing_path):
os.makedirs(housing_path)
tgz_path = os.path.join(housing_path,"housing.tgz")
urllib.request.urlretrieve(housing_url,tgz_path)
housing_tgz = tarfile.open(tgz_path)
housing_tgz.extractall(path=housing_path)
housing_tgz.close()
fetch_housing_data()
```
## Importing the Data
```
import pandas as pd
def load_housing_data(housing_path=HOUSING_PATH):
csv_path=os.path.join(housing_path,"housing.csv")
return pd.read_csv(csv_path)
housing=load_housing_data()
housing.head()
housing.info()
housing["ocean_proximity"].value_counts()
housing.describe()
%matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50,figsize=(20,15))
plt.show() # optional in Jupyter
import numpy as np
```
## Train-Test Split
```
def split_train_test(data,test_ratio=0.25):
shuffled_indices=np.random.permutation(len(data))
test_set_size=int(len(data)*test_ratio)
test_indices=shuffled_indices[:test_set_size]
train_indices=shuffled_indices[test_set_size:]
return data.iloc[train_indices],data.iloc[test_indices]
train_set,test_set=split_train_test(housing,0.2)
print(len(train_set),"train +",len(test_set),"test")
```
## Train-Test Split with `Scikit-learn`
```
from sklearn.model_selection import train_test_split
train_set,test_set=train_test_split(housing,test_size=0.2,random_state=1)
print(len(train_set),"train +",len(test_set),"test")
```
### Stratified Sampling
```
housing["income_cat"]=np.ceil(housing["median_income"]/1.5)
housing["income_cat"].where(housing["income_cat"]<5,5.0,inplace=True)
# .where(cond, other) syntax:
## keep the value where cond is True, otherwise replace it with other
housing["income_cat"].hist()
from sklearn.model_selection import StratifiedShuffleSplit
split=StratifiedShuffleSplit(n_splits=1,test_size=0.2,random_state=1)
for train_index,test_index in split.split(housing,housing["income_cat"]):
strat_train_set=housing.loc[train_index]
strat_test_set=housing.loc[test_index]
strat_test_set["income_cat"].value_counts()/len(strat_test_set)
housing["income_cat"].value_counts()/len(housing)
# drop the 'income_cat' column from the train and test sets
for set_ in (strat_train_set,strat_test_set):
set_.drop("income_cat",axis=1,inplace=True)
```
# EDA
```
housing=strat_train_set.copy()
housing.plot(kind="scatter",x="longitude",y="latitude",alpha=0.4,
s=housing["population"]/100, # raio dos círculos porporcionais à população
label="population",figsize=(10,7),
c="median_house_value", # cor variando com o preço médio das casas
cmap=plt.get_cmap("jet"), # mapa de cores 'jet',
colorbar=True)
plt.legend()
```
## Correlation Analysis
```
corr_matrix=housing.corr()
corr_matrix
corr_matrix["median_house_value"].sort_values(ascending=False)
import pandas as pd
attributes=["median_house_value","median_income","total_rooms","housing_median_age"]
pd.plotting.scatter_matrix(housing[attributes],figsize=(12,8))
```
# WeatherPy
----
#### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
import scipy.stats as stats
import datetime
import json
from api_keys import api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
```
## Generate Cities List
```
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
```
### Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
```
base_url = "http://api.openweathermap.org/data/2.5/weather?"
units = "metric"
count = 0
city_names = []
clouds = []
countries = []
dates = []
humidities = []
latitudes = []
longitudes = []
max_temps = []
wind_speeds = []
for city in cities:
try:
query = f"{base_url}appid={api_key}&units={units}&q="
response = requests.get(query + city).json()
count = count + 1
print(f"Processing Record {count} | {city}")
country = response["sys"]["country"]
latitude = response["coord"]["lat"]
longitude = response["coord"]["lon"]
date = response["dt"]
temp = 1.8*(response["main"]["temp_max"]) + 32
humidity = response["main"]["humidity"]
cloudiness = response["clouds"]["all"]
wind_speed = 2.236936*(response["wind"]["speed"])
city_names.append(city)
latitudes.append(latitude)
longitudes.append(longitude)
countries.append(country)
dates.append(date)
max_temps.append(temp)
humidities.append(humidity)
clouds.append(cloudiness)
wind_speeds.append(wind_speed)
except KeyError:
print("Couldn't locate data. Skipping city!")
weather_data = {"City": city_names,
"Cloudiness": clouds,
"Country": countries,
"Date": dates,
"Humidity": humidities,
"Lat": latitudes,
"Lng": longitudes,
"Max Temp": max_temps,
"Wind Speeds": wind_speeds}
weather_reports = pd.DataFrame(weather_data)
weather_reports[["Max Temp", "Wind Speeds"]] = weather_reports[["Max Temp", "Wind Speeds"]].apply(pd.to_numeric)
weather_reports["Max Temp"] = weather_reports["Max Temp"].map("{:.2f}".format)
weather_reports["Wind Speeds"] = weather_reports["Wind Speeds"].map("{:.2f}".format)
weather_reports.head()
```
### Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame
```
weather_reports.to_csv("weather_reports.csv")
weather_reports.head()
```
### Plotting the Data
* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.
#### Latitude vs. Temperature Plot
```
converted_dates = []
for date in dates:
converted_date = datetime.datetime.fromtimestamp(date).strftime("%m/%d/%Y")
converted_dates.append(converted_date)
weather_reports["Converted Date"] = converted_dates
weather_reports = weather_reports[["City",
"Cloudiness",
"Country",
"Date",
"Converted Date",
"Humidity",
"Lat",
"Lng",
"Max Temp",
"Wind Speeds"]]
plot_date = weather_reports.loc[0, "Converted Date"]
plt.scatter(weather_reports["Lat"], weather_data["Max Temp"], facecolor="#66CDAA", edgecolor="black")
plt.title(f"City Latitude vs. Max Temperature ({plot_date})")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (F)")
plt.grid()
plt.savefig('Lat_vs_Temp.png', dpi=300)
plt.show()
print("The above scatter plot shows the relationship between the latitude of cities and their maximum temperatures, as recorded on June 14, 2020.")
```
#### Latitude vs. Humidity Plot
```
plt.scatter(weather_reports["Lat"], weather_data["Humidity"], facecolor="#E3CF57", edgecolor="salmon")
plt.title(f"City Latitude vs. Humidity ({plot_date})")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
plt.grid()
plt.savefig('Latitude vs. Humidity Plot.png', dpi=300)
plt.show()
print("The above scatter plot shows the relationship between the latitude of cities and their humidity, as recorded on June 14, 2020.")
```
#### Latitude vs. Cloudiness Plot
```
plt.scatter(weather_reports["Lat"], weather_data["Cloudiness"], facecolor="#838B8B", edgecolor="darkolivegreen")
plt.title(f"City Latitude vs. Cloudiness ({plot_date})")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
plt.grid()
plt.savefig('Latitude vs. Cloudiness Plot.png', dpi=300)
plt.show()
print("The above scatter plot shows the relationship between the latitude of cities and their cloudiness, as recorded on June 14, 2020.")
```
#### Latitude vs. Wind Speed Plot
```
plt.scatter(weather_reports["Lat"], weather_data["Wind Speeds"], facecolor="#6495ED", edgecolor="orchid")
plt.title(f"City Latitude vs. Wind Speeds ({plot_date})")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
plt.grid()
plt.savefig('Latitude vs. Wind Speed Plot.png', dpi=300)
plt.show()
print("The above scatter plot shows the relationship between the latitude of cities and wind speeds, as recorded on June 14, 2020.")
```
## Linear Regression
```
# OPTIONAL: Create a function to create Linear Regression plots
# Create Northern and Southern Hemisphere DataFrames
#weather_reports.head()
# Northern DF
northern_df = weather_reports.loc[weather_reports["Lat"] > 0,:]
# Southern DF
southern_df = weather_reports.loc[weather_reports["Lat"] < 0,:]
```
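As the optional comment above suggests, the scatter-plus-regression pattern repeated in the cells below could be wrapped in one helper. A minimal sketch (the function and argument names are my own, not part of the original notebook):
```
def plot_linear_regression(x_values, y_values, x_label, annotate_xy):
    """Scatter x vs. latitude, overlay the best-fit line, and annotate its equation."""
    (slope, intercept, rvalue, pvalue, stderr) = stats.linregress(x_values, y_values)
    regress_values = x_values * slope + intercept
    line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
    plt.scatter(x_values, y_values)
    plt.plot(x_values, regress_values, "r-")
    plt.annotate(line_eq, annotate_xy, fontsize=15, color="red")
    plt.xlabel(x_label)
    plt.ylabel("Latitude")
    print(f"r-value: {rvalue}")

# Example call, equivalent to the Northern Hemisphere Max Temp cell below:
# plot_linear_regression(northern_df["Max Temp"].astype(float),
#                        northern_df["Lat"].astype(float),
#                        "Max Temp (F)", (30, 10))
```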
#### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
```
# Set X and Y vals
x_values = northern_df["Max Temp"].astype(float)
y_values = northern_df["Lat"].astype(float)
# Run regression
(slope, intercept, rvalue, pvalue, stderr) = stats.linregress(x_values, y_values)
# Get regression values
regress_values = x_values * slope + intercept
# Create line equation string
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
# print(regress_values)
# To add regress line to your plot:
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
# To add the equation to your plot:
plt.annotate(line_eq,(30,10),fontsize=15,color="red")
# Make Labels
plt.xlabel("Max Temp (F)")
plt.ylabel("Latitude")
plt.savefig('Northern Hemisphere - Max Temp vs. Latitude Linear Regression.png', dpi=300)
plt.show()
```
#### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
```
# Set X and Y vals
x_values = southern_df["Max Temp"].astype(float)
y_values = southern_df["Lat"].astype(float)
# Run regression
(slope, intercept, rvalue, pvalue, stderr) = stats.linregress(x_values, y_values)
# Get regression values
regress_values = x_values * slope + intercept
# Create line equation string
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
# print(regress_values)
# To add regress line to your plot:
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
# To add the equation to your plot:
plt.annotate(line_eq,(32,-5),fontsize=15,color="red")
# Make Labels
plt.xlabel("Max Temp (F)")
plt.ylabel("Latitude")
plt.savefig('Southern Hemisphere - Max Temp vs. Latitude Linear Regression.png', dpi=300)
plt.show()
```
#### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
# Set X and Y vals
x_values = northern_df["Humidity"].astype(float)
y_values = northern_df["Lat"].astype(float)
# Run regression
(slope, intercept, rvalue, pvalue, stderr) = stats.linregress(x_values, y_values)
# Get regression values
regress_values = x_values * slope + intercept
# Create line equation string
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
# print(regress_values)
# To add regress line to your plot:
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
# To add the equation to your plot:
plt.annotate(line_eq,(10,3),fontsize=15,color="red")
# Make Labels
plt.xlabel("Humidity (%)")
plt.ylabel("Latitude")
plt.savefig('Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression.png', dpi=300)
plt.show()
```
#### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
# Set X and Y vals
x_values = southern_df["Humidity"].astype(float)
y_values = southern_df["Lat"].astype(float)
# Run regression
(slope, intercept, rvalue, pvalue, stderr) = stats.linregress(x_values, y_values)
# Get regression values
regress_values = x_values * slope + intercept
# Create line equation string
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
# print(regress_values)
# To add regress line to your plot:
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
# To add the equation to your plot:
plt.annotate(line_eq,(20,-55),fontsize=15,color="red")
# Make Labels
plt.xlabel("Humidity (%)")
plt.ylabel("Latitude")
plt.savefig('Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression.png', dpi=300)
plt.show()
```
#### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
# Set X and Y vals
x_values = northern_df["Cloudiness"].astype(float)
y_values = northern_df["Lat"].astype(float)
# Run regression
(slope, intercept, rvalue, pvalue, stderr) = stats.linregress(x_values, y_values)
# Get regression values
regress_values = x_values * slope + intercept
# Create line equation string
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
# print(regress_values)
# To add regress line to your plot:
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
# To add the equation to your plot:
plt.annotate(line_eq,(40,75),fontsize=15,color="red")
# Make Labels
plt.xlabel("Cloudiness (%)")
plt.ylabel("Latitude")
plt.savefig('Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression.png', dpi=300)
plt.show()
```
#### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
# Set X and Y vals
x_values = southern_df["Cloudiness"].astype(float)
y_values = southern_df["Lat"].astype(float)
# Run regression
(slope, intercept, rvalue, pvalue, stderr) = stats.linregress(x_values, y_values)
# Get regression values
regress_values = x_values * slope + intercept
# Create line equation string
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
# print(regress_values)
# To add regress line to your plot:
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
# To add the equation to your plot:
plt.annotate(line_eq,(20,-50),fontsize=15,color="red")
# Make Labels
plt.xlabel("Cloudiness (%)")
plt.ylabel("Latitude")
plt.savefig('Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression.png', dpi=300)
plt.show()
```
#### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
# Set X and Y vals
x_values = northern_df["Wind Speeds"].astype(float)
y_values = northern_df["Lat"].astype(float)
# Run regression
(slope, intercept, rvalue, pvalue, stderr) = stats.linregress(x_values, y_values)
# Get regression values
regress_values = x_values * slope + intercept
# Create line equation string
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
# print(regress_values)
# To add regress line to your plot:
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
# To add the equation to your plot:
plt.annotate(line_eq,(18,25),fontsize=15,color="red")
# Make Labels
plt.xlabel("Wind Speed")
plt.ylabel("Latitude")
plt.savefig('Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression.png', dpi=300)
plt.show()
```
#### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
# Set X and Y vals
x_values = southern_df["Wind Speeds"].astype(float)
y_values = southern_df["Lat"].astype(float)
# Run regression
(slope, intercept, rvalue, pvalue, stderr) = stats.linregress(x_values, y_values)
# Get regression values
regress_values = x_values * slope + intercept
# Create line equation string
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
# print(regress_values)
# To add regress line to your plot:
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
# To add the equation to your plot:
plt.annotate(line_eq,(7,-50),fontsize=15,color="red")
# Make Labels
plt.xlabel("Wind Speed")
plt.ylabel("Latitude")
plt.savefig('Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression.png', dpi=300)
plt.show()
```
# CNN
```
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
import numpy as np
import requests
requests.packages.urllib3.disable_warnings()
import ssl
try:
_create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
# Legacy Python that doesn't verify HTTPS certificates by default
pass
else:
# Handle target environment that doesn't support HTTPS verification
ssl._create_default_https_context = _create_unverified_https_context
# Load the dataset
(x_train,y_train),(x_test,y_test) = datasets.cifar10.load_data()
x_train.shape
x_test.shape
y_train.shape
y_train[:5]
y_train= y_train.reshape(-1,)
y_train[:10]
y_train.shape
plt.figure(figsize=(12,2))
plt.imshow(x_train[0])
y_train[0]
plt.figure(figsize=(12,2))
plt.imshow(x_train[1])
y_train[1]
x_train[0]
#Normalize the training data
x_train = x_train/255
x_test= x_test/255
# Build a simple neural network for image classification
ann = models.Sequential([
layers.Flatten(input_shape=(32,32,3)),
layers.Dense(3000,activation='relu'),
layers.Dense(1000,activation='relu'),
    layers.Dense(10, activation='softmax'),  # softmax (not sigmoid) so the 10 outputs form a probability distribution
])
ann.compile(optimizer='adam',loss='sparse_categorical_crossentropy',metrics=['accuracy'])
ann.fit(x_train,y_train,epochs=10)
#Build CNN to train our images
cnn=models.Sequential([
layers.Conv2D(filters=64,kernel_size=(3,3),activation='relu',input_shape=(32,32,3)),
layers.MaxPooling2D((2,2)),
layers.Conv2D(filters=32,kernel_size=(3,3),activation='relu'),
layers.MaxPooling2D((2,2)),
layers.Flatten(),
layers.Dense(64,activation='relu'),
layers.Dense(32,activation='relu'),
layers.Dense(10,activation='softmax'),
])
cnn.compile(optimizer='adam',loss='sparse_categorical_crossentropy',metrics=['accuracy'])
cnn.fit(x_train,y_train,epochs=20)
#Build CNN to train our images
cnn=models.Sequential([
layers.Conv2D(filters=64,kernel_size=(3,3),padding='same',strides=(1,1),activation='relu',input_shape=(32,32,3)),
layers.MaxPooling2D((2,2)),
layers.Dropout((0.25)),
layers.Conv2D(filters=32,kernel_size=(3,3),padding='same',strides=(1,1),activation='relu'),
layers.MaxPooling2D((2,2)),
layers.Dropout((0.25)),
layers.Flatten(),
layers.Dense(64,activation='relu'),
layers.Dense(32,activation='relu'),
layers.Dense(10,activation='softmax'),
])
cnn.compile(optimizer='adam',loss='sparse_categorical_crossentropy',metrics=['accuracy'])
cnn.fit(x_train,y_train,epochs=15)
y_test[:5]
y_test = y_test.reshape(-1,)
y_test[:5]
cnn.evaluate(x_test,y_test)
y_pred = cnn.predict(x_test)
y_pred[:5]
y_label=[np.argmax(i) for i in y_pred]
y_label[:5]
y_test[:5]
from sklearn.metrics import confusion_matrix, classification_report
print(classification_report(y_test,y_label))
#### Dropout layer: randomly drops a fraction of neurons during training, which helps avoid overfitting
#### ImageDataGenerator (Keras): on-the-fly augmentation such as rotation, rescaling and zooming
#### Callbacks and learning-rate control: e.g. EarlyStopping halts training once validation performance stops improving
```
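The closing notes mention ImageDataGenerator and early stopping, but the notebook never uses them. Below is a minimal sketch of how they could be wired into the `cnn` model above; the augmentation settings, batch size and patience are illustrative assumptions, not tuned values.
```
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Random rotations, shifts and horizontal flips applied on the fly to each training batch
datagen = ImageDataGenerator(rotation_range=15,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)

# Stop once validation loss has not improved for 3 epochs and keep the best weights
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

cnn.fit(datagen.flow(x_train, y_train, batch_size=64),
        epochs=30,
        validation_data=(x_test, y_test),
        callbacks=[early_stop])
```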
```
import pandas as pd
sms_spam = pd.read_csv('../data/SMSSpamCollection', sep='\t',
header=None, names=['Label', 'SMS'])
sms_spam.head()
sms_spam['Label'].value_counts(normalize=True)
# Randomize the dataset
data_randomized = sms_spam.sample(frac=1, random_state=1)
# Calculate index for split
training_test_index = round(len(data_randomized) * 0.8)
# Split into training and test sets
training_set = data_randomized[:training_test_index].reset_index(drop=True)
test_set = data_randomized[training_test_index:].reset_index(drop=True)
print(training_set.shape)
print(test_set.shape)
training_set['Label'].value_counts(normalize=True)
test_set['Label'].value_counts(normalize=True)
training_set['SMS'] = training_set['SMS'].str.replace(
    r'\W', ' ', regex=True) # Removes punctuation (regex=True keeps this a pattern match on newer pandas)
training_set['SMS'] = training_set['SMS'].str.lower()
training_set.head(3)
training_set['SMS'] = training_set['SMS'].str.split()
vocabulary = []
for sms in training_set['SMS']:
for word in sms:
vocabulary.append(word)
vocabulary = list(set(vocabulary))
len(vocabulary)
word_counts_per_sms = {unique_word: [0] * len(training_set['SMS']) for unique_word in vocabulary}
for index, sms in enumerate(training_set['SMS']):
for word in sms:
word_counts_per_sms[word][index] += 1
word_counts = pd.DataFrame(word_counts_per_sms)
word_counts.head()
training_set_clean = pd.concat([training_set, word_counts], axis=1)
training_set_clean.head()
# Isolating spam and ham messages first
spam_messages = training_set_clean[training_set_clean['Label'] == 'spam']
ham_messages = training_set_clean[training_set_clean['Label'] == 'ham']
# P(Spam) and P(Ham)
p_spam = len(spam_messages) / len(training_set_clean)
p_ham = len(ham_messages) / len(training_set_clean)
# N_Spam
n_words_per_spam_message = spam_messages['SMS'].apply(len)
n_spam = n_words_per_spam_message.sum()
# N_Ham
n_words_per_ham_message = ham_messages['SMS'].apply(len)
n_ham = n_words_per_ham_message.sum()
# N_Vocabulary
n_vocabulary = len(vocabulary)
# Laplace smoothing
alpha = 1
# Initiate parameters
parameters_spam = {unique_word:0 for unique_word in vocabulary}
parameters_ham = {unique_word:0 for unique_word in vocabulary}
# Calculate parameters
for word in vocabulary:
n_word_given_spam = spam_messages[word].sum() # spam_messages already defined
p_word_given_spam = (n_word_given_spam + alpha) / (n_spam + alpha*n_vocabulary)
parameters_spam[word] = p_word_given_spam
n_word_given_ham = ham_messages[word].sum() # ham_messages already defined
p_word_given_ham = (n_word_given_ham + alpha) / (n_ham + alpha*n_vocabulary)
parameters_ham[word] = p_word_given_ham
import re
def classify(message):
'''
message: a string
'''
    message = re.sub(r'\W', ' ', message)
message = message.lower().split()
p_spam_given_message = p_spam
p_ham_given_message = p_ham
for word in message:
if word in parameters_spam:
p_spam_given_message *= parameters_spam[word]
if word in parameters_ham:
p_ham_given_message *= parameters_ham[word]
print('P(Spam|message):', p_spam_given_message)
print('P(Ham|message):', p_ham_given_message)
if p_ham_given_message > p_spam_given_message:
print('Label: Ham')
elif p_ham_given_message < p_spam_given_message:
print('Label: Spam')
else:
        print('Equal probabilities, have a human classify this!')
classify('WINNER!! This is the secret code to unlock the money: C3421.')
classify("Sounds good, Tom, then see u there")
def classify_test_set(message):
'''
message: a string
'''
    message = re.sub(r'\W', ' ', message)
message = message.lower().split()
p_spam_given_message = p_spam
p_ham_given_message = p_ham
for word in message:
if word in parameters_spam:
p_spam_given_message *= parameters_spam[word]
if word in parameters_ham:
p_ham_given_message *= parameters_ham[word]
if p_ham_given_message > p_spam_given_message:
return 'ham'
elif p_spam_given_message > p_ham_given_message:
return 'spam'
else:
return 'needs human classification'
test_set['predicted'] = test_set['SMS'].apply(classify_test_set)
test_set.head()
correct = 0
total = test_set.shape[0]
for _, row in test_set.iterrows():
if row['Label'] == row['predicted']:
correct += 1
print('Correct:', correct)
print('Incorrect:', total - correct)
print('Accuracy:', correct/total)
```
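One practical caveat not addressed above: multiplying hundreds of small word probabilities can underflow to 0.0 for long messages. A common variant is to compare log-probabilities instead. The sketch below applies the same decision rule in log space; the helper name `classify_log` is new here and reuses `re`, `p_spam`, `p_ham`, `parameters_spam` and `parameters_ham` from the notebook.
```
import numpy as np

def classify_log(message):
    """Same decision rule as classify_test_set, but summing log-probabilities to avoid underflow."""
    words = re.sub(r'\W', ' ', message).lower().split()
    log_p_spam = np.log(p_spam)
    log_p_ham = np.log(p_ham)
    for word in words:
        if word in parameters_spam:
            log_p_spam += np.log(parameters_spam[word])
        if word in parameters_ham:
            log_p_ham += np.log(parameters_ham[word])
    if log_p_ham > log_p_spam:
        return 'ham'
    elif log_p_spam > log_p_ham:
        return 'spam'
    return 'needs human classification'
```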
# Exploring Ensemble Methods
In this homework we will explore the use of boosting. You will:
- Use pandas to do some feature engineering.
- Train a boosted ensemble of decision trees (gradient boosted trees) on the lending club dataset.
- Predict whether a loan will default, along with prediction probabilities (on a validation set).
- Evaluate the trained model and compare it with a baseline.
- Find the most positive and negative loans using the learned model.
- Explore how the number of trees influences classification performance.
# Load the Lending Club dataset
```
import pandas as pd
import numpy as np
loans = pd.read_csv('/Users/April/Downloads/lending-club-data.csv')
# safe_loans = 1 => safe
# safe_loans = -1 => risky
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.drop('bad_loans', axis = 1)
```
# Selecting features
The features we will be using are described in the code comments below. Extract these feature columns and the target column from the dataset. We will only use these features.
```
target = 'safe_loans'
features = ['grade', # grade of the loan (categorical)
'sub_grade_num', # sub-grade of the loan as a number from 0 to 1
'short_emp', # one year or less of employment
'emp_length_num', # number of years of employment
'home_ownership', # home_ownership status: own, mortgage or rent
'dti', # debt to income ratio
'purpose', # the purpose of the loan
'payment_inc_ratio', # ratio of the monthly payment to income
            'delinq_2yrs', # number of delinquencies
            'delinq_2yrs_zero', # no delinquencies in last 2 years
'inq_last_6mths', # number of creditor inquiries in last 6 months
            'last_delinq_none', # has borrower had a delinquency
'last_major_derog_none', # has borrower had 90 day or worse rating
'open_acc', # number of open credit accounts
'pub_rec', # number of derogatory public records
'pub_rec_zero', # no derogatory public records
'revol_util', # percent of available credit being used
            'total_rec_late_fee', # total late fees received to date
'int_rate', # interest rate of the loan
'total_rec_int', # interest received to date
'annual_inc', # annual income of borrower
'funded_amnt', # amount committed to the loan
'funded_amnt_inv', # amount committed by investors for the loan
'installment', # monthly payment owed by the borrower
]
```
# Skipping observations with missing values
Recall from the lectures that one common approach to coping with missing values is to skip observations that contain missing values.
```
loans = loans[[target] + features].dropna()
loans = pd.get_dummies(loans)
import json
with open('/Users/April/Desktop/datasci_course_materials-master/assignment1/train index.json', 'r') as f: # Reads the list of training row indices
train_idx = json.load(f)
with open('/Users/April/Desktop/datasci_course_materials-master/assignment1/validation index.json', 'r') as f1: # Reads the list of validation row indices
validation_idx = json.load(f1)
train_data = loans.iloc[train_idx]
validation_data = loans.iloc[validation_idx]
```
# Gradient boosted tree classifier
Now, let's use the built-in scikit-learn gradient boosting classifier (sklearn.ensemble.GradientBoostingClassifier) to create a gradient boosted classifier on the training data. You will need to import sklearn, sklearn.ensemble, and numpy.
Because the one-hot-encoded data is already a pandas DataFrame, scikit-learn can consume it directly; you just need to separate the feature columns from the label column. Make sure to set max_depth=6 and n_estimators=5.
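The quiz questions below refer to this 5-tree model as model_5; the notebook fits it further down under the name sample_model. For reference, an equivalent construction (a sketch using the train_data defined above; the name model_5 simply mirrors the quiz wording) would be:
```
from sklearn.ensemble import GradientBoostingClassifier

# Equivalent to the sample_model fitted further down (5 trees of depth 6)
model_5 = GradientBoostingClassifier(n_estimators=5, max_depth=6)
model_5.fit(train_data.drop('safe_loans', axis=1), train_data['safe_loans'])
```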
# Making predictions
Just like we did in previous sections, let us consider a few positive and negative examples from the validation set. We will do the following:
- Predict whether or not a loan is likely to default.
- Predict the probability with which the loan is likely to default.
First, let's grab 2 positive examples and 2 negative examples.
```
validation_safe_loans = validation_data[validation_data[target] == 1]
validation_risky_loans = validation_data[validation_data[target] == -1]
sample_validation_data_risky = validation_risky_loans[0:2]
sample_validation_data_safe = validation_safe_loans[0:2]
sample_validation_data = pd.concat([sample_validation_data_safe, sample_validation_data_risky]) # DataFrame.append was removed in recent pandas versions
sample_validation_data
```
For each row in the sample_validation_data, write code to make model_5 predict whether or not the loan is classified as a safe loan. (Hint: if you are using scikit-learn, you can use the .predict() method)
```
import sklearn
import sklearn.ensemble
import numpy
from sklearn.ensemble import GradientBoostingClassifier
sample_model = GradientBoostingClassifier(n_estimators=5, max_depth=6)
X = train_data.drop('safe_loans', axis=1)
X.columns
sample_model.fit(X, train_data['safe_loans'])
sample_model.predict(sample_validation_data.drop('safe_loans', axis=1))
```
Quiz question: What percentage of the predictions on sample_validation_data did model_5 get correct?
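One way to compute that percentage (a sketch reusing sample_model and sample_validation_data from above; the printed value depends on the random split, so it is not stated here):
```
sample_preds = sample_model.predict(sample_validation_data.drop('safe_loans', axis=1))
accuracy_on_sample = (sample_preds == sample_validation_data['safe_loans']).mean()
print(accuracy_on_sample)  # fraction of the four sample loans classified correctly
```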
# Prediction Probabilities
For each row in the sample_validation_data, what is the probability (according model_5) of a loan being classified as safe? (Hint: if you are using scikit-learn, you can use the .predict_proba() method)
```
sample_model.predict_proba(sample_validation_data.drop('safe_loans', axis=1))
```
Quiz Question: Which loan has the highest probability of being classified as a safe loan?
Checkpoint: Can you verify that for all the predictions with probability >= 0.5, the model predicted the label +1?
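A quick way to run this check (a sketch; column 1 of predict_proba corresponds to the +1 class because GradientBoostingClassifier stores classes_ in sorted order, i.e. [-1, +1]):
```
import numpy as np

features_only = sample_validation_data.drop('safe_loans', axis=1)
probs_safe = sample_model.predict_proba(features_only)[:, 1]  # P(label == +1)
preds = sample_model.predict(features_only)

# Every prediction with probability >= 0.5 should carry the label +1
print(np.all(preds[probs_safe >= 0.5] == 1))
```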
# Evaluating the model on the validation data
Evaluate the accuracy of the model_5 on the validation_data. (Hint: if you are using scikit-learn, you can use the .score() method)
```
sample_model.score(validation_data.drop('safe_loans', axis=1), validation_data['safe_loans'])
```
Calculate the number of false positives made by the model on the validation_data.
```
predict_safeloans = sample_model.predict(validation_data.drop('safe_loans', axis=1))
predict_safeloans
sum(predict_safeloans > validation_data['safe_loans'])
```
# Comparison with decision trees
```
# false negatives: predicted -1 (risky) when the actual label is +1 (safe)
sum(predict_safeloans < validation_data['safe_loans'])
```
Quiz Question: Using the same costs of the false positives and false negatives, what is the cost of the mistakes made by the boosted tree model (model_5) as evaluated on the validation_set?
```
cost = 20000*1653+10000*1491
print(cost)
```
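The hard-coded numbers above presumably correspond to the false positive and false negative counts from this particular run. A more reusable version is sketched below; mistake_cost is a hypothetical helper, and the per-mistake dollar costs are left as parameters because they are stated earlier in the assignment text and not restated here.
```
false_positives = sum(predict_safeloans > validation_data['safe_loans'])
false_negatives = sum(predict_safeloans < validation_data['safe_loans'])

def mistake_cost(n_fp, n_fn, cost_per_fp, cost_per_fn):
    """Total dollar cost of the mistakes on the validation set."""
    return n_fp * cost_per_fp + n_fn * cost_per_fn

# Plug in the costs stated in the assignment, e.g. mistake_cost(false_positives, false_negatives, cost_fp, cost_fn)
```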
# Most positive & negative loans
In this section, we will find the loans that are most likely to be predicted safe.
```
validation_data['predictions'] = sample_model.predict_proba(validation_data.drop('safe_loans', axis=1))[:,1]
validation_data[['grade_A','grade_B','grade_C','grade_D','predictions']].sort_values('predictions', ascending=False).head(5)
validation_data[['grade_A','grade_B','grade_C','grade_D','predictions']].sort_values('predictions', ascending=False).tail(5)
```
# Effects of adding more trees
In this assignment, we will train 5 different ensemble classifiers in the form of gradient boosted trees.
Train models with 10, 50, 100, 200, and 500 trees. Use the n_estimators parameter to control the number of trees. Remember to keep max_depth = 6.
Call these models model_10, model_50, model_100, model_200, and model_500, respectively. This may take a few minutes to run.
```
model_10 = GradientBoostingClassifier(n_estimators=10, max_depth=6)
model_10.fit(train_data.drop('safe_loans', axis=1), train_data['safe_loans'])
model_50 = GradientBoostingClassifier(n_estimators=50, max_depth=6)
model_50.fit(train_data.drop('safe_loans', axis=1), train_data['safe_loans'])
model_100 = GradientBoostingClassifier(n_estimators=100, max_depth=6)
model_100.fit(train_data.drop('safe_loans', axis=1), train_data['safe_loans'])
model_200 = GradientBoostingClassifier(n_estimators=200, max_depth=6)
model_200.fit(train_data.drop('safe_loans', axis=1), train_data['safe_loans'])
model_500 = GradientBoostingClassifier(n_estimators=500, max_depth=6)
model_500.fit(train_data.drop('safe_loans', axis=1), train_data['safe_loans'])
model_10.score(validation_data.drop(['safe_loans','predictions'], axis=1), validation_data['safe_loans'])
model_50.score(validation_data.drop(['safe_loans','predictions'], axis=1), validation_data['safe_loans'])
model_100.score(validation_data.drop(['safe_loans','predictions'], axis=1), validation_data['safe_loans'])
model_200.score(validation_data.drop(['safe_loans','predictions'], axis=1), validation_data['safe_loans'])
model_500.score(validation_data.drop(['safe_loans','predictions'], axis=1), validation_data['safe_loans'])
```
# Plot the training and validation error vs. number of trees
In this section, we will plot the training and validation errors versus the number of trees to get a sense of how these models are performing. We will compare the 10, 50, 100, 200, and 500 tree models. You will need matplotlib in order to visualize the plots.
```
import matplotlib.pyplot as plt
%matplotlib inline
def make_figure(dim, title, xlabel, ylabel, legend):
plt.rcParams['figure.figsize'] = dim
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
if legend is not None:
plt.legend(loc=legend, prop={'size':15})
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
```
Steps to follow:
Step 1: Calculate the classification error for each model on the training data (train_data).
Step 2: Store the training errors into a list (called training_errors) that looks like this: [train_err_10, train_err_50, ..., train_err_500]
Step 3: Calculate the classification error of each model on the validation data (validation_data).
Step 4: Store the validation classification error into a list (called validation_errors) that looks like this: [validation_err_10, validation_err_50, ..., validation_err_500]
```
train_err_10 = 1 - model_10.score(train_data.drop('safe_loans', axis=1), train_data['safe_loans'])
train_err_50 = 1 - model_50.score(train_data.drop('safe_loans', axis=1), train_data['safe_loans'])
train_err_100 = 1 - model_100.score(train_data.drop('safe_loans', axis=1), train_data['safe_loans'])
train_err_200 = 1 - model_200.score(train_data.drop('safe_loans', axis=1), train_data['safe_loans'])
train_err_500 = 1 - model_500.score(train_data.drop('safe_loans', axis=1), train_data['safe_loans'])
validation_err_10 = 1 - model_10.score(validation_data.drop(['safe_loans','predictions'], axis=1), validation_data['safe_loans'])
validation_err_50 = 1 - model_50.score(validation_data.drop(['safe_loans','predictions'], axis=1), validation_data['safe_loans'])
validation_err_100 = 1 - model_100.score(validation_data.drop(['safe_loans','predictions'], axis=1), validation_data['safe_loans'])
validation_err_200 = 1 - model_200.score(validation_data.drop(['safe_loans','predictions'], axis=1), validation_data['safe_loans'])
validation_err_500 = 1 - model_500.score(validation_data.drop(['safe_loans','predictions'], axis=1), validation_data['safe_loans'])
training_errors = [train_err_10, train_err_50, train_err_100, train_err_200, train_err_500]
validation_errors = [validation_err_10, validation_err_50, validation_err_100, validation_err_200, validation_err_500]
plt.plot([10, 50, 100, 200, 500], training_errors, linewidth=4.0, label='Training error')
plt.plot([10, 50, 100, 200, 500], validation_errors, linewidth=4.0, label='Validation error')
make_figure(dim=(10,5), title='Error vs number of trees',
xlabel='Number of trees',
ylabel='Classification error',
legend='best')
```
```
import os
os.sys.path.append(os.path.dirname(os.path.abspath('.')))
```
## Data preparation
```
import numpy as np
from datasets.dataset import load_breast_cancer
data=load_breast_cancer()
X,Y=data.data,data.target
del data
from model_selection.train_test_split import train_test_split
X_train,X_test,Y_train,Y_test=train_test_split(X,Y,test_size=0.2)
# print(X_train.shape,X_test.shape,Y_train.shape,Y_test.shape)
# Concatenate X and Y to make the data easier to work with
training_data=np.c_[X_train,Y_train]
testing_data=np.c_[X_test,Y_test]
# print(training_data.shape,testing_data.shape)
```
## Model basics
When a CART tree is used for classification, splits are chosen by the Gini index:
$$
Gini(D)=1-\sum\limits_{k=1}^{K}p_{k}^{2}
$$
```
def Gini(data, y_idx=-1):
K = np.unique(data[:, y_idx])
n_sample = len(data)
gini_idx = 1 - \
np.sum([np.square(len(data[data[:, y_idx] == k])/n_sample) for k in K])
return gini_idx
# Gini(testing_data)
```
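As a quick sanity check of the implementation (made-up toy labels, not part of the original notebook): a perfectly mixed binary sample should give a Gini index of 0.5, and a pure sample should give 0.
```
toy = np.array([[0., 1.], [0., 1.], [0., 0.], [0., 0.]])  # last column is the class label
print(Gini(toy))                              # 1 - (0.5**2 + 0.5**2) = 0.5
print(Gini(np.array([[0., 1.], [0., 1.]])))   # pure labels -> 0.0
```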
Define a function that splits the dataset in two at a given feature and split value; samples less than or equal to the split value go to the left branch, and samples greater than it go to the right branch.
```
def BinSplitData(data,f_idx,f_val):
'''
    Split the dataset in two at the given feature index and feature value
'''
data_left=data[data[:,f_idx]<=f_val]
data_right=data[data[:,f_idx]>f_val]
return data_left,data_right
# BinSplitData(training_data, 0, 0)
```
With both the split function and the split-criterion function in place, we can now iterate over the dataset to find the best split feature and split value.
```
from scipy import stats
def Test(data, criteria='gini', min_samples_split=5, min_samples_leaf=5, min_impurity_decrease=0.0):
'''
    Run the test on the data to find the best split feature and split value.
    return: best_f_idx, best_f_val; best_f_idx is None for a leaf node, and both are None when no split is possible
    min_samples_split: minimum number of samples required to attempt a split, greater than 1
    min_samples_leaf: minimum number of samples allowed in a leaf, greater than 0
    min_impurity_decrease: minimum gain a split must achieve
'''
n_sample, n_feature = data.shape
    # Return a leaf node if the sample count is below the threshold or the data is already pure
if n_sample < min_samples_split or len(np.unique(data[:,-1]))==1:
        # Note: unlike a regression tree, which returns the mean, a classification tree returns the mode
return None, stats.mode(data[:, -1])[0][0]
    Gini_before = Gini(data)  # Gini index before the split
best_gain = 0
best_f_idx = None
    best_f_val = stats.mode(data[:, -1])[0][0]  # default to the mode of the target, returned as the leaf value when no split point is found
    # Iterate over all features and candidate split values
for f_idx in range(n_feature-1):
for f_val in np.unique(data[:, f_idx]):
            data_left, data_right = BinSplitData(data, f_idx, f_val)  # split the data in two
            # Give up on the split if either branch would have fewer samples than the threshold
if len(data_left) < min_samples_leaf or len(data_right) < min_samples_leaf:
continue
            # Weighted Gini after the split
Gini_after = len(data_left)/n_sample*Gini(data_left) + \
len(data_right)/n_sample*Gini(data_right)
            gain = Gini_before-Gini_after  # the decrease in Gini is the gain
            # Give up on the split if the gain is below the threshold or below the current best gain
if gain < min_impurity_decrease or gain < best_gain:
continue
else:
                # Otherwise record the new best gain and split
best_gain = gain
best_f_idx, best_f_val = f_idx, f_val
    # Return the best split feature and split point; note these can be None
return best_f_idx, best_f_val
# Test(training_data)
```
Finally, we can build the tree recursively. Each node in the tree needs to store the split feature, the split value, and the left and right branches.
```
def CART(data,criteria='gini',min_samples_split=5,min_samples_leaf=5,min_impurity_decrease=0.0):
    # First run the test; the Test function guards data quality and reports the result back
best_f_idx,best_f_val=Test(data,criteria,min_samples_split,min_samples_leaf,min_impurity_decrease)
tree={}
tree['cut_f']=best_f_idx
tree['cut_val']=best_f_val
    if best_f_idx is None:  # a None best_f_idx means a leaf node should be created
return best_f_val
data_left,data_right=BinSplitData(data,best_f_idx,best_f_val)
tree['left']=CART(data_left,criteria,min_samples_split,min_samples_leaf,min_impurity_decrease)
tree['right']=CART(data_right,criteria,min_samples_split,min_samples_leaf,min_impurity_decrease)
return tree
tree=CART(training_data)
# print(tree)
def predict_one(x_test, tree, default=-1):
    if isinstance(tree, dict):  # only internal nodes need the left/right decision
cut_f_idx, cut_val = tree['cut_f'], tree['cut_val']
sub_tree = tree['left'] if x_test[cut_f_idx] <= cut_val else tree['right']
return predict_one(x_test, sub_tree)
    else:  # a leaf node simply returns its value
return tree
# test_idx=10
# print(predict_one(X_test[test_idx],tree),Y_test[test_idx])
def predict(X_test,tree):
return np.array([predict_one(x_test,tree) for x_test in X_test])
Y_pred=predict(X_test,tree)
print('acc:{}'.format(np.sum(Y_pred==Y_test)/len(Y_test)))
```
Use the classification tree from sklearn to compare results.
```
from sklearn.tree import DecisionTreeClassifier
dt_clf=DecisionTreeClassifier(min_samples_split=5, min_samples_leaf=5)
dt_clf.fit(X_train,Y_train)
Y_pred=dt_clf.predict(X_test)
print('acc:{}'.format(np.sum(Y_pred==Y_test)/len(Y_test)))
```
# Introduction
### Course structure
This is a data science course in *Python3* (hereafter referred to just *Python*) designed for participants with basic Python experience. The course will be run over **6 weeks** with the following structure.
- **Monday lectures** will start with information about Python syntax, the Jupyter notebook interface, and move through concepts such as how to write functions and handle data, using the *pandas* and *numpy* packages, how to calculate summary information from a data frame, and approaches to do plotting, modelling and basics of machine learning. Each lecture will conclude with an **assignment**.
- **During the week (Tuesday-Thursday)**, participants are invited to review the materials presented in the Monday lecture and complete the assignment with the help of an assigned tutor via the Teams chat.
- In the **Friday recap** the trainers will provide a walk-through of the assignment and answer any questions
### Aims
The course will cover concepts and strategies for working with data more effectively in Python with the aim of:
- Writing **reusable** code, using Python's **functions, modules and libraries**
- Acquiring a working knowledge of **key concepts** which are prerequisites for advanced programming, data visualisation and modelling, and machine learning
- Expanding knowledge of *Python* with applications to life data sciences
### Obtaining course materials
The course materials (lectures, assignments and solutions) are accessible via GitHub: https://github.com/semacu/202101-data-science-python
We’d like you to follow along with the example code as we go through the course materials together, and attempt the assignment to practice what you’ve learned.
The course materials will be updated throughout the course, so we recommend downloading the most recent version of the materials before each lecture or recap session. The latest notebooks and relevant materials for this course can be obtained as follows:
1. Go to the GitHub page for the course: https://github.com/semacu/202101-data-science-python
2. Click on the green **Code** button (right, above the list of folders and files). This will cause a drop-down menu to appear
3. Click on the **Download ZIP** option. A zip file containing the course content will be downloaded to your computer
4. Move the zip file to wherever in your directories is preferred e.g. home
5. Decompress the zip file to get a folder containing the course materials. Depending on your operating system, you may need to double-click the zip file, or issue a command on the terminal. On Windows 10, you can right click, click **Extract All...**, click **Extract**, and the folder will be decompressed in the same location as the zip file
6. Launch Jupyter Notebook. Depending on your operating system, you may be able to search for "Jupyter" in the system menu and click the icon that appears, or you may need to issue a command on the terminal. On Windows, you can hit the Windows key, search for "Jupyter", and click the icon that appears
7. After launching, the Jupyter notebook home menu will open in your browser. Navigate to the course materials that you decompressed in step 5, and click on the lecture or recap notebook of the week to launch it.
### Audience
This course is open to any colleagues with some basic knowledge of Python. We are so excited that you want to learn Python :) ! We will start with a brief recap on Python basic concepts and we will build up from there. You will set the pace and the amount of material that we will cover.
### Feedback
Questions, suggestions and ideas from participants are welcomed via the Teams chat, e-mail or during the lectures and recaps. Enjoy!
### Python and Jupyter
**Python** is a general purpose programming language that is useful for writing scripts to work effectively and reproducibly with data. It was initially created by Guido van Rossum in 1991. Python is widely used in data science, bioinformatics and scientific computing, as well as in academia and industry.
It is available on all popular operating systems (Mac, Windows and Linux). The default Python installation comes with "batteries included": the standard library (some of which we will see in this course) provides built-in support for lots of common tasks, e.g. numerical and mathematical functions and interacting with files and the operating system. There is also a wide range of external libraries for areas not covered in the standard library, such as *pandas* (the Python data analysis library), *matplotlib* (the Python plotting library) and *biopython*, which provides tools for bioinformatics.
**Jupyter** is a nonprofit organization created to "develop open-source software, open-standards, and services for interactive computing across dozens of programming languages". Jupyter supports execution environments in many languages and has developed and supports interactive computing products such as the Jupyter Notebook, which we will be using during this course.
The Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, interactive data visualizations and explanatory text.
**How to run Python?** Python is an interpreted language: your computer does not run Python code natively; instead, we run our code using the Python interpreter. There are three ways in which you can run Python code:
- Directly typing commands into the interpreter: good for experimenting with the language, and for some interactive work e.g. using the command line and/or IPython
- Using a Jupyter Notebook: great for experimenting with the language, as well as for sharing and learning
- Typing code into a file and then telling the interpreter to run the code from this file: good for larger scripts, and when you want to run the same code repeatedly
# Installations
Before starting this course, you need to have Python3 and Jupyter installed on your computer. If you do not have these installed already, we recommend installing Anaconda (a complete programming environment including Python3 and Jupyter) by following the instructions below:
### Windows
1. Open the AZ Software Store. The home screen should look like the following:
<img src="../img/az_softwarestore_1.png">
2. In the "Search Catalog" bar at the top, search for "anaconda". This should return "Anaconda3 2019.10":
<img src="../img/az_softwarestore_2.png">
3. Click the "Add to Cart" button and this should add it to your basket. *Note: make sure you have added the correct version of Anaconda to your cart ("Anaconda3 2019.10"), as other versions e.g. "Anaconda 5.3" may not be suitable for the contents of this course. If using one of the latest versions of the Software Store, you may need to add Anaconda Navigator instead*
4. Click the Cart icon on the top right of the screen, and a preview of the contents of your cart should be displayed:
<img src="../img/az_softwarestore_3.png">
5. Click the "View cart and checkout" button, and you should be taken to a summary of your basket:
<img src="../img/az_softwarestore_4.png">
6. Click the buttons "Me on machine" and "Install Anaconda3 2019.10", then click the "Next" button. *Note: if you are using an older version of the Software Store you may have to check that the "Receive ASAP" option is selected.*
7. Click the "Submit" button. You should go through to a "Request Complete" screen. Anaconda will be installed on your computer within a few hours.
If the steps above do not work, you may want to get in touch with AZ IT. If you can't get hold of IT in time, try the following [link](https://docs.anaconda.com/anaconda/install/windows/)
### Linux and macOS
Click the following links for installing Anaconda on your [linux](https://docs.anaconda.com/anaconda/install/linux/) or [macOS](https://docs.anaconda.com/anaconda/install/mac-os/) distributions
### Anaconda installation check
Once you have installed Anaconda, it is a good idea to check that your installation is working ok:
1. Open the "Anaconda Navigator (Anaconda3)" program, and you should get this home screen:
<img src="../img/anaconda_navigator.png">
2. Click the "Launch" button underneath the Jupyter Notebook icon
3. A tab should now open on your web browser, showing the Jupyter logo at the top and your file system below. You can click the "New" button in top right and a dropdown menu will appear. Then click "Python 3" - it will open up a blank new notebook for you.
If the above steps work, Anaconda (along with Python and Jupyter) should be installed correctly
# Lab 05 : GatedGCNs with DGL - demo
Deep Graph Library (DGL)
https://docs.dgl.ai/
```
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
from google.colab import drive
drive.mount('/content/gdrive')
file_name = '01_GatedGCNs_DGL.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
print(path_to_file)
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
os.chdir(path_to_file)
!pwd
!pip install dgl==0.3 #DGL
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
import dgl
from dgl import DGLGraph
from dgl.data import MiniGCDataset
import time
import numpy as np
import networkx as nx
import os
import pickle
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
# select GPU
gpu_id = 0
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
if torch.cuda.is_available():
print('cuda available with GPU:',torch.cuda.get_device_name(0))
dtypeFloat = torch.cuda.FloatTensor
dtypeLong = torch.cuda.LongTensor
else:
print('cuda not available')
gpu_id = -1
server_id = -1
dtypeFloat = torch.FloatTensor
dtypeLong = torch.LongTensor
# GPU
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
```
# Collate function to prepare graphs
```
# collate function
def collate(samples):
graphs, labels = map(list, zip(*samples)) # samples is a list of pairs (graph, label).
labels = torch.tensor(labels)
tab_sizes_n = [ graphs[i].number_of_nodes() for i in range(len(graphs))] # graph sizes
tab_snorm_n = [ torch.FloatTensor(size,1).fill_(1./float(size)) for size in tab_sizes_n ]
snorm_n = torch.cat(tab_snorm_n).sqrt() # normalization constant for better optimization
tab_sizes_e = [ graphs[i].number_of_edges() for i in range(len(graphs))] # nb of edges
tab_snorm_e = [ torch.FloatTensor(size,1).fill_(1./float(size)) for size in tab_sizes_e ]
snorm_e = torch.cat(tab_snorm_e).sqrt() # normalization constant for better optimization
batched_graph = dgl.batch(graphs) # batch graphs
return batched_graph, labels, snorm_n, snorm_e
# create artificial data feature (= in-degree) for each node
def create_artificial_features(dataset):
for (graph,_) in dataset:
graph.ndata['feat'] = graph.in_degrees().view(-1, 1).float()
graph.edata['feat'] = torch.ones(graph.number_of_edges(),1)
return dataset
# use artificial graph dataset of DGL
trainset = MiniGCDataset(8, 10, 20)
trainset = create_artificial_features(trainset)
print(trainset[0])
save_dataset = False
save_dataset = True
data_folder = 'data/'
if save_dataset == True:
# train, test, val datasets
trainset = MiniGCDataset(350, 10, 20)
testset = MiniGCDataset(100, 10, 20)
valset = MiniGCDataset(100, 10, 20)
data_loader = DataLoader(trainset, batch_size=20, shuffle=True, collate_fn=collate)
trainset = create_artificial_features(trainset)
testset = create_artificial_features(testset)
valset = create_artificial_features(valset)
with open(data_folder + "artificial_dataset.pickle","wb") as f:
pickle.dump([trainset,testset,valset],f)
else:
with open(data_folder + "artificial_dataset.pickle","rb") as f:
f = pickle.load(f)
trainset = f[0]
testset = f[1]
valset = f[2]
print('train, test, val sizes :',len(trainset),len(testset),len(valset))
```
# Visualize graph dataset
```
visualset = MiniGCDataset(8, 10, 20)
# visualise the 8 classes of graphs
for c in range(8):
graph, label = visualset[c]
fig, ax = plt.subplots()
nx.draw(graph.to_networkx(), ax=ax)
ax.set_title('Class: {:d}'.format(label))
plt.show()
```
# GatedGCNs
Residual Gated Graph ConvNets, X Bresson, T Laurent, ICLR 2017, [arXiv:1711.07553](https://arxiv.org/pdf/1711.07553v2.pdf) <br>
\begin{eqnarray}
h_i^{\ell+1} &=& h_i^{\ell} + \text{ReLU} \left( A^\ell h_i^{\ell} + \sum_{j\sim i} \eta(e_{ij}^{\ell}) \odot B^\ell h_j^{\ell} \right), \quad \eta(e_{ij}^{\ell}) = \frac{\sigma(e_{ij}^{\ell})}{\sum_{j'\sim i} \sigma(e_{ij'}^{\ell}) + \varepsilon} \\
e_{ij}^{\ell+1} &=& e^\ell_{ij} + \text{ReLU} \Big( C^\ell e_{ij}^{\ell} + D^\ell h^{\ell+1}_i + E^\ell h^{\ell+1}_j \Big)
\end{eqnarray}
```
class MLP_layer(nn.Module):
def __init__(self, input_dim, output_dim, L=2): # L = nb of hidden layers
super(MLP_layer, self).__init__()
list_FC_layers = [ nn.Linear( input_dim, input_dim, bias=True ) for l in range(L) ]
list_FC_layers.append(nn.Linear( input_dim, output_dim , bias=True ))
self.FC_layers = nn.ModuleList(list_FC_layers)
self.L = L
def forward(self, x):
y = x
for l in range(self.L):
y = self.FC_layers[l](y)
y = F.relu(y)
y = self.FC_layers[self.L](y)
return y
class GatedGCN_layer(nn.Module):
def __init__(self, input_dim, output_dim):
super(GatedGCN_layer, self).__init__()
self.A = nn.Linear(input_dim, output_dim, bias=True)
self.B = nn.Linear(input_dim, output_dim, bias=True)
self.C = nn.Linear(input_dim, output_dim, bias=True)
self.D = nn.Linear(input_dim, output_dim, bias=True)
self.E = nn.Linear(input_dim, output_dim, bias=True)
self.bn_node_h = nn.BatchNorm1d(output_dim)
self.bn_node_e = nn.BatchNorm1d(output_dim)
def message_func(self, edges):
Bh_j = edges.src['Bh']
e_ij = edges.data['Ce'] + edges.src['Dh'] + edges.dst['Eh'] # e_ij = Ce_ij + Dhi + Ehj
edges.data['e'] = e_ij
return {'Bh_j' : Bh_j, 'e_ij' : e_ij}
def reduce_func(self, nodes):
Ah_i = nodes.data['Ah']
Bh_j = nodes.mailbox['Bh_j']
e = nodes.mailbox['e_ij']
sigma_ij = torch.sigmoid(e) # sigma_ij = sigmoid(e_ij)
h = Ah_i + torch.sum( sigma_ij * Bh_j, dim=1 ) / torch.sum( sigma_ij, dim=1 ) # hi = Ahi + sum_j eta_ij * Bhj
return {'h' : h}
def forward(self, g, h, e, snorm_n, snorm_e):
h_in = h # residual connection
e_in = e # residual connection
g.ndata['h'] = h
g.ndata['Ah'] = self.A(h)
g.ndata['Bh'] = self.B(h)
g.ndata['Dh'] = self.D(h)
g.ndata['Eh'] = self.E(h)
g.edata['e'] = e
g.edata['Ce'] = self.C(e)
g.update_all(self.message_func,self.reduce_func)
h = g.ndata['h'] # result of graph convolution
e = g.edata['e'] # result of graph convolution
h = h* snorm_n # normalize activation w.r.t. graph node size
e = e* snorm_e # normalize activation w.r.t. graph edge size
h = self.bn_node_h(h) # batch normalization
e = self.bn_node_e(e) # batch normalization
h = F.relu(h) # non-linear activation
e = F.relu(e) # non-linear activation
h = h_in + h # residual connection
e = e_in + e # residual connection
return h, e
class GatedGCN_Net(nn.Module):
def __init__(self, net_parameters):
super(GatedGCN_Net, self).__init__()
input_dim = net_parameters['input_dim']
hidden_dim = net_parameters['hidden_dim']
output_dim = net_parameters['output_dim']
L = net_parameters['L']
self.embedding_h = nn.Linear(input_dim, hidden_dim)
self.embedding_e = nn.Linear(1, hidden_dim)
self.GatedGCN_layers = nn.ModuleList([ GatedGCN_layer(hidden_dim, hidden_dim) for _ in range(L) ])
self.MLP_layer = MLP_layer(hidden_dim, output_dim)
def forward(self, g, h, e, snorm_n, snorm_e):
# input embedding
h = self.embedding_h(h)
e = self.embedding_e(e)
# graph convnet layers
for GGCN_layer in self.GatedGCN_layers:
h,e = GGCN_layer(g,h,e,snorm_n,snorm_e)
# MLP classifier
g.ndata['h'] = h
y = dgl.mean_nodes(g,'h')
y = self.MLP_layer(y)
return y
def loss(self, y_scores, y_labels):
loss = nn.CrossEntropyLoss()(y_scores, y_labels)
return loss
def accuracy(self, scores, targets):
scores = scores.detach().argmax(dim=1)
acc = (scores==targets).float().sum().item()
return acc
def gpu_memory(self, memory):
if torch.cuda.is_available():
current_memory = torch.cuda.memory_allocated() /1e9
if current_memory > memory :
memory = current_memory
return memory
def update(self, lr):
update = torch.optim.Adam( self.parameters(), lr=lr )
return update
def update_learning_rate(self, optimizer, lr):
for param_group in optimizer.param_groups:
param_group['lr'] = lr
return optimizer
# network parameters
net_parameters = {}
net_parameters['input_dim'] = 1
net_parameters['hidden_dim'] = 100
net_parameters['output_dim'] = 8 # nb of classes
net_parameters['L'] = 2
# instantiate network
net = GatedGCN_Net(net_parameters)
net = net.to(device)
print(net)
```
# Test forward pass
```
train_loader = DataLoader(trainset, batch_size=10, shuffle=True, collate_fn=collate)
batch_graphs, batch_labels, batch_snorm_n, batch_snorm_e = list(train_loader)[0]
print(batch_graphs)
print(batch_labels)
print(batch_snorm_n.size())
print(batch_snorm_e.size())
batch_x = batch_graphs.ndata['feat'].to(device)
print('batch_x',batch_x.size())
#print(batch_x)
batch_e = batch_graphs.edata['feat'].to(device)
print('batch_e',batch_e.size())
#print(batch_e)
batch_snorm_n = batch_snorm_n.to(device)
print('batch_snorm_n',batch_snorm_n.size())
batch_snorm_e = batch_snorm_e.to(device)
print('batch_snorm_e',batch_snorm_e.size())
batch_scores = net.forward(batch_graphs, batch_x, batch_e, batch_snorm_n, batch_snorm_e)
print(batch_scores.size())
batch_labels = batch_labels.to(device)
accuracy = net.accuracy(batch_scores,batch_labels)
print(accuracy)
```
# Test backward pass
```
# optimization parameters
opt_parameters = {}
opt_parameters['lr'] = 0.0005
# Loss
loss = net.loss(batch_scores, batch_labels)
# Backward pass
lr = opt_parameters['lr']
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
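Note that `GatedGCN_Net` also defines `update` and `update_learning_rate` helper methods that this demo never calls. As a minimal sketch (same behaviour as constructing the optimizer by hand above, just routed through the helpers):
```
# Alternative: build and adjust the optimizer via the helpers defined on the network
lr = opt_parameters['lr']
optimizer = net.update(lr)                               # Adam over net.parameters() at the given lr
optimizer = net.update_learning_rate(optimizer, lr / 2)  # e.g. decay the learning rate later on
```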
# Train one epoch
```
def train_one_epoch(net, data_loader):
"""
train one epoch
"""
net.train()
epoch_loss = 0
epoch_train_acc = 0
nb_data = 0
gpu_mem = 0
for iter, (batch_graphs, batch_labels, batch_snorm_n, batch_snorm_e) in enumerate(data_loader):
batch_x = batch_graphs.ndata['feat'].to(device)
batch_e = batch_graphs.edata['feat'].to(device)
batch_snorm_n = batch_snorm_n.to(device)
batch_snorm_e = batch_snorm_e.to(device)
batch_labels = batch_labels.to(device)
batch_scores = net.forward(batch_graphs, batch_x, batch_e, batch_snorm_n, batch_snorm_e)
gpu_mem = net.gpu_memory(gpu_mem)
loss = net.loss(batch_scores, batch_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
epoch_loss += loss.detach().item()
epoch_train_acc += net.accuracy(batch_scores,batch_labels)
nb_data += batch_labels.size(0)
epoch_loss /= (iter + 1)
epoch_train_acc /= nb_data
return epoch_loss, epoch_train_acc, gpu_mem
```
# Evaluation
```
def evaluate_network(net, data_loader):
"""
evaluate test set
"""
net.eval()
epoch_test_loss = 0
epoch_test_acc = 0
nb_data = 0
with torch.no_grad():
for iter, (batch_graphs, batch_labels, batch_snorm_n, batch_snorm_e) in enumerate(data_loader):
batch_x = batch_graphs.ndata['feat'].to(device)
batch_e = batch_graphs.edata['feat'].to(device)
batch_snorm_n = batch_snorm_n.to(device)
batch_snorm_e = batch_snorm_e.to(device)
batch_labels = batch_labels.to(device)
batch_scores = net.forward(batch_graphs, batch_x, batch_e, batch_snorm_n, batch_snorm_e)
loss = net.loss(batch_scores, batch_labels)
epoch_test_loss += loss.detach().item()
epoch_test_acc += net.accuracy(batch_scores,batch_labels)
nb_data += batch_labels.size(0)
epoch_test_loss /= (iter + 1)
epoch_test_acc /= nb_data
return epoch_test_loss, epoch_test_acc
```
# Train GNN
```
# datasets
train_loader = DataLoader(trainset, batch_size=50, shuffle=True, collate_fn=collate)
test_loader = DataLoader(testset, batch_size=50, shuffle=False, collate_fn=collate)
val_loader = DataLoader(valset, batch_size=50, shuffle=False, drop_last=False, collate_fn=collate)
# Create model
net_parameters = {}
net_parameters['input_dim'] = 1
net_parameters['hidden_dim'] = 100
net_parameters['output_dim'] = 8 # nb of classes
net_parameters['L'] = 4
net = GatedGCN_Net(net_parameters)
net = net.to(device)
optimizer = torch.optim.Adam(net.parameters(), lr=0.0001)
epoch_train_losses = []
epoch_test_losses = []
epoch_val_losses = []
epoch_train_accs = []
epoch_test_accs = []
epoch_val_accs = []
for epoch in range(50):
start = time.time()
epoch_train_loss, epoch_train_acc, gpu_mem = train_one_epoch(net, train_loader)
epoch_test_loss, epoch_test_acc = evaluate_network(net, test_loader)
epoch_val_loss, epoch_val_acc = evaluate_network(net, val_loader)
# record the per-epoch metrics in the lists defined above
epoch_train_losses.append(epoch_train_loss); epoch_test_losses.append(epoch_test_loss); epoch_val_losses.append(epoch_val_loss)
epoch_train_accs.append(epoch_train_acc); epoch_test_accs.append(epoch_test_acc); epoch_val_accs.append(epoch_val_acc)
print('Epoch {}, time {:.4f}, train_loss: {:.4f}, test_loss: {:.4f}, val_loss: {:.4f} \n train_acc: {:.4f}, test_acc: {:.4f}, val_acc: {:.4f}'.format(epoch, time.time()-start, epoch_train_loss, epoch_test_loss, epoch_val_loss, epoch_train_acc, epoch_test_acc, epoch_val_acc))
```
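Once training has finished, the per-epoch metrics appended to the lists above can be visualised. A minimal sketch using the `matplotlib.pyplot` import (`plt`) already present in this notebook:
```
# Plot the learning curves recorded during training
plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(epoch_train_losses, label='train')
plt.plot(epoch_test_losses, label='test')
plt.plot(epoch_val_losses, label='val')
plt.xlabel('epoch'); plt.ylabel('loss'); plt.legend()
plt.subplot(1, 2, 2)
plt.plot(epoch_train_accs, label='train')
plt.plot(epoch_test_accs, label='test')
plt.plot(epoch_val_accs, label='val')
plt.xlabel('epoch'); plt.ylabel('accuracy'); plt.legend()
plt.show()
```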
# Name
Data preparation using Apache Pig on YARN with Cloud Dataproc
# Label
Cloud Dataproc, GCP, Cloud Storage, YARN, Pig, Apache, Kubeflow, pipelines, components
# Summary
A Kubeflow Pipeline component to prepare data by submitting an Apache Pig job on YARN to Cloud Dataproc.
# Details
## Intended use
Use the component to run an Apache Pig job as one preprocessing step in a Kubeflow Pipeline.
## Runtime arguments
| Argument | Description | Optional | Data type | Accepted values | Default |
|----------|-------------|----------|-----------|-----------------|---------|
| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | |
| region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | |
| cluster_name | The name of the cluster to run the job. | No | String | | |
| queries | The queries to execute the Pig job. Specify multiple queries in one string by separating them with semicolons. You do not need to terminate queries with semicolons. | Yes | List | | None |
| query_file_uri | The HCFS URI of the script that contains the Pig queries. | Yes | GCSPath | | None |
| script_variables | Mapping of the query’s variable names to their values (equivalent to the Pig command: SET name="value";). | Yes | Dict | | None |
| pig_job | The payload of a [PigJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PigJob). | Yes | Dict | | None |
| job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None |
| wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 |
## Output
Name | Description | Type
:--- | :---------- | :---
job_id | The ID of the created job. | String
## Cautions & requirements
To use the component, you must:
* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).
* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).
* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.
* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project.
## Detailed description
This component creates a Pig job from [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).
Follow these steps to use the component in a pipeline:
1. Install the Kubeflow Pipeline SDK:
```
%%capture --no-stderr
!pip3 install kfp --upgrade
```
2. Load the component using KFP SDK
```
import kfp.components as comp
dataproc_submit_pig_job_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0-rc.1/components/gcp/dataproc/submit_pig_job/component.yaml')
help(dataproc_submit_pig_job_op)
```
### Sample
Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template.
#### Setup a Dataproc cluster
[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code.
#### Prepare a Pig query
Either put your Pig queries in the `queries` list, or upload a file containing your Pig queries to a Cloud Storage bucket and then enter that file's Cloud Storage path in `query_file_uri`. In this sample, we use a hard-coded query in the `queries` list to select data from the public natality CSV dataset.
For more details on Apache Pig, see the [Pig documentation.](http://pig.apache.org/docs/latest/)
#### Set sample parameters
```
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
QUERY = '''
natality_csv = load 'gs://public-datasets/natality/csv' using PigStorage(':');
top_natality_csv = LIMIT natality_csv 10;
dump top_natality_csv;'''
EXPERIMENT_NAME = 'Dataproc - Submit Pig Job'
```
#### Example pipeline that uses the component
```
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc submit Pig job pipeline',
description='Dataproc submit Pig job pipeline'
)
def dataproc_submit_pig_job_pipeline(
project_id = PROJECT_ID,
region = REGION,
cluster_name = CLUSTER_NAME,
queries = json.dumps([QUERY]),
query_file_uri = '',
script_variables = '',
pig_job='',
job='',
wait_interval='30'
):
dataproc_submit_pig_job_op(
project_id=project_id,
region=region,
cluster_name=cluster_name,
queries=queries,
query_file_uri=query_file_uri,
script_variables=script_variables,
pig_job=pig_job,
job=job,
wait_interval=wait_interval)
```
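Note that the example above does not consume the component's `job_id` output. As a hedged sketch of how it could be wired into a downstream step (the `inspect_job_op` component here is hypothetical, not part of this package):
```
@dsl.pipeline(
    name='Dataproc Pig job-id example',
    description='Sketch showing how the job_id output could be consumed'
)
def pig_job_id_example(project_id=PROJECT_ID, region=REGION, cluster_name=CLUSTER_NAME, queries=json.dumps([QUERY])):
    pig_task = dataproc_submit_pig_job_op(
        project_id=project_id,
        region=region,
        cluster_name=cluster_name,
        queries=queries)
    # The created job's ID is exposed as a task output and can be passed to another component:
    # inspect_job_op(job_id=pig_task.outputs['job_id'])   # hypothetical downstream component
```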
#### Compile the pipeline
```
pipeline_func = dataproc_submit_pig_job_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
```
#### Submit the pipeline for execution
```
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
```
## References
* [Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster)
* [Pig documentation](http://pig.apache.org/docs/latest/)
* [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs)
* [PigJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PigJob)
## License
By deploying or using this software you agree to comply with the [AI Hub Terms of Service](https://aihub.cloud.google.com/u/0/aihub-tos) and the [Google APIs Terms of Service](https://developers.google.com/terms/). To the extent of a direct conflict of terms, the AI Hub Terms of Service will control.
```
import numpy
import matplotlib.pyplot
%matplotlib inline
data = numpy.loadtxt(fname = 'data/data/weather-01.csv', delimiter = ',')
#create wider figure for subplots
fig = matplotlib.pyplot.figure (figsize = (10,3))
#create placeholders for plots
subplot1 = fig.add_subplot (1,3,1)
subplot2 = fig.add_subplot (1,3,2)
subplot3 = fig.add_subplot (1,3,3)
subplot1.set_ylabel('average')
subplot1.plot(numpy.mean(data, axis = 0))
subplot2.set_ylabel('max')
subplot2.plot(numpy.max(data, axis = 0))
subplot3.set_ylabel('min')
subplot3.plot(numpy.min(data, axis = 0))
fig.tight_layout()
matplotlib.pyplot.show()
```
# Loop time
```
word = 'notebook'
print (word[4])
for char in word:
print (char)
```
## Get list of filenames from the disk
```
import glob
print(glob.glob('data/data/weather*.csv'))
```
## Putting it all together
```
filenames = sorted(glob.glob('data/data/weather*.csv'))
filenames_123 = filenames [0:3]
for f in filenames_123:
print(f)
data = numpy.loadtxt(fname=f, delimiter = ',')
#create wider figure for subplots
fig = matplotlib.pyplot.figure (figsize = (10,3))
#create placeholders for plots
subplot1 = fig.add_subplot (1,3,1)
subplot2 = fig.add_subplot (1,3,2)
subplot3 = fig.add_subplot (1,3,3)
subplot1.set_ylabel('average')
subplot1.plot(numpy.mean(data, axis = 0))
subplot2.set_ylabel('max')
subplot2.plot(numpy.max(data, axis = 0))
subplot3.set_ylabel('min')
subplot3.plot(numpy.min(data, axis = 0))
fig.tight_layout()
matplotlib.pyplot.show()
```
## Making decisions
```
num = 137
if num > 100:
print ('Greater')
else:
print ('Not Greater')
print ('Done')
num = -3
if num > 0:
print (num, "is positive")
elif num == 0:
print (num, "is zero")
else:
print (num, "is negative")
```
## Test the data
```
filenames = sorted(glob.glob('data/data/weather*.csv'))
filenames_123 = filenames [0:3]
for f in filenames_123:
print(f)
data = numpy.loadtxt(fname=f, delimiter = ',')
if (numpy.max (data, axis = 0)[0] == 0) and (numpy.max(data, axis = 0)[20] == 20):
print ("Suspicious-looking maxima")
elif numpy.sum(numpy.min(data, axis = 0)) == 0:
print ("Minima gives zero")
else:
print ("data looks okay")
#create wider figure for subplots
fig = matplotlib.pyplot.figure (figsize = (10,3))
#create placeholders for plots
subplot1 = fig.add_subplot (1,3,1)
subplot2 = fig.add_subplot (1,3,2)
subplot3 = fig.add_subplot (1,3,3)
subplot1.set_ylabel('average')
subplot1.plot(numpy.mean(data, axis = 0))
subplot2.set_ylabel('max')
subplot2.plot(numpy.max(data, axis = 0))
subplot3.set_ylabel('min')
subplot3.plot(numpy.min(data, axis = 0))
fig.tight_layout()
matplotlib.pyplot.show()
```
# Functions
```
def fahr_to_kelvin (temp):
return ((temp-32) * (5/9) +273.15)
print ("Freezing point of water: ", fahr_to_kelvin(32))
print ("Boiling point of water: ", fahr_to_kelvin(212))
## make data tessts into functions
def analyse (filename):
""" Brings up max, min and average plots for the data in sublots for filename argument.
"""
data = numpy.loadtxt(fname=filename, delimiter = ',')
#create wider figure for subplots
fig = matplotlib.pyplot.figure (figsize = (10,3))
#create placeholders for plots
subplot1 = fig.add_subplot (1,3,1)
subplot2 = fig.add_subplot (1,3,2)
subplot3 = fig.add_subplot (1,3,3)
subplot1.set_ylabel('average')
subplot1.plot(numpy.mean(data, axis = 0))
subplot2.set_ylabel('max')
subplot2.plot(numpy.max(data, axis = 0))
subplot3.set_ylabel('min')
subplot3.plot(numpy.min(data, axis = 0))
fig.tight_layout()
matplotlib.pyplot.show()
def detect_problems (filename):
""" Some of our data looks a bit funky checks for these problems.
This funciton reads a file (filename argument) and reports on odd ooking maxima and minima that add up to zero.
This seems to happen when sensors break.
The function does not return any data.
"""
data = numpy.loadtxt(fname = filename, delimiter = ',')
if (numpy.max (data, axis = 0)[0] == 0) and (numpy.max(data, axis = 0)[20] == 20):
print ("Suspicious-looking maxima")
elif numpy.sum(numpy.min(data, axis = 0)) == 0:
print ("Minima gives zero")
else:
print ("data looks okay")
for f in filenames [0:5]:
print (f)
analyse (f)
detect_problems (f)
help (numpy.loadtxt)
help (detect_problems)
help (analyse)
```
<style>div.container { width: 100% }</style>
<img style="float:left; vertical-align:text-bottom;" height="65" width="172" src="../assets/holoviz-logo-unstacked.svg" />
<div style="float:right; vertical-align:text-bottom;"><h2>Tutorial 7. Custom Interactivity</h2></div>
Using hvPlot allows you to generate a number of different types of plot quickly from a standard API by building [HoloViews](https://holoviews.org) objects, as discussed in the previous notebook. These objects are rendered with Bokeh which offers a number of standard ways to interact with your plot, such as panning and zooming tools.
Many other modes of interactivity are possible when building an exploratory visualization (such as a dashboard) and these forms of interactivity cannot be achieved using hvPlot alone.
In this notebook, we will drop down to the HoloViews level of representation to build a visualization directly that consists of linked plots that update when you interactively select a particular earthquake with the mouse. The goal is to show how more sophisticated forms of interactivity can be built when needed, in a way that's fully compatible with all the examples shown in earlier sections.
First let us load our initial imports:
```
import numpy as np
import pandas as pd
import hvplot.pandas # noqa
from holoviews.element import tiles
```
And clean the data before filtering (for magnitude `>7`) and projecting to Web Mercator as before:
```
%%time
df = pd.read_parquet('../data/earthquakes-projected.parq')
df.time = df.time.astype('datetime64[ns]')
df = df.set_index(df.time)
most_severe = df[df.mag >= 7]
```
Towards the end of the previous notebook we generated a scatter plot of earthquakes
across the earth that had a magnitude `>7` that was projected using
datashader and overlaid on top of a map tile source:
```
high_mag_quakes = most_severe.hvplot.points(x='easting', y='northing', c='mag',
title='Earthquakes with magnitude >= 7')
esri = tiles.ESRI().redim(x='easting', y='northing')
esri * high_mag_quakes
```
And saw how this object is a HoloViews `Points` object:
```
print(high_mag_quakes)
```
This object is an example of a HoloViews *Element* which is an object that can display itself. These elements are *thin* wrappers around your data and the raw input data is always available on the `.data` attribute. For instance, we can look at the `head` of the `most_severe` `DataFrame` as follows:
```
high_mag_quakes.data.head()
```
We will now learn a little more about `HoloViews` elements, including how to build them up from scratch so that we can control every aspect of them.
### An Introduction to HoloViews Elements
HoloViews elements are the atomic, visualizable components that can be
rendered by a plotting library such as Bokeh. We don't actually need to use
hvPlot to create these element objects: we can create them directly by
importing HoloViews (and loading the extension if we have not loaded
hvPlot):
```
import holoviews as hv
hv.extension("bokeh") # Optional here as we have already loaded hvplot.pandas
```
Now we can create our own example of a `Points` element. In the next
cell we plot 100 points drawn from independent normal distributions in the
`x` and `y` directions:
```
xs = np.random.randn(100)
ys = np.random.randn(100)
hv.Points((xs, ys))
```
Note that the axis labels are 'x' and 'y', the default *dimensions* for
this element type. We can use a different set of dimensions along the x- and y-axis (say
'weight' and 'height') and we can also associate additional `fitness` information with each point if we wish:
```
xs = np.random.randn(100)
ys = np.random.randn(100)
fitness = np.random.randn(100)
height_v_weight = hv.Points((xs, ys, fitness), ['weight', 'height'], 'fitness')
height_v_weight
```
Now we can look at the printed representation of this object:
```
print(height_v_weight)
```
Here the printed representation shows the *key dimensions* that we specified in square brackets as `[weight,height]` and the additional *value dimension* `fitness` in parentheses as `(fitness)`. The *key dimensions* map to the axes and the *value dimensions* can be visually represented by other visual attributes as we shall see shortly.
For more information on HoloViews dimensions, see this [user guide](http://holoviews.org/user_guide/Annotating_Data.html).
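If you want to check these programmatically, the key and value dimensions of any element are available on its `.kdims` and `.vdims` attributes. A quick sketch using the `height_v_weight` element defined above:
```
print(height_v_weight.kdims)   # key dimensions, e.g. [Dimension('weight'), Dimension('height')]
print(height_v_weight.vdims)   # value dimensions, e.g. [Dimension('fitness')]
```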
#### Exercise
Visit the [HoloViews reference gallery](http://holoviews.org/reference/index.html) and browse
the available set of elements. Pick an element type and try running
one of the self-contained examples in the following cell.
### Setting Visual Options
The two `Points` elements above look quite different from the one
returned by hvplot showing the earthquake positions. This is because
hvplot makes use of the HoloViews *options system* to customize the
visual representation of these element objects.
Let us color the `height_v_weight` scatter by the fitness value and use a larger
point size:
```
height_v_weight.opts(color='fitness', size=8, colorbar=True, aspect='square')
```
#### Exercise
Copy the line above into the next cell and try changing the point color to
'blue' or 'green', or to another dimension of the data such as 'height' or 'weight'.
Are the results what you expect?
### The `help` system
You can learn more about the `.opts` method and the HoloViews options
system in the [corresponding user
guide](http://holoviews.org/user_guide/Applying_Customizations.html). To
easily learn about the available options from inside a notebook, you can
use `hv.help` and inspect the 'Style Options'.
```
# Commented as there is a lot of help output!
# hv.help(hv.Scatter)
```
At this point, we can get some insight into the sort of HoloViews object
hvPlot is building behind the scenes for our earthquake example:
```
esri * hv.Points(most_severe, ['easting', 'northing'], 'mag').opts(color='mag', size=8, aspect='equal')
```
#### Exercise
Try using `hv.help` to inspect the options available for different element types such as the `Points` element used above. Copy the line above into the cell below and pick a `Points` option that makes sense to you and try using it in the `.opts` method.
<details><summary><i><u>(Hint)</u></i></summary><br>
If you can't decide on an option to pick, a good choice is `marker`. For instance, try:
* `marker='+'`
* `marker='d'`.
HoloViews uses [matplotlib's conventions](https://matplotlib.org/3.1.0/api/markers_api.html) for specifying the various marker types. Try finding out which ones are supported by Bokeh.
</details>
### Custom interactivity for Elements
When rasterization of the population density data via hvplot was first introduced, we saw that the HoloViews object
returned was not an element but a *`DynamicMap`*.
A `DynamicMap` enables custom interactivity beyond the Bokeh defaults by
dynamically generating elements that get displayed and updated as the
plot is interacted with.
There is a counterpart to the `DynamicMap` that does not require a live
Python server to be running called the `HoloMap`. The `HoloMap`
container will not be covered in the tutorial but you can learn more
about them in the [containers user
guide](http://holoviews.org/user_guide/Dimensioned_Containers.html).
Now let us build a very simple `DynamicMap` that is driven by a *linked
stream* (specifically a `PointerXY` stream) that represents the position
of the cursor over the plot:
```
from holoviews import streams
ellipse = hv.Ellipse(0, 0, 1)
pointer = streams.PointerXY(x=0, y=0) # x=0 and y=0 are the initialized values
def crosshair(x, y):
return hv.HLine(y) * hv.VLine(x)
ellipse * hv.DynamicMap(crosshair, streams=[pointer])
```
Try moving your mouse over the plot and you should see the crosshair
follow your mouse position.
The core concepts here are:
* The plot shows an overlay built with the `*` operator introduced in
the previous notebook.
* There is a callback that returns this overlay that is built according
to the supplied `x` and `y` arguments. A DynamicMap always contains a
callback that returns a HoloViews object such as an `Element` or
`Overlay`
* These `x` and `y` arguments are supplied by the `PointerXY` stream
that reflect the position of the mouse on the plot.
#### Exercise
Look up the `Ellipse`, `HLine`, and `VLine` elements in the
[HoloViews reference guide](http://holoviews.org/reference/index.html) and see
if the definitions of these elements align with your initial intuitions.
#### Exercise (additional)
If you have time, try running one of the examples in the
'Streams' section of the [HoloViews reference guide](http://holoviews.org/reference/index.html) in the cell below. All the examples in the reference guide should be relatively short and self-contained.
### Selecting a particular earthquake with the mouse
Now we only need two more concepts before we can set up the appropriate
mechanism to select a particular earthquake on the hvPlot-generated
Scatter plot we started with.
First, we can attach a stream to an existing HoloViews element such as
the earthquake distribution generated with hvplot:
```
selection_stream = streams.Selection1D(source=high_mag_quakes)
```
Next we need to enable the 'tap' tool on our Scatter to instruct Bokeh
to enable the desired selection mechanism in the browser.
```
high_mag_quakes.opts(tools=['tap'])
```
The Bokeh default alpha of points which are unselected is going to be too low when we overlay these points on a tile source. We can use the HoloViews options system to pick a better default as follows:
```
hv.opts.defaults(hv.opts.Points(nonselection_alpha=0.4))
```
The tap tool is in the toolbar with the icon showing the concentric
circles and plus symbol. If you enable this tool, you should be able to pick individual earthquakes above by tapping on them.
Now we can make a DynamicMap that uses the stream we defined to show the index of the earthquake selected via the `hv.Text` element:
```
def labelled_callback(index):
if len(index) == 0:
return hv.Text(x=0,y=0, text='')
first_index = index[0] # Pick only the first one if multiple are selected
row = most_severe.iloc[first_index]
text = '%d : %s' % (first_index, row.place)
return hv.Text(x=row.easting, y=row.northing, text=text).opts(color='white')
labeller = hv.DynamicMap(labelled_callback, streams=[selection_stream])
```
This labeller receives the index argument from the Selection1D stream
which corresponds to the row of the original dataframe (`most_severe`)
that was selected. This lets us present the index and place value using
`hv.Text`, which we then position at the corresponding easting and
northing to label the chosen earthquake.
Finally, we overlay this labeller `DynamicMap` over the original
plot. Now by using the tap tool you can see the index number of an
earthquake followed by the assigned place name:
```
(esri * high_mag_quakes * labeller).opts(hv.opts.Points(tools=['tap', 'hover']))
```
#### Exercise
Pick an earthquake point above and using the displayed index, display the corresponding row of the `most_severe` dataframe using the `.iloc` method in the following cell.
### Building a linked earthquake visualizer
Now we will build a visualization that achieves the following:
* The user can select an earthquake with magnitude `>7` using the tap
tool in the manner illustrated in the last section.
* In addition to the existing label, we will add concentric circles to further highlight the
selected earthquake location.
* *All* earthquakes within 0.5 degrees of latitude and longitude of the
selected earthquake (~50km) will then be used to supply data for two linked
plots:
1. A histogram showing the distribution of magnitudes in the selected area.
2. A timeseries scatter plot showing the magnitudes of earthquakes over time in the selected area.
The first step is to generate a concentric-circle marker using a similar approach to the `labeller` above. We can write a function that uses `Ellipse` to mark a particular earthquake and pass it to a `DynamicMap`:
```
def mark_earthquake(index):
if len(index) == 0:
return hv.Overlay([])
first_index = index[0] # Pick only the first one if multiple are selected
row = most_severe.iloc[first_index]
return (hv.Ellipse(row.easting, row.northing, 1.5e6) *
hv.Ellipse(row.easting, row.northing, 3e6)).opts(
hv.opts.Ellipse(color='white', alpha=0.5)
)
quake_marker = hv.DynamicMap(mark_earthquake, streams=[selection_stream])
```
Now we can test this component by building an overlay of the `ESRI` tile source, the `>=7` magnitude points and `quake_marker`:
```
esri * high_mag_quakes.opts(tools=['tap']) * quake_marker
```
Note that you may need to zoom in to your selected earthquake to see the
localized, lower magnitude earthquakes around it.
### Filtering earthquakes by location
We wish to analyse the earthquakes that occur around a particular latitude and longitude. To do this we will define a function that, given a latitude and longitude, returns the rows of a suitable dataframe corresponding to earthquakes within 0.5 degrees of that position:
```
def earthquakes_around_point(df, lat, lon, degrees_dist=0.5):
half_dist = degrees_dist / 2.0
return df[((df['latitude'] - lat).abs() < half_dist)
& ((df['longitude'] - lon).abs() < half_dist)]
```
As it can be slow to filter our dataframes in this way, we can define the following function that can cache the result of filtering `df` (containing all earthquakes) based on an index pulled from the `most_severe` dataframe:
```
def index_to_selection(indices, cache={}):
if not indices:
return most_severe.iloc[[]]
index = indices[0] # Pick only the first one if multiple are selected
if index in cache: return cache[index]
row = most_severe.iloc[index]
selected_df = earthquakes_around_point(df, row.latitude, row.longitude)
cache[index] = selected_df
return selected_df
```
The caching will be useful as we know both of our planned linked plots (i.e. the histogram and the scatter over time) make use of the same earthquake selection once a particular index is supplied from a user selection. This particular caching strategy is rather awkward (and leaks memory!) but it is simple and will serve for the current example. A better approach to caching will be presented in the [Advanced Dashboards](./08_Advanced_Dashboards.ipynb) section of the tutorial.
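For reference, a minimal sketch of a bounded cache (an addition, not the tutorial's approach) could wrap the expensive filtering step in `functools.lru_cache` so that memory use stays limited:
```
# Hedged sketch: bound the cache with functools.lru_cache instead of a
# mutable default argument, so memory cannot grow without limit.
from functools import lru_cache

@lru_cache(maxsize=32)
def cached_selection(index):
    row = most_severe.iloc[index]
    return earthquakes_around_point(df, row.latitude, row.longitude)

def index_to_selection(indices):
    if not indices:
        return most_severe.iloc[[]]
    return cached_selection(indices[0])  # only the first selected index is used
```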
#### Exercise
Test the `index_to_selection` function above for the index you picked in the previous exercise. Note that the stream supplied a *list* of indices and that the function above only uses the first value given in that list. Do the selected rows look correct?:
#### Exercise
Convince yourself that the selected earthquakes are within 0.5$^\circ$ of each other in both latitude and longitude.
<details><summary>Hint</summary><br>
For a given `chosen` index, you can see the distance difference using the following code:
```python
chosen = 235
delta_long = index_to_selection([chosen]).longitude.max() - index_to_selection([chosen]).longitude.min()
delta_lat = index_to_selection([chosen]).latitude.max() - index_to_selection([chosen]).latitude.min()
print("Difference in longitude: %s" % delta_long)
print("Difference in latitude: %s" % delta_lat)
```
</details>
### Linked plots
So far we have overlaid the display updates on top of the existing
spatial distribution of earthquakes. However, there is no requirement
that the data is overlaid and we might want to simply attach an entirely
new, derived plot that dynamically updates to the side.
Using the same principles as we have already seen, we can define a
`DynamicMap` that returns `Histogram` distributions of earthquake
magnitude:
```
def histogram_callback(index):
title = 'Distribution of all magnitudes within half a degree of selection'
selected_df = index_to_selection(index)
return selected_df.hvplot.hist(y='mag', bin_range=(0,10), bins=20, color='red', title=title)
histogram = hv.DynamicMap(histogram_callback, streams=[selection_stream])
```
The only real difference in the approach here is that we can still use
`.hvplot` to generate our elements instead of declaring the HoloViews
elements explicitly. In this example, `.hvplot.hist` is used.
The exact same principles can be used to build the scatter callback and `temporal_distribution` `DynamicMap`:
```
def scatter_callback(index):
title = 'Temporal distribution of all magnitudes within half a degree of selection '
selected_df = index_to_selection(index)
return selected_df.hvplot.scatter('time', 'mag', color='green', title=title)
temporal_distribution = hv.DynamicMap(scatter_callback, streams=[selection_stream])
```
Lastly, let us define a `DynamicMap` that draws a `VLine` to mark the time at which the selected earthquake occurs so we can see which tremors may have been aftershocks immediately after that major earthquake occurred:
```
def vline_callback(index):
if not index:
return hv.VLine(0).opts(alpha=0)
row = most_severe.iloc[index[0]]
return hv.VLine(row.time).opts(line_width=2, color='black')
temporal_vline = hv.DynamicMap(vline_callback, streams=[selection_stream])
```
We now have all the pieces we need to build an interactive, linked visualization of earthquake data.
#### Exercise
Test the `histogram_callback` and `scatter_callback` callback functions by supplying your chosen index, remembering that these functions require a list argument in the following cell.
### Putting it together
Now we can combine the components we have already built as follows to create a dynamically updating plot together with an associated, linked histogram:
```
((esri * high_mag_quakes.opts(tools=['tap']) * labeller * quake_marker)
+ histogram + temporal_distribution * temporal_vline).cols(1)
```
We now have a custom interactive visualization that builds on the output of `hvplot` by making use of the underlying HoloViews objects that it generates.
## Conclusion
When exploring data it can be convenient to use the `.plot` API to quickly visualize a particular dataset. By calling `.hvplot` to generate different plots over the course of a session and then linking such plots together, it is possible to gradually build up a mental model of how a particular dataset is structured.
In the workflow presented here, building such custom interaction is relatively quick and easy and does not involve throwing away prior code used to generate simpler plots. In the spirit of 'short cuts not dead ends', we can use the HoloViews-object output of `hvplot` that we used in our initial exploration to build rich visualizations with custom interaction to explore our data at a deeper level.
These interactive visualizations not only allow for custom interactions beyond the scope of `hvplot` alone, but they can also display visual annotations not offered by the `.plot` API. In particular, we can overlay our data on top of tile sources, generate interactive textual annotations, draw shapes such as circles, add horizontal and vertical marker lines, and much more. Using HoloViews you can build visualizations that allow you to directly interact with your data in a useful and intuitive manner.
|
github_jupyter
|
import numpy as np
import pandas as pd
import hvplot.pandas # noqa
from holoviews.element import tiles
%%time
df = pd.read_parquet('../data/earthquakes-projected.parq')
df.time = df.time.astype('datetime64[ns]')
df = df.set_index(df.time)
most_severe = df[df.mag >= 7]
high_mag_quakes = most_severe.hvplot.points(x='easting', y='northing', c='mag',
title='Earthquakes with magnitude >= 7')
esri = tiles.ESRI().redim(x='easting', y='northing')
esri * high_mag_quakes
print(high_mag_quakes)
high_mag_quakes.data.head()
import holoviews as hv
hv.extension("bokeh") # Optional here as we have already loaded hvplot.pandas
xs = np.random.randn(100)
ys = np.random.randn(100)
hv.Points((xs, ys))
xs = np.random.randn(100)
ys = np.random.randn(100)
fitness = np.random.randn(100)
height_v_weight = hv.Points((xs, ys, fitness), ['weight', 'height'], 'fitness')
height_v_weight
print(height_v_weight)
height_v_weight.opts(color='fitness', size=8, colorbar=True, aspect='square')
# Commented as there is a lot of help output!
# hv.help(hv.Scatter)
esri * hv.Points(most_severe, ['easting', 'northing'], 'mag').opts(color='mag', size=8, aspect='equal')
from holoviews import streams
ellipse = hv.Ellipse(0, 0, 1)
pointer = streams.PointerXY(x=0, y=0) # x=0 and y=0 are the initialized values
def crosshair(x, y):
return hv.HLine(y) * hv.VLine(x)
ellipse * hv.DynamicMap(crosshair, streams=[pointer])
selection_stream = streams.Selection1D(source=high_mag_quakes)
high_mag_quakes.opts(tools=['tap'])
hv.opts.defaults(hv.opts.Points(nonselection_alpha=0.4))
def labelled_callback(index):
if len(index) == 0:
return hv.Text(x=0,y=0, text='')
first_index = index[0] # Pick only the first one if multiple are selected
row = most_severe.iloc[first_index]
text = '%d : %s' % (first_index, row.place)
return hv.Text(x=row.easting, y=row.northing, text=text).opts(color='white')
labeller = hv.DynamicMap(labelled_callback, streams=[selection_stream])
(esri * high_mag_quakes * labeller).opts(hv.opts.Points(tools=['tap', 'hover']))
def mark_earthquake(index):
if len(index) == 0:
return hv.Overlay([])
first_index = index[0] # Pick only the first one if multiple are selected
row = most_severe.iloc[first_index]
return (hv.Ellipse(row.easting, row.northing, 1.5e6) *
hv.Ellipse(row.easting, row.northing, 3e6)).opts(
hv.opts.Ellipse(color='white', alpha=0.5)
)
quake_marker = hv.DynamicMap(mark_earthquake, streams=[selection_stream])
esri * high_mag_quakes.opts(tools=['tap']) * quake_marker
def earthquakes_around_point(df, lat, lon, degrees_dist=0.5):
half_dist = degrees_dist / 2.0
return df[((df['latitude'] - lat).abs() < half_dist)
& ((df['longitude'] - lon).abs() < half_dist)]
def index_to_selection(indices, cache={}):
if not indices:
return most_severe.iloc[[]]
index = indices[0] # Pick only the first one if multiple are selected
if index in cache: return cache[index]
row = most_severe.iloc[index]
selected_df = earthquakes_around_point(df, row.latitude, row.longitude)
cache[index] = selected_df
return selected_df
chosen = 235
delta_long = index_to_selection([chosen]).longitude.max() - index_to_selection([chosen]).longitude.min()
delta_lat = index_to_selection([chosen]).latitude.max() - index_to_selection([chosen]).latitude.min()
print("Difference in longitude: %s" % delta_long)
print("Difference in latitude: %s" % delta_lat)
def histogram_callback(index):
title = 'Distribution of all magnitudes within half a degree of selection'
selected_df = index_to_selection(index)
return selected_df.hvplot.hist(y='mag', bin_range=(0,10), bins=20, color='red', title=title)
histogram = hv.DynamicMap(histogram_callback, streams=[selection_stream])
def scatter_callback(index):
title = 'Temporal distribution of all magnitudes within half a degree of selection '
selected_df = index_to_selection(index)
return selected_df.hvplot.scatter('time', 'mag', color='green', title=title)
temporal_distribution = hv.DynamicMap(scatter_callback, streams=[selection_stream])
def vline_callback(index):
if not index:
return hv.VLine(0).opts(alpha=0)
row = most_severe.iloc[index[0]]
return hv.VLine(row.time).opts(line_width=2, color='black')
temporal_vline = hv.DynamicMap(vline_callback, streams=[selection_stream])
((esri * high_mag_quakes.opts(tools=['tap']) * labeller * quake_marker)
+ histogram + temporal_distribution * temporal_vline).cols(1)
| 0.633183 | 0.973645 |
<a href="https://colab.research.google.com/github/connorpheraty/DS-Unit-1-Sprint-2-Data-Wrangling/blob/master/Connor_Heraty_LS_DS3_121_Scrape_and_process_data_LIVE_LESSON.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
_Lambda School Data Science_
# Scrape and process data
Objectives
- scrape and parse web pages
- use list comprehensions
- select rows and columns with pandas
Links
- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/)
- Requests
- Beautiful Soup
- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)
- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)
- Subset Observations (Rows)
- Subset Variables (Columns)
- Python Data Science Handbook
- [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects
- [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection
## Scrape the titles of PyCon 2019 talks
```
url = 'https://us.pycon.org/2019/schedule/talks/list/'
import bs4
import requests
result = requests.get(url)
type(result.text)
soup = bs4.BeautifulSoup(result.text)
len(soup.select('h2'))
soup.select('h2')
first = soup.select('h2')[0]
type(first.text)
first.text
text_only = first.text.strip()
text_only
last = soup.select('h2')[-1]
last.text.strip()
title_lst = []
for tag in soup.select('h2'):
title = tag.text.strip()
title_lst.append(title)
print(title_lst)
titles = [tag.text.strip()
for tag in soup.select('h2')]
type(titles), len(titles)
```
## 5 ways to look at long titles
Let's define a long title as greater than 80 characters
### 1. For Loop
```
tit_lst = []
for title in titles:
if len(title) > 80:
tit_lst.append(title)
print(tit_lst)
```
### 2. List Comprehension
```
long_titles = [x
               for x in titles if len(x) > 80]
```
### 3. Filter with named function
```
def long(title):
return len(title) > 80
long('Python is good')
list(filter(long, titles))
list(filter(lambda t: len(t) > 80, titles))
```
### 4. Filter with anonymous function
```
list(filter(lambda t: len(t) > 80, titles))
```
### 5. Pandas
pandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
```
import pandas as pd
pd.options.display.max_colwidth = 200
df = pd.DataFrame({'title': titles})
df.shape
df[ df['title'].str.len() > 80 ]
df['title'].str.len() > 80
condition = df['title'].str.len() > 80
df[condition]
```
## Make new dataframe columns
pandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html)
```
```
### title length
```
df['title length'] = df['title'].apply(len)
df.head()
df[ df['title length'] > 80 ]
df.loc[ df['title length'] > 80, 'title length']
```
### long title
```
df['long title'] = df['title length'] > 80
df.shape
df.head()
df[ df['long title']==True]
```
### first letter
```
df['first letter'] = df['title'].str[0]
df[df['first letter'] == 'P']
df[df['title'].str.startswith('P')]
```
### word count
Using [`textstat`](https://github.com/shivam5992/textstat)
```
!pip install textstat
import textstat
df['title word count'] = df['title'].apply(textstat.lexicon_count)
df.head()
df.shape
df[ df['title word count'] <= 3 ]
```
## Rename column
`title length` --> `title character count`
pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
```
df = df.rename(columns={'title length': 'title character count'})
df.head()
```
## Analyze the dataframe
### Describe
pandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
```
df.describe(include='all')
df.describe(exclude='number')
```
### Sort values
pandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html)
Five shortest titles, by character count
```
df.sort_values(by='title character count').head(5)
```
Titles sorted reverse alphabetically
```
df.sort_values(by='first letter', ascending=False).head()
```
### Get value counts
pandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html)
Frequency counts of first letters
```
df['first letter'].value_counts()
```
Percentage of talks with long titles
```
df['long title'].value_counts(normalize=True)
```
### Plot
pandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html)
Top 5 most frequent first letters
```
(df['first letter']
.value_counts()
.head(5)
.plot
.barh(color='green',
title='Top 5 most frequent first letters, PyCon 2019 talks'));
```
Histogram of title lengths, in characters
```
df['title character count'].plot.hist(color='orange', title = 'Distribution of Title Length in Characters');
```
# Assignment
**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`
**Make** new columns in the dataframe:
- description
- description character count
- description word count
**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?
**Answer** the question: Which descriptions could fit in a tweet?
# Stretch Challenge
**Make** another new column in the dataframe:
- description grade level (you can use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)
**Answer** the question: What's the distribution of grade levels? Plot a histogram.
**Be aware** that [Textstat has issues when sentences aren't separated by spaces](https://github.com/shivam5992/textstat/issues/77#issuecomment-453734048). (A Lambda School Data Science student helped identify this issue, and emailed with the developer.)
Also, [BeautifulSoup doesn't separate paragraph tags with spaces](https://bugs.launchpad.net/beautifulsoup/+bug/1768330).
So, you may get some inaccurate or surprising grade level estimates here. Don't worry, that's ok — but optionally, can you do anything to try improving the grade level estimates?
## **Objectives**
1. Scrape the talk descriptions
2. Make new columns in the dataframe:
a. description
b. description character count
c. description word count
3. Describe all the dataframe's columns.
a. Average description word count
b. Minimum
c. Maximum
4. Which descriptions could fit in a tweet?
## Objective 1
```
# Isolate the html container that contains talk descriptions
first = soup.select('div.presentation-description')[0]
first
# Strip leading and trailing whitespace from the description string
first.text.strip()
# Repeat for all descriptions in our list
talk_description = [tag.text.strip()
for tag in soup.select('div.presentation-description')]
```
## Objective 2
```
# Create a pandas dataframe out of our cleaned list
df = pd.DataFrame({'Description': talk_description})
df.shape
# Description column created
df.head()
# Description Character Count created
df['Description Character Count'] = df['Description'].apply(len)
df.head()
# Description Word Count created
df['Description Word Count'] = df['Description'].apply(textstat.lexicon_count)
df.head()
```
## **Objective 3**
The average description word count is 130.82
The minimum is 20 words
The maximum is 421 words
```
df.describe(include='all')
```
## Objective 4
```
# Creating Tweet-able column with boolean values to determine whether the talk description could fit within Twitter's 280-character tweet limit.
df['Tweet-able?'] = df['Description Character Count'] < 280
df.head()
# Created a subset variable to determine which talk descriptions could fit in a single tweet.
df[ df['Tweet-able?']]
# Confirmed above finding
df['Tweet-able?'].value_counts()
```
## Stretch Challenge
```
# Grade level function on a string of text
def scores(x):
    return textstat.flesch_kincaid_grade(x)
# Applies score function on dataframe
df['Description Grade Level'] = df['Description'].apply(scores)
df.head()
import matplotlib.pyplot as plt
plt.figure(figsize=(20,10))
plt.xlabel('Flesch Kincaid Scores')
plt.title('Distribution of Description Grade Levels for PyCon 2019 Talks',size=15)
plt.hist(df['Description Grade Level'], bins=20);
# We will make a separate dataframe with all descriptions scoring above 60 to analyze
unreadable_desc = df[ df['Description Grade Level'] > 60]
unreadable_lst = list(unreadable_desc['Description'])
# Create list of three highest scores
unreadable_lst[0:3]
# I have inputted all three descriptions into 'http://www.hemingwayapp.com/' for further analysis
```
## Improving Description Readability
The distribution of description grade levels is concentrated at the lower end, which implies most descriptions are easy to read. To get a better understanding of how the algorithm works, I further analyzed the three hardest-to-read descriptions.
The three highest scores shared two properties: long run-on sentences and technical jargon.
Due to the nature of the conference, technical jargon is often unavoidable when explaining the subject matter of the talk. Lowering its use may decrease the quality of the description. If one's goal was to reduce their Flesch Kincaid score, they may want to consider reducing their use of technical jargon.
Reducing the use of run-on sentences is an excellent way to make a description easier to read. It can also be done without decreasing the quality of the description!
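One purely mechanical improvement, related to the BeautifulSoup spacing issue mentioned in the stretch challenge, is to re-extract the descriptions with an explicit separator before scoring. This is a hedged sketch, assuming `soup` from the earlier scrape is still available:
```
# Illustrative sketch: join paragraph tags with explicit spaces so sentences
# do not run together, which can inflate Flesch-Kincaid grade estimates.
descriptions_spaced = [tag.get_text(separator=' ', strip=True)
                       for tag in soup.select('div.presentation-description')]
spaced_scores = [textstat.flesch_kincaid_grade(text) for text in descriptions_spaced]
```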
|
github_jupyter
|
url = 'https://us.pycon.org/2019/schedule/talks/list/'
import bs4
import requests
result = requests.get(url)
type(result.text)
soup = bs4.BeautifulSoup(result.text)
len(soup.select('h2'))
soup.select('h2')
first = soup.select('h2')[0]
type(first.text)
first.text
text_only = first.text.strip()
text_only
last = soup.select('h2')[-1]
last.text.strip()
title_lst = []
for tag in soup.select('h2'):
title = tag.text.strip()
title_lst.append(title)
print(title_lst)
titles = [tag.text.strip()
for tag in soup.select('h2')]
type(titles), len(titles)
tit_lst = []
for title in titles:
if len(title) > 80:
tit_lst.append(title)
print(tit_lst)
long_titles = [x
               for x in titles if len(x) > 80]
def long(title):
return len(title) > 80
long('Python is good')
list(filter(long, titles))
list(filter(lambda t: len(t) > 80, titles))
list(filter(lambda t: len(t) > 80, titles))
import pandas as pd
pd.options.display.max_colwidth = 200
df = pd.DataFrame({'title': titles})
df.shape
df[ df['title'].str.len() > 80 ]
df['title'].str.len() > 80
condition = df['title'].str.len() > 80
df[condition]
```
### title length
### long title
### first letter
### word count
Using [`textstat`](https://github.com/shivam5992/textstat)
## Rename column
`title length` --> `title character count`
pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
## Analyze the dataframe
### Describe
pandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
### Sort values
pandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html)
Five shortest titles, by character count
Titles sorted reverse alphabetically
### Get value counts
pandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html)
Frequency counts of first letters
Percentage of talks with long titles
### Plot
pandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html)
Top 5 most frequent first letters
Histogram of title lengths, in characters
# Assignment
**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`
**Make** new columns in the dataframe:
- description
- description character count
- description word count
**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?
**Answer** the question: Which descriptions could fit in a tweet?
# Stretch Challenge
**Make** another new column in the dataframe:
- description grade level (you can use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)
**Answer** the question: What's the distribution of grade levels? Plot a histogram.
**Be aware** that [Textstat has issues when sentences aren't separated by spaces](https://github.com/shivam5992/textstat/issues/77#issuecomment-453734048). (A Lambda School Data Science student helped identify this issue, and emailed with the developer.)
Also, [BeautifulSoup doesn't separate paragraph tags with spaces](https://bugs.launchpad.net/beautifulsoup/+bug/1768330).
So, you may get some inaccurate or surprising grade level estimates here. Don't worry, that's ok — but optionally, can you do anything to try improving the grade level estimates?
## **Objectives**
1. Scrape the talk descriptions
2. Make new columns in the dataframe:
a. description
b. description character count
c. description word count
3. Describe all the dataframe's columns.
a. Average description word count
b. Minimum
c. Maximum
4. Which descriptions could fit in a tweet?
## Objective 1
## Objective 2
## **Objective 3**
The average description word count is 130.82
The minimum is 20 words
The maximum is 421 words
## Objective 4
## Stretch Challenge
| 0.5144 | 0.961534 |
# Publications markdown generator for academicpages
Takes a set of bibtex entries for publications and converts them for use with [academicpages.github.io](academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)).
The core python code is also in `pubsFromBibs.py`.
Run either from the `markdown_generator` folder after updating the `publist` dictionary with:
* bib file names
* specific venue keys based on your bib file preferences
* any specific pre-text for specific files
* Collection Name (future feature)
TODO: Make this work with other databases of citations,
TODO: Merge this with the existing TSV parsing solution
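Before running the full generator cell below, a quick sanity check (an illustrative addition, assuming `pubs.bib` and `proceedings.bib` sit next to this notebook) can confirm that pybtex parses both files and list the entry keys the loop will visit:
```
# Hedged sketch: parse each configured .bib file and print its entry keys.
from pybtex.database.input import bibtex

for bibfile in ('pubs.bib', 'proceedings.bib'):
    parser = bibtex.Parser()
    bibdata = parser.parse_file(bibfile)
    print(bibfile, '->', list(bibdata.entries.keys()))
```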
```
from pybtex.database.input import bibtex
import pybtex.database.input.bibtex
from time import strptime
import string
import html
import os
import re
#todo: incorporate different collection types rather than a catch all publications, requires other changes to template
publist = {
"proceeding": {
"file" : "proceedings.bib",
"venuekey": "booktitle",
"venue-pretext": "In the proceedings of ",
"collection" : {"name":"publications",
"permalink":"/publication/"}
},
"journal":{
"file": "pubs.bib",
"venuekey" : "journal",
"venue-pretext" : "",
"collection" : {"name":"publications",
"permalink":"/publication/"}
}
}
html_escape_table = {
"&": "&",
'"': """,
"'": "'"
}
def html_escape(text):
"""Produce entities within text."""
return "".join(html_escape_table.get(c,c) for c in text)
myName = "Shivam Agarwal"
for pubsource in publist:
parser = bibtex.Parser()
parser2 = bibtex.Parser()
bibdata = parser.parse_file(publist[pubsource]["file"])
bibdataCopy = parser2.parse_file(publist[pubsource]["file"])
#loop through the individual references in a given bibtex file
for bib_id in bibdata.entries:
#reset default date
pub_year = "1900"
pub_month = "01"
pub_day = "01"
bibtexEntry = bibdataCopy.entries[bib_id]
try:
removeFields = ['paperurl', 'image', 'video', 'code', 'demo', 'month', 'keywords', 'type', 'poster']
for fieldtoremove in removeFields:
if fieldtoremove in bibtexEntry.fields: del bibtexEntry.fields[fieldtoremove]
except:
print("deleting key exception")
pass
bibtexCode = bibtexEntry.to_string('bibtex').replace('&','&').replace('= "','= {').replace('"', '}').replace("\n", "\n\n")
b = bibdata.entries[bib_id].fields
# print(publist[pubsource]["venue-pretext"])
# print(b[publist[pubsource]["venuekey"]])
# bibtex += "@"+
try:
pub_year = f'{b["year"]}'
#todo: this hack for month and day needs some cleanup
if "month" in b.keys():
if(len(b["month"])<3):
pub_month = "0"+b["month"]
pub_month = pub_month[-2:]
elif(b["month"] not in range(12)):
tmnth = strptime(b["month"][:3],'%b').tm_mon
pub_month = "{:02d}".format(tmnth)
else:
pub_month = str(b["month"])
if "day" in b.keys():
pub_day = str(b["day"])
pub_date = pub_year+"-"+pub_month+"-"+pub_day
#strip out {} as needed (some bibtex entries that maintain formatting)
# clean_title = b["title"].replace("{", "").replace("}","").replace("\\","").replace(" ","-")
clean_title = bib_id.replace("{", "").replace("}","").replace("\\","").replace(" ","-")
url_slug = re.sub("\\[.*\\]|[^a-zA-Z0-9_-]", "", clean_title)
url_slug = url_slug.replace("--","-")
# md_filename = (str(pub_date) + "-" + url_slug + ".md").replace("--","-")
# html_filename = (str(pub_date) + "-" + url_slug).replace("--","-")
md_filename = (url_slug + ".md").replace("--","-")
html_filename = (url_slug).replace("--","-")
#Build Citation from text
citation = ""
#add venue logic depending on citation type
venue = publist[pubsource]["venue-pretext"]+b[publist[pubsource]["venuekey"]].replace("{", "").replace("}","").replace("\\","")
#citation title
paperTitle = html_escape(b["title"].replace("{", "").replace("}","").replace("\\",""))
## YAML variables
md = "---\ntitle: \"" + paperTitle + '"\n'
md += """collection: """ + publist[pubsource]["collection"]["name"]
note = False
if "note" in b.keys():
if len(str(b["note"])) > 5:
md += "\nexcerpt: '" + html_escape(b["note"]) + "'"
note = True
md += "\ndate: " + str(pub_date)
md += "\nvenue: '" + html_escape(venue) + "'"
md += "\nbibid: '" + bib_id + "'"
md += """\npermalink: """ + publist[pubsource]["collection"]["permalink"] + html_filename
# md += """\npermalink: """ + publist[pubsource]["collection"]["permalink"] + html_filename
url = False
if "paperurl" in b.keys():
if len(str(b["paperurl"])) > 5:
md += "\npaperurl: '" + b["paperurl"] + "'"
url = True
image = False
if "image" in b.keys():
if len(str(b["image"])) > 5:
md += "\nimage: '" + b["image"] + "'"
image = True
code = False
if "code" in b.keys():
md += "\ncode: '" + b["code"] + "'"
code = True
demo = False
if "demo" in b.keys():
md += "\ndemo: '" + b["demo"] + "'"
demo = True
doi = False
if "doi" in b.keys():
md += "\ndoi: '" + b["doi"] + "'"
doi = True
doiurl = False
doiurlval = ""
if "doi" in b.keys():
doiurlval = "https://dx.doi.org/" + b["doi"]
md += "\ndoiurl: '" + doiurlval + "'"
doiurl = True
abstract = False
if "abstract" in b.keys():
md += "\nabstract: '" + b["abstract"] + "'"
abstract = True
video = False
if "video" in b.keys():
md += "\nvideo: '" + b["video"] + "'"
video = True
youtubeid = False
if "video" in b.keys():
vidsplit = b["video"].split("?v=")
md += "\nyoutubeid: '" + vidsplit[1] + "'"
youtubeid = True
year = False
if "year" in b.keys():
md += "\nyear: '" + b["year"] + "'"
year = True
poster = False
if "poster" in b.keys():
md += "\nposter: '" + b["poster"] + "'"
poster = True
#citation authors - todo - add highlighting for primary author?
authors = ""
for i in range(0,len(bibdata.entries[bib_id].persons["author"])):
author = bibdata.entries[bib_id].persons["author"][i]
# To underline primary author name while listing the publications
authorFullName = author.first_names[0]+" "+author.last_names[0]
if authorFullName == myName:
authorFullName = "<u>"+authorFullName+"</u>"
citation = citation+" "+authorFullName+", "
if i == len(bibdata.entries[bib_id].persons["author"])-1:
authors = authors + " and "+authorFullName
else:
authors = authors + " "+authorFullName+", "
citation = citation + "<i>\"" + paperTitle + "\"</i>"
citation = citation + " " + html_escape(venue)
citation = citation + ", " + pub_year + ". "
# if doi == True and doiurl == True:
# citation = citation + "<a href=\""+ doiurlval +"\" >" + b["doi"] +"</a>"
md += "\ncitation: '" + html_escape(citation) + "'"
md += "\nauthors: '" + html_escape(authors) + "'"
md+= "\nbibtexCode: '" + bibtexCode + "'"
md += "\n---"
## Markdown description for individual page
if note:
md += "\n" + html_escape(b["note"]) + "\n"
# if doi:
# md += "\n[Access paper here](" + b["doi"] + "){:target=\"_blank\"}\n"
# else if url:
# md += "\n[Access paper here](" + b["url"] + "){:target=\"_blank\"}\n"
# else:
# md += "\nUse [Google Scholar](https://scholar.google.com/scholar?q="+html.escape(clean_title.replace("-","+"))+"){:target=\"_blank\"} for full citation"
md_filename = os.path.basename(md_filename)
with open("../_publications/" + md_filename, 'w', encoding='utf-8') as f:
f.write(md)
            print(f'SUCCESSFULLY PARSED {bib_id}: \"', b["title"][:60],"..."*(len(b['title'])>60),"\"")
# field may not exist for a reference
except KeyError as e:
print(f'WARNING Missing Expected Field {e} from entry {bib_id}: \"', b["title"][:30],"..."*(len(b['title'])>30),"\"")
continue
```
|
github_jupyter
|
from pybtex.database.input import bibtex
import pybtex.database.input.bibtex
from time import strptime
import string
import html
import os
import re
#todo: incorporate different collection types rather than a catch all publications, requires other changes to template
publist = {
"proceeding": {
"file" : "proceedings.bib",
"venuekey": "booktitle",
"venue-pretext": "In the proceedings of ",
"collection" : {"name":"publications",
"permalink":"/publication/"}
},
"journal":{
"file": "pubs.bib",
"venuekey" : "journal",
"venue-pretext" : "",
"collection" : {"name":"publications",
"permalink":"/publication/"}
}
}
html_escape_table = {
"&": "&",
'"': """,
"'": "'"
}
def html_escape(text):
"""Produce entities within text."""
return "".join(html_escape_table.get(c,c) for c in text)
myName = "Shivam Agarwal"
for pubsource in publist:
parser = bibtex.Parser()
parser2 = bibtex.Parser()
bibdata = parser.parse_file(publist[pubsource]["file"])
bibdataCopy = parser2.parse_file(publist[pubsource]["file"])
#loop through the individual references in a given bibtex file
for bib_id in bibdata.entries:
#reset default date
pub_year = "1900"
pub_month = "01"
pub_day = "01"
bibtexEntry = bibdataCopy.entries[bib_id]
try:
removeFields = ['paperurl', 'image', 'video', 'code', 'demo', 'month', 'keywords', 'type', 'poster']
for fieldtoremove in removeFields:
if fieldtoremove in bibtexEntry.fields: del bibtexEntry.fields[fieldtoremove]
except:
print("deleting key exception")
pass
bibtexCode = bibtexEntry.to_string('bibtex').replace('&','&').replace('= "','= {').replace('"', '}').replace("\n", "\n\n")
b = bibdata.entries[bib_id].fields
# print(publist[pubsource]["venue-pretext"])
# print(b[publist[pubsource]["venuekey"]])
# bibtex += "@"+
try:
pub_year = f'{b["year"]}'
#todo: this hack for month and day needs some cleanup
if "month" in b.keys():
if(len(b["month"])<3):
pub_month = "0"+b["month"]
pub_month = pub_month[-2:]
elif(b["month"] not in range(12)):
tmnth = strptime(b["month"][:3],'%b').tm_mon
pub_month = "{:02d}".format(tmnth)
else:
pub_month = str(b["month"])
if "day" in b.keys():
pub_day = str(b["day"])
pub_date = pub_year+"-"+pub_month+"-"+pub_day
#strip out {} as needed (some bibtex entries that maintain formatting)
# clean_title = b["title"].replace("{", "").replace("}","").replace("\\","").replace(" ","-")
clean_title = bib_id.replace("{", "").replace("}","").replace("\\","").replace(" ","-")
url_slug = re.sub("\\[.*\\]|[^a-zA-Z0-9_-]", "", clean_title)
url_slug = url_slug.replace("--","-")
# md_filename = (str(pub_date) + "-" + url_slug + ".md").replace("--","-")
# html_filename = (str(pub_date) + "-" + url_slug).replace("--","-")
md_filename = (url_slug + ".md").replace("--","-")
html_filename = (url_slug).replace("--","-")
#Build Citation from text
citation = ""
#add venue logic depending on citation type
venue = publist[pubsource]["venue-pretext"]+b[publist[pubsource]["venuekey"]].replace("{", "").replace("}","").replace("\\","")
#citation title
paperTitle = html_escape(b["title"].replace("{", "").replace("}","").replace("\\",""))
## YAML variables
md = "---\ntitle: \"" + paperTitle + '"\n'
md += """collection: """ + publist[pubsource]["collection"]["name"]
note = False
if "note" in b.keys():
if len(str(b["note"])) > 5:
md += "\nexcerpt: '" + html_escape(b["note"]) + "'"
note = True
md += "\ndate: " + str(pub_date)
md += "\nvenue: '" + html_escape(venue) + "'"
md += "\nbibid: '" + bib_id + "'"
md += """\npermalink: """ + publist[pubsource]["collection"]["permalink"] + html_filename
# md += """\npermalink: """ + publist[pubsource]["collection"]["permalink"] + html_filename
url = False
if "paperurl" in b.keys():
if len(str(b["paperurl"])) > 5:
md += "\npaperurl: '" + b["paperurl"] + "'"
url = True
image = False
if "image" in b.keys():
if len(str(b["image"])) > 5:
md += "\nimage: '" + b["image"] + "'"
image = True
code = False
if "code" in b.keys():
md += "\ncode: '" + b["code"] + "'"
code = True
demo = False
if "demo" in b.keys():
md += "\ndemo: '" + b["demo"] + "'"
demo = True
doi = False
if "doi" in b.keys():
md += "\ndoi: '" + b["doi"] + "'"
doi = True
doiurl = False
doiurlval = ""
if "doi" in b.keys():
doiurlval = "https://dx.doi.org/" + b["doi"]
md += "\ndoiurl: '" + doiurlval + "'"
doiurl = True
abstract = False
if "abstract" in b.keys():
md += "\nabstract: '" + b["abstract"] + "'"
abstract = True
video = False
if "video" in b.keys():
md += "\nvideo: '" + b["video"] + "'"
video = True
youtubeid = False
if "video" in b.keys():
vidsplit = b["video"].split("?v=")
md += "\nyoutubeid: '" + vidsplit[1] + "'"
youtubeid = True
year = False
if "year" in b.keys():
md += "\nyear: '" + b["year"] + "'"
year = True
poster = False
if "poster" in b.keys():
md += "\nposter: '" + b["poster"] + "'"
poster = True
#citation authors - todo - add highlighting for primary author?
authors = ""
for i in range(0,len(bibdata.entries[bib_id].persons["author"])):
author = bibdata.entries[bib_id].persons["author"][i]
# To underline primary author name while listing the publications
authorFullName = author.first_names[0]+" "+author.last_names[0]
if authorFullName == myName:
authorFullName = "<u>"+authorFullName+"</u>"
citation = citation+" "+authorFullName+", "
if i == len(bibdata.entries[bib_id].persons["author"])-1:
authors = authors + " and "+authorFullName
else:
authors = authors + " "+authorFullName+", "
citation = citation + "<i>\"" + paperTitle + "\"</i>"
citation = citation + " " + html_escape(venue)
citation = citation + ", " + pub_year + ". "
# if doi == True and doiurl == True:
# citation = citation + "<a href=\""+ doiurlval +"\" >" + b["doi"] +"</a>"
md += "\ncitation: '" + html_escape(citation) + "'"
md += "\nauthors: '" + html_escape(authors) + "'"
md+= "\nbibtexCode: '" + bibtexCode + "'"
md += "\n---"
## Markdown description for individual page
if note:
md += "\n" + html_escape(b["note"]) + "\n"
# if doi:
# md += "\n[Access paper here](" + b["doi"] + "){:target=\"_blank\"}\n"
# else if url:
# md += "\n[Access paper here](" + b["url"] + "){:target=\"_blank\"}\n"
# else:
# md += "\nUse [Google Scholar](https://scholar.google.com/scholar?q="+html.escape(clean_title.replace("-","+"))+"){:target=\"_blank\"} for full citation"
md_filename = os.path.basename(md_filename)
with open("../_publications/" + md_filename, 'w', encoding='utf-8') as f:
f.write(md)
            print(f'SUCCESSFULLY PARSED {bib_id}: \"', b["title"][:60],"..."*(len(b['title'])>60),"\"")
# field may not exist for a reference
except KeyError as e:
print(f'WARNING Missing Expected Field {e} from entry {bib_id}: \"', b["title"][:30],"..."*(len(b['title'])>30),"\"")
continue
| 0.114121 | 0.408041 |
# Record IO - Pack free-format data in binary files
This tutorial will walk through the Python interface for reading and writing
record io files. It can be useful when you need more control over the
details of the data pipeline, for example when you need to augment image and label
together for detection and segmentation, or when you need a custom data iterator
for triplet sampling and negative sampling.
Setup environment first:
```
%matplotlib inline
from __future__ import print_function
import mxnet as mx
import numpy as np
import matplotlib.pyplot as plt
```
The relevant code is under `mx.recordio`. There are two classes: `MXRecordIO`,
which supports sequential read and write, and `MXIndexedRecordIO`, which
supports random read and sequential write.
## MXRecordIO
First let's take a look at `MXRecordIO`. We open a file `tmp.rec` and write 5
strings to it:
```
record = mx.recordio.MXRecordIO('tmp.rec', 'w')
for i in range(5):
record.write('record_%d'%i)
record.close()
```
Then we can read it back by opening the same file with 'r':
```
record = mx.recordio.MXRecordIO('tmp.rec', 'r')
while True:
item = record.read()
if not item:
break
    print(item)
record.close()
```
## MXIndexedRecordIO
Sometimes you need random access for more complex tasks. `MXIndexedRecordIO` is
designed for this. Here we create an indexed record `tmp.rec` and a corresponding
index file `tmp.idx`:
```
record = mx.recordio.MXIndexedRecordIO('tmp.idx', 'tmp.rec', 'w')
for i in range(5):
record.write_idx(i, 'record_%d'%i)
record.close()
```
We can then access records with keys:
```
record = mx.recordio.MXIndexedRecordIO('tmp.idx', 'tmp.rec', 'r')
record.read_idx(3)
```
You can list all keys with:
```
record.keys
```
## Packing and Unpacking Data
Each record in a .rec file can contain arbitrary binary data, but machine
learning data typically has a label/data structure. `mx.recordio` also contains
a few utility functions for packing such data, namely: `pack`, `unpack`,
`pack_img`, and `unpack_img`.
### Binary Data
`pack` and `unpack` are used for storing float (or 1d array of float) label and
binary data:
- pack:
```
# pack
data = 'data'
label1 = 1.0
header1 = mx.recordio.IRHeader(flag=0, label=label1, id=1, id2=0)
s1 = mx.recordio.pack(header1, data)
print('float label:', repr(s1))
label2 = [1.0, 2.0, 3.0]
header2 = mx.recordio.IRHeader(flag=0, label=label2, id=2, id2=0)
s2 = mx.recordio.pack(header2, data)
print('array label:', repr(s2))
```
- unpack:
```
print(*mx.recordio.unpack(s1))
print(*mx.recordio.unpack(s2))
```
### Image Data
`pack_img` and `unpack_img` are used for packing image data. Records packed by
`pack_img` can be loaded by `mx.io.ImageRecordIter`.
- pack images
```
data = np.ones((3,3,1), dtype=np.uint8)
label = 1.0
header = mx.recordio.IRHeader(flag=0, label=label, id=0, id2=0)
s = mx.recordio.pack_img(header, data, quality=100, img_fmt='.jpg')
print(repr(s))
```
- unpack images
```
print(*mx.recordio.unpack_img(s))
```
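To connect this with the earlier remark that records packed by `pack_img` can be loaded by `mx.io.ImageRecordIter`, here is a hedged sketch (an addition; the file names, image size and batch size are illustrative assumptions, not from the original tutorial):
```
# Illustrative sketch: write a few packed images into an indexed record file,
# then read them back in batches with mx.io.ImageRecordIter.
rec = mx.recordio.MXIndexedRecordIO('img.idx', 'img.rec', 'w')
for i in range(4):
    img = np.random.randint(0, 255, (32, 32, 3)).astype(np.uint8)
    hdr = mx.recordio.IRHeader(flag=0, label=float(i), id=i, id2=0)
    rec.write_idx(i, mx.recordio.pack_img(hdr, img, quality=95, img_fmt='.jpg'))
rec.close()

data_iter = mx.io.ImageRecordIter(path_imgrec='img.rec', path_imgidx='img.idx',
                                  data_shape=(3, 32, 32), batch_size=2)
batch = next(iter(data_iter))
print(batch.data[0].shape, batch.label[0])
```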
|
github_jupyter
|
%matplotlib inline
from __future__ import print_function
import mxnet as mx
import numpy as np
import matplotlib.pyplot as plt
record = mx.recordio.MXRecordIO('tmp.rec', 'w')
for i in range(5):
record.write('record_%d'%i)
record.close()
record = mx.recordio.MXRecordIO('tmp.rec', 'r')
while True:
item = record.read()
if not item:
break
    print(item)
record.close()
record = mx.recordio.MXIndexedRecordIO('tmp.idx', 'tmp.rec', 'w')
for i in range(5):
record.write_idx(i, 'record_%d'%i)
record.close()
record = mx.recordio.MXIndexedRecordIO('tmp.idx', 'tmp.rec', 'r')
record.read_idx(3)
record.keys
# pack
data = 'data'
label1 = 1.0
header1 = mx.recordio.IRHeader(flag=0, label=label1, id=1, id2=0)
s1 = mx.recordio.pack(header1, data)
print('float label:', repr(s1))
label2 = [1.0, 2.0, 3.0]
header2 = mx.recordio.IRHeader(flag=0, label=label2, id=2, id2=0)
s2 = mx.recordio.pack(header2, data)
print('array label:', repr(s2))
print(*mx.recordio.unpack(s1))
print(*mx.recordio.unpack(s2))
data = np.ones((3,3,1), dtype=np.uint8)
label = 1.0
header = mx.recordio.IRHeader(flag=0, label=label, id=0, id2=0)
s = mx.recordio.pack_img(header, data, quality=100, img_fmt='.jpg')
print(repr(s))
print(*mx.recordio.unpack_img(s))
| 0.221603 | 0.953665 |
```
%matplotlib inline
```
# Tutorial 01: Particles and models
A particle system is an instance of one of the classes defined in the module :mod:`sisyphe.particles`.
- **Particles.** The basic class :class:`sisyphe.particles.Particles` defines a particle system by the positions.
- **Kinetic particles.** The class :class:`sisyphe.particles.KineticParticles` defines a particle system by the positions and the velocities.
- **Body-oriented particles.** The class :class:`sisyphe.particles.BOParticles` defines a particle system in 3D by the positions and the body-orientations, which are rotations in $SO(3)$ stored as quaternions.
A model is a subclass of a particle class. Several examples are defined in the module :mod:`sisyphe.models`. For example, let us create an instance of the Vicsek model :class:`sisyphe.models.Vicsek` which is a subclass of :class:`sisyphe.particles.KineticParticles`.
First, some standard imports...
```
import time
import torch
```
If CUDA is available, the computations will be done on the GPU, and on the CPU otherwise. The type of the tensors (single or double precision) is defined by the type of the initial conditions. Here and throughout the documentation, we work with single precision tensors.
```
use_cuda = torch.cuda.is_available()
dtype = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
```
We initially take $N$ particles uniformly scattered in a box of size $L$, with uniformly sampled directions of motion.
```
N = 10000
L = 100
pos = L*torch.rand((N,2)).type(dtype)
vel = torch.randn(N,2).type(dtype)
vel = vel/torch.norm(vel,dim=1).reshape((N,1))
```
Then we define the interaction radius $R$, the speed of the particles $c$ and the drift and diffusion coefficients, respectively $\nu$ and $\sigma$.
```
R = 5.
c = 1.
nu = 3.
sigma = 1.
```
We take a small discretisation time step.
```
dt = .01
```
Finally, we define an instance of the Vicsek model with these parameters.
```
from sisyphe.models import Vicsek
simu = Vicsek(
pos = pos,
vel = vel,
v = c,
sigma = sigma,
nu = nu,
interaction_radius = R,
box_size = L,
dt = dt)
```
<div class="alert alert-info"><h4>Note</h4><p>The boundary conditions are periodic by default, see `tuto_boundaryconditions`.</p></div>
So far, nothing has been computed. All the particles are implemented as Python iterators: in order to compute the next time step of the algorithm, we can call the method :meth:`__next__`. This method increments the iteration counter by one and updates all the relevant quantities (positions and velocities) by calling the method :meth:`update() <sisyphe.models.Vicsek.update>` which defines the model.
```
print("Current iteration: "+ str(simu.iteration))
simu.__next__()
print("Current iteration: "+ str(simu.iteration))
```
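Before moving on to the display helpers, here is a minimal sketch (an addition, not part of the original tutorial) of advancing the simulation by hand, simply calling the iterator a fixed number of times:
```
# Illustrative sketch: advance the simulation 100 time steps of size dt manually.
for _ in range(100):
    simu.__next__()
print("Iteration after manual loop: " + str(simu.iteration))
```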
On a longer time interval, we can use the methods in the module :mod:`sisyphe.display`. For instance, let us fix a list of time frames.
```
frames = [5., 10., 30., 50., 75., 100]
```
Using the method :meth:`sisyphe.display.display_kinetic_particles`, the simulation will run until the last time in the list :data:`frames`. The method also displays a scatter plot of the particle system at each of the times specified in the list and finally computes and plots the order parameter.
```
from sisyphe.display import display_kinetic_particles
s = time.time()
it, op = display_kinetic_particles(simu, frames, order=True)
e = time.time()
```
Print the total simulation time and the average time per iteration.
```
print('Total time: '+str(e-s)+' seconds')
print('Average time per iteration: '+str((e-s)/simu.iteration)+' seconds')
```
|
github_jupyter
|
%matplotlib inline
import time
import torch
use_cuda = torch.cuda.is_available()
dtype = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
N = 10000
L = 100
pos = L*torch.rand((N,2)).type(dtype)
vel = torch.randn(N,2).type(dtype)
vel = vel/torch.norm(vel,dim=1).reshape((N,1))
R = 5.
c = 1.
nu = 3.
sigma = 1.
dt = .01
from sisyphe.models import Vicsek
simu = Vicsek(
pos = pos,
vel = vel,
v = c,
sigma = sigma,
nu = nu,
interaction_radius = R,
box_size = L,
dt = dt)
print("Current iteration: "+ str(simu.iteration))
simu.__next__()
print("Current iteration: "+ str(simu.iteration))
frames = [5., 10., 30., 50., 75., 100]
from sisyphe.display import display_kinetic_particles
s = time.time()
it, op = display_kinetic_particles(simu, frames, order=True)
e = time.time()
print('Total time: '+str(e-s)+' seconds')
print('Average time per iteration: '+str((e-s)/simu.iteration)+' seconds')
| 0.372277 | 0.990651 |
# Create MODFLOW grid-based GeoTiff file
This notebook creates a GeoTiff raster file in which the pixels correspond to model grid cells. Rotated grids are allowed; however, at this time, cells must be square. This requirement could be relaxed in the future, but rasters usually are composed of square pixels in most GIS software. Although MODFLOW grids won't allow it, the method can be used for skewed pixels as well.
The user needs to have a polygon shapefile of the model boundary (rectangular). The shapefile can contain multiple polygons that together define the model grid outline. The projection of the model grid is read from the shapefile .prj file. With a little coding, the projection could also be supplied as an EPSG code.
The pixels are coded to take the value of the ibound array in the layer specified in the variable `ib2use`. This could be changed to take the value of any model quantity.
```
__author__ = 'Jeff Starn'
%matplotlib notebook
from ipywidgets import interact, Dropdown
from IPython.display import display
import os
import sys
import numpy as np
import matplotlib.pyplot as plt
import geopandas as gp
import gdal
gdal.UseExceptions()
import ogr
import osr
import flopy as fp
```
The next cell contains user-supplied information. The `homes` variable is a list of directories that contain one or more MODFLOW name files. The directories in this list will be scanned and a list of MODFLOW files with their paths will be created. The user can select from this list in a subsequent cell.
```
homes = ['../Models']
mfpth = '../executables/MODFLOW-NWT_1.0.9/bin/MODFLOW-NWT_64.exe'
# give the base name (no file extension) of the model grid shapefile
model_outline = 'fzmg_model_outline'
model_outline = 'SIR2016_5076'
ib2use = 0
```
Scan the directories in `homes` looking for name files.
```
dir_list = []
mod_list = []
i = 0
for home in homes:
if os.path.exists(home):
for dirpath, dirnames, filenames in os.walk(home):
for f in filenames:
if os.path.splitext(f)[-1] == '.nam':
mod = os.path.splitext(f)[0]
mod_list.append(mod)
dir_list.append(os.path.join(dirpath, f))
i += 1
print(' {} models read'.format(i))
```
Choose a name file from this list.
```
model_area = Dropdown(
options=mod_list,
description='Model:',
background_color='cyan',
border_color='black',
border_width=2)
display(model_area)
```
Make path names etc. from the selected model.
```
model = model_area.value
nam_path = [item for item in dir_list if model in item][0]
nam_file = os.path.basename(nam_path)
model_ws = os.path.dirname(nam_path)
new_ws = os.path.join(model_ws, 'WEL')
geo_ws = os.path.dirname(model_ws)
print("working model is {}".format(nam_path))
# the following information can be input directly or read from flopy
# NACP model
# delr = delc = 5280
# nrow = 250
# ncol = 500
# Fall Zone model
# delr = delc = 1056
# nrow = 750
# ncol = 250
```
Read the model using FLOPY. Only the BAS and DIS packages need to be read to create a basic GeoTiff.
```
print ('Reading model information')
fpmg = fp.modflow.Modflow.load(nam_file, model_ws=model_ws, exe_name=mfpth, version='mfnwt',
load_only=['DIS', 'BAS6'], check=False)
dis = fpmg.get_package('DIS')
bas = fpmg.get_package('BAS6')
delr = dis.delr
delc = dis.delc
nlay = dis.nlay
nrow = dis.nrow
ncol = dis.ncol
hnoflo = bas.hnoflo
ibound = np.asarray(bas.ibound.get_value())
print (' ... done')
```
Functions used in the notebook
```
def get_minmax(g):
'''This function extracts x and y values from a polygon
and finds the coordinate pairs at extreme values.
g : Shapely Polygon or MultiPolygon object
returns: array of (x, y) pairs at extreme values'''
x, y = np.array(list(zip(*g.boundary.coords[:])))
return find_minmax(x, y)
def find_minmax(x, y):
'''This function finds the pairs of coordinates at each extreme value.
x, y : array of single coordinates, x and y
returns: array of (x, y) pairs at extreme values'''
ximin = np.argmin(x)
ximax = np.argmax(x)
yimin = np.argmin(y)
yimax = np.argmax(y)
return np.array(((x[ximin], y[ximin]),
(x[yimax], y[yimax]),
(x[ximax], y[ximax]),
(x[yimin], y[yimin])))
```
### Find the corner points of an arbitrary rectangular shapefile (i.e., MODFLOW grid)
Read the shapefile
```
src = os.path.join(geo_ws, model_outline)
basin = gp.read_file(src + '.shp')
```
Read the shapefile's projection file (`.prj`) and convert it to other formats; the SRS object provides the conversion methods.
```
# Read the projection associated with the shapefile (in ESRI WKT format).
with open(src + '.prj', 'r') as f:
prj = f.readlines()
# Convert the projection to Proj.4 (for geopandas and matplotlib) and WKT
# (for open source geotiff file)
srs = osr.SpatialReference()
srs.ImportFromESRI(prj)
prj4 = srs.ExportToProj4()
wkt = srs.ExportToWkt()
# initialize with dummy array so that new arrays of the same shape can be appended
arr = np.zeros((1, 2))
# loop through all the geometries in the source shapefile and
# append the pairs of coordinates at extreme values
for geom in basin.geometry:
if geom.type == 'Polygon':
arr = np.append(arr, get_minmax(geom), axis=0)
elif geom.type == 'MultiPolygon':
for g in geom:
arr = np.append(arr, get_minmax(g), axis=0)
else:
print('unrecognized geometry type; should be Polygon or MultiPolygon')
# find the global set of coordinates at extreme values (corners)
pts = find_minmax(arr[1:, 0], arr[1:, 1])
```
Check for errors
```
LX = np.unique(delr)
LY = np.unique(delc)
assert LX.shape[0]==1, "grid spacing in delr is not uniform; can't use raster"
assert LY.shape[0]==1, "grid spacing in delc is not uniform; can't use raster"
assert LX==LY, "grid cells are not square; can't use raster"
L = LX
```
Process the corner points to find the origin with respect to the given `nrow` and `ncol` and the angle of grid rotation in radians from the positive x axis.
```
# Find the apex (ymax) of the grid.
ymax = np.argmax(pts[:, 1])
# Wrap (roll) the lines of the array around so that the apex is at the top of the array (first line).
pts = np.roll(pts, -ymax, axis=0)
# Add the first point to the end for calculating distances
pts = np.vstack((pts, pts[0, :]))
# Calculate the length of each side.
dc = np.diff(pts, axis=0)
hyp = np.hypot(dc[:, 0], dc[:, 1])
# angle in radians from positive x axis such that negative y values produce negative angles
da = np.arctan2(dc[:, 1], dc[:, 0])
```
Calculate the geotransformation coordinates for the raster
```
# the corner coordinates always have the ncol dimension to the right of the origin
if ncol <= nrow:
if hyp[0] <= hyp[3]:
origin = pts[0, :]
theta = da[0]
else:
origin = pts[3, :]
theta = da[3]
elif ncol > nrow:
if hyp[0] < hyp[3]:
origin = pts[3, :]
theta = da[3]
else:
origin = pts[0, :]
theta = da[0]
else:
assert np.isclose(hyp[0], hyp[3]), 'nrow = ncol but sides are not equal length'
A = L * np.cos(theta)
B = L * np.sin(theta)
D = L * np.sin(theta)
E = L * -np.cos(theta)
gt = [origin[0], A[0], B[0], origin[1], D[0], E[0]]
pts
ax = basin.plot()
ax.plot(arr[:,0], arr[:,1], marker='x', linestyle='None', **{'mec':'k', 'linewidth':1.0})
ax.plot(origin[0], origin[1], marker='o', linestyle='None', **{'mec':'k', 'linewidth':1.0})
```
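For reference, the six numbers in `gt` follow GDAL's affine geotransform convention, so a cell's `(row, col)` index maps to world coordinates as in the sketch below (the helper name is ours and is not used elsewhere in the workflow).
```
def cell_to_world(gt, row, col):
    '''Return the world (x, y) coordinate of the upper-left corner of a cell,
    using the GDAL affine convention: x = gt[0] + col*gt[1] + row*gt[2] and
    y = gt[3] + col*gt[4] + row*gt[5].'''
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Cell (0, 0) maps back to the grid origin.
print(cell_to_world(gt, 0, 0))
```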
Make the raster and save as a GeoTiff file
```
dst_file = os.path.join(geo_ws, 'model_grid.tif')
if os.path.exists(dst_file):
os.remove(dst_file)
driver = gdal.GetDriverByName("GTiff")
dst = driver.Create(dst_file, ncol, nrow, 1, gdal.GDT_Float32)
dst.SetProjection(wkt)
dst.SetGeoTransform(gt)
ba = dst.GetRasterBand(1)
no = ba.SetNoDataValue(0)
ar = ba.WriteArray(ibound[ib2use, :, :])
dst = None
driver = None
```
#1. Install Dependencies
First, install the libraries needed to execute recipes. This only needs to be done once; then click play.
```
!pip install git+https://github.com/google/starthinker
```
#2. Get Cloud Project ID
Running this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md). This only needs to be done once; then click play.
```
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
```
#3. Get Client Credentials
Reading from and writing to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md). This only needs to be done once; then click play.
```
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
```
#4. Enter GA360 Segmentology Parameters
GA360 funnel analysis using Census data.
1. Wait for <b>BigQuery->->->Census_Join</b> to be created.
1. Join the <a href='https://groups.google.com/d/forum/starthinker-assets' target='_blank'>StarThinker Assets Group</a> to access the following assets
1. Copy <a href='https://datastudio.google.com/c/u/0/reporting/3673497b-f36f-4448-8fb9-3e05ea51842f/' target='_blank'>GA360 Segmentology Sample</a>. Leave the Data Source as is; you will change it in the next step.
1. Click Edit Connection, and change to <b>BigQuery->->->Census_Join</b>.
1. Or give these instructions to the client.
Modify the values below for your use case. This can be done multiple times; then click play.
```
FIELDS = {
'auth_write': 'service', # Authorization used for writing data.
'auth_read': 'service', # Authorization for reading GA360.
'view': 'service', # View Id
'recipe_slug': '', # Name of Google BigQuery dataset to create.
}
print("Parameters Set To: %s" % FIELDS)
```
#5. Execute GA360 Segmentology
This does NOT need to be modified unless you are changing the recipe, click play.
```
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import commandline_parser
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dataset': {
'description': 'Create a dataset for bigquery tables.',
'hour': [
4
],
'auth': 'user',
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
}
},
{
'bigquery': {
'auth': 'user',
'function': 'Pearson Significance Test',
'to': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','order': 4,'default': '','description': 'Name of Google BigQuery dataset to create.'}}
}
}
},
{
'ga': {
'auth': 'user',
'kwargs': {
'reportRequests': [
{
'viewId': {'field': {'name': 'view','kind': 'string','order': 2,'default': 'service','description': 'View Id'}},
'dateRanges': [
{
'startDate': '90daysAgo',
'endDate': 'today'
}
],
'dimensions': [
{
'name': 'ga:userType'
},
{
'name': 'ga:userDefinedValue'
},
{
'name': 'ga:latitude'
},
{
'name': 'ga:longitude'
}
],
'metrics': [
{
'expression': 'ga:users'
},
{
'expression': 'ga:sessionsPerUser'
},
{
'expression': 'ga:bounces'
},
{
'expression': 'ga:timeOnPage'
},
{
'expression': 'ga:pageviews'
}
]
}
],
'useResourceQuotas': False
},
'out': {
'bigquery': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','order': 4,'default': '','description': 'Name of Google BigQuery dataset to create.'}},
'table': 'GA360_KPI'
}
}
}
},
{
'bigquery': {
'auth': 'user',
'from': {
'query': 'WITH GA360_SUM AS ( SELECT A.Dimensions.userType AS User_Type, A.Dimensions.userDefinedValue AS User_Value, B.zip_code AS Zip, SUM(Metrics.users) AS Users, SUM(Metrics.sessionsPerUser) AS Sessions, SUM(Metrics.timeOnPage) AS Time_On_Site, SUM(Metrics.bounces) AS Bounces, SUM(Metrics.pageviews) AS Page_Views FROM `{dataset}.GA360_KPI` AS A JOIN `bigquery-public-data.geo_us_boundaries.zip_codes` AS B ON ST_WITHIN(ST_GEOGPOINT(A.Dimensions.longitude, A.Dimensions.latitude), B.zip_code_geom) GROUP BY 1,2,3 ) SELECT User_Type, User_Value, Zip, Users, SAFE_DIVIDE(Users, SUM(Users) OVER()) AS User_Percent, SAFE_DIVIDE(Sessions, SUM(Sessions) OVER()) AS Impression_Percent, SAFE_DIVIDE(Time_On_Site, SUM(Time_On_Site) OVER()) AS Time_On_Site_Percent, SAFE_DIVIDE(Bounces, SUM(Bounces) OVER()) AS Bounce_Percent, SAFE_DIVIDE(Page_Views, SUM(Page_Views) OVER()) AS Page_View_Percent FROM GA360_SUM ',
'parameters': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
},
'legacy': False
},
'to': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be written in BigQuery.'}},
'view': 'GA360_KPI_Normalized'
}
}
},
{
'census': {
'auth': 'user',
'normalize': {
'census_geography': 'zip_codes',
'census_year': '2018',
'census_span': '5yr'
},
'to': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','order': 4,'default': '','description': 'Name of Google BigQuery dataset to create.'}},
'type': 'view'
}
}
},
{
'census': {
'auth': 'user',
'correlate': {
'join': 'Zip',
'pass': [
'User_Type',
'User_Value'
],
'sum': [
'Users'
],
'correlate': [
'User_Percent',
'Impression_Percent',
'Time_On_Site_Percent',
'Bounce_Percent',
'Page_View_Percent'
],
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','order': 4,'default': '','description': 'Name of Google BigQuery dataset to create.'}},
'table': 'GA360_KPI_Normalized',
'significance': 80
},
'to': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','order': 4,'default': '','description': 'Name of Google BigQuery dataset to create.'}},
'type': 'view'
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
```
```
import h5py
import numpy as np
import shutil
from misc_utils.tensor_sampling_utils import sample_tensors
# TODO: Set the path for the source weights file you want to load.
weights_source_path = 'trained_weights/VGG_coco_SSD_300x300_iter_400000.h5'
# TODO: Set the path and name for the destination weights file
# that you want to create.
weights_destination_path = 'trained_weights/VGG_coco_SSD_300x300_iter_400000_subsampled_5_classes.h5'
# Make a copy of the weights file.
shutil.copy(weights_source_path, weights_destination_path)
# Load both the source weights file and the copy we made.
# We will load the original weights file in read-only mode so that we can't mess up anything.
weights_source_file = h5py.File(weights_source_path, 'r')
weights_destination_file = h5py.File(weights_destination_path, 'r+') # originally there was no 'r+' parameter here; it is needed to modify the copy in place
classifier_names = ['conv4_3_norm_mbox_conf',
'fc7_mbox_conf',
'conv6_2_mbox_conf',
'conv7_2_mbox_conf',
'conv8_2_mbox_conf',
'conv9_2_mbox_conf']
conv4_3_norm_mbox_conf_kernel = weights_source_file[classifier_names[0]][classifier_names[0]]['kernel:0']
conv4_3_norm_mbox_conf_bias = weights_source_file[classifier_names[0]][classifier_names[0]]['bias:0']
print("Shape of the '{}' weights:".format(classifier_names[0]))
print()
print("kernel:\t", conv4_3_norm_mbox_conf_kernel.shape)
print("bias:\t", conv4_3_norm_mbox_conf_bias.shape)
n_classes_source = 81
# classes_of_interest = [0, 3, 8, 1, 2, 10, 4, 6, 12] # replaced with the fruit and vegetable classes
classes_of_interest = [0, 52, 53, 55, 56, 57] # banana, apple, orange, broccoli, carrot; 5 classes + background = 6 in total
subsampling_indices = []
for i in range(int(324/n_classes_source)):
indices = np.array(classes_of_interest) + i * n_classes_source
subsampling_indices.append(indices)
subsampling_indices = list(np.concatenate(subsampling_indices))
print(subsampling_indices)
# TODO: Set the number of classes in the source weights file. Note that this number must include
# the background class, so for MS COCO's 80 classes, this must be 80 + 1 = 81.
n_classes_source = 81
# TODO: Set the indices of the classes that you want to pick for the sub-sampled weight tensors.
# In case you would like to just randomly sample a certain number of classes, you can just set
# `classes_of_interest` to an integer instead of the list below. Either way, don't forget to
# include the background class. That is, if you set an integer, and you want `n` positive classes,
# then you must set `classes_of_interest = n + 1`.
# classes_of_interest = [0, 3, 8, 1, 2, 10, 4, 6, 12]
# classes_of_interest = 9 # Uncomment this in case you want to just randomly sub-sample the last axis instead of providing a list of indices.
classes_of_interest = [0, 52, 53, 55, 56, 57] # banana, apple, orange, broccoli, carrot, total ada 5 class + background = 6
for name in classifier_names:
# Get the trained weights for this layer from the source HDF5 weights file.
    kernel = weights_source_file[name][name]['kernel:0'][:] # originally read via .value (deprecated)
bias = weights_source_file[name][name]['bias:0'][:]
# Get the shape of the kernel. We're interested in sub-sampling
# the last dimension, 'o'.
height, width, in_channels, out_channels = kernel.shape
# Compute the indices of the elements we want to sub-sample.
# Keep in mind that each classification predictor layer predicts multiple
# bounding boxes for every spatial location, so we want to sub-sample
# the relevant classes for each of these boxes.
if isinstance(classes_of_interest, (list, tuple)):
subsampling_indices = []
for i in range(int(out_channels/n_classes_source)):
indices = np.array(classes_of_interest) + i * n_classes_source
subsampling_indices.append(indices)
subsampling_indices = list(np.concatenate(subsampling_indices))
elif isinstance(classes_of_interest, int):
subsampling_indices = int(classes_of_interest * (out_channels/n_classes_source))
else:
raise ValueError("`classes_of_interest` must be either an integer or a list/tuple.")
# Sub-sample the kernel and bias.
# The `sample_tensors()` function used below provides extensive
# documentation, so don't hesitate to read it if you want to know
# what exactly is going on here.
new_kernel, new_bias = sample_tensors(weights_list=[kernel, bias],
sampling_instructions=[height, width, in_channels, subsampling_indices],
axes=[[3]], # The one bias dimension corresponds to the last kernel dimension.
init=['gaussian', 'zeros'],
mean=0.0,
stddev=0.005)
# Delete the old weights from the destination file.
del weights_destination_file[name][name]['kernel:0']
del weights_destination_file[name][name]['bias:0']
# Create new datasets for the sub-sampled weights.
weights_destination_file[name][name].create_dataset(name='kernel:0', data=new_kernel)
weights_destination_file[name][name].create_dataset(name='bias:0', data=new_bias)
# Make sure all data is written to our output file before this sub-routine exits.
weights_destination_file.flush()
conv4_3_norm_mbox_conf_kernel = weights_destination_file[classifier_names[0]][classifier_names[0]]['kernel:0']
conv4_3_norm_mbox_conf_bias = weights_destination_file[classifier_names[0]][classifier_names[0]]['bias:0']
print("Shape of the '{}' weights:".format(classifier_names[0]))
print()
print("kernel:\t", conv4_3_norm_mbox_conf_kernel.shape)
print("bias:\t", conv4_3_norm_mbox_conf_bias.shape)
from tensorflow import keras
from tensorflow.keras.optimizers import Adam
from keras import backend as K
from keras.models import load_model
from models.keras_ssd300 import ssd_300
from keras_loss_function.keras_ssd_loss import SSDLoss
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_DecodeDetections import DecodeDetections
from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast
from keras_layers.keras_layer_L2Normalization import L2Normalization
from data_generator.object_detection_2d_data_generator import DataGenerator
from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels
from data_generator.object_detection_2d_patch_sampling_ops import RandomMaxCropFixedAR
from data_generator.object_detection_2d_geometric_ops import Resize
img_height = 300 # Height of the input images
img_width = 300 # Width of the input images
img_channels = 3 # Number of color channels of the input images
subtract_mean = [123, 117, 104] # The per-channel mean of the images in the dataset
swap_channels = [2, 1, 0] # The color channel order in the original SSD is BGR, so we should set this to `True`, but weirdly the results are better without swapping.
# TODO: Set the number of classes.
n_classes = 6 # changed # Number of positive classes, e.g. 20 for Pascal VOC, 80 for MS COCO
scales = [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05] # The anchor box scaling factors used in the original SSD300 for the MS COCO datasets.
# scales = [0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05] # The anchor box scaling factors used in the original SSD300 for the Pascal VOC datasets.
aspect_ratios = [[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5]] # The anchor box aspect ratios used in the original SSD300; the order matters
two_boxes_for_ar1 = True
steps = [8, 16, 32, 64, 100, 300] # The space between two adjacent anchor box center points for each predictor layer.
offsets = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5] # The offsets of the first anchor box center points from the top and left borders of the image as a fraction of the step size for each predictor layer.
clip_boxes = False # Whether or not you want to limit the anchor boxes to lie entirely within the image boundaries
variances = [0.1, 0.1, 0.2, 0.2] # The variances by which the encoded target coordinates are scaled as in the original implementation
normalize_coords = True
# 1: Build the Keras model
K.clear_session() # Clear previous models from memory.
model = ssd_300(image_size=(img_height, img_width, img_channels),
n_classes=n_classes,
mode='inference',
l2_regularization=0.0005,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
normalize_coords=normalize_coords,
subtract_mean=subtract_mean,
divide_by_stddev=None,
swap_channels=swap_channels,
confidence_thresh=0.5,
iou_threshold=0.45,
top_k=200,
nms_max_output_size=400,
return_predictor_sizes=False)
print("Model built.")
# 2: Load the sub-sampled weights into the model.
# Load the weights that we've just created via sub-sampling.
weights_path = weights_destination_path
model.load_weights(weights_path, by_name=True)
print("Weights file loaded:", weights_path)
# 3: Instantiate an Adam optimizer and the SSD loss function and compile the model.
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
```
```
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.feature_extraction import text
from sklearn.decomposition import LatentDirichletAllocation as LDA
import nltk
from nltk.stem import SnowballStemmer, WordNetLemmatizer
from nltk.tokenize import word_tokenize
from time import time
import csv
import os
import sys
root = os.getcwd()
sys.path.append("{root}/../..".format(root=root))
from utils.stopWords import stopWords
# CONSTANTS
datasetFilepath = "../../data/data.csv"
countTopics = 5
countTopWords = 10 # Only show the top 10 words in a topic
countFeatures = 100
ngramRange = (1, 1)
tokenPattern= r'(?u)\b[A-Za-z]+\b' # Only include letters, remove any numerical characters
maxReviewRating = 3 # Must be number between 1 and 5
maxReviews = 10000
customStopWords = text.ENGLISH_STOP_WORDS.union(stopWords)
minDF = 2 # Term must appear in at least 2 documents
maxDF = 0.95 # Term must occur in less than 95% of the documents
nltk.download('wordnet')
lemmatizer = WordNetLemmatizer()
tokenizer = lambda word: [lemmatizer.lemmatize(t) for t in word]
startTime = time()
corpus = []
with open(datasetFilepath, 'r') as file:
reader = csv.DictReader(file)
for index, row in enumerate(reader):
try:
if float(row["reviewRating"]) <= maxReviewRating:
corpus.append(row["reviewContent"])
except Exception as e:
print("Catching error: ", e)
pass
print("Data extraction completed in %f seconds" %
(time() - startTime))
print("data length: %s \n" % len(corpus))
countSamples = len(corpus)
startTime = time()
nltk.download('punkt')
# Create hashmap for slightly faster lookup
customStopWordsHashmap = { k: True for k in customStopWords }
validCorpus = []
for review in corpus:
validWords = []
for word in word_tokenize(review):
if word not in customStopWordsHashmap:
validWords.append(word)
validReview = " ".join(validWords)
validCorpus.append(validReview)
corpus = validCorpus
print("Filtering stop words completed in %f seconds" % (time() - startTime))
startTime = time()
TFVectorizer = CountVectorizer(
max_df=maxDF,
min_df=minDF,
max_features=countFeatures,
ngram_range=ngramRange,
token_pattern=tokenPattern,
stop_words=customStopWords
)
TF = TFVectorizer.fit_transform(corpus)
featureNames = TFVectorizer.get_feature_names()
print("Vectorization completed in %f seconds" % (time() - startTime))
# Fit the model
startTime = time()
print("Fitting the NMF model with countSamples=%d and countFeatures=%d \n" % (countSamples, countFeatures))
LDAModel = LDA(n_components=countTopics).fit(TF)
print("Completed model fitting in %f seconds" % (time() - startTime))
# Maps the indexes back to the featureName
for index, topic in enumerate(LDAModel.components_):
print("Topic %d:" % (index + 1))
print(", ".join([featureNames[i] for i in topic.argsort()[:-countTopWords - 1:-1]]))
```
```
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
#Assigning the dataframe url to a variable.
url_df = "https://github.com/Jefexon/Alura-Data-Immersion-3/blob/main/Data/data_experiments.zip?raw=true"
#Assigning the uncompressed csv dataframe to a variable.
df = pd.read_csv(url_df, compression = 'zip')
df.head()
```
### Note:
##### pandas.crosstab
```
pd.crosstab(df['dose'], df['duration'])
pd.crosstab([df['dose'], df['duration']], df['treatment'])
pd.crosstab([df['dose'], df['duration']], df['treatment'], normalize='index')
pd.crosstab([df['dose'], df['duration']], df['treatment'], values=df['g0'], aggfunc='mean')
df[['g0', 'g3']]
sns.scatterplot(x='g0', y= 'g3', data=df)
sns.scatterplot(x='g0', y= 'g8', data=df)
sns.lmplot(data=df, x='g0', y= 'g8', line_kws={'color':'red'} )
sns.lmplot(data=df, x='g0', y= 'g8', line_kws={'color':'red'}, col='treatment', row='duration' )
```
### Note:
#### correlation
When the result is close to **-1** or **+1**, the two variables have a **strong** (negative or positive, respectively) **linear relationship**; when it is close to **0**, the linear relationship is **weak**.
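A quick toy illustration (made-up data, not from the experiments file) of what strong and weak correlations look like:
```
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
a = rng.normal(size=1000)
toy = pd.DataFrame({
    'a': a,
    'b': 2 * a + rng.normal(scale=0.1, size=1000),  # nearly a linear function of a
    'c': rng.normal(size=1000),                     # independent of a
})
toy.corr().round(2)  # corr(a, b) is close to +1, corr(a, c) is close to 0
```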
```
df.loc[:, 'g0':'g771'].corr()
```
## Plotting a diagonal correlation matrix
https://seaborn.pydata.org/examples/many_pairwise_correlations.html
```
# Compute the correlation matrix
corr = df.loc[:, 'g0':'g50'].corr()
# Generate a mask for the upper triangle
mask = np.triu(np.ones_like(corr, dtype=bool))
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(230, 20, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, cmap=cmap, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
```
### Note:
##### The values in the *c#* columns are the viability of the cells (how many cells are alive).
```
# Compute the correlation matrix
corr_cel = df.loc[:, 'c0':'c50'].corr()
# Generate a mask for the upper triangle
mask = np.triu(np.ones_like(corr_cel, dtype=bool))
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(230, 20, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr_cel, mask=mask, cmap=cmap, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
```
### Challenge 1: Reproduce *pd.crosstab([df['dose'], df['duration']], df['treatment'], normalize='index')* but using *pandas.groupby()* (frequency table). A sketch is given after the list of challenges below.
### Challenge 2: Normalize columns using crosstab, so that the sum of the values in each column is 1.
### Challenge 3: Explore other options for *aggfunc=*
### Challenge 4: Explore *melt*
### Challenge 5: Calculate and analyze the correlation between *g#*s and *c#*s.
### Challenge 6: Study *Plotting a diagonal correlation matrix*
### Challenge 7: Summary
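A minimal sketch for Challenge 1, assuming the `df` loaded above: the row-normalized frequency table produced by `pd.crosstab(..., normalize='index')` can be rebuilt with `groupby`, an `unstack`, and a row-wise division.
```
# Challenge 1 sketch: crosstab(normalize='index') rebuilt with groupby
freq = (
    df.groupby(['dose', 'duration', 'treatment'])
      .size()                 # count rows per (dose, duration, treatment)
      .unstack('treatment')   # treatments become columns
      .fillna(0)
)
freq.div(freq.sum(axis=1), axis=0)  # each row now sums to 1
```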
# Lab 2
```
import numpy as np
from scipy import linalg
```
## Exercise 1
Given two one-dimensional NumPy arrays, `x` and `y`, build their Cauchy matrix `C` such that
(1 point)
$$
c_{ij} = \frac{1}{x_i - y_j}
$$
```
def cauchy_matrix(x, y):
m = x.shape[0]
n = y.shape[0]
C = np.empty(shape=(m, n))
for i in range(m):
for j in range(n):
C[i, j]=1/(x[i]-y[j])
return C
x = np.arange(10, 101, 10)
y = np.arange(5)
cauchy_matrix(x, y)
```
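As an aside (not required by the exercise), the same matrix can be built without explicit loops using NumPy broadcasting; a minimal sketch:
```
# Vectorized alternative: C[i, j] = 1 / (x[i] - y[j]) via broadcasting
def cauchy_matrix_vec(x, y):
    return 1.0 / (x[:, np.newaxis] - y[np.newaxis, :])

np.allclose(cauchy_matrix_vec(x, y), cauchy_matrix(x, y))  # True
```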
## Exercise 2
(1 point)
Implement matrix multiplication using two `for` loops. Verify that your implementation is correct, then compare its running time against NumPy's.
```
def my_mul(A, B):
m, n = A.shape
p, q = B.shape
if n != p:
raise ValueError("Las dimensiones de las matrices no calzan!")
C = np.empty(shape=(m, q))
for i in range(m):
for j in range(q):
C[i, j] = np.sum(A[i]*B[:, j])
return C
A = np.arange(15).reshape(-1, 5)
B = np.arange(20).reshape(5, -1)
my_mul(A, B)
# Validation
np.allclose(my_mul(A, B), A @ B)
%%timeit
my_mul(A, B)
%%timeit
A @ B
```
## Exercise 3
(1 point)
Create a function that prints every contiguous $3 \times 3$ block of a $5 \times 5$ matrix.
Hint: There should be 9 blocks!
```
def three_times_three_blocks(A):
m, n = A.shape
if m<3 or n<3:
return
counter = 1
for i in range(m-2):
for j in range(n-2):
block = A[i:i+3, j:j+3]
print(f"Block {counter}:")
print(block)
print("\n")
counter += 1
A = np.arange(1, 26).reshape(5, 5)
A
three_times_three_blocks(A)
```
## Exercise 4
(1 point)
Write your own implementation of the Hilbert matrix of order $n$, then compare its running time against [`scipy.linalg.hilbert`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.hilbert.html#scipy.linalg.hilbert). Finally, verify that the inverse of your implementation (computed with `linalg.inv`) is identical to the one obtained with [`scipy.linalg.invhilbert`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.invhilbert.html#scipy.linalg.invhilbert).
```
def my_hilbert(n):
H = np.empty((n, n))
for i in range(n):
for j in range(n):
H[i, j]=1/(i+j+1)
return H
n = 5
np.allclose(my_hilbert(n), linalg.hilbert(n))
%timeit my_hilbert(n)
%timeit linalg.hilbert(n)
# Check that the inverses match
np.allclose(np.linalg.inv(my_hilbert(n)), linalg.invhilbert(n))
```
Try it again with $n=10$. Does anything change? Why might that be?
```
n = 10
np.allclose(my_hilbert(n), linalg.hilbert(n))
%timeit my_hilbert(n)
%timeit linalg.hilbert(n)
# Check that the inverses match
np.allclose(np.linalg.inv(my_hilbert(n)), linalg.invhilbert(n))
```
__Answer:__ The timing difference changed, and the matrices are no longer equal. In theory, the function implemented in scipy may have a smaller asymptotic cost, while the hand-written program has smaller constant factors; so as the size grows the scipy implementation tends to become more efficient than the user's, whereas for small sizes it is the other way around. The non-equality of the i
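A short check that supports this (an addition of ours, using the libraries already imported): the Hilbert matrix is famously ill-conditioned, and its condition number grows explosively with $n$, so by $n=10$ the two inverse computations can no longer be expected to agree within `np.allclose`'s default tolerances.
```
# Condition number of the Hilbert matrix: it explodes as n grows,
# which is why the two computed inverses stop matching at n = 10.
for n in (5, 10):
    print(n, np.linalg.cond(linalg.hilbert(n)))
```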
```
# Including libraries into the R environment
library(tidyverse)
library(rvest)
library(dplyr)
#URL for Suicidal rates for 2016
url_suicide_rate <- "https://en.wikipedia.org/wiki/List_of_countries_by_suicide_rate"
#Fetching data from web page as an HTML table
data_suicide_rate <- url_suicide_rate %>%
read_html()%>%
html_nodes(xpath='//*[@id="mw-content-text"]/div/table[3]')%>%
html_table()
#Converting HTML table to a DataFrame
data_suicide_rate <- data_suicide_rate %>% as.data.frame() %>%
select(Country, Both.sexes, Male, Female) %>%
rename('Suicide Rates 2016 (per 100,000 People)' = Both.sexes,
'Suicide Rates 2016 (Males)' = Male,
'Suicide Rates 2016 (Females)' = Female)
#Printing the data frame
data_suicide_rate
#URL for Health Expenditure
url_health_expenditure <- 'https://en.wikipedia.org/wiki/List_of_countries_by_total_health_expenditure_per_capita'
#Fetching data from web page as an HTML table
data_health_expenditure <- url_health_expenditure %>%
read_html() %>%
html_nodes(xpath='//*[@id="mw-content-text"]/div/table[1]/tbody/tr /td[2]/table') %>%
html_table()
#Converting HTML table to a DataFrame
data_health_expenditure <- data_health_expenditure%>% as.data.frame() %>%
select(Country, X2016)
#Renaming column names of the Data Frame
data_health_expenditure <- data_health_expenditure%>%
rename('Health Expenditure 2016 (Per Capita in dollars)' = X2016)
#Changing character data type to numeric after removing the commas in the figures using gsub()
data_health_expenditure[['Health Expenditure 2016 (Per Capita in dollars)']] <-
gsub(",","",data_health_expenditure[['Health Expenditure 2016 (Per Capita in dollars)']]) %>%
as.numeric()
#Printing the data frame
data_health_expenditure
#Merging the data sets for Suicidal Rates and Health Expenditure for different countries using INNER JOIN
data_merged <- data_suicide_rate
data_merged <- inner_join(x= data_suicide_rate, y = data_health_expenditure, by = "Country")
#Printing the DataSet
data_merged
#Creating CSV for the above merged dataset
write.csv(data_merged, file="Health_Expenditure_Merged.csv")
# Visualizing the relation of suicidal rates with Health Expenditure for different countries using a bar plot
ggplot(data_merged, aes(Country, `Suicide Rates 2016 (per 100,000 People)`)) +#, fill=Country)) +
geom_bar(stat='identity', color='black', fill = 'cornflowerblue') +
theme(axis.text.x = element_text(angle=60, hjust=1)) +
geom_point(aes(y=`Health Expenditure 2016 (Per Capita in dollars)`/1000)) +ggtitle('Health Expenditure with Suicidal Rates')
```
Fig: The above graph shows the suicide rates (per 100,000 people) for different countries, with health expenditure overlaid as points on the bar chart. The pattern matches expectations for some countries but not for others. For Latvia, for example, health expenditure is low and the suicide rate is high, as expected; for Luxembourg, however, the suicide rate is not low even though health expenditure is high.
```
#Finding correlation between Suicidal rates and Health Expenditure
cor.test(data_merged[["Suicide Rates 2016 (per 100,000 People)"]],
data_merged[["Health Expenditure 2016 (Per Capita in dollars)"]])
#Now plotting the above correlation between two quantitative variables using a scatterplot
Health_Expenditure_2016 <- data_merged[["Health Expenditure 2016 (Per Capita in dollars)"]]/200
ggplot(data_merged, aes(x=Health_Expenditure_2016, y=`Suicide Rates 2016 (per 100,000 People)`)) +
geom_point( color="red")+
geom_smooth( fill="green") + ggtitle('CORRELATION - Suicide Rates v/s Health Expenditure')
```
Fig: The points are widely scattered on the scatterplot, and the correlation is too weak to be considered meaningful.
```
#URL for Happiness
url_happiness_index <- "https://en.wikipedia.org/wiki/World_Happiness_Report#2016_World_Happiness_Report"
#Fetching data from web page as an HTML table
data_happiness_index <- url_happiness_index%>%
read_html()%>%
html_nodes(xpath = '//*[@id="mw-content-text"]/div/table[1]')%>%
html_table()
#Converting HTML table to a DataFrame
data_happiness_index <- data_happiness_index%>% as.data.frame()
#Printing the DataFrame
data_happiness_index
# Merging the data set for Happiness Index with Suicidal rates of different countries and then selecting
# the relevant columns for visualization and renaming them with names that are more readable
data_merged_happiness <- inner_join(x= data_suicide_rate, y = data_happiness_index, by = "Country") %>% select(Country, Score, `Suicide Rates 2016 (per 100,000 People)`) %>%
rename('Happiness Score' = Score)
#Printing the DataSet
data_merged_happiness
#Creating CSV file for the merged dataset
write.csv(data_merged_happiness, file="Happiness_Score_Merged.csv")
# Visualizing the relation of suicidal rates with Happiness Score for different countries using a bar plot
data_happiness_plot <- data_merged_happiness%>% head(24)
ggplot(data_happiness_plot, aes(Country, `Suicide Rates 2016 (per 100,000 People)`)) +#, fill=Country)) +
geom_bar(stat='identity', color='black', fill = 'chocolate') +
theme(axis.text.x = element_text(angle=70, hjust=1)) +
geom_point(aes(y=`Happiness Score`), color="black") + ggtitle('Happiness Score with Suicidal Rates')
```
Fig: The above graph shows the suicide rates (per 100,000 people) for different countries, with the Happiness Score overlaid as points on the bar chart. The results here are closer to what was expected than in the plot of suicide rate against health expenditure. For Burundi, for example, the Happiness Score is low and the suicide rate is high, which is what was expected. However, for a country like Finland, the suicide rate is still high even though the Happiness Score is high. Thus, factors other than the Happiness Score need to be considered before reaching any conclusion about the number of suicides.
```
#Finding correlation between Suicidal rates and Happiness Score of different countries
cor.test(data_merged_happiness[["Suicide Rates 2016 (per 100,000 People)"]],
data_merged_happiness[["Happiness Score"]])
#As expected the correlation comes out to be negative
#Plotting the above correlation between the two quantitative variables, i.e., suicide rate and happiness score, using a scatterplot
ggplot(data_merged_happiness, aes(x=`Happiness Score`, y=`Suicide Rates 2016 (per 100,000 People)`)) +
geom_point( color="red")+
geom_smooth( fill="green") + ggtitle('NEGATIVE CORRELATION - Suicide Rates v/s Happiness Score')
```
The correlation between suicide rate and happiness score is not statistically significant. It is, however, negative, as expected: higher happiness scores should go with lower suicide rates, so the fitted line slopes downward.
```
#URL for GDP of different countries
url_GDP <- 'https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal)_per_capita'
#Fetching data from web page as an HTML table
data_GDP <- url_GDP %>% read_html() %>%
html_nodes(xpath='//*[@id="mw-content-text"]/div/table/tbody/tr[2]/td[3]/table') %>%
html_table()
#Converting HTML table to a DataFrame
data_GDP <- data_GDP %>% as.data.frame()
#Wrangling the data by removing the commas from the figures, which would otherwise cause problems in later computations
data_GDP$US. <- gsub(",","", data_GDP$US.)%>%
as.numeric()
#Renaming the columns for some readable names
data_GDP <- data_GDP %>% rename('GDP (in US dollars)'=US.)
#Printing the Data Set
data_GDP
#Merging the Health Expenditure DataSet with GDP DataSet so that we can do some further calculations
data_merged_temp <- inner_join(x= data_health_expenditure, y = data_GDP, by = "Country")
#Merging the above data set with the Suicidal Rate dataset to include the suicidal rates into that data set as well.
data_merged_GDP <- inner_join(x= data_suicide_rate, y= data_merged_temp, by="Country")
#Selecting the relevant columns using select()
data_merged_GDP <- data_merged_GDP %>% select(`Country`, `GDP (in US dollars)`, `Health Expenditure 2016 (Per Capita in dollars)`, `Suicide Rates 2016 (per 100,000 People)`)
#Calculating Health Expenditure as a percentage of the country's GDP using mutate()
# and adding these values as a separate column in the merged dataset
data_merged_GDP <- data_merged_GDP %>% mutate(`Health Expenditure (as %age of GDP)` = (data_merged_GDP$`Health Expenditure 2016 (Per Capita in dollars)` /`GDP (in US dollars)`)*100 )
#Printing the final dataset
data_merged_GDP
#Creating a CSV for the above merged dataset
write.csv(data_merged_GDP, file="GDP_Percentage_Merged.csv")
#Visualizing the relation of suicidal rates with Percentage of the GDP spent for the Health Expenditure
ggplot(data_merged_GDP, aes(Country, `Suicide Rates 2016 (per 100,000 People)`)) +
geom_bar(stat='identity', color='red') +
theme(axis.text.x = element_text(angle=70, hjust=1)) +
geom_point(aes(y=`Health Expenditure (as %age of GDP)`)) + ggtitle('%age of GDP with Suicidal Rates')
```
Fig: The bar chart shows suicide rates (per 100,000 people) for different countries, with health expenditure as a percentage of GDP overlaid as points. Hungary and Poland stand out as unexpected cases: both spend a high share of GDP on healthcare yet still have high suicide rates.
```
#Finding correlation between Suicidal rates and Percentage of the GDP spent for the Health Expenditure
cor.test(data_merged_GDP[["Suicide Rates 2016 (per 100,000 People)"]],
data_merged_GDP[["Health Expenditure (as %age of GDP)"]])
# Plotting the above correlation between the two quantitative variables,
# i.e., suicide rate and health expenditure (as a % of GDP), using a scatterplot
ggplot(data_merged_GDP, aes(x=`Health Expenditure (as %age of GDP)`, y=`Suicide Rates 2016 (per 100,000 People)`)) +
geom_point( color="red")+
geom_smooth( fill="green") +
ggtitle('CORRELATION - Suicide Rates v/s Health Expenditure (as %age of GDP)')
```
Fig: The correlation is not significant; other factors likely influence the suicide rate.
# Power to Gas with Heat Coupling
This is an example of power-to-gas with optional coupling to the heat sector (via a boiler OR a Combined-Heat-and-Power (CHP) unit).
A location has an electric, a gas and a heat bus. The primary source is wind power, which can be converted to gas. The gas can be stored and later converted back into electricity or into heat (with either a boiler or a CHP).
```
import pypsa
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from pyomo.environ import Constraint
%matplotlib inline
```
## Combined-Heat-and-Power (CHP) parameterisation
This setup follows http://www.ea-energianalyse.dk/reports/student-reports/integration_of_50_percent_wind%20power.pdf pages 35-6 which follows http://www.sciencedirect.com/science/article/pii/030142159390282K
```
# ratio between max heat output and max electric output
nom_r = 1.0
# backpressure limit
c_m = 0.75
# marginal loss for each additional generation of heat
c_v = 0.15
```
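With these parameters the feasible CHP operating region is bounded below by the backpressure line, P_elec_out >= c_m * P_heat_out, and above by the top iso-fuel line, P_elec_out + c_v * P_heat_out <= P_elec_nom (the plot below normalises the electric output to its nominal value). These are the same constraints added to the optimisation model further down.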
Graph for the case that max heat output equals max electric output
```
fig, ax = plt.subplots(figsize=(9, 5))
t = 0.01
ph = np.arange(0, 1.0001, t)
ax.plot(ph, c_m * ph)
ax.set_xlabel("P_heat_out")
ax.set_ylabel("P_elec_out")
ax.grid(True)
ax.set_xlim([0, 1.1])
ax.set_ylim([0, 1.1])
ax.text(0.1, 0.7, "Allowed output", color="r")
ax.plot(ph, 1 - c_v * ph)
for i in range(1, 10):
k = 0.1 * i
x = np.arange(0, k / (c_m + c_v), t)
ax.plot(x, k - c_v * x, color="g", alpha=0.5)
ax.text(0.05, 0.41, "iso-fuel-lines", color="g", rotation=-7)
ax.fill_between(ph, c_m * ph, 1 - c_v * ph, facecolor="r", alpha=0.5)
fig.tight_layout()
```
## Optimisation
```
network = pypsa.Network()
network.set_snapshots(pd.date_range("2016-01-01 00:00", "2016-01-01 03:00", freq="H"))
network.add("Bus", "0", carrier="AC")
network.add("Bus", "0 gas", carrier="gas")
network.add("Carrier", "wind")
network.add("Carrier", "gas", co2_emissions=0.2)
network.add("GlobalConstraint", "co2_limit", sense="<=", constant=0.0)
network.add(
"Generator",
"wind turbine",
bus="0",
carrier="wind",
p_nom_extendable=True,
p_max_pu=[0.0, 0.2, 0.7, 0.4],
capital_cost=1000,
)
network.add("Load", "load", bus="0", p_set=5.0)
network.add(
"Link",
"P2G",
bus0="0",
bus1="0 gas",
efficiency=0.6,
capital_cost=1000,
p_nom_extendable=True,
)
network.add(
"Link",
"generator",
bus0="0 gas",
bus1="0",
efficiency=0.468,
capital_cost=400,
p_nom_extendable=True,
)
network.add("Store", "gas depot", bus="0 gas", e_cyclic=True, e_nom_extendable=True)
```
Add heat sector
```
network.add("Bus", "0 heat", carrier="heat")
network.add("Carrier", "heat")
network.add("Load", "heat load", bus="0 heat", p_set=10.0)
network.add(
"Link",
"boiler",
bus0="0 gas",
bus1="0 heat",
efficiency=0.9,
capital_cost=300,
p_nom_extendable=True,
)
network.add("Store", "water tank", bus="0 heat", e_cyclic=True, e_nom_extendable=True)
```
Add CHP constraints
```
# Guarantees ISO fuel lines, i.e. fuel consumption p_b0 + p_g0 = constant along p_g1 + c_v p_b1 = constant
network.links.at["boiler", "efficiency"] = (
network.links.at["generator", "efficiency"] / c_v
)
def extra_functionality(network, snapshots):
# Guarantees heat output and electric output nominal powers are proportional
network.model.chp_nom = Constraint(
rule=lambda model: network.links.at["generator", "efficiency"]
* nom_r
* model.link_p_nom["generator"]
== network.links.at["boiler", "efficiency"] * model.link_p_nom["boiler"]
)
# Guarantees c_m p_b1 \leq p_g1
def backpressure(model, snapshot):
return (
c_m
* network.links.at["boiler", "efficiency"]
* model.link_p["boiler", snapshot]
<= network.links.at["generator", "efficiency"]
* model.link_p["generator", snapshot]
)
network.model.backpressure = Constraint(list(snapshots), rule=backpressure)
# Guarantees p_g1 +c_v p_b1 \leq p_g1_nom
def top_iso_fuel_line(model, snapshot):
return (
model.link_p["boiler", snapshot] + model.link_p["generator", snapshot]
<= model.link_p_nom["generator"]
)
network.model.top_iso_fuel_line = Constraint(
list(snapshots), rule=top_iso_fuel_line
)
network.lopf(network.snapshots, extra_functionality=extra_functionality)
network.objective
```
## Inspection
```
network.loads_t.p
network.links.p_nom_opt
# CHP is dimensioned by the heat demand met in three hours when no wind
4 * 10.0 / 3 / network.links.at["boiler", "efficiency"]
# elec is set by the heat demand
28.490028 * 0.15
network.links_t.p0
network.links_t.p1
pd.DataFrame({attr: network.stores_t[attr]["gas depot"] for attr in ["p", "e"]})
pd.DataFrame({attr: network.stores_t[attr]["water tank"] for attr in ["p", "e"]})
pd.DataFrame({attr: network.links_t[attr]["boiler"] for attr in ["p0", "p1"]})
network.stores.loc["gas depot"]
network.generators.loc["wind turbine"]
network.links.p_nom_opt
```
Calculate the overall efficiency of the CHP
```
eta_elec = network.links.at["generator", "efficiency"]
r = 1 / c_m
# P_h = r*P_e
(1 + r) / ((1 / eta_elec) * (1 + c_v * r))
```
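On the backpressure line, P_heat = r * P_elec with r = 1/c_m. Fuel consumption is (P_elec + c_v * P_heat) / eta_elec, so the overall efficiency (P_elec + P_heat) / fuel reduces to the expression evaluated above, (1 + r) / ((1 / eta_elec) * (1 + c_v * r)).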
```
import os
import json
import data_util
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import ipywidgets as widgets
from IPython.display import display, clear_output, Markdown, Image, HTML
PID_MAP = '/home/psd2120/research/data/page_id_map.json'
EVAL1_TXT = '/home/psd2120/research/data/eval.txt'
EVAL1_IMG_DIR = '../data/eval1/'
LABELS_TXT = '/home/psd2120/research/data/labels.txt'
with open(EVAL1_TXT, 'r') as f:
eval_fnames = f.read().splitlines()
with open(PID_MAP, 'r') as f:
pid_map = json.load(f)
# Map page id (directory name) -> evaluation image filename
pid2img = dict()
for fname in eval_fnames:
splt = fname.split('/')
pid2img[int(splt[-2])] = splt[-1]
# Map label index (1-based, as listed in labels.txt) -> page id string
label_pid = dict()
label_idx = 1
with tf.io.gfile.GFile(LABELS_TXT, 'r') as f:
labels = f.read().splitlines()
for label in labels:
label_pid[label_idx] = label
label_idx += 1
# Load the exported TF Hub SavedModel used for inference
model = hub.load('gs://eol-tfrc-tpu/chkpts/eol2020/baseline/eval1/ResNet50_2048/hub/55600/')
# Widgets: upload a .jpg, request the top-5 predictions, clear the output
labels_pid = widgets.FileUpload(accept='.jpg', multiple=False)
show_preds = widgets.Button(description="Get Preds")
clear = widgets.Button(description="Clear")
output = widgets.Output()
def on_clear_clicked(b):
with output:
clear_output()
def on_show_preds_clicked(b):
on_clear_clicked(b)
with output:
file = labels_pid.value
fname, val = file.popitem()
img = tf.image.decode_jpeg(val['content'], channels=3)
img = data_util.preprocess_image(img, 224, 224, is_training=False,\
color_distort=True, test_crop=True)
img = tf.expand_dims(img, axis=0)
logits = model.signatures['default'](tf.convert_to_tensor(img))['logits_sup']
preds_conf, preds_idx = tf.nn.top_k(tf.nn.softmax(logits),k=5)
preds_conf = preds_conf.numpy().tolist()[0]
preds_idx = preds_idx.numpy().tolist()[0]
# Get the image paths
pred_1_img = os.path.join(EVAL1_IMG_DIR, pid2img[int(label_pid[preds_idx[0]])])
pred_2_img = os.path.join(EVAL1_IMG_DIR, pid2img[int(label_pid[preds_idx[1]])])
pred_3_img = os.path.join(EVAL1_IMG_DIR, pid2img[int(label_pid[preds_idx[2]])])
pred_4_img = os.path.join(EVAL1_IMG_DIR, pid2img[int(label_pid[preds_idx[3]])])
pred_5_img = os.path.join(EVAL1_IMG_DIR, pid2img[int(label_pid[preds_idx[4]])])
# Prep for display
display(widgets.Image(value=val["content"], width=300, height=300))
td_pred_1 = "<td><img src=" + pred_1_img + " width='300' height='300'></td>"
td_pred_2 = "<td><img src=" + pred_2_img + " width='300' height='300'></td>"
td_pred_3 = "<td><img src=" + pred_3_img + " width='300' height='300'></td>"
td_pred_4 = "<td><img src=" + pred_4_img + " width='300' height='300'></td>"
td_pred_5 = "<td><img src=" + pred_5_img + " width='300' height='300'></td>"
tr_pid = "<tr><td>" + 'Pred PID->' + "</td><td>" + str(label_pid[preds_idx[0]]) + "</td><td>" +\
str(label_pid[preds_idx[1]]) + "</td><td>" + str(label_pid[preds_idx[2]]) +\
"</td><td>" + str(label_pid[preds_idx[3]]) +\
"</td><td>" + str(label_pid[preds_idx[4]]) + "</td></tr>"
tr_name = "<tr><td>" + 'canonicalName->' + "</td><td>" +\
pid_map[label_pid[preds_idx[0]]]['canonicalName'] + "</td><td>" +\
pid_map[label_pid[preds_idx[1]]]['canonicalName'] + "</td><td>" +\
pid_map[label_pid[preds_idx[2]]]['canonicalName'] + "</td><td>" +\
pid_map[label_pid[preds_idx[3]]]['canonicalName'] +\
"</td><td>" + pid_map[label_pid[preds_idx[4]]]['canonicalName'] + "</td></tr>"
tr_conf = "<tr><td>" + 'Softmax Prob. ->' + "</td><td>" + str(round(preds_conf[0],3)) + "</td><td>" +\
str(round(preds_conf[1],3)) + "</td><td>" + str(round(preds_conf[2],3)) + "</td><td>" +\
str(round(preds_conf[3],3)) + "</td><td>" + str(round(preds_conf[4],3)) + "</td></tr>"
tr = "<table><tr>" +\
'<td>Preds-></td>' + td_pred_1 + td_pred_2 +\
td_pred_3 + td_pred_4 + td_pred_5 +\
"</tr>" + tr_pid + tr_name + tr_conf + "</table>"
display(HTML(tr))
show_preds.on_click(on_show_preds_clicked)
clear.on_click(on_clear_clicked)
display(widgets.HBox((labels_pid, show_preds, clear)))
display(output)
```
### Set path to original pyNeuroChem. Please change to your own path
```
import sys
sys.path.append('/home/olexandr/notebooks/ASE_ANI/lib')
from ase_interface import ANI, ANID3, D3
import numpy as np
import time
import glob
import pandas as pd
# ASE
import ase
from ase.io import read, write
from ase.optimize import BFGS, LBFGS
from ase.calculators.mopac import MOPAC
#figure plotting
import matplotlib
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
Read geometry from xyz file
```
geometry = read('data/b2_opt.xyz')
geometry.set_calculator(ANI())
e = geometry.get_potential_energy()
print('Total ANI energy', e, 'eV')
geometry.set_calculator(ANID3())
e_ad = geometry.get_potential_energy()
print('Total ANI-D3 energy', e_ad, 'eV')
geometry.set_calculator(D3())
e_d3 = geometry.get_potential_energy()
print('Total D3 correction energy', e_d3, 'eV')
```
# Dimer scan from S66 database
```
base_path = 'data/S66x10/Geometries/'
slides = ['0.7', '0.8' , '0.9', '0.95', '1.0', '1.05', '1.1', '1.25', '1.5', '2.0']
shift = np.array( [0.7, 0.8, 0.9, 0.95, 1.0, 1.05, 1.1, 1.25, 1.5, 2.0])
#fname = 'S66by10_58_'
refQM = pd.read_csv('data/S66x10/ref_QM_data.csv')
```
Define structure number from S66 [1 to 66]
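For each scan point the interaction energy is evaluated as dE = E(dimer) - (E(monomer A) + E(monomer B)), and the plot converts it from eV to kcal/mol with the factor 23.06 (1 eV ≈ 23.06 kcal/mol).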
```
i = 1
fname = 'S66by10_' + str(i) + '_'
energies = []
energies_d3 = []
energies_mop = []
for sl in slides:
filename = base_path + fname + sl + '_dimer.xyz'
geometry = read(filename)
geometry.set_calculator(ANI())
energies.append(geometry.get_potential_energy())
geometry.set_calculator(ANID3())
energies_d3.append(geometry.get_potential_energy())
geometry.set_calculator(MOPAC(method='PM7'))
energies_mop.append(geometry.get_potential_energy())
filename = base_path + fname + '2.0_monomerA.xyz'
geometry1 = read(filename)
geometry1.set_calculator(ANI())
m1 = geometry1.get_potential_energy()
filename = base_path + fname + '2.0_monomerB.xyz'
geometry2 = read(filename)
geometry2.set_calculator(ANI())
m2 = geometry2.get_potential_energy()
filename = base_path + fname + '2.0_monomerA.xyz'
geometry1 = read(filename)
geometry1.set_calculator(ANID3())
m1d3 = geometry1.get_potential_energy()
filename = base_path + fname + '2.0_monomerB.xyz'
geometry2 = read(filename)
geometry2.set_calculator(ANID3())
m2d3 = geometry2.get_potential_energy()
filename = base_path + fname + '2.0_monomerA.xyz'
geometry1 = read(filename)
geometry1.set_calculator(MOPAC(method='PM7'))
m1_mop = geometry1.get_potential_energy()
filename = base_path + fname + '2.0_monomerB.xyz'
geometry2 = read(filename)
geometry2.set_calculator(MOPAC(method='PM7'))
m2_mop = geometry2.get_potential_energy()
dE = np.array(energies) - (m1+m2)
dE_d3 = np.array(energies_d3) - (m1d3+m2d3)
dE_mop = np.array(energies_mop) - (m1_mop+m2_mop)
best_qm = refQM[refQM['System #']==i]['Benchmark'].values
title = refQM[refQM['System #']==i]['System'].values[0]
mpl.rcParams['figure.figsize'] = (10.0, 7.0)
plt.figure()
plt.plot(shift, dE*23.06, label='ANI')
plt.plot(shift, dE_d3*23.06, label='ANI-D3')
plt.plot(shift, dE_mop*23.06, label='PM7')
plt.plot(shift, best_qm, label='Best QM')
plt.legend(fontsize=24)
sns.set(font_scale=1.0)
plt.tick_params(axis='both', which='major', labelsize=20)
plt.xlabel(r'$ R_{o}$', fontsize=28)
plt.ylabel(r'$ \Delta E\ \mathrm{(kcal/mol)}$', fontsize=28)
plt.title(title, fontsize=40)
#outfile =
#plt.savefig(outfile, bbox_inches="tight", dpi=300)
plt.show()
```
```
# Import Stuff
import os
import tensorflow as tf
from tensorflow import keras
import numpy as np
from tensorflow.keras import datasets, layers, models
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import cv2
from imgextract import Extractor
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from multiprocessing import Process
from IPython.display import clear_output
# All Parameters required for training are declared over here
# Frequency of Image Capturing
FRAME_SKIP = 2
# Frame Size
FRAME_SIZE = (150,150)
# Dataset
!rm -r A-Dataset-for-Automatic-Violence-Detection-in-Videos/
!git clone https://github.com/airtlab/A-Dataset-for-Automatic-Violence-Detection-in-Videos
!rm -r Data
!mkdir Data
!mkdir -p ./Data/Video/Violent
!mkdir -p ./Data/Video/NonViolent
!cp -a ./A-Dataset-for-Automatic-Violence-Detection-in-Videos/violence-detection-dataset/violent/cam1/. ./Data/Video/Violent/
!cp -a ./A-Dataset-for-Automatic-Violence-Detection-in-Videos/violence-detection-dataset/non-violent/cam1/. ./Data/Video/NonViolent/
clear_output()
!mkdir -p ./Data/Training/V
!mkdir -p ./Data/Training/NV
def thread_1():
ext = Extractor(FRAME_SIZE, FRAME_SKIP)
for i in range(60):
path = f"./Data/Video/Violent/{i+1}.mp4"
print(f"Processing Violent Vid-{i}")
ext.extract(path, 'V')
print("Violent Extracted")
def thread_2():
ext = Extractor(FRAME_SIZE, FRAME_SKIP)
for i in range(60):
        path = f"./Data/Video/NonViolent/{i+1}.mp4"  # relative path, consistent with thread_1
print(f"Processing NonViolent Vid-{i}")
ext.extract(path, 'NV')
print("Non-Violent Extracted")
# Violent Extraction
t1 = Process(target=thread_1, args=())
t2 = Process(target=thread_2, args=())
t1.start()
t2.start()
# NonViolent Extraction
t1.join()
t2.join()
print("Complete")
base_dir='./Data'
train_dir=os.path.join(base_dir,'Training')
train_violent_dir =os.path.join(train_dir, 'V' )
train_nonviolent_dir=os.path.join(train_dir,'NV')
train_datagen= ImageDataGenerator(rescale=1./255, rotation_range=40,width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2,horizontal_flip=True, fill_mode='nearest')
train_generator = train_datagen.flow_from_directory(train_dir,color_mode="rgb", target_size = FRAME_SIZE,batch_size=20,classes=['NV','V'], class_mode='binary', shuffle=True)
model= tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32,(3,3),activation='relu',input_shape=(150,150,3)),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(64,(3,3),activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128,(3,3),activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128,(3,3),activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1,activation ='sigmoid')
])
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model1=model.fit(train_generator,steps_per_epoch=50, epochs=30)
import time
t = time.time()
export_path_keras = "./{}.h5".format(int(t))
print(export_path_keras)
model.save(export_path_keras)
```
# Introduction to Regularization
<a href="https://drive.google.com/file/d/1EZ_xqMaYj77vErVnrQmnFOj-VBEoO5uW/view" target="_blank">
<img src="http://www.deltanalytics.org/uploads/2/6/1/4/26140521/screen-shot-2019-01-05-at-4-48-29-pm_orig.png" width="500" height="400">
</a>
In the context of regression, regularization refers to techniques to constrain/shrink the coefficient estimates towards zero.
Shrinking the coefficients can 1) improve the fit of the model and 2) reduce the variance of the coefficients.
Two common types of regularization are ridge and lasso.
Recall that least squares linear regression minimizes the residual sum of squares (RSS). In other words, it minimizes
$ RSS = \displaystyle \sum^{n}_{i=1} (y_i - \beta_0 - \sum^{p}_{j=1} \beta_j x_{ij})^2 $
In ridge and lasso, we add a term to the value we are trying to minimize.
In ridge, we minimize
$ RSS + \lambda \displaystyle \sum^{p}_{j=1} \beta_j^2 $
In lasso, we minimize
$ RSS + \lambda \displaystyle \sum^{p}_{j=1} |\beta_j| $
The $\lambda$ (pronounced "lambda") in the above values is a hyper-parameter which determines how 'strong' the regularization effect is. Note: sometimes $\alpha$ (pronounced "alpha") is used instead of $\lambda$.
A useful way to use ridge or lasso regression is to run the regression over a range of alphas and see which features maintain a large beta coefficient for the longest. It is these features which have the most predictive power!
More in depth information can be found here: [Regularization Regression](https://www.analyticsvidhya.com/blog/2016/01/complete-tutorial-ridge-lasso-regression-python/)
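To see the practical difference between the two penalties, here is a minimal sketch on synthetic data (not part of the analysis below; it uses scikit-learn's `Ridge` and `Lasso` with default settings): as the penalty grows, lasso drives small coefficients exactly to zero, while ridge only shrinks them.
```
# Minimal sketch: compare ridge and lasso shrinkage on synthetic data
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 3))                           # three features on the same scale
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=100)  # the third feature is pure noise

for alpha in [0.01, 0.1, 1.0]:
    ridge_coefs = Ridge(alpha=alpha).fit(X, y).coef_
    lasso_coefs = Lasso(alpha=alpha).fit(X, y).coef_
    print(f"alpha={alpha}: ridge={np.round(ridge_coefs, 3)}, lasso={np.round(lasso_coefs, 3)}")
```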
```
# Load python packages
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
plt.rcParams['figure.figsize'] = (12, 8)
sns.set()
sns.set(font_scale=1.5)
# packages for checking assumptions
from scipy import stats as stats
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, make_scorer
import statsmodels.api as sm
import statsmodels.formula.api as smf
# packages for regularization
from sklearn.linear_model import Lasso
from math import pow, sqrt
np.random.seed(1234)
```
2) Load Data
---
```
# Load data
path = '../data/'
filename = 'loans.csv'
df = pd.read_csv(path+filename)
# Alternatively, if you are using Colab, get the data by git cloning the Delta Analytics repository
!git clone https://github.com/DeltaAnalytics/machine_learning_for_good_data
df = pd.read_csv("machine_learning_for_good_data/loans.csv")
df.dtypes
# create indicator variables for country
for country in df['location_country_code'].unique():
if country is not np.nan:
df['country_'+country] = np.where(df.location_country_code == country, 1, 0)
# create indicator variables for sector
for sect in df['sector'].unique():
df['sector_'+sect] = np.where(df.sector == sect, 1, 0)
df.dtypes
pd.options.mode.chained_assignment = None # default='warn'
# Define the dependent variable
y = df['loan_amount']
# Define the independent variables
X = df[['lender_count', 'sector_Education', 'sector_Clothing',
'sector_Personal Use', 'sector_Retail', 'sector_Transportation', 'sector_Agriculture']]
# Add an intercept term to the independent variables
X['cnst'] = 1
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model1 = sm.OLS(endog = y_train,exog = X_train).fit()
print(model1.summary())
alphas = np.arange(0.001, 0.502, 0.002)
lasso_coefs = []
X_train_lasso= X_train[X_train.columns.tolist()] # Select columns / features for model
for a in alphas:
lassoreg = Lasso(alpha=a, copy_X=True, normalize=True)
lassoreg.fit(X_train_lasso, y_train)
lasso_coefs.append(lassoreg.coef_)
lasso_coefs = np.asarray(lasso_coefs).T
plt.figure(figsize=(14,10))
for coefs, feature in zip(lasso_coefs, X_train_lasso.columns):
plt.plot(alphas, coefs, label = feature)
plt.legend(loc='best')
plt.show()
```
Retail and Transportation go to 0 when alpha is 0.3. Let's try removing these from the model.
```
pd.options.mode.chained_assignment = None # default='warn'
# Define the dependent variable
y = df['loan_amount']
# Define the independent variables
X = df[['lender_count', 'sector_Education', 'sector_Clothing',
'sector_Personal Use', 'sector_Agriculture']]
# Add an intercept term to the independent variables
X['cnst'] = 1
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model2 = sm.OLS(endog = y_train,exog = X_train).fit()
print(model2.summary())
```
Even though we removed two dependent variables from the analysis, our R-squared and adjusted R-squared stayed the same. This means that the two variables we removed (Transportation and Retail) are less important to loan amount. The example above shows how we can use regularization for feature selection.
# Important facts about regularization
Recall that with least squares linear regression, the coefficients are scale equivariant. In other words, multiplying a feature by a constant $c$ simply leads to a scaling of the least squares coefficient estimate by a factor of 1/$c$.
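Concretely, if a feature is rescaled as $\tilde{x}_j = c\,x_j$, least squares returns $\tilde{\beta}_j = \hat{\beta}_j / c$, so the product $\tilde{\beta}_j \tilde{x}_j = \hat{\beta}_j x_j$ and the fitted values (and hence $R^2$) are unchanged.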
Let's demonstrate this fact by creating an example set of data that has three variables: 1) amount of money made at a restaurant in one day, 2) distance in meters to the nearest university, 3) distance in kilometers to the nearest hospital.
```
np.random.seed(1234)
earnings = np.random.normal(2000, 300, 50)
university_distances = np.random.normal(7000,2000,50)
hospital_distances = np.random.normal(7,2,50)
earnings = [a if a > 0 else -a for a in earnings]
university_distances = [a if a > 0 else -a for a in university_distances]
hospital_distances = [a if a > 0 else -a for a in hospital_distances]
df = pd.DataFrame({"earnings": sorted(earnings), "university": sorted(university_distances, reverse=True),
'hospital' : sorted(hospital_distances, reverse=True)})
df
# plot distance to nearest university (in meters) vs. earnings
ax = sns.regplot(x='earnings', y='university', data=df, fit_reg=False)
ax.set_title('Scatter plot of distance to nearest university (in meters) vs earnings')
# plot distance to nearest hospital (in kilometers) vs. earnings
ax = sns.regplot(x='earnings', y='hospital', data=df, fit_reg=False)
ax.set_title('Scatter plot of distance to nearest hospital (in kilometers) vs earnings')
```
Let's run a multivariate linear regression without scaling any variables and compare the results to a model where we standardize the distance variables to both use kilometers.
```
model1 = smf.ols(formula = 'earnings ~ university + hospital', data = df).fit()
print(model1.summary())
```
The R-squared is 0.938 and the Adjusted R-squared is 0.935. The coefficients for the intercept, university, and hospital are 3024.1009, -0.0643, and -76.3083. Now let's scale the university variable to be in kilometers instead of meters.
```
df_scaled = df.copy()
df_scaled['university'] = df_scaled['university']/1000
df_scaled
model2 = smf.ols(formula = 'earnings ~ university + hospital', data = df_scaled).fit()
print(model2.summary())
```
The R-squared is 0.938 and the Adjusted R-squared is 0.935. The coefficients for the intercept, university, and hospital are 3024.1009, -64.3473, and -76.3083. So we changed the university variable by scaling it by a constant and the resulting coefficient was scaled by the same constant. The p-values did not change and the coefficients on the other variables did not change.
What do you think scaling will do if we incorporate regularization by using lasso or ridge regression? Do you think scaling will have an effect on the coefficients of the variables?
<br>
<br>
<br>
<br>
Let's run lasso on our unscaled data and our scaled data and see what happens.
# Unscaled data
```
X = df[['university', 'hospital']]
y = df['earnings']
alphas = np.arange(0.001, 1, 0.002)
lasso_coefs = []
X_lasso= X[X.columns.tolist()] # Select columns / features for model
for a in alphas:
lassoreg = Lasso(alpha=a, copy_X=True, normalize=True)
lassoreg.fit(X_lasso, y)
lasso_coefs.append(lassoreg.coef_)
lasso_coefs = np.asarray(lasso_coefs).T
plt.figure(figsize=(14,10))
for coefs, feature in zip(lasso_coefs, X_lasso.columns):
plt.plot(alphas, coefs, label = feature)
plt.legend(loc='best')
plt.show()
```
The above plot shows the coefficients for the university and hospital variables at 0 and approximately -75, respectively. Would you keep or drop these variables from your model? Why?
<br>
<br>
<br>
<br>
# Scaled data
```
X = df_scaled[['university', 'hospital']]
y = df_scaled['earnings']
alphas = np.arange(0.001, 1, 0.002)
lasso_coefs = []
X_lasso= X[X.columns.tolist()] # Select columns / features for model
for a in alphas:
lassoreg = Lasso(alpha=a, copy_X=True, normalize=True)
lassoreg.fit(X_lasso, y)
lasso_coefs.append(lassoreg.coef_)
lasso_coefs = np.asarray(lasso_coefs).T
plt.figure(figsize=(14,10))
for coefs, feature in zip(lasso_coefs, X_lasso.columns):
plt.plot(alphas, coefs, label = feature)
plt.legend(loc='best')
plt.show()
```
The above plot shows the coefficient for the university and hospital variables are at around -64 and -76, respectively. Would you keep or drop these variables from your model? Why?
<br>
<br>
<br>
<br>
Clearly, scaling affects the coefficients and thus affects the results of lasso regression. Thus, it is best to apply regularization techniques like ridge and lasso after standardizing the predictors. You can standardize the predictors by applying the following formula:
$ \tilde{x}_{ij} = \frac{x_{ij}}{\sqrt{\frac{1}{n} \sum_{i=1}^{n} (x_{ij} - \bar{x}_{j})^2}} $
So now let's take the unscaled data and make a new dataset where we standardize the predictors.
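The cells below apply this formula by hand. An equivalent shortcut (a minimal sketch, assuming scikit-learn is acceptable here) is `StandardScaler(with_mean=False)`, which divides each column by its population standard deviation without centering it:
```
# Minimal sketch: scale-only standardization with scikit-learn, equivalent to the formula above
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler(with_mean=False)  # divide by the population std, do not subtract the mean
df_sklearn = df.copy()
df_sklearn[['university', 'hospital']] = scaler.fit_transform(df_sklearn[['university', 'hospital']])
```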
```
df_standardized = df.copy()
university_mean = df_standardized['university'].mean()
university_denom = sqrt(sum((df_standardized['university']-university_mean)**2)/len(df_standardized['university']))
hospital_mean = df_standardized['hospital'].mean()
hospital_denom = sqrt(sum((df_standardized['hospital']-hospital_mean)**2)/len(df_standardized['hospital']))
df_standardized['university'] = df_standardized['university']/university_denom
df_standardized['hospital'] = df_standardized['hospital']/hospital_denom
df_standardized
X = df_standardized[['university', 'hospital']]
y = df_standardized['earnings']
alphas = np.arange(0.001, 1, 0.002)
lasso_coefs = []
X_lasso= X[X.columns.tolist()] # Select columns / features for model
for a in alphas:
lassoreg = Lasso(alpha=a, copy_X=True, normalize=True)
lassoreg.fit(X_lasso, y)
lasso_coefs.append(lassoreg.coef_)
lasso_coefs = np.asarray(lasso_coefs).T
plt.figure(figsize=(14,10))
for coefs, feature in zip(lasso_coefs, X_lasso.columns):
plt.plot(alphas, coefs, label = feature)
plt.legend(loc='best')
plt.show()
```
Now that we've scaled our features, the coefficients are back to being within the same order of magnitude! Always remember to standardize the features when using regularization.
|
github_jupyter
|
# Load python packages
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
plt.rcParams['figure.figsize'] = (12, 8)
sns.set()
sns.set(font_scale=1.5)
# packages for checking assumptions
from scipy import stats as stats
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, make_scorer
import statsmodels.formula.api as sm
# packages for regularization
from sklearn.linear_model import Lasso
from math import pow, sqrt
np.random.seed(1234)
# Load data
path = '../data/'
filename = 'loans.csv'
df = pd.read_csv(path+filename)
# Alternatively, if you are using Colab, get the data by git cloning the Delta Analytics repository
!git clone https://github.com/DeltaAnalytics/machine_learning_for_good_data
df = pd.read_csv("machine_learning_for_good_data/loans.csv")
df.dtypes
# create indicator variables for country
for country in df['location_country_code'].unique():
if country is not np.nan:
df['country_'+country] = np.where(df.location_country_code == country, 1, 0)
# create indicator variables for sector
for sect in df['sector'].unique():
df['sector_'+sect] = np.where(df.sector == sect, 1, 0)
df.dtypes
pd.options.mode.chained_assignment = None # default='warn'
# Define the dependent variable
y = df['loan_amount']
# Define the independent variables
X = df[['lender_count', 'sector_Education', 'sector_Clothing',
'sector_Personal Use', 'sector_Retail', 'sector_Transportation', 'sector_Agriculture']]
# Add an intercept term to the independent variables
X['cnst'] = 1
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model1 = sm.OLS(endog = y_train,exog = X_train).fit()
print(model1.summary())
alphas = np.arange(0.001, 0.502, 0.002)
lasso_coefs = []
X_train_lasso= X_train[X_train.columns.tolist()] # Select columns / features for model
for a in alphas:
lassoreg = Lasso(alpha=a, copy_X=True, normalize=True)
lassoreg.fit(X_train_lasso, y_train)
lasso_coefs.append(lassoreg.coef_)
lasso_coefs = np.asarray(lasso_coefs).T
plt.figure(figsize=(14,10))
for coefs, feature in zip(lasso_coefs, X_train_lasso.columns):
plt.plot(alphas, coefs, label = feature)
plt.legend(loc='best')
plt.show()
pd.options.mode.chained_assignment = None # default='warn'
# Define the dependent variable
y = df['loan_amount']
# Define the independent variables
X = df[['lender_count', 'sector_Education', 'sector_Clothing',
'sector_Personal Use', 'sector_Agriculture']]
# Add an intercept term to the independent variables
X['cnst'] = 1
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model2 = sm.OLS(endog = y_train,exog = X_train).fit()
print(model2.summary())
np.random.seed(1234)
earnings = np.random.normal(2000, 300, 50)
university_distances = np.random.normal(7000,2000,50)
hospital_distances = np.random.normal(7,2,50)
earnings = [a if a > 0 else -a for a in earnings]
university_distances = [a if a > 0 else -a for a in university_distances]
hospital_distances = [a if a > 0 else -a for a in hospital_distances]
df = pd.DataFrame({"earnings": sorted(earnings), "university": sorted(university_distances, reverse=True),
'hospital' : sorted(hospital_distances, reverse=True)})
df
# plot distance to nearest university (in meters) vs. earnings
ax = sns.regplot(x='earnings', y='university', data=df, fit_reg=False)
ax.set_title('Scatter plot of distance to nearest university (in meters) vs earnings')
# plot distance to nearest hospital (in kilometers) vs. earnings
ax = sns.regplot(x='earnings', y='hospital', data=df, fit_reg=False)
ax.set_title('Scatter plot of distance to nearest hospital (in kilometers) vs earnings')
model1 = sm.ols(formula = 'earnings ~ university + hospital', data = df).fit()
print(model1.summary())
df_scaled = df.copy()
df_scaled['university'] = df_scaled['university']/1000
df_scaled
model2 = sm.ols(formula = 'earnings ~ university + hospital', data = df_scaled).fit()
print(model2.summary())
X = df[['university', 'hospital']]
y = df['earnings']
alphas = np.arange(0.001, 1, 0.002)
lasso_coefs = []
X_lasso= X[X.columns.tolist()] # Select columns / features for model
for a in alphas:
lassoreg = Lasso(alpha=a, copy_X=True, normalize=True)
lassoreg.fit(X_lasso, y)
lasso_coefs.append(lassoreg.coef_)
lasso_coefs = np.asarray(lasso_coefs).T
plt.figure(figsize=(14,10))
for coefs, feature in zip(lasso_coefs, X_lasso.columns):
plt.plot(alphas, coefs, label = feature)
plt.legend(loc='best')
plt.show()
X = df_scaled[['university', 'hospital']]
y = df_scaled['earnings']
alphas = np.arange(0.001, 1, 0.002)
lasso_coefs = []
X_lasso= X[X.columns.tolist()] # Select columns / features for model
for a in alphas:
lassoreg = Lasso(alpha=a, copy_X=True, normalize=True)
lassoreg.fit(X_lasso, y)
lasso_coefs.append(lassoreg.coef_)
lasso_coefs = np.asarray(lasso_coefs).T
plt.figure(figsize=(14,10))
for coefs, feature in zip(lasso_coefs, X_lasso.columns):
plt.plot(alphas, coefs, label = feature)
plt.legend(loc='best')
plt.show()
df_standardized = df.copy()
university_mean = df_standardized['university'].mean()
university_denom = sqrt(sum((df_standardized['university']-university_mean)**2)/len(df_standardized['university']))
hospital_mean = df_standardized['hospital'].mean()
hospital_denom = sqrt(sum((df_standardized['hospital']-hospital_mean)**2)/len(df_standardized['hospital']))
df_standardized['university'] = df_standardized['university']/university_denom
df_standardized['hospital'] = df_standardized['hospital']/hospital_denom
df_standardized
X = df_standardized[['university', 'hospital']]
y = df_standardized['earnings']
alphas = np.arange(0.001, 1, 0.002)
lasso_coefs = []
X_lasso= X[X.columns.tolist()] # Select columns / features for model
for a in alphas:
lassoreg = Lasso(alpha=a, copy_X=True, normalize=True)
lassoreg.fit(X_lasso, y)
lasso_coefs.append(lassoreg.coef_)
lasso_coefs = np.asarray(lasso_coefs).T
plt.figure(figsize=(14,10))
for coefs, feature in zip(lasso_coefs, X_lasso.columns):
plt.plot(alphas, coefs, label = feature)
plt.legend(loc='best')
plt.show()
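# Optional sketch (not part of the original analysis): the manual scaling above,
# which divides each column by its population standard deviation without centering,
# can also be done with sklearn's StandardScaler using with_mean=False.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler(with_mean=False)
df_sklearn_scaled = pd.DataFrame(scaler.fit_transform(df[['university', 'hospital']]),
                                 columns=['university', 'hospital'])
df_sklearn_scaled['earnings'] = df['earnings'].values
df_sklearn_scaled.head()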
| 0.806434 | 0.991263 |
```
import numpy as np
import functions
import functions_vectorized
import matplotlib.pyplot as plt
import imageio
import math
import sklearn
import sklearn.model_selection
import sklearn.linear_model
import pandas
```
Task #1
Product of all non-zero elements on the main diagonal of a matrix
```
#Non-vectorized
x = [[1, 0, 1], [2, 0, 2], [3, 0, 3], [4, 4, 4]]
print(functions.prod_non_zero_diag(x))
#Vectorized
print("------")
x = np.array([[1, 0, 1], [2, 0, 2], [3, 0, 3], [4, 4, 4]])
print(functions_vectorized.prod_non_zero_diag(x))
```
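The `functions_vectorized` module itself is not shown in this notebook, but as a rough sketch (an assumption about its behaviour, not the actual implementation), the vectorized version can be written with `np.diagonal`:
```
# Sketch only: mirrors what functions_vectorized.prod_non_zero_diag is expected to do
def prod_non_zero_diag_sketch(x):
    d = np.diagonal(np.asarray(x))   # main diagonal, works for non-square matrices too
    return d[d != 0].prod()          # product of the non-zero entries

print(prod_non_zero_diag_sketch([[1, 0, 1], [2, 0, 2], [3, 0, 3], [4, 4, 4]]))  # 3
```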
Task #2
Check whether x and y are the same multisets
```
#Non-vectorized
x = [1, 2, 2, 4]
y = [4, 2, 1, 2]
"""t1 = time.clock()
res = functions.are_multisets_equal(x, y)
t2 = time.clock()
t2 -= t1
print(res)"""
print(functions.are_multisets_equal(x, y))
#Vectorized
print("------")
x = np.array([1, 2, 2, 4])
y = np.array([4, 2, 1, 2])
"""tv1 = time.clock()
resv = functions_vectorized.are_multisets_equal(x, y)
tv2 = time.clock()
tv2 -= tv1"""
print(functions_vectorized.are_multisets_equal(x, y))
```
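As another hedged sketch (not necessarily how `functions_vectorized.are_multisets_equal` is implemented), the multiset check can be vectorized by sorting both arrays and comparing them element-wise:
```
# Sketch only
def are_multisets_equal_sketch(x, y):
    x, y = np.asarray(x), np.asarray(y)
    return x.shape == y.shape and bool(np.all(np.sort(x) == np.sort(y)))

print(are_multisets_equal_sketch([1, 2, 2, 4], [4, 2, 1, 2]))  # True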
Task #3
Print the largest element in the array that immediately follows a zero
```
#Non-vectorized
x = [6, 2, 0, 3, 0, 0, 5, 7, 0]
print(functions.max_after_zero(x))
#Vectorized
print("------")
x = np.array([6, 2, 0, 3, 0, 0, 5, 7, 0])
print(functions_vectorized.max_after_zero(x))
```
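A possible vectorized approach (a sketch, assuming the task is to return the largest value that immediately follows a zero) uses `np.where` to find the positions right after each zero:
```
# Sketch only
def max_after_zero_sketch(x):
    x = np.asarray(x)
    after_zero = np.where(x[:-1] == 0)[0] + 1   # indices right after each zero
    return x[after_zero].max()

print(max_after_zero_sketch([6, 2, 0, 3, 0, 0, 5, 7, 0]))  # 5
```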
Task #4
Convert an RGB image to grayscale using the given channel weights
```
img = imageio.imread('download.jpg', as_gray=False, pilmode="RGB")
coefs = [0.299, 0.587, 0.114]
source = plt.figure('Default')
plt.imshow(img)
casual=plt.figure('Non-vectorized')
plt.imshow(functions.convert_image(img, coefs), cmap='Greys_r')
vectorized=plt.figure('Vectorized')
plt.imshow(functions_vectorized.convert_image(img, coefs), cmap='Greys_r')
```
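For reference, a minimal grayscale conversion with the same channel weights is just a weighted sum over the RGB axis. This is a self-contained sketch using a small random demo image, not necessarily what `functions_vectorized.convert_image` does:
```
# Sketch only
def convert_image_sketch(img, weights):
    return np.asarray(img) @ np.asarray(weights)   # weighted sum over the last (RGB) axis

demo = np.random.randint(0, 256, size=(2, 2, 3))
print(convert_image_sketch(demo, [0.299, 0.587, 0.114]))
```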
Task #5
Run-length encoding of an array
```
x = [2, 2, 2, 3, 3, 3, 5]
functions.run_length_encoding(x)
#functions_vectorized.run_legth_encoding(x)
#https://stackoverflow.com/questions/1066758/find-length-of-sequences-of-identical-values-in-a-numpy-array-run-length-encodi
```
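Following the Stack Overflow approach linked above, a vectorized run-length encoding sketch that returns the run values and their lengths could look like this:
```
# Sketch only
def run_length_encoding_sketch(x):
    x = np.asarray(x)
    starts = np.concatenate(([0], np.where(x[1:] != x[:-1])[0] + 1))   # where each run begins
    lengths = np.diff(np.concatenate((starts, [len(x)])))
    return x[starts], lengths

print(run_length_encoding_sketch([2, 2, 2, 3, 3, 3, 5]))  # (array([2, 3, 5]), array([3, 3, 1]))
```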
Task #6
Pairwise distances between the rows of two matrices
```
a = [[1,2,3,4,5], [1,3,5,7,11]]
b = [[1,2,3,4,5], [10,30,50,70,110]]
print(functions.pairwise_distance(a, b))
print(functions_vectorized.pairwise_distance(a, b))
```
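A broadcasting-based sketch of pairwise Euclidean distances between the rows of `a` and `b` (an assumption about what `pairwise_distance` computes; the actual module functions may differ):
```
# Sketch only
def pairwise_distance_sketch(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a[:, None, :] - b[None, :, :]        # shape (len(a), len(b), n_features)
    return np.sqrt((diff ** 2).sum(axis=-1))

print(pairwise_distance_sketch(a, b))
```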
Part 2
Task 1
```
data = pandas.read_csv("data.csv")
data = data.fillna(data.mean())
print(data)
scores = pandas.read_csv("scores.csv")
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(data[1:], scores, test_size=0.2, random_state=0)
reg = sklearn.linear_model.LinearRegression().fit(X_train, y_train)
reg.score(X_test, y_test), reg.score(X_train, y_train) #compares x predicted output to known y which are true
attendance = pandas.read_csv("attendance.csv")
df = pandas.DataFrame([], columns=list('AB'))
print(df)
#df
for i in attendance:
x = attendance[i]
print(x)
#print(attendance[i])
attendance
```
|
github_jupyter
|
import numpy as np
import functions
import functions_vectorized
import matplotlib.pyplot as plt
import imageio
import math
import sklearn
import sklearn.model_selection
import sklearn.linear_model
import pandas
#Non-vectorized
x = [[1, 0, 1], [2, 0, 2], [3, 0, 3], [4, 4, 4]]
print(functions.prod_non_zero_diag(x))
#Vectorized
print("------")
x = np.array([[1, 0, 1], [2, 0, 2], [3, 0, 3], [4, 4, 4]])
print(functions_vectorized.prod_non_zero_diag(x))
#Non-vectorized
x = [1, 2, 2, 4]
y = [4, 2, 1, 2]
"""t1 = time.clock()
res = functions.are_multisets_equal(x, y)
t2 = time.clock()
t2 -= t1
print(res)"""
print(functions.are_multisets_equal(x, y))
#Vectorized
print("------")
x = np.array([1, 2, 2, 4])
y = np.array([4, 2, 1, 2])
"""tv1 = time.clock()
resv = functions_vectorized.are_multisets_equal(x, y)
tv2 = time.clock()
tv2 -= tv1"""
print(functions_vectorized.are_multisets_equal(x, y))
#Non-vectorized
x = [6, 2, 0, 3, 0, 0, 5, 7, 0]
print(functions.max_after_zero(x))
#Vectorized
print("------")
x = np.array([6, 2, 0, 3, 0, 0, 5, 7, 0])
print(functions_vectorized.max_after_zero(x))
img = imageio.imread('download.jpg', as_gray=False, pilmode="RGB")
coefs = [0.299, 0.587, 0.114]
source = plt.figure('Default')
plt.imshow(img)
casual=plt.figure('Non-vectorized')
plt.imshow(functions.convert_image(img, coefs), cmap='Greys_r')
vectorized=plt.figure('Vectorized')
plt.imshow(functions_vectorized.convert_image(img, coefs), cmap='Greys_r')
x = [2, 2, 2, 3, 3, 3, 5]
functions.run_length_encoding(x)
#functions_vectorized.run_legth_encoding(x)
#https://stackoverflow.com/questions/1066758/find-length-of-sequences-of-identical-values-in-a-numpy-array-run-length-encodi
a = [[1,2,3,4,5], [1,3,5,7,11]]
b = [[1,2,3,4,5], [10,30,50,70,110]]
print(functions.pairwise_distance(a, b))
print(functions_vectorized.pairwise_distance(a, b))
data = pandas.read_csv("data.csv")
data = data.fillna(data.mean())
print(data)
scores = pandas.read_csv("scores.csv")
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(data[1:], scores, test_size=0.2, random_state=0)
reg = sklearn.linear_model.LinearRegression().fit(X_train, y_train)
reg.score(X_test, y_test), reg.score(X_train, y_train) #compares x predicted output to known y which are true
attendance = pandas.read_csv("attendance.csv")
df = pandas.DataFrame([], columns=list('AB'))
print(df)
#df
for i in attendance:
x = attendance[i]
print(x)
#print(attendance[i])
attendance
| 0.367724 | 0.949856 |
```
import pandas as pd
import requests
from bs4 import BeautifulSoup
import time
#AUCTION_PRICES = 'https://www.psacard.com/auctionprices#2basketball%20cards%7Cbaseket%20ball'
AUCTION_PRICES = 'https://www.psacard.com/pop/basketball-cards/2018/panini-absolute-memorabilia-glass/164038'
from selenium import webdriver
# /mnt/c/Users/adity/Downloads/Intervie
driver = webdriver.Firefox(executable_path=r'/mnt/c/Users/adity/Downloads/Chrome/geckodriver-v0.27.0-win64/geckodriver.exe')
#'C:\Users\adity\Downloads\Chrome\geckodriver-v0.27.0-win64\geckodriver.exe')
driver.get(AUCTION_PRICES)
soup=BeautifulSoup(driver.page_source)
print(soup.prettify())
for link in soup.find_all('table'):
print (link.get('href',None), link.get_text())
import requests
import lxml.html as lh
import pandas as pd
page = requests.get(AUCTION_PRICES)
#Store the contents of the website under doc
doc = lh.fromstring(page.content)
#Parse data that are stored between <tr>..</tr> of HTML
tr_elements = doc.xpath('//tr')
page
gdp_table = soup.find("table", attrs={"class": "auction-summary-results"})
gdp_table_data = gdp_table.tbody.find_all("tr")
gdp_table_data
sess = requests.Session()
# sess.mount("https://", requests.adapters.HTTPAdapter(max_retries=5))
r = sess.get(AUCTION_PRICES)
r.raise_for_status()
html_content = r.text
soup = BeautifulSoup(r.text, 'lxml')
time.sleep(5)
print(soup.prettify())
a1 = soup.find_all("table")
a1
soup.findAll('table')[0].findAll('tr')
for tr in a1[0].tbody.find_all("tr"):  # a1 is a ResultSet, so index into the first table
    print(tr)
# (pasted HTML of the scraped "auction-summary-results" table omitted;
#  each row holds three cells: Item, Category, Items Found)
import math
from typing import Any, List, Union

def _get_image_urls(soup: Any) -> List[Union[str, float]]:  # noqa: D102
image_data = [n for n in soup.find_all("div", {"class": "item-image"})]
images: List[Union[str, float]] = []
for n in image_data:
html = str(n)
if "href" not in html:
images.append(math.nan)
continue
images.append(html.split('href="')[1].split('"')[0])
return images
```
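As a possible next step (a sketch that assumes `gdp_table_data` — the list of `<tr>` tags parsed above — is available), the scraped rows can be flattened into a pandas DataFrame:
```
rows = []
for tr in gdp_table_data:
    cells = tr.find_all("td")
    if len(cells) != 3:
        continue                     # skip header or malformed rows
    link = cells[0].find("a")
    rows.append({
        "item": cells[0].get_text(strip=True),
        "url": link["href"] if link else None,
        "category": cells[1].get_text(strip=True),
        "items_found": int(cells[2].get_text(strip=True)),
    })

items_df = pd.DataFrame(rows)
items_df.head()
```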
|
github_jupyter
|
import pandas as pd
import requests
from bs4 import BeautifulSoup
import time
#AUCTION_PRICES = 'https://www.psacard.com/auctionprices#2basketball%20cards%7Cbaseket%20ball'
AUCTION_PRICES = 'https://www.psacard.com/pop/basketball-cards/2018/panini-absolute-memorabilia-glass/164038'
from selenium import webdriver
# /mnt/c/Users/adity/Downloads/Intervie
driver = webdriver.Firefox(executable_path=r'/mnt/c/Users/adity/Downloads/Chrome/geckodriver-v0.27.0-win64/geckodriver.exe')
#'C:\Users\adity\Downloads\Chrome\geckodriver-v0.27.0-win64\geckodriver.exe')
driver.get(AUCTION_PRICES)
soup=BeautifulSoup(driver.page_source)
print(soup.prettify())
for link in soup.find_all('table'):
print (link.get('href',None), link.get_text())
import requests
import lxml.html as lh
import pandas as pd
page = requests.get(AUCTION_PRICES)
#Store the contents of the website under doc
doc = lh.fromstring(page.content)
#Parse data that are stored between <tr>..</tr> of HTML
tr_elements = doc.xpath('//tr')
page
gdp_table = soup.find("table", attrs={"class": "auction-summary-results"})
gdp_table_data = gdp_table.tbody.find_all("tr")
gdp_table_data
sess = requests.Session()
# sess.mount("https://", requests.adapters.HTTPAdapter(max_retries=5))
r = sess.get(AUCTION_PRICES)
r.raise_for_status()
html_content = r.text
soup = BeautifulSoup(r.text, 'lxml')
time.sleep(5)
print(soup.prettify())
a1 = soup.find_all("table")
a1
soup.findAll('table')[0].findAll('tr')
for tr in a1[0].tbody.find_all("tr"):  # a1 is a ResultSet, so index into the first table
    print(tr)
# (pasted HTML of the scraped "auction-summary-results" table omitted;
#  each row holds three cells: Item, Category, Items Found)
import math
from typing import Any, List, Union

def _get_image_urls(soup: Any) -> List[Union[str, float]]:  # noqa: D102
image_data = [n for n in soup.find_all("div", {"class": "item-image"})]
images: List[Union[str, float]] = []
for n in image_data:
html = str(n)
if "href" not in html:
images.append(math.nan)
continue
images.append(html.split('href="')[1].split('"')[0])
return images
| 0.211173 | 0.135718 |
# Recommender Systems with Python
Welcome to the code notebook for Recommender Systems with Python. In this lecture we will develop basic recommendation systems using Python and pandas. There is another notebook: *Advanced Recommender Systems with Python*. That notebook goes into more detail with the same data set.
In this notebook, we will focus on providing a basic recommendation system by suggesting items that are most __similar to a particular item__, in this case, movies. Keep in mind that this is not a true, robust recommendation system; to describe it more accurately, it simply tells you which movies/items are most similar to your movie choice.
Let's get started!
## Import Libraries
```
import numpy as np
import pandas as pd
```
## Get the Data
```
column_names = ['user_id', 'item_id', 'rating', 'timestamp']
df = pd.read_csv('u.data', sep='\t', names=column_names)
df.head()
```
Now let's get the movie titles:
```
movie_titles = pd.read_csv("Movie_Id_Titles")
movie_titles.head()
```
We can merge them together:
```
df = pd.merge(df,movie_titles,on='item_id')
df.head()
```
# EDA
Let's explore the data a bit and get a look at some of the best rated movies.
## Visualization Imports
```
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
%matplotlib inline
```
Let's create a ratings dataframe with average rating and number of ratings:
```
df.groupby('title')['rating'].mean().sort_values(ascending=False).head()
df.groupby('title')['rating'].count().sort_values(ascending=False).head()
ratings = pd.DataFrame(df.groupby('title')['rating'].mean())
ratings.head()
```
Now set the number of ratings column:
```
ratings['num of ratings'] = pd.DataFrame(df.groupby('title')['rating'].count())
ratings.head()
```
Now a few histograms:
```
plt.figure(figsize=(10,4))
ratings['num of ratings'].hist(bins=70)
plt.figure(figsize=(10,4))
ratings['rating'].hist(bins=70)
sns.jointplot(x='rating',y='num of ratings',data=ratings,alpha=0.5)
```
Okay! Now that we have a general idea of what the data looks like, let's move on to creating a simple recommendation system:
## Recommending Similar Movies
Now let's create a matrix that has the user ids on one axis and the movie titles on the other axis. Each cell will then hold the rating that the user gave to that movie. Note there will be a lot of NaN values, because most people have not seen most of the movies.
```
moviemat = df.pivot_table(index='user_id',columns='title',values='rating')
moviemat.head()
```
Most rated movie:
```
ratings.sort_values('num of ratings',ascending=False).head(10)
```
Let's choose two movies: Star Wars (a sci-fi movie) and Liar Liar (a comedy).
```
ratings.head()
```
Now let's grab the user ratings for those two movies:
```
starwars_user_ratings = moviemat['Star Wars (1977)']
liarliar_user_ratings = moviemat['Liar Liar (1997)']
print(starwars_user_ratings.head())
print(liarliar_user_ratings.head())
```
We can then use the corrwith() method to get correlations between two pandas Series. __corrwith computes the pairwise correlation between the rows or columns of two DataFrame (or Series) objects.__
```
similar_to_starwars = moviemat.corrwith(starwars_user_ratings); similar_to_starwars
similar_to_liarliar = moviemat.corrwith(liarliar_user_ratings); similar_to_liarliar
```
Let's clean this by removing NaN values and using a DataFrame instead of a series:
```
# correlations based on user ratings!!!
corr_starwars = pd.DataFrame(similar_to_starwars,columns=['Correlation'])
corr_starwars.dropna(inplace=True)
corr_starwars.head()
```
Now if we sort the dataframe by correlation, we should get the most similar movies. However, note that we get some results that don't really make sense. This is because a lot of movies were only watched once, by users who also happened to watch Star Wars (it was the most popular movie).
```
corr_starwars.sort_values('Correlation',ascending=False).head(10)
```
Let's fix this by filtering out movies that have fewer than 100 ratings (this threshold was chosen based on the histogram from earlier).
```
corr_starwars = corr_starwars.join(ratings['num of ratings'])
corr_starwars.head()
```
Now sort the values and notice how the titles make a lot more sense:
```
corr_starwars[corr_starwars['num of ratings']>100].sort_values('Correlation',ascending=False).head()
```
Now the same for the comedy Liar Liar:
```
corr_liarliar = pd.DataFrame(similar_to_liarliar,columns=['Correlation'])
corr_liarliar.dropna(inplace=True)
corr_liarliar = corr_liarliar.join(ratings['num of ratings'])
corr_liarliar[corr_liarliar['num of ratings']>100].sort_values('Correlation',ascending=False).head()
```
**Let's play around with the minimum number of ratings to tune the recommendations based on similarity, as sketched below.**
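One way to experiment with that threshold is to wrap the steps above into a function that takes the minimum number of ratings as a parameter. This is just a sketch: `recommend_similar` is a helper name introduced here, and it reuses the `moviemat` and `ratings` objects built earlier in the notebook.
```
def recommend_similar(movie_title, min_ratings=100, top_n=10):
    """Return the top_n titles whose user ratings correlate most strongly
    with movie_title, keeping only titles with at least min_ratings ratings."""
    similar = moviemat.corrwith(moviemat[movie_title])
    corr = pd.DataFrame(similar, columns=['Correlation']).dropna()
    corr = corr.join(ratings['num of ratings'])
    corr = corr[corr['num of ratings'] >= min_ratings]
    return corr.sort_values('Correlation', ascending=False).head(top_n)

# Lowering or raising the threshold changes how obscure the suggestions get
recommend_similar('Star Wars (1977)', min_ratings=50)
```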
# Kubeflow pipelines
**Learning Objectives:**
1. Learn how to deploy a Kubeflow cluster on GCP
1. Learn how to create an experiment in Kubeflow
1. Learn how to package your code into a Kubeflow pipeline
1. Learn how to run a Kubeflow pipeline in a repeatable and traceable way
## Introduction
In this notebook, we will first setup a Kubeflow cluster on GCP.
Then, we will create a Kubeflow experiment and a Kubeflow pipeline from our taxifare machine learning code. Finally, we will run the pipeline on the Kubeflow cluster, giving us a reproducible and traceable way to execute machine learning code.
```
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
pip freeze | grep kfp || pip install kfp
from os import path
import kfp
import kfp.compiler as compiler
import kfp.components as comp
import kfp.dsl as dsl
import kfp.gcp as gcp
import kfp.notebook
```
## Setup a Kubeflow cluster on GCP
**TODO 1**
To deploy a [Kubeflow](https://www.kubeflow.org/) cluster
in your GCP project, use the [AI Platform pipelines](https://console.cloud.google.com/ai-platform/pipelines):
1. Go to [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines) in the GCP Console.
1. Create a new instance
1. Hit "Configure"
1. Check the box "Allow access to the following Cloud APIs"
1. Hit "Create Cluster"
1. Hit "Deploy"
When the cluster is ready, go back to the AI Platform Pipelines page and click on the "SETTINGS" entry for your cluster.
This will bring up a pop-up with code snippets showing how to access the cluster
programmatically.
Copy the "host" entry and use it to set the `HOST` variable below.
```
HOST = "<KFP HOST>"
BUCKET = "<YOUR PROJECT>"
```
## Create an experiment
**TODO 2**
We will start by creating a Kubeflow client to pilot the Kubeflow cluster:
```
client = kfp.Client(host=HOST)
```
Let's look at the experiments that are running on this cluster. Since you just launched it, you should see only a single "Default" experiment:
```
client.list_experiments()
```
Now let's create a 'taxifare' experiment where we could look at all the various runs of our taxifare pipeline:
```
exp = client.create_experiment(name='taxifare')
```
Let's make sure the experiment has been created correctly:
```
client.list_experiments()
```
## Packaging your code into Kubeflow components
We have packaged our taxifare ml pipeline into three components:
* `./components/bq2gcs` that creates the training and evaluation data from BigQuery and exports it to GCS
* `./components/trainjob` that launches the training container on AI-platform and exports the model
* `./components/deploymodel` that deploys the trained model to AI-platform as a REST API
Each of these components has been wrapped into a Docker container, in the same way we did with the taxifare training code in the previous lab.
If you inspect the code in these folders, you'll notice that the `main.py` or `main.sh` files contain the code we previously executed in the notebooks (loading the data to GCS from BQ, or launching a training job to AI-platform, etc.). The last line in the `Dockerfile` tells you that these files are executed when the container is run.
So we have simply packaged our ML code into lightweight container images for reproducibility.
We have made it simple for you to build the container images and push them to the Google Cloud image registry gcr.io in your project:
```
# Builds the taxifare trainer container in case you skipped the optional part of lab 1
!taxifare/scripts/build.sh
# Pushes the taxifare trainer container to gcr.io
!taxifare/scripts/push.sh
# Builds the KF component containers and pushes them to gcr.io
!cd pipelines && make components
```
Now that the container images are pushed to the [registry in your project](https://console.cloud.google.com/gcr), we need to create yaml files describing to Kubeflow how to use these containers. It boils down essentially to
* describing what arguments Kubeflow needs to pass to the containers when it runs them
* telling Kubeflow where to fetch the corresponding Docker images
In the cells below, we have three of these "Kubeflow component description files", one for each of our components.
**TODO 3**
**IMPORTANT: Modify the image URI in the cell
below to reflect that you pushed the images into the gcr.io associated with your project.**
```
%%writefile bq2gcs.yaml
name: bq2gcs
description: |
This component creates the training and
  validation datasets as BigQuery tables and exports
them into a Google Cloud Storage bucket at
gs://<BUCKET>/taxifare/data.
inputs:
- {name: Input Bucket , type: String, description: 'GCS directory path.'}
implementation:
container:
image: gcr.io/<YOUR PROJECT>/taxifare-bq2gcs
args: ["--bucket", {inputValue: Input Bucket}]
%%writefile trainjob.yaml
name: trainjob
description: |
  This component trains a model to predict the taxi fare in NY.
It takes as argument a GCS bucket and expects its training and
eval data to be at gs://<BUCKET>/taxifare/data/ and will export
the trained model at gs://<BUCKET>/taxifare/model/.
inputs:
- {name: Input Bucket , type: String, description: 'GCS directory path.'}
implementation:
container:
image: gcr.io/<YOUR PROJECT>/taxifare-trainjob
args: [{inputValue: Input Bucket}]
%%writefile deploymodel.yaml
name: deploymodel
description: |
This component deploys a trained taxifare model on GCP as taxifare:dnn.
It takes as argument a GCS bucket and expects the model to deploy
to be found at gs://<BUCKET>/taxifare/model/export/savedmodel/
inputs:
- {name: Input Bucket , type: String, description: 'GCS directory path.'}
implementation:
container:
image: gcr.io/<YOUR PROJECT>/taxifare-deploymodel
args: [{inputValue: Input Bucket}]
```
## Create a Kubeflow pipeline
The code below creates a kubeflow pipeline by decorating a regular function with the
`@dsl.pipeline` decorator. Now the arguments of this decorated function will be
the input parameters of the Kubeflow pipeline.
Inside the function, we describe the pipeline by
* loading the yaml component files we created above into a Kubeflow `op`
* specifying the order into which the Kubeflow ops should be run
```
# TODO 3
PIPELINE_TAR = 'taxifare.tar.gz'
BQ2GCS_YAML = './bq2gcs.yaml'
TRAINJOB_YAML = './trainjob.yaml'
DEPLOYMODEL_YAML = './deploymodel.yaml'
@dsl.pipeline(
name='Taxifare',
description='Train a ml model to predict the taxi fare in NY')
def pipeline(gcs_bucket_name='<bucket where data and model will be exported>'):
bq2gcs_op = comp.load_component_from_file(BQ2GCS_YAML)
bq2gcs = bq2gcs_op(
input_bucket=gcs_bucket_name,
)
trainjob_op = comp.load_component_from_file(TRAINJOB_YAML)
trainjob = trainjob_op(
input_bucket=gcs_bucket_name,
)
deploymodel_op = comp.load_component_from_file(DEPLOYMODEL_YAML)
deploymodel = deploymodel_op(
input_bucket=gcs_bucket_name,
)
trainjob.after(bq2gcs)
deploymodel.after(trainjob)
```
The pipeline function above is then used by the Kubeflow compiler to create a Kubeflow pipeline artifact that can be uploaded to the Kubeflow cluster either from the UI or programmatically, as we will do below:
```
compiler.Compiler().compile(pipeline, PIPELINE_TAR)
ls $PIPELINE_TAR
```
If you untar and unzip this pipeline artifact, you'll see that the compiler has transformed the
Python description of the pipeline into a yaml description!
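For example, here is a minimal way to peek inside the compiled artifact without leaving the notebook. This is only a sketch using the Python standard library, and it assumes the archive contains a single yaml file.
```
import tarfile

# List the members of the compiled package and show the start of the yaml
with tarfile.open(PIPELINE_TAR) as tar:
    print(tar.getnames())
    yaml_member = tar.getnames()[0]
    print(tar.extractfile(yaml_member).read()[:500].decode())
```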
Now let's feed Kubeflow with our pipeline and run it using our client:
```
# TODO 4
run = client.run_pipeline(
experiment_id=exp.id,
job_name='taxifare',
pipeline_package_path='taxifare.tar.gz',
params={
'gcs_bucket_name': BUCKET,
},
)
```
Have a look at the link to monitor the run.
Now all the runs are nicely organized under the experiment in the UI, and new runs can be either manually launched or scheduled through the UI in a completely repeatable and traceable way!
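If you would rather block the notebook until the run finishes, the client also provides `wait_for_run_completion`. The snippet below is a sketch: it assumes the run object returned by `run_pipeline` exposes an `id` attribute, and the timeout is given in seconds.
```
# Wait for the pipeline run to finish (up to one hour) and print its status
run_detail = client.wait_for_run_completion(run.id, timeout=3600)
print(run_detail.run.status)
```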
# Training KoELECTRA on the Wellness Psychological Counseling Dataset (Question & Answer)
The data consists of query texts and answer classes; the model is trained to predict the Answer class for an incoming query.
## 1. Mount Google Drive
- Connect Colab to the Google Drive directory where the model files and training data are stored.
- From the top-left menu, select Runtime -> Change runtime type -> Hardware accelerator -> GPU, then save.
### 1.1 Check the GPU connection
```
!nvidia-smi
```
### 1.2 Mount Google Drive
Run the code below, click the URL that appears, and enter the authorization code.
```
from google.colab import drive
drive.mount('/content/drive')
```
**Check the dialogLM path under the Colab Notebooks directory**
```
!ls drive/'My Drive'/'Colab Notebooks'/
```
**Install the required packages**
```
!pip install -r drive/'My Drive'/'Colab Notebooks'/dialogLM/requirements.txt
```
## 2. KoELECTRA QA Training
**Add paths**
```
import sys
sys.path.append('drive/My Drive/Colab Notebooks/')
sys.path.append('drive/My Drive/Colab Notebooks/dialogLM')
```
### 2.1 Import packages
```
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from IPython.display import display
from tqdm import tqdm
import torch
from transformers import (
AdamW,
ElectraConfig,
ElectraTokenizer
)
from torch.utils.data import dataloader
from dialogLM.dataloader.wellness import WellnessTextClassificationDataset
from dialogLM.model.koelectra import koElectraForSequenceClassification
torch.cuda.is_available()
```
**Training function**
```
def train(epoch, model, optimizer, train_loader, save_step, save_ckpt_path, train_step = 0):
losses = []
train_start_index = train_step+1 if train_step != 0 else 0
total_train_step = len(train_loader)
model.train()
with tqdm(total= total_train_step, desc=f"Train({epoch})") as pbar:
pbar.update(train_step)
for i, data in enumerate(train_loader, train_start_index):
optimizer.zero_grad()
'''
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'bias_labels': batch[3],
'hate_labels': batch[4]}
if self.args.model_type != 'distilkobert':
inputs['token_type_ids'] = batch[2]
'''
inputs = {'input_ids': data['input_ids'],
'attention_mask': data['attention_mask'],
'labels': data['labels']
}
outputs = model(**inputs)
loss = outputs[0]
losses.append(loss.item())
loss.backward()
optimizer.step()
pbar.update(1)
pbar.set_postfix_str(f"Loss: {loss.item():.3f} ({np.mean(losses):.3f})")
if i >= total_train_step or i % save_step == 0:
torch.save({
'epoch': epoch, # 현재 학습 epoch
'model_state_dict': model.state_dict(), # 모델 저장
'optimizer_state_dict': optimizer.state_dict(), # 옵티마이저 저장
'loss': loss.item(), # Loss 저장
'train_step': i, # 현재 진행한 학습
'total_train_step': len(train_loader) # 현재 epoch에 학습 할 총 train step
}, save_ckpt_path)
return np.mean(losses)
```
### KoELECTRA Question & Answer Training for the Wellness dataset
```
root_path='drive/My Drive/Colab Notebooks/dialogLM'
data_path = f"{root_path}/data/wellness_dialog_for_text_classification_train.txt"
checkpoint_path =f"{root_path}/checkpoint"
save_ckpt_path = f"{checkpoint_path}/koelectra-wellnesee-text-classification.pth"
model_name_or_path = "monologg/koelectra-base-discriminator"
n_epoch = 50 # Num of Epoch
batch_size = 16 # batch size
ctx = "cuda" if torch.cuda.is_available() else "cpu"
device = torch.device(ctx)
save_step = 100 # checkpoint save interval (steps)
learning_rate = 5e-6 # Learning Rate
# Electra Tokenizer
tokenizer = ElectraTokenizer.from_pretrained(model_name_or_path)
# WellnessTextClassificationDataset data loader
dataset = WellnessTextClassificationDataset(file_path=data_path, tokenizer=tokenizer, device=device)
train_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)
electra_config = ElectraConfig.from_pretrained(model_name_or_path)
model = koElectraForSequenceClassification.from_pretrained(pretrained_model_name_or_path=model_name_or_path,
config=electra_config,
num_labels=359)
model.to(device)
# Prepare optimizer and schedule (linear warmup and decay)
no_decay = ['bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
'weight_decay': 0.01},
{'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=learning_rate)
pre_epoch, pre_loss, train_step = 0, 0, 0
if os.path.isfile(save_ckpt_path):
checkpoint = torch.load(save_ckpt_path, map_location=device)
pre_epoch = checkpoint['epoch']
pre_loss = checkpoint['loss']
train_step = checkpoint['train_step']
total_train_step = checkpoint['total_train_step']
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
print(f"load pretrain from: {save_ckpt_path}, epoch={pre_epoch}, loss={pre_loss}")
# best_epoch += 1
losses = []
offset = pre_epoch
for step in range(n_epoch):
epoch = step + offset
loss = train( epoch, model, optimizer, train_loader, save_step, save_ckpt_path, train_step)
losses.append(loss)
# data
data = {
"loss": losses
}
df = pd.DataFrame(data)
display(df)
# graph
plt.figure(figsize=[12, 4])
plt.plot(losses, label="loss")
plt.legend()
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()
```
# IOS-XE 16.5.1 EFT Image Demo
## Connecting to a Device
Let's define some variables:
```
# Local CSR 1000v (running under vagrant) -- rtr1
HOST = '127.0.0.1'
PORT = 2223
USER = 'vagrant'
PASS = 'vagrant'
# Local CSR 1000v (running under vagrant) -- rtr2
#HOST = '127.0.0.1'
#PORT = 2200
#USER = 'vagrant'
#PASS = 'vagrant'
```
Now let's establish a NETCONF session to that box using ncclient:
```
from ncclient import manager
from lxml import etree
def pretty_print(retval):
print(etree.tostring(retval.data, pretty_print=True))
def my_unknown_host_cb(host, fingerprint):
return True
m = manager.connect(host=HOST, port=PORT, username=USER, password=PASS,
allow_agent=False,
look_for_keys=False,
hostkey_verify=False,
unknown_host_cb=my_unknown_host_cb)
```
## Capabilities
Let's look at the capabilities presented by the thing we've just connected to:
```
for c in m.server_capabilities:
print(c)
```
Ok, that's a bit messy, so let's tidy it up a bit and look, initially, at all the base netconf capabilities:
```
nc_caps = [c for c in m.server_capabilities if c.startswith('urn:ietf:params:netconf')]
for c in nc_caps:
print(c)
```
And now let's look at the capabilities that are related to model support:
```
import re
for c in m.server_capabilities:
model = re.search('module=([^&]*)&', c)
if model is not None:
print("{}".format(model.group(1)))
revision = re.search('revision=([0-9]{4}-[0-9]{2}-[0-9]{2})', c)
if revision is not None:
print(" revision = {}".format(revision.group(1)))
deviations = re.search('deviations=([a-zA-Z0-9\-,]+)($|&)',c)
if deviations is not None:
print(" deviations = {}".format(deviations.group(1)))
features = re.search('features=([a-zA-Z0-9\-,]+)($|&)',c)
if features is not None:
print(" features = {}".format(features.group(1)))
```
## Schema
Let's take a look at playing with schema. First, we can try downloading them, picking one of the modules we got capabilities for.
```
SCHEMA_TO_GET = 'Cisco-IOS-XE-native'
c = m.get_schema(SCHEMA_TO_GET)
print(c.data)
```
That's not so readable. Let's use a utility called ```pyang``` to get something a bit more readable.
```
from subprocess import Popen, PIPE, STDOUT
SCHEMA_TO_GET = 'Cisco-IOS-XE-native'
#SCHEMA_TO_GET = 'ietf-interfaces'
c = m.get_schema(SCHEMA_TO_GET)
p = Popen(['pyang', '-f', 'tree'], stdout=PIPE, stdin=PIPE, stderr=PIPE)
stdout_data = p.communicate(input=c.data)[0]
print(stdout_data)
```
## What About Config?
The ncclient library provides some simple operations. Let's skip thinking about schemas and the like for now and instead focus on config, and on getting and setting it. For that, ncclient provides two methods:
* get_config - takes a target data store and an optional filter
* edit_config - takes a target data store and an XML document with the edit request
### Getting Config
Let's look at some simple requests...
```
c = m.get_config(source='running')
pretty_print(c)
```
Now let's add in a simple filter:
```
filter = '''
<native>
<username/>
</native>
'''
c = m.get_config(source='running', filter=('subtree', filter))
pretty_print(c)
```
### Retrieve Interface Data (Native Model)
```
filter = '''
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<interface/>
</native>
'''
c = m.get_config(source='running', filter=('subtree', filter))
pretty_print(c)
```
### Retrieve Interface Data (Native Model) With XPath Query
As well as subtree filters, IOS-XE support XPath-based filters.
```
filter = '/native/interface/GigabitEthernet/name'
c = m.get_config(source='running', filter=('xpath', filter))
pretty_print(c)
```
### Retrieve All BGP Data
Now let's look at the BGP native model:
```
filter = '''
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<router>
<bgp xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-bgp"/>
</router>
</native>
'''
c = m.get_config(source='running', filter=('subtree', filter))
pretty_print(c)
```
### Look At A Specific BGP Neighbor
And can we look at a specific neighbor only? Say the one with id (address) ```192.168.0.1```?
```
filter = '''
<native>
<router>
<bgp xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-bgp">
<id>123</id>
<neighbor>
<id>192.168.0.1</id>
</neighbor>
</bgp>
</router>
</native>
'''
c = m.get_config(source='running', filter=('subtree', filter))
pretty_print(c)
```
### Create New BGP Neighbor
Ok, so, yes we can get a specific neighbor. Now, can we create a new neighbor? Let's create one with an id of '192.168.1.1', with a remote-as of 666.
```
from ncclient.operations import TimeoutExpiredError
edit_data = '''
<config>
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<router>
<bgp xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-bgp">
<id>123</id>
<neighbor nc:operation="merge">
<id>192.168.1.1</id>
<remote-as>666</remote-as>
</neighbor>
</bgp>
</router>
</native>
</config>
'''
try:
edit_reply = m.edit_config(edit_data, target='running', format='xml')
except TimeoutExpiredError as e:
print("Operation timeout!")
except Exception as e:
print("severity={}, tag={}".format(e.severity, e.tag))
print(e)
```
Now let's pull back that neighbor:
```
filter = '''
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<router>
<bgp xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-bgp">
<id>123</id>
<neighbor>
<id>192.168.1.1</id>
</neighbor>
</bgp>
</router>
</native>
'''
c = m.get_config(source='running', filter=('subtree', filter))
pretty_print(c)
```
### Modify The BGP Neighbor Description
Can modify something in the neighbor we just created? Let's keep it simple and modify the description:
```
from ncclient.operations import TimeoutExpiredError
edit_data = '''
<config>
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<router>
<bgp xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-bgp">
<id>123</id>
<neighbor>
<id>192.168.1.1</id>
<description nc:operation="merge">modified description</description>
</neighbor>
</bgp>
</router>
</native>
</config>
'''
try:
edit_reply = m.edit_config(edit_data, target='running', format='xml')
except TimeoutExpiredError as e:
print("Operation timeout!")
except Exception as e:
print("severity={}, tag={}".format(e.severity, e.tag))
print(e)
```
### Delete A BGP Neighbor
Might need to do this before creating depending on the state of the router!
```
from ncclient.operations import TimeoutExpiredError
from lxml.etree import XMLSyntaxError
edit_data = '''
<config>
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<router>
<bgp xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-bgp">
<id>123</id>
<neighbor nc:operation="delete">
<id>192.168.1.1</id>
</neighbor>
</bgp>
</router>
</native>
</config>
'''
try:
edit_reply = m.edit_config(edit_data, target='running', format='xml')
except TimeoutExpiredError as e:
print("Operation timeout!")
except XMLSyntaxError as e:
print(e)
print(e.args)
print(dir(e))
except Exception as e:
print("severity={}, tag={}".format(e.severity, e.tag))
print(e)
```
## Other Stuff
Get interface data from native model:
```
filter = '''
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<interface/>
</native>
'''
c = m.get_config(source='running', filter=('subtree', filter))
pretty_print(c)
filter = '''
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<interface>
<GigabitEthernet>
<name/>
<ip/>
</GigabitEthernet>
</interface>
</native>
'''
c = m.get_config(source='running', filter=('subtree', filter))
pretty_print(c)
filter = '''
<native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
<interface>
<GigabitEthernet>
<name>1</name>
</GigabitEthernet>
</interface>
</native>
'''
c = m.get_config(source='running', filter=('subtree', filter))
pretty_print(c)
```
Get interfaces from IETF model:
```
filter = '''
<interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces"/>
'''
c = m.get_config(source='running', filter=('subtree', filter))
pretty_print(c)
```
# Enable Debugging
```
import logging
handler = logging.StreamHandler()
for l in ['ncclient.transport.ssh', 'ncclient.transport.session', 'ncclient.operations.rpc']:
logger = logging.getLogger(l)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
```
# Analysis of the loss of enzymatic activity during DMSP degradation assays by Alma1 (eukaryotic DMSP lyase)
```
# For numerical calculations
import numpy as np
import pandas as pd
import scipy as sp
import math
import git
from scipy.integrate import odeint
from numpy import arange
from scipy.integrate import odeint
import scipy.optimize
from scipy.optimize import leastsq
from math import exp
from collections import OrderedDict
from sklearn.linear_model import LinearRegression
pd.options.mode.chained_assignment = None
# Find home directory for repo
repo = git.Repo("./", search_parent_directories=True)
homedir = repo.working_dir
# Import plotting features
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import matplotlib.animation as animation
import seaborn as sns
# Set plot style
sns.set_style("ticks")
sns.set_palette("colorblind", color_codes=True)
sns.set_context("paper")
# Magic command to plot inline
%matplotlib inline
#To graph in SVG (high def)
%config InlineBackend.figure_format="svg"
```
We performed three experiments to determine whether Alma1 loses activity over the course of the DMSP degradation assay.
## First experiment: degradation of DMSP by different enzyme concentrations
Let's start by loading the data:
```
# Load data
df = pd.read_csv(f'{homedir}/data/raw/enz_deg/Alma1_enz_deg_DMSP_100uM.csv')
```
First, we will compute the real concentration in each sample, which is 10 times the value in the table (the measured samples correspond to a 1:10 dilution). Then, we will sort the values.
```
# Create real concentration column
df ['dmsp_um_real']= df ['dmsp_um'] * 10
#Sort values
df = df.sort_values(['enzyme_ul_ml_rxn', 'time_min'])
df.head()
```
## Fit through least squares
We will assume that the DMSP degradation reactions follow Michaelis-Menten kinetics, where:
$$
V = {V_\max [DMSP] \over K_M + [DMSP]}.
$$
The concentration of DMSP will decrease over the course of the enzyme assay following this recursion:
$$
DMSP(t + \Delta t) = DMSP(t) - {V_\max DMSP(t) \over K_M + DMSP(t)}\Delta t.
$$
Here, $DMSP(t + \Delta t)$ is the concentration of DMSP at time $t$ plus an increment $\Delta t$, $DMSP(t)$ is the concentration of DMSP at the previous time point $t$, $V_\max$ is the maximum velocity of the reaction, and $K_M$ is the Michaelis-Menten constant.
The function `substrate_kinetics` below computes this recursion.
We will make a fit to the Michaelis-Menten kinetics using a previously reported $K_M$ value.
```
def substrate_kinetics(so, vmax, km, time):
'''
Function that computes the substrate concentration over time by
numerically integrating the recursive equation
Parameters
----------
so : float.
Initial concentration of substrate
vmax : float.
Max speed of enzyme
km : float.
Michaelis-Menten constant of enzyme
time : array-like.
Time points where to evaluate function
'''
# Compute ∆t
delta_t = np.diff(time)[0]
# Initialize array to save substrate concentration
substrate = np.zeros(len(time))
# Modify first entry
substrate[0] = so
# Loop through time points
for i, t in enumerate(time[1:]):
substrate[i+1] = substrate[i] -\
vmax * substrate[i] / (km + substrate[i]) * delta_t
return substrate
```
We will now infer $V_{max}$ from the data using the substrate kinetic function:
```
#Define a function that computes the residuals to fit into scipy's least_squares.
def resid(vmax, so, km, time, time_exp, s_exp):
'''
Function that computes the residuals of the substrate concentration
according to the numerical integration of the dynamics.
Parameters
----------
vmax : float.
Max speed of enzyme
so : float.
Initial concentration of substrate
km : float.
Michaelis-Menten constant of enzyme
time : array-like.
Time points where to evaluate function
time_exp : array-like.
Time points where data was taken.
s_exp : array-like.
Experimental determination of substrate concentration
Returns
-------
residuals of experimental and theoretical values
'''
# Integrate substrate concentration
substrate = substrate_kinetics(so, vmax, km, time)
# Extract substrate at experimental time points
time_idx = np.isin(time, time_exp)
s_theory = substrate[time_idx]
return s_theory - s_exp
```
We will now utilize the previous function to calculate the $V_{max}$ for each concentration of enzyme:
```
#Group data by enzyme concentration
df_group = df.groupby(['enzyme_ul_ml_rxn'])
# Define column names
names = ['enzyme_ul_ml_rxn', 'vmax']
subs=[]
# Initialize empty dataframe to save fit results
df_fit_paramls = pd.DataFrame(columns=names)
# Loop through enzyme concentrations
for i, (group, data) in enumerate (df_group):
# Define time array
time = np.linspace(0, data.time_min.max(), 1000)
# Append experimental time points
time_exp = data.time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Extract initial concentration
so = data.dmsp_um_real.max()
# Extract experimental concentrations
s_exp = data.dmsp_um_real.values
# Define km
km = 9000
#Fit Vmax
popt, _ = scipy.optimize.leastsq(
func=resid,
x0=100,
args=(so, km, time, time_exp, s_exp)
)
vmax = popt[0]
# Store parameters and group as list
fit = (group, popt[0])
# Convert list to pandas Series
series = pd.Series(fit, index=names)
# Append fit to dataframe
df_fit_paramls = df_fit_paramls.append(series, ignore_index=True)
# Create a substrate concentration list
substrate = substrate_kinetics(so, vmax, km, time)
subs.append(time)
df_fit_paramls
```
### Plot for the DMSP degradation by Alma1 and the Michaelis-Menten fit
```
# Define fig and axes
fig = plt.figure(figsize=(2.95, 1.95), dpi=192)
ax = fig.add_subplot(111)
# Define colors
colors = sns.color_palette('colorblind', n_colors=len(df_group))
# Define markers
markers = ['o', 's', 'd', '*','^']
# Loop through replicate
for i, (group, data) in enumerate(df_group):
# Extract initial concentration
so = data.dmsp_um_real.max()
# Define km
Km = 9000
# Extract fit vmax
vmax = df_fit_paramls[df_fit_paramls.enzyme_ul_ml_rxn == group].vmax.values
# Define time array
time = np.linspace(0, data.time_min.max(), 1000)
# Append experimental time points
time_exp = data.time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Plot fit
ax.plot(time, substrate_kinetics(so, vmax, Km, time), c=colors[i], label="")
# Plot experimental data
ax.scatter(data.time_min, data.dmsp_um_real, color=colors[i], marker=markers[i],
label=f"{group}X")
#ax.set_title('DddY. Vmax fitted.')
ax.set_ylabel(r'[DMSP] ($\mu$M)')
ax.set_xlabel(r'Time (min)')
#Set axes limits and tick marks
ax.set_xlim(-1,40)
ax.set_ylim(-5,100)
ax.set_xticks(range(0, 50, 10))
ax.set_yticks (range(0, 110, 20))
#Set legend position
ax.legend(bbox_to_anchor=(1, 0.9), title="[Alma1]")
#save figure
fig.savefig(f'{homedir}/figures/enz_deg/experiments/Alma1_enz_deg.pdf', bbox_inches='tight')
```
## Second experiment: further addition of DMSP
In this experiment, DMSP was added to 5 reaction vials at an initial concentration of 100 $\mu M$, and Alma1 was added at an initial concentration of 1.5X. After 38 minutes, further DMSP was added at different concentrations. Let's first load the data:
```
# load data
df_add = pd.read_csv(f'{homedir}/data/raw/enz_deg/Alma1_add_exps.csv')
df_add.head()
```
We will use the data from the first 38 minutes to determine the initial maximum velocity of the reaction, which is assumed to follow Michaelis-Menten kinetics.
```
# Filter data by experiment A (further DMSP addition)
df_exp_a = df_add[df_add['Experiment']=='A']
# Filter data by times less than 40 min
# This is to exclude the values before the addition of extra DMSP
df_exp_a_add_i = df_exp_a[df_exp_a['Type']=='Before']
#Group data by treatment
df_group1 = df_exp_a_add_i.groupby(['Treatment'])
# Define column names
names = ['enzyme_ul_ml_rxn', 'vmax']
# Initialize empty dataframe to save fit results
df_fit_paramls_add = pd.DataFrame(columns=names)
# Loop through enzyme concentrations
for i, (group, data) in enumerate (df_group1):
# Define time array
time = np.linspace(0, data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract experimental concentrations
s_exp = data.DMSP_uM.values
# Define km
km = 9000
#Fit Vmax
popt, _ = scipy.optimize.leastsq(
func=resid,
x0=100,
args=(so, km, time, time_exp, s_exp)
)
vmax = popt[0]
# Create a substrate list
substrate = substrate_kinetics(so, vmax, km, time)
# Store parameters and group as list
fit = (group, popt[0])
# Convert list to pandas Series
series = pd.Series(fit, index=names)
# Append fit to dataframe
df_fit_paramls_add = df_fit_paramls_add.append(series, ignore_index=True)
df_fit_paramls_add
```
The above dataframe shows the maximum velocity for each one of the 5 replicates of the Alma1 degradation assay. Now, we will calculate the maximum velocity after the addition of further DMSP.
```
#Utilize the function to get the residuals for Alma1
# Filter data by times more than 40 min
# This is to exclude the values after the addition of extra DMSP
df_exp_a_add_f = df_exp_a[df_exp_a['Type']=='After']
#Group data by treatment
df_group2 = df_exp_a_add_f.groupby(['Treatment'])
# Define column names
names = ['enzyme_ul_ml_rxn', 'vmax']
# Initialize empty dataframe to save fit results
df_fit_paramls_add2 = pd.DataFrame(columns=names)
# Loop through enzyme concentrations
for i, (group, data) in enumerate (df_group2):
# Define time array
time = np.linspace(data.Time_min.min(), data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract experimental concentrations
s_exp = data.DMSP_uM.values
# Define km
km = 9000
#Fit Vmax
popt, _ = scipy.optimize.leastsq(
func=resid,
x0=100,
args=(so, km, time, time_exp, s_exp)
)
vmax = popt[0]
# Create a substrate list
substrate = substrate_kinetics(so, vmax, km, time)
# Store parameters and group as list
fit = (group, popt[0])
# Convert list to pandas Series
series = pd.Series(fit, index=names)
# Append fit to dataframe
df_fit_paramls_add2 = df_fit_paramls_add2.append(series, ignore_index=True)
df_fit_paramls_add2
```
The maximum velocities before and after the addition of further DMSP are clearly very different. The maximum velocities after the addition also differ between replicates, because each replicate received a different concentration of DMSP after the first 38 minutes of the experiment.
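To make this comparison explicit, we can put the two sets of fitted parameters side by side. The same sketch applies to the enzyme-addition experiment further below, using `df_fit_paramls_add_b` and `df_fit_paramls_add_b2`.
```
# Merge the Vmax values fitted before and after the DMSP addition
# and compute the fold change for each treatment
df_vmax = df_fit_paramls_add.merge(
    df_fit_paramls_add2,
    on='enzyme_ul_ml_rxn',
    suffixes=('_before', '_after')
)
df_vmax['vmax_fold_change'] = df_vmax['vmax_after'] / df_vmax['vmax_before']
df_vmax
```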
### Plot for the second experiment to test loss of enzyme activity in the DMSP degradation by Alma1
```
# Define fig and axes
fig = plt.figure(figsize=(2.95, 1.95), dpi=192)
ax = fig.add_subplot(111)
# Define colors
colors = sns.color_palette('colorblind', n_colors=len(df_group1))
# Define markers
markers = ['o', 's', 'd', '*','^']
#Group data by treatment to plot all data as scatter
df_group = df_exp_a.groupby(['Treatment'])
#Group data before the addition of DMSP by treatment to plot the fit on top of the data
df_group_i = df_exp_a_add_i.groupby(['Treatment'])
#Group data after the addition of DMSP by treatment to plot the fit on top of the data
df_group_f = df_exp_a_add_f.groupby(['Treatment'])
#Generate the fit for the data before the addition of DMSP
# Loop through replicate
for i, (group, data) in enumerate(df_group_i):
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract km
Km = 9000
# Extract fit vmax
vmax = df_fit_paramls_add[df_fit_paramls_add.enzyme_ul_ml_rxn == group].vmax.values
# Define time array
time = np.linspace(0, data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Plot fit
ax.plot(time, substrate_kinetics(so, vmax, Km, time), c=colors[i], label="")
#Generate the fit for the data after the addition of DMSP
# Loop through enzyme concentrations
for i, (group, data) in enumerate (df_group_f):
# Define time array
time = np.linspace(data.Time_min.min(), data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract experimental concentrations
s_exp = data.DMSP_uM.values
# Define km
km = 9000
#Fit Vmax
popt, _ = scipy.optimize.leastsq(
func=resid,
x0=100,
args=(so, km, time, time_exp, s_exp)
)
vmax = popt[0]
# Plot fit
ax.plot(time, substrate_kinetics(so, vmax, Km, time), c=colors[i], label="")
# Define labels for plots
labels = ('2X','1.5X','X','0.5X','0.25X')
#Loop through all data to plot them as scatter
for i, (group, data) in enumerate(df_group):
# Plot experimental data
ax.scatter(data.Time_min, data.DMSP_uM, color=colors[i], marker=markers[i],
label=labels[i])
#Set axes labels and tick marks
ax.set_ylabel(r'[DMSP] ($\mu$M)')
ax.set_xlabel(r'Time (min)')
ax.set_xlim(-1,80)
ax.set_xticks(range(0, 90, 20))
ax.set_yticks (range(0, 260, 60))
#Add vertical dotted line
ax.axvline(linewidth=1, x = 37, color='black', linestyle='--')
#Set legend and legend position
ax.legend(bbox_to_anchor=(1.05, -0.3), title="[DMSP] ($\mu$M)", ncol=3)
#Save figure
fig.savefig(f'{homedir}/figures/enz_deg/experiments/Alma1_enz_deg_further_DMSP.pdf', bbox_inches='tight')
```
## Third experiment: further addition of Alma1
In this experiment, DMSP was added to 5 reaction vials at an initial concentration of 100 $\mu M$, and Alma1 was added at an initial concentration of 0.25X. After 38 minutes, further Alma1 was added at different concentrations. Let's first load the data:
```
# load data
df_add = pd.read_csv(f'{homedir}/data/raw/enz_deg/Alma1_add_exps.csv')
df_add.head()
# Filter data by experiment B (further addition of DMSP)
df_exp_b = df_add[df_add['Experiment']=='B']
# Filter data by times less than 40 min
# This is to exclude the values before the addition of extra enzyme
df_exp_b_add_i = df_exp_b[df_exp_b['Type']=='Before']
#Group data by treatment
df_group3 = df_exp_b_add_i.groupby(['Treatment'])
# Define column names
names = ['enzyme_ul_ml_rxn', 'vmax']
# Initialize empty dataframe to save fit results
df_fit_paramls_add_b = pd.DataFrame(columns=names)
# Loop through enzyme concentrations
for i, (group, data) in enumerate (df_group3):
# Define time array
time = np.linspace(data.Time_min.min(), data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract experimental concentrations
s_exp = data.DMSP_uM.values
# Define km
km = 9000
#Fit Vmax
popt, _ = scipy.optimize.leastsq(
func=resid,
x0=100,
args=(so, km, time, time_exp, s_exp)
)
vmax = popt[0]
# Create a substrate list
substrate = substrate_kinetics(so, vmax, km, time)
# Store parameters and group as list
fit = (group, popt[0])
# Convert list to pandas Series
series = pd.Series(fit, index=names)
# Append fit to dataframe
df_fit_paramls_add_b = df_fit_paramls_add_b.append(series, ignore_index=True)
df_fit_paramls_add_b
```
The above dataframe shows the maximum velocity for each one of the 5 replicates of the Alma1 degradation assay. Now, we will calculate the maximum velocity after the addition of further Alma1.
```
# Filter data by times more than 40 min
# This is to exclude the values after the addition of extra enzyme
df_exp_b_add_f = df_exp_b[df_exp_b['Time_min']>36]
#Group data by treatment
df_group4 = df_exp_b_add_f.groupby(['Treatment'])
# Define column names
names = ['enzyme_ul_ml_rxn', 'vmax']
# Initialize empty dataframe to save fit results
df_fit_paramls_add_b2 = pd.DataFrame(columns=names)
# Loop through enzyme concentrations
for i, (group, data) in enumerate (df_group4):
# Define time array
time = np.linspace(data.Time_min.min(), data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract experimental concentrations
s_exp = data.DMSP_uM.values
# Define km
km = 9000
#Fit Vmax
popt, _ = scipy.optimize.leastsq(
func=resid,
x0=100,
args=(so, km, time, time_exp, s_exp)
)
vmax = popt[0]
# Create a substrate list
substrate = substrate_kinetics(so, vmax, km, time)
# Store parameters and group as list
fit = (group, popt[0])
# Convert list to pandas Series
series = pd.Series(fit, index=names)
# Append fit to dataframe
df_fit_paramls_add_b2 = df_fit_paramls_add_b2.append(series, ignore_index=True)
df_fit_paramls_add_b2
```
The maximum velocities before and after the addition of further Alma1 are clearly very different. The maximum velocities after the addition also differ between replicates, because each replicate received a different concentration of Alma1 after the first 38 minutes of the experiment.
### Plot for the third experiment to test loss of enzyme activity in the DMSP degradation by Alma1
```
# Define fig and axes
fig = plt.figure(figsize=(2.95, 1.95), dpi=192)
ax = fig.add_subplot(111)
# Define colors
colors = sns.color_palette('colorblind', n_colors=len(df_group1))
# Define markers
markers = ['o', 's', 'd', '*','^']
#Group data by treatment to plot all data as scatter
df_groupb = df_exp_b.groupby(['Treatment'])
#Group data before the addition of enzyme by treatment to plot the fit on top of the data
df_group_ib = df_exp_b_add_i.groupby(['Treatment'])
#Group data after the addition of enzyme by treatment to plot the fit on top of the data
df_group_fb = df_exp_b_add_f.groupby(['Treatment'])
#Generate the fit for the data before the addition of enzyme
# Loop through replicate
for i, (group, data) in enumerate(df_group_ib):
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract km
Km = 9000
# Extract fit vmax
vmax = df_fit_paramls_add_b[df_fit_paramls_add_b.enzyme_ul_ml_rxn == group].vmax.values
# Define time array
time = np.linspace(0, data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Plot fit
ax.plot(time, substrate_kinetics(so, vmax, Km, time), c=colors[i], label="")
#Generate the fit for the data after the addition of enzyme
# Loop through enzyme concentrations
for i, (group, data) in enumerate (df_group_fb):
# Define time array
time = np.linspace(data.Time_min.min(), data.Time_min.max(), 1000)
# Append experimental time points
time_exp = data.Time_min
time = np.sort(
np.unique(
np.append(time, time_exp)
)
)
# Extract initial concentration
so = data.DMSP_uM.max()
# Extract experimental concentrations
s_exp = data.DMSP_uM.values
# Define km
km = 9000
#Fit Vmax
popt, _ = scipy.optimize.leastsq(
func=resid,
x0=100,
args=(so, km, time, time_exp, s_exp)
)
vmax = popt[0]
# Plot fit
ax.plot(time, substrate_kinetics(so, vmax, Km, time), c=colors[i], label="")
# Define labels for plots
labels = ('X','2X','3X','6X','10X')
#Loop through all data to plot them as scatter
for i, (group, data) in enumerate(df_groupb):
# Plot experimental data
ax.scatter(data.Time_min, data.DMSP_uM, color=colors[i], marker=markers[i],
label=labels[i])
#Set axes labels, limits and tick marks
ax.set_ylabel(r'[DMSP] ($\mu$M)')
ax.set_xlabel(r'Time (min)')
ax.set_xlim(-1,80)
ax.set_ylim(-5,100)
ax.set_xticks(range(0, 90, 10))
ax.set_yticks (range(0, 110, 20))
# Set vertical dashed line
ax.axvline(linewidth=1, x = 38, color='black', linestyle='--')
# Set legend position
ax.legend(bbox_to_anchor=(1, -0.3), title="[Alma1]", ncol=3)
#Save figure
fig.savefig(f'{homedir}/figures/enz_deg/experiments/Alma1_enz_deg_further_Alma1.pdf', bbox_inches='tight')
```
All three experiments suggest that there is a loss of enzyme activity over the course of the DMSP enzymatic degradation experiments.
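One simple way to rationalize this observation is to let $V_\max$ decay over the course of the assay. The sketch below is purely illustrative: it mirrors `substrate_kinetics` but multiplies $V_\max$ by a first-order decay term, and the decay rate `k_decay` is an assumed parameter, not one fitted to these data.
```
def substrate_kinetics_decay(so, vmax, km, k_decay, time):
    '''
    Numerically integrate DMSP degradation as in substrate_kinetics,
    but with an enzyme activity (Vmax) that decays exponentially
    at rate k_decay (per minute). Illustrative only.
    '''
    # Compute ∆t
    delta_t = np.diff(time)[0]
    # Initialize array to save substrate concentration
    substrate = np.zeros(len(time))
    substrate[0] = so
    # Loop through time points, letting Vmax decay with time
    for i, t in enumerate(time[1:]):
        vmax_t = vmax * np.exp(-k_decay * t)
        substrate[i+1] = substrate[i] - \
            vmax_t * substrate[i] / (km + substrate[i]) * delta_t
    return substrate
```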
# Demonstration of basic image manipulation with SIRF/CIL
This demonstration shows how to create image data objects for MR, CT and PET and how to work with them.
This demo is a Jupyter notebook, i.e. it is intended to be run step by step.
# Initial set-up
```
# Make sure figures appear inline and animations work
%matplotlib notebook
# Initial imports etc
import numpy
from numpy.linalg import norm
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import os
import sys
import shutil
import brainweb
from tqdm.auto import tqdm
from sirf.Utilities import examples_data_path
```
# Utilities
First we define some handy functions to keep the subsequent code cleaner. You can skip over them on a first read.
They have (minimal) documentation as Python docstrings, so you can, for instance, run `help(plot_2d_image)`.
```
def plot_2d_image(idx,vol,title,clims=None,cmap="viridis"):
"""Customized version of subplot to plot 2D image"""
plt.subplot(*idx)
plt.imshow(vol,cmap=cmap)
if not clims is None:
plt.clim(clims)
plt.colorbar()
plt.title(title)
plt.axis("off")
def crop_and_fill(templ_im, vol):
"""Crop volumetric image data and replace image content in template image object"""
# Get size of template image and crop
idim = templ_im.as_array().shape
# Let's make sure everything is centered.
# Because offset is used to index an array it has to be of type integer, so we do an integer division using '//'
offset = (numpy.array(vol.shape) - numpy.array(idim)) // 2
vol = vol[offset[0]:offset[0]+idim[0], offset[1]:offset[1]+idim[1], offset[2]:offset[2]+idim[2]]
# Make a copy of the template to ensure we do not overwrite it
templ_im_out = templ_im.clone()
# Fill image content
templ_im_out.fill(numpy.reshape(vol, idim))
return(templ_im_out)
```
Note that SIRF and CIL have their own `show*` functions, which will be used in other demos.
# Get brainweb data
We will download and use Brainweb data, which the Python `brainweb` module makes convenient. We will use an FDG image for PET. MR usually provides qualitative images whose contrast is proportional to differences in T1, T2 or T2*, depending on the sequence parameters. Nevertheless, we will make our life easy by directly using the T1 map provided by brainweb for MR.
```
fname, url= sorted(brainweb.utils.LINKS.items())[0]
files = brainweb.get_file(fname, url, ".")
data = brainweb.load_file(fname)
brainweb.seed(1337)
for f in tqdm([fname], desc="mMR ground truths", unit="subject"):
vol = brainweb.get_mmr_fromfile(f, petNoise=1, t1Noise=0.75, t2Noise=0.75, petSigma=1, t1Sigma=1, t2Sigma=1)
FDG_arr = vol['PET']
T1_arr = vol['T1']
uMap_arr = vol['uMap']
```
## Display it
The convention for the image dimensions in the brainweb images is [z, y, x]. If we want to
display the central slice (i.e. along z), we therefore have to index the 0th dimension of the array.
We use integer division ('//') to ensure the result can be used to index the array.
```
plt.figure();
slice_show = FDG_arr.shape[0]//2
# The images are very large, so we only want to visualise the central part of the image. In Python this can be
# achieved by using e.g. 100:-100 as indices. This will "crop" the first 100 and last 100 voxels of the array.
plot_2d_image([1,3,1], FDG_arr[slice_show, 100:-100, 100:-100], 'FDG', cmap="hot")
plot_2d_image([1,3,2], T1_arr[slice_show, 100:-100, 100:-100], 'T1', cmap="Greys_r")
plot_2d_image([1,3,3], uMap_arr[slice_show, 100:-100, 100:-100], 'uMap', cmap="bone")
```
More than likely, this image came out a bit small for your set-up. You can check the default image size as follows (note: units are inches)
```
plt.rcParams['figure.figsize']
```
You can then change them to a size more suitable for your situation, e.g.
```
plt.rcParams['figure.figsize']=[10,7]
```
Now execute the cell above that plots the images again to see if that helped.
You can make this change permanent by changing your `matplotlibrc` file (this might be non-trivial when running on a Docker or JupyterHub instance!). You will need to search for `figure.figsize` in that file. Its location can be found as follows:
```
import matplotlib
matplotlib.matplotlib_fname()
```
# SIRF/CIL ImageData based on Brainweb
In order to create an __MR__, __PET__ or __CT__ `ImageData` object, we need some information about the modality, the hardware used for scanning and, to some extent, also the acquisition and reconstruction process. Most of this information is contained in the raw data files which can be exported from the __MR__ and __PET__ scanners. For __CT__ the parameters can be defined manually.
In the following we will now go through each modality separately and show how a simple `ImageData` object can be created. In the last part of the notebook we will then show examples about how to display the image data with python or how to manipulate the image data (e.g. multiply it with a constant or calculate its norm).
In order to make our life easier, we will assume that the voxel size and image orientation for __MR__, __PET__ and __CT__ are all the same, and that they match the brainweb data. This is of course not true; for real-life applications and/or synergistic image reconstruction we would need to resample the brainweb images before using them as input to the `ImageData` objects.
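As an illustration only (it is not needed for this notebook), such a resampling step could look roughly like the sketch below, which uses `scipy.ndimage.zoom` with made-up voxel sizes; the values are assumptions for illustration, not scanner parameters:
```
import numpy
from scipy.ndimage import zoom

# Hypothetical voxel sizes in mm (z, y, x); chosen purely for illustration
brainweb_voxel_size = numpy.array([1.0, 1.0, 1.0])
target_voxel_size = numpy.array([2.0, 2.0, 2.0])

# Zoom factors that map the brainweb grid onto the target grid
zoom_factors = brainweb_voxel_size / target_voxel_size

# Resample one of the brainweb volumes with linear interpolation
FDG_resampled = zoom(FDG_arr, zoom_factors, order=1)
print(FDG_arr.shape, '->', FDG_resampled.shape)
```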
# MR
Use the 'mr' prefix for all Gadgetron-based SIRF functions.
This is done here to explicitly differentiate between SIRF mr functions and
anything else.
```
import sirf.Gadgetron as mr
```
We'll need a template MR acquisition data object
```
templ_mr = mr.AcquisitionData(os.path.join(examples_data_path('MR'), 'simulated_MR_2D_cartesian.h5'))
```
In MR the dimensions of the image data depend of course on the data acquisition but they are also influenced by the reconstruction process. Therefore, we need to carry out an example reconstruction, in order to have all the information about the image.
```
# Simple reconstruction
preprocessed_data = mr.preprocess_acquisition_data(templ_mr)
recon = mr.FullySampledReconstructor()
recon.set_input(preprocessed_data)
recon.process()
im_mr = recon.get_output()
```
If the above failed with an error 'Server running Gadgetron not accessible', you probably still have to start a Gadgetron server. Check the [DocForParticipants](https://github.com/SyneRBI/SIRF-Exercises/blob/master/DocForParticipants.md#start-a-Gadgetron-server).
Now we have got an MR image object and can fill it with the brainweb data. The dimensions won't fit, but we will simply crop the image.
```
im_mr = crop_and_fill(im_mr, T1_arr)
# im_mr is an MR image object. In order to visualise it we need access to the underlying data array. This is
# provided by the function as_array(). This yields a numpy array which can then be easily displayed. More
# information on this is also provided at the end of the notebook.
plt.figure();
plot_2d_image([1,1,1], numpy.abs(im_mr.as_array())[im_mr.as_array().shape[0]//2, :, :], 'MR', cmap="Greys_r")
```
# CT
Use the 'ct' prefix for all CIL-based functions.
This is done here to explicitly differentiate between CIL ct functions and
anything else.
```
import cil.framework as ct
```
Create a template Cone Beam CT acquisition geometry
```
N = 120
angles = numpy.linspace(0, 360, 50, True, dtype=numpy.float32)
offset = 0.4
channels = 1
ag = ct.AcquisitionGeometry.create_Cone3D((offset,-100, 0), (offset,100,0))
ag.set_panel((N,N-2))
ag.set_channels(channels)
ag.set_angles(angles, angle_unit=ct.AcquisitionGeometry.DEGREE);
```
Now we can create a template CT image object
```
ig = ag.get_ImageGeometry()
im_ct = ig.allocate(None)
```
Now we have got a CT image object and can fill it with the brainweb data. The dimensions won't fit, but we will simply crop the image.
```
im_ct = crop_and_fill(im_ct, uMap_arr)
plt.figure();
plot_2d_image([1,1,1], im_ct.as_array()[im_ct.as_array().shape[0]//2, :, :], 'CT', cmap="bone")
```
# PET
Use the 'pet' prefix for all STIR-based SIRF functions.
This is done here to explicitly differentiate between SIRF pet functions and
anything else.
```
import sirf.STIR as pet
```
We'll need a template sinogram
```
templ_sino = pet.AcquisitionData(os.path.join(examples_data_path('PET'), 'mMR','mMR_template_span11.hs'))
```
Now we can create a template PET image object whose dimensions fit that sinogram
```
im_pet = pet.ImageData(templ_sino)
```
Now we have got a PET image object and can fill it with the brainweb data. The dimensions won't fit, but we will simply crop the image.
```
im_pet = crop_and_fill(im_pet, FDG_arr)
plt.figure();
plot_2d_image([1,1,1], im_pet.as_array()[im_pet.as_array().shape[0]//2, :, :], 'PET', cmap="hot")
```
# Basic image manipulations
Images (like most other things in SIRF and CIL) are represented as *objects*, in this case of type `ImageData`.
In practice, this means that you can only manipulate its data via *methods*.
Image objects contain the actual voxel values, but also information on the number of voxels,
voxel size, etc. There are methods to get this information.
There are additional methods for other manipulations, such as basic image arithmetic (e.g.,
you can add image objects).
Because we created an `ImageData` object for each modality, we can now simply select which modality we want to look at. SIRF is implemented to make the transition from one modality to the next very easy, so many of the *methods* and *attributes* are exactly the same for __MR__, __PET__ and __CT__. There are of course *methods* and *attributes* which are modality-specific, but the basic handling of the `ImageData` objects is very similar across __MR__, __PET__ and __CT__.
```
# Make a copy of the image of a specific modality
image_data_object = im_ct.clone()
```
What is an ImageData?
Images are represented by objects with several methods. The most important method
is `as_array()` which we'll use below.
```
# Let's see what all the methods are.
help(pet.ImageData)
# Use as_array to extract an array of voxel values
# The resulting array is a `numpy` array, as standard in Python.
image_array=image_data_object.as_array()
# We can use the standard `numpy` methods on this array, such as getting its `shape` (i.e. dimensions).
print(image_array.shape)
# Whenever we want to do something with the image-values, we have to do it via this array.
# Let's print a voxel-value roughly in the centre of the object.
# We will not use the exact centre because the intensity there happens to be 0.
centre = numpy.array(image_array.shape)//2
print(image_array[centre[0], centre[1]+20, centre[2]+20])
```
Manipulate the image data for illustration
```
# Multiply the data with a factor
image_array *= 0.01
# Stick this new data into the original image object.
# (This will not modify the file content, only the variable in memory.)
image_data_object.fill(image_array)
print(image_array[centre[0], centre[1]+20, centre[2]+20])
```
You can do basic math manipulations with `ImageData` objects directly,
so the above lines can be done on the image object itself:
```
image_data_object *= 0.01
# Let's check
image_array=image_data_object.as_array()
print(image_array[centre[0], centre[1]+20, centre[2]+20])
```
Display the middle slice of the image (which is really a 3D volume)
We will use our own `plot_2d_image` function (which was defined above) for brevity.
```
# Create a new figure
plt.figure()
# Display the slice (numpy.absolute is only necessary for MR but doesn't matter for PET or CT)
plot_2d_image([1,1,1], numpy.absolute(image_array[centre[0], :, :]), 'image data', cmap="viridis")
```
Some other things to do with ImageData objects
```
print(image_data_object.norm())
another_image=image_data_object*3+8.3
and_another=another_image+image_data_object
```
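As a quick optional check, this object arithmetic can be compared against the equivalent `numpy` operations on the underlying arrays (a sketch, assuming only the objects created above):
```
# The ImageData arithmetic should agree with plain numpy arithmetic
# on the underlying arrays to within numerical precision.
expected = image_data_object.as_array() * 3 + 8.3
print(numpy.allclose(another_image.as_array(), expected))
print(numpy.allclose(and_another.as_array(), expected + image_data_object.as_array()))
```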
# Fine-tuning a BERT model
In this workshop, we will work through fine-tuning a BERT model using the tensorflow-models PIP package.
Let's first revise the basics
<img src="https://miro.medium.com/max/2884/1*YwLWH3PbD34vkkBjx6f6EA.png">
<img src ="https://cdn.analyticsvidhya.com/wp-content/uploads/2019/06/Screenshot-from-2019-06-17-19-53-10.png">
<img src = "https://miro.medium.com/max/5672/1*p4LFBwyHtCw_Qq9paDampA.png">
## Setup
### Install the TensorFlow Model Garden pip package
* `tf-models-official` is the stable Model Garden package. Note that it may not include the latest changes in the `tensorflow_models` github repo. To include latest changes, you may install `tf-models-nightly`,
which is the nightly Model Garden package created daily automatically.
* pip will install all models and dependencies automatically.
```
!pip install -q tf-models-official==2.4.0
```
### Imports
```
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from official.modeling import tf_utils
from official import nlp
from official.nlp import bert
# Load the required submodules
import official.nlp.optimization
import official.nlp.bert.bert_models
import official.nlp.bert.configs
import official.nlp.bert.run_classifier
import official.nlp.bert.tokenization
import official.nlp.data.classifier_data_lib
import official.nlp.modeling.losses
import official.nlp.modeling.models
import official.nlp.modeling.networks
```
### Resources
This directory contains the configuration, vocabulary, and a pre-trained checkpoint used in this tutorial:
```
gs_folder_bert = "gs://cloud-tpu-checkpoints/bert/v3/uncased_L-12_H-768_A-12"
tf.io.gfile.listdir(gs_folder_bert)
```
You can get a pre-trained BERT encoder from [TensorFlow Hub](https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2):
```
hub_url_bert = "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3"
```
## The data
For this example we used the [GLUE MRPC dataset from TFDS](https://www.tensorflow.org/datasets/catalog/glue#gluemrpc).
This dataset is not set up so that it can be directly fed into the BERT model, so this section also handles the necessary preprocessing.
### Get the dataset from TensorFlow Datasets
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
* Number of labels: 2.
* Size of training dataset: 3668.
* Size of evaluation dataset: 408.
* Maximum sequence length of training and evaluation dataset: 128.
```
glue, info = tfds.load('glue/mrpc', with_info=True,
# It's small, load the whole dataset
batch_size=-1)
list(glue.keys())
```
The `info` object describes the dataset and its features:
```
info.features
```
The two classes are:
```
info.features['label'].names
```
Here is one example from the training set:
```
glue_train = glue['train']
for key, value in glue_train.items():
print(f"{key:9s}: {value[0].numpy()}")
```
### The BERT tokenizer
To fine tune a pre-trained model you need to be sure that you're using exactly the same tokenization, vocabulary, and index mapping as you used during training.
The BERT tokenizer used in this tutorial is written in pure Python (It's not built out of TensorFlow ops). So you can't just plug it into your model as a `keras.layer` like you can with `preprocessing.TextVectorization`.
The following code rebuilds the tokenizer that was used by the base model:
```
# Set up tokenizer to generate Tensorflow dataset
tokenizer = bert.tokenization.FullTokenizer(
vocab_file=os.path.join(gs_folder_bert, "vocab.txt"),
do_lower_case=True)
print("Vocab size:", len(tokenizer.vocab))
```
Tokenize a sentence:
```
tokens = tokenizer.tokenize("Hello TensorFlow!")
print(tokens)
ids = tokenizer.convert_tokens_to_ids(tokens)
print(ids)
```
### Preprocess the data
This section manually preprocesses the dataset into the format expected by the model.
This dataset is small, so preprocessing can be done quickly and easily in memory. For larger datasets the `tf_models` library includes some tools for preprocessing and re-serializing a dataset. See [Appendix: Re-encoding a large dataset](#re_encoding_tools) for details.
#### Encode the sentences
The model expects its two inputs sentences to be concatenated together. This input is expected to start with a `[CLS]` "This is a classification problem" token, and each sentence should end with a `[SEP]` "Separator" token:
```
tokenizer.convert_tokens_to_ids(['[CLS]', '[SEP]'])
```
Start by encoding all the sentences while appending a `[SEP]` token, and packing them into ragged-tensors:
```
def encode_sentence(s):
tokens = list(tokenizer.tokenize(s.numpy()))
tokens.append('[SEP]')
return tokenizer.convert_tokens_to_ids(tokens)
sentence1 = tf.ragged.constant([
encode_sentence(s) for s in glue_train["sentence1"]])
sentence2 = tf.ragged.constant([
encode_sentence(s) for s in glue_train["sentence2"]])
print("Sentence1 shape:", sentence1.shape.as_list())
print("Sentence2 shape:", sentence2.shape.as_list())
```
Now prepend a `[CLS]` token, and concatenate the ragged tensors to form a single `input_word_ids` tensor for each example. `RaggedTensor.to_tensor()` zero pads to the longest sequence.
```
cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*sentence1.shape[0]
input_word_ids = tf.concat([cls, sentence1, sentence2], axis=-1)
_ = plt.pcolormesh(input_word_ids.to_tensor())
```
#### Mask and input type
The model expects two additional inputs:
* The input mask
* The input type
The mask allows the model to cleanly differentiate between the content and the padding. The mask has the same shape as the `input_word_ids`, and contains a `1` anywhere the `input_word_ids` is not padding.
```
input_mask = tf.ones_like(input_word_ids).to_tensor()
plt.pcolormesh(input_mask)
```
The "input type" also has the same shape, but inside the non-padded region, contains a `0` or a `1` indicating which sentence the token is a part of.
```
type_cls = tf.zeros_like(cls)
type_s1 = tf.zeros_like(sentence1)
type_s2 = tf.ones_like(sentence2)
input_type_ids = tf.concat([type_cls, type_s1, type_s2], axis=-1).to_tensor()
plt.pcolormesh(input_type_ids)
```
#### Put it all together
Collect the above text parsing code into a single function, and apply it to each split of the `glue/mrpc` dataset.
```
def encode_sentence(s, tokenizer):
tokens = list(tokenizer.tokenize(s))
tokens.append('[SEP]')
return tokenizer.convert_tokens_to_ids(tokens)
def bert_encode(glue_dict, tokenizer):
num_examples = len(glue_dict["sentence1"])
sentence1 = tf.ragged.constant([
encode_sentence(s, tokenizer)
for s in np.array(glue_dict["sentence1"])])
sentence2 = tf.ragged.constant([
encode_sentence(s, tokenizer)
for s in np.array(glue_dict["sentence2"])])
cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*sentence1.shape[0]
input_word_ids = tf.concat([cls, sentence1, sentence2], axis=-1)
input_mask = tf.ones_like(input_word_ids).to_tensor()
type_cls = tf.zeros_like(cls)
type_s1 = tf.zeros_like(sentence1)
type_s2 = tf.ones_like(sentence2)
input_type_ids = tf.concat(
[type_cls, type_s1, type_s2], axis=-1).to_tensor()
inputs = {
'input_word_ids': input_word_ids.to_tensor(),
'input_mask': input_mask,
'input_type_ids': input_type_ids}
return inputs
glue_train = bert_encode(glue['train'], tokenizer)
glue_train_labels = glue['train']['label']
glue_validation = bert_encode(glue['validation'], tokenizer)
glue_validation_labels = glue['validation']['label']
glue_test = bert_encode(glue['test'], tokenizer)
glue_test_labels = glue['test']['label']
```
Each subset of the data has been converted to a dictionary of features, and a set of labels. Each feature in the input dictionary has the same shape, and the number of labels should match:
```
for key, value in glue_train.items():
print(f'{key:15s} shape: {value.shape}')
print(f'glue_train_labels shape: {glue_train_labels.shape}')
```
## The model
### Build the model
The first step is to download the configuration for the pre-trained model.
```
import json
bert_config_file = os.path.join(gs_folder_bert, "bert_config.json")
config_dict = json.loads(tf.io.gfile.GFile(bert_config_file).read())
bert_config = bert.configs.BertConfig.from_dict(config_dict)
config_dict
```
The `config` defines the core BERT Model, which is a Keras model to predict the outputs of `num_classes` from the inputs with maximum sequence length `max_seq_length`.
This function returns both the encoder and the classifier.
```
bert_classifier, bert_encoder = bert.bert_models.classifier_model(
bert_config, num_labels=2)
```
The classifier has three inputs and one output:
```
tf.keras.utils.plot_model(bert_classifier, show_shapes=True, dpi=48)
```
Run it on a test batch of 10 examples from the training set. The output is the logits for the two classes:
```
glue_batch = {key: val[:10] for key, val in glue_train.items()}
bert_classifier(
glue_batch, training=True
).numpy()
```
The `TransformerEncoder` in the center of the classifier above **is** the `bert_encoder`.
Inspecting the encoder, we see its stack of `Transformer` layers connected to those same three inputs:
```
tf.keras.utils.plot_model(bert_encoder, show_shapes=True, dpi=48)
```
### Restore the encoder weights
When built, the encoder is randomly initialized. Restore the encoder's weights from the checkpoint:
```
checkpoint = tf.train.Checkpoint(encoder=bert_encoder)
checkpoint.read(
os.path.join(gs_folder_bert, 'bert_model.ckpt')).assert_consumed()
```
Note: The pretrained `TransformerEncoder` is also available on [TensorFlow Hub](https://tensorflow.org/hub). See the [Hub appendix](#hub_bert) for details.
### Set up the optimizer
BERT adopts the Adam optimizer with weight decay (aka "[AdamW](https://arxiv.org/abs/1711.05101)").
It also employs a learning rate schedule that firstly warms up from 0 and then decays to 0.
```
# Set up epochs and steps
epochs = 3
batch_size = 32
eval_batch_size = 32
train_data_size = len(glue_train_labels)
steps_per_epoch = int(train_data_size / batch_size)
num_train_steps = steps_per_epoch * epochs
warmup_steps = int(epochs * train_data_size * 0.1 / batch_size)
# creates an optimizer with learning rate schedule
optimizer = nlp.optimization.create_optimizer(
2e-5, num_train_steps=num_train_steps, num_warmup_steps=warmup_steps)
```
This returns an `AdamWeightDecay` optimizer with the learning rate schedule set:
```
type(optimizer)
```
### Train the model
The metric is accuracy and we use sparse categorical cross-entropy as loss.
```
metrics = [tf.keras.metrics.SparseCategoricalAccuracy('accuracy', dtype=tf.float32)]
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
bert_classifier.compile(
optimizer=optimizer,
loss=loss,
metrics=metrics)
bert_classifier.fit(
glue_train, glue_train_labels,
validation_data=(glue_validation, glue_validation_labels),
batch_size=32,
epochs=epochs)
```
Now run the fine-tuned model on a custom example to see that it works.
Start by encoding some sentence pairs:
```
my_examples = bert_encode(
glue_dict = {
'sentence1':[
'The rain in Spain falls mainly on the plain.',
'Look I fine tuned BERT.'],
'sentence2':[
'It mostly rains on the flat lands of Spain.',
'Is it working? This does not match.']
},
tokenizer=tokenizer)
```
The model should report class `1` "match" for the first example and class `0` "no-match" for the second:
```
result = bert_classifier(my_examples, training=False)
result = tf.argmax(result, axis=-1).numpy()
result
np.array(info.features['label'].names)[result]
```
### Save the model
Often the goal of training a model is to _use_ it for something, so export the model and then restore it to be sure that it works.
```
export_dir='./saved_model'
tf.saved_model.save(bert_classifier, export_dir=export_dir)
reloaded = tf.saved_model.load(export_dir)
reloaded_result = reloaded([my_examples['input_word_ids'],
my_examples['input_mask'],
my_examples['input_type_ids']], training=False)
original_result = bert_classifier(my_examples, training=False)
# The results are (nearly) identical:
print(original_result.numpy())
print()
print(reloaded_result.numpy())
```
## Appendix
<a id=re_encoding_tools></a>
### Re-encoding a large dataset
In this tutorial you re-encoded the dataset in memory, for clarity.
This was only possible because `glue/mrpc` is a very small dataset. To deal with larger datasets, the `tf_models` library includes some tools for processing and re-encoding a dataset for efficient training.
The first step is to describe which features of the dataset should be transformed:
```
processor = nlp.data.classifier_data_lib.TfdsProcessor(
tfds_params="dataset=glue/mrpc,text_key=sentence1,text_b_key=sentence2",
process_text_fn=bert.tokenization.convert_to_unicode)
```
Then apply the transformation to generate new TFRecord files.
```
# Set up output of training and evaluation Tensorflow dataset
train_data_output_path="./mrpc_train.tf_record"
eval_data_output_path="./mrpc_eval.tf_record"
max_seq_length = 128
batch_size = 32
eval_batch_size = 32
# Generate and save training data into a tf record file
input_meta_data = (
nlp.data.classifier_data_lib.generate_tf_record_from_data_file(
processor=processor,
data_dir=None, # It is `None` because data is from tfds, not local dir.
tokenizer=tokenizer,
train_data_output_path=train_data_output_path,
eval_data_output_path=eval_data_output_path,
max_seq_length=max_seq_length))
```
Finally create `tf.data` input pipelines from those TFRecord files:
```
training_dataset = bert.run_classifier.get_dataset_fn(
train_data_output_path,
max_seq_length,
batch_size,
is_training=True)()
evaluation_dataset = bert.run_classifier.get_dataset_fn(
eval_data_output_path,
max_seq_length,
eval_batch_size,
is_training=False)()
```
The resulting `tf.data.Datasets` return `(features, labels)` pairs, as expected by `keras.Model.fit`:
```
training_dataset.element_spec
```
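These datasets can be passed straight to `keras.Model.fit`. The cell below is only a sketch (it would continue training the already fine-tuned `bert_classifier`); since the training dataset repeats indefinitely, `steps_per_epoch` has to be given explicitly:
```
# Sketch: train from the TFRecord-backed datasets instead of the in-memory arrays.
# Reuse the training-set size computed earlier from glue_train_labels.
steps_per_epoch = int(len(glue_train_labels) / batch_size)

bert_classifier.fit(
    training_dataset,
    validation_data=evaluation_dataset,
    steps_per_epoch=steps_per_epoch,
    epochs=epochs)
```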
#### Create tf.data.Dataset for training and evaluation
If you need to modify the data loading, here is some code to get you started:
```
def create_classifier_dataset(file_path, seq_length, batch_size, is_training):
"""Creates input dataset from (tf)records files for train/eval."""
dataset = tf.data.TFRecordDataset(file_path)
if is_training:
dataset = dataset.shuffle(100)
dataset = dataset.repeat()
def decode_record(record):
name_to_features = {
'input_ids': tf.io.FixedLenFeature([seq_length], tf.int64),
'input_mask': tf.io.FixedLenFeature([seq_length], tf.int64),
'segment_ids': tf.io.FixedLenFeature([seq_length], tf.int64),
'label_ids': tf.io.FixedLenFeature([], tf.int64),
}
return tf.io.parse_single_example(record, name_to_features)
def _select_data_from_record(record):
x = {
'input_word_ids': record['input_ids'],
'input_mask': record['input_mask'],
'input_type_ids': record['segment_ids']
}
y = record['label_ids']
return (x, y)
dataset = dataset.map(decode_record,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.map(
_select_data_from_record,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.batch(batch_size, drop_remainder=is_training)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
return dataset
# Set up batch sizes
batch_size = 32
eval_batch_size = 32
# Return Tensorflow dataset
training_dataset = create_classifier_dataset(
train_data_output_path,
input_meta_data['max_seq_length'],
batch_size,
is_training=True)
evaluation_dataset = create_classifier_dataset(
eval_data_output_path,
input_meta_data['max_seq_length'],
eval_batch_size,
is_training=False)
training_dataset.element_spec
```
<a id="hub_bert"></a>
### TFModels BERT on TFHub
You can get [the BERT model](https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2) off the shelf from [TFHub](https://tensorflow.org/hub). It would not be hard to add a classification head on top of this `hub.KerasLayer`
```
# Note: 350MB download.
import tensorflow_hub as hub
hub_model_name = "bert_en_uncased_L-12_H-768_A-12" #@param ["bert_en_uncased_L-24_H-1024_A-16", "bert_en_wwm_cased_L-24_H-1024_A-16", "bert_en_uncased_L-12_H-768_A-12", "bert_en_wwm_uncased_L-24_H-1024_A-16", "bert_en_cased_L-24_H-1024_A-16", "bert_en_cased_L-12_H-768_A-12", "bert_zh_L-12_H-768_A-12", "bert_multi_cased_L-12_H-768_A-12"]
hub_encoder = hub.KerasLayer(f"https://tfhub.dev/tensorflow/{hub_model_name}/3",
trainable=True)
print(f"The Hub encoder has {len(hub_encoder.trainable_variables)} trainable variables")
```
Test run it on a batch of data:
```
result = hub_encoder(
inputs=dict(
input_word_ids=glue_train['input_word_ids'][:10],
input_mask=glue_train['input_mask'][:10],
input_type_ids=glue_train['input_type_ids'][:10],),
training=False,
)
print("Pooled output shape:", result['pooled_output'].shape)
print("Sequence output shape:", result['sequence_output'].shape)
```
At this point it would be simple to add a classification head yourself.
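For example, a hand-made head could consist of dropout plus a dense layer on the encoder's pooled output, built with the Keras functional API. The sketch below is not the approach used elsewhere in this tutorial, and the dropout rate and layer names are arbitrary choices:
```
# Sketch: a manual classification head on top of the Hub encoder.
# The encoder accepts a dict of int32 tensors and returns a dict of outputs.
head_inputs = dict(
    input_word_ids=tf.keras.layers.Input(shape=(None,), dtype=tf.int32),
    input_mask=tf.keras.layers.Input(shape=(None,), dtype=tf.int32),
    input_type_ids=tf.keras.layers.Input(shape=(None,), dtype=tf.int32))

# 'pooled_output' summarises the whole sequence (the [CLS] token representation)
pooled = hub_encoder(head_inputs)['pooled_output']

dropped = tf.keras.layers.Dropout(0.1)(pooled)
logits = tf.keras.layers.Dense(2, name='hub_head_classifier')(dropped)

hub_head_classifier = tf.keras.Model(inputs=head_inputs, outputs=logits)
```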
The `bert_models.classifier_model` function can also build a classifier onto the encoder from TensorFlow Hub:
```
hub_classifier = nlp.modeling.models.BertClassifier(
bert_encoder,
num_classes=2,
dropout_rate=0.1,
initializer=tf.keras.initializers.TruncatedNormal(
stddev=0.02))
```
The one downside to loading this model from TFHub is that the structure of internal keras layers is not restored. So it's more difficult to inspect or modify the model. The `BertEncoder` model is now a single layer:
```
tf.keras.utils.plot_model(hub_classifier, show_shapes=True, dpi=64)
try:
tf.keras.utils.plot_model(hub_encoder, show_shapes=True, dpi=64)
assert False
except Exception as e:
print(f"{type(e).__name__}: {e}")
```
<a id="model_builder_functions"></a>
### Low level model building
If you need more control over the construction of the model, it's worth noting that the `classifier_model` function used earlier is really just a thin wrapper over the `nlp.modeling.networks.BertEncoder` and `nlp.modeling.models.BertClassifier` classes. Just remember that if you start modifying the architecture, it may not be correct or possible to reload the pre-trained checkpoint, so you'll need to retrain from scratch.
Build the encoder:
```
bert_encoder_config = config_dict.copy()
# You need to rename a few fields to make this work:
bert_encoder_config['attention_dropout_rate'] = bert_encoder_config.pop('attention_probs_dropout_prob')
bert_encoder_config['activation'] = tf_utils.get_activation(bert_encoder_config.pop('hidden_act'))
bert_encoder_config['dropout_rate'] = bert_encoder_config.pop('hidden_dropout_prob')
bert_encoder_config['initializer'] = tf.keras.initializers.TruncatedNormal(
stddev=bert_encoder_config.pop('initializer_range'))
bert_encoder_config['max_sequence_length'] = bert_encoder_config.pop('max_position_embeddings')
bert_encoder_config['num_layers'] = bert_encoder_config.pop('num_hidden_layers')
bert_encoder_config
manual_encoder = nlp.modeling.networks.BertEncoder(**bert_encoder_config)
```
Restore the weights:
```
checkpoint = tf.train.Checkpoint(encoder=manual_encoder)
checkpoint.read(
os.path.join(gs_folder_bert, 'bert_model.ckpt')).assert_consumed()
```
Test run it:
```
result = manual_encoder(my_examples, training=True)
print("Sequence output shape:", result[0].shape)
print("Pooled output shape:", result[1].shape)
```
Wrap it in a classifier:
```
manual_classifier = nlp.modeling.models.BertClassifier(
bert_encoder,
num_classes=2,
dropout_rate=bert_encoder_config['dropout_rate'],
initializer=bert_encoder_config['initializer'])
manual_classifier(my_examples, training=True).numpy()
```
<a id="optiizer_schedule"></a>
### Optimizers and schedules
The optimizer used to train the model was created using the `nlp.optimization.create_optimizer` function:
```
optimizer = nlp.optimization.create_optimizer(
2e-5, num_train_steps=num_train_steps, num_warmup_steps=warmup_steps)
```
That high level wrapper sets up the learning rate schedules and the optimizer.
The base learning rate schedule used here is a linear decay to zero over the training run:
```
epochs = 3
batch_size = 32
eval_batch_size = 32
train_data_size = len(glue_train_labels)
steps_per_epoch = int(train_data_size / batch_size)
num_train_steps = steps_per_epoch * epochs
decay_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate=2e-5,
decay_steps=num_train_steps,
end_learning_rate=0)
plt.plot([decay_schedule(n) for n in range(num_train_steps)])
```
This, in turn is wrapped in a `WarmUp` schedule that linearly increases the learning rate to the target value over the first 10% of training:
```
warmup_steps = num_train_steps * 0.1
warmup_schedule = nlp.optimization.WarmUp(
initial_learning_rate=2e-5,
decay_schedule_fn=decay_schedule,
warmup_steps=warmup_steps)
# The warmup overshoots, because it warms up to the `initial_learning_rate`
# following the original implementation. You can set
# `initial_learning_rate=decay_schedule(warmup_steps)` if you don't like the
# overshoot.
plt.plot([warmup_schedule(n) for n in range(num_train_steps)])
```
Then create the `nlp.optimization.AdamWeightDecay` using that schedule, configured for the BERT model:
```
optimizer = nlp.optimization.AdamWeightDecay(
learning_rate=warmup_schedule,
weight_decay_rate=0.01,
epsilon=1e-6,
exclude_from_weight_decay=['LayerNorm', 'layer_norm', 'bias'])
```
# The Trigger Function
## Transforming Continuous Preferences into Discrete Events
This notebook is a mathematical deep dive into the derivation of the Trigger Function used in Conviction Voting for the 1Hive use case.
The role of the trigger function in the conviction voting algorithm is to determine if a sufficient amount of conviction has accumulated in support of a particular proposal, at which point it passes from being a candidate proposal to an active proposal.
In the 1Hive use case for conviction, proposals map to precise quantities of resources $r$ requested from a communal resource pool $R$ (which is time varying, $R_t$, but we will drop the subscript for ease of reading). Furthermore, there is a supply of governance tokens $S$ which are being used as part of the governance process. In the implementation, the quantity $S$ will be the effective supply, which is the subset of the total supply for the governance token in question.
We assume a time varying supply $S_t$ and therefore we can interpret $S_t$ as the effective supply without loss of generality. From here forward, we will drop the subscript and refer to $S$ for ease of reading. The process of passing a proposal results in an allocation of $r$ funds as shown in the figure below.

This diagram shows the trigger function logic, which depends on token supply $S$, total resources available $R$ and total conviction $y$ at time $t$, as well as the proposal's requested resources $r$, the maximum share of funds a proposal can take ($\beta$) and a tuning parameter for the trigger function ($\rho$). Essentially, this function caps the share of funds a single proposal can request: the conviction required grows without bound as the requested share $r/R$ approaches $\beta$ (the curve resembles an inverse-square repulsion), so no proposal may request more than a $\beta$ share of total funds.
<br>
## Parameter Definition
* $\alpha \in (0,1)$ is the parameter that determines the half life decay rate of conviction, as defined in the [Deriving Alpha notebook](https://nbviewer.jupyter.org/github/BlockScience/Aragon_Conviction_Voting/blob/master/models/v3/Deriving_Alpha.ipynb), and should be tuned according to a desired half life (see the short sketch just after this parameter list).
* $\beta\in (0,1)$ is the max % of total funds that can be discharged by a single proposal, and is the asymptotic limit for the trigger function. It is impossible to discharge more than $\beta$ share of funds.
* $\rho \in (0, \beta^2)$ is the scale factor for the trigger function. Note that we require $0<\rho <\beta^2$
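As a quick aside, here is a minimal sketch (an assumption, not from the original notebook) of how $\alpha$ could be backed out from a target half life, using the conviction update $y_t = \alpha y_{t-1} + x_t$ described in the linked Deriving Alpha notebook: with no new support, conviction shrinks by a factor of $\alpha$ per timestep, so a half life of $h$ timesteps implies $\alpha^h = \tfrac{1}{2}$, i.e. $\alpha = 2^{-1/h}$ (consistent with the maximum achievable conviction $\frac{S}{1-\alpha}$ used later):
```
# Minimal sketch (assumption): derive alpha from a desired half life, measured in
# whatever timestep unit the conviction model uses, via alpha = 2 ** (-1 / half_life).
def alpha_from_half_life(half_life_timesteps):
    return 2 ** (-1.0 / half_life_timesteps)

print(alpha_from_half_life(3))   # ~0.7937: conviction decays to half its value in 3 timesteps
print(alpha_from_half_life(7))   # ~0.9057
```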
The trigger function is defined by: $y^*(r) = \frac{\rho S}{(1-\alpha)\left(\beta - \frac{r}{R}\right)^2 }$
The geometric properties of this function with respect to the parameter choices are shown here:

On this plot we can see that there is a maximum conviction that can be reached for a proposal, and also a maximum achievable funds released for a single proposal, which are important bounds for a community to establish for their funding pool.
Note that requiring $0<\rho <\beta^2$ guarantees that $0<\frac{\rho}{\beta^2}<1$ and $0<\beta - \sqrt{\rho} <\beta <1$.
## Initializing Conditions for Plot Series
```
import numpy as np
import matplotlib.pyplot as plt
import inspect
import warnings
warnings.filterwarnings("ignore")
from cadCAD.configuration.utils import config_sim
from model.parts.utils import *
from model.parts.sys_params import *
initial_values
params
supply = initial_values['supply']
funds = initial_values['funds']
alpha = params['alpha'][0]
beta = params['beta'][0]
rho = params['rho'][0]
def trigger(requested, funds, supply, alpha, beta, rho):
'''
Function that determines threshold for proposals being accepted.
Refactored slightly from built in to be explicit for demo
'''
share = requested/funds
if share < beta:
threshold = rho*supply/(beta-share)**2 * 1/(1-alpha)
return threshold
else:
return np.inf
```
The actual trigger function used in the V3 simulation is below:
```
trigger_simulation = inspect.getsource(trigger_threshold)
print(trigger_simulation)
```
## Simple derivations
We can plug in some boundary conditions to determine our minimum required and maximum achievable conviction. We can also determine the maximum achievable funds a proposal is able to request, to understand the upper bounds of individual proposal funding.
* min_required_conviction = $y^*(0) = \frac{\rho S}{(1-\alpha)\beta^2}$
* max_achievable_conviction = $\frac{S}{1-\alpha}$
* min_required_conviction_as_a_share_of_max = $\frac{\rho S}{(1-\alpha)\beta^2} \cdot \frac{1-\alpha}{S} = \frac{\rho}{\beta^2}$
* To compute the max_achievable_request, solve: $\frac{S}{1-\alpha} = \frac{\rho S}{(1-\alpha)\left(\beta-\frac{r}{R}\right)^2}$
* max_achievable_request = $r = (\beta -\sqrt{\rho})R$, where $R$ is the total funding pool (`funds` in the code below)
```
min_required_conviction = trigger(0, funds, supply, alpha, beta, rho)
print("min_required_conviction = "+str(min_required_conviction))
max_achievable_conviction = supply/(1-alpha)
print("max_achievable_conviction = "+str(max_achievable_conviction))
print("")
print("min_achievable_conviction_as_a_share_of_max_achievable_conviction = "+str(min_required_conviction/max_achievable_conviction))
print("")
max_request = beta*funds
max_achievable_request = (beta - np.sqrt(rho))*funds
print("max_achievable_request = "+str(max_achievable_request))
print("total_funds = "+str(funds))
print("")
print("max_achievable_request_as_a_share_of_funds = "+str(max_achievable_request/funds))
granularity = 100
requests = np.arange(0,.9*max_request, max_request/granularity)
requests_as_share_of_funds = requests/funds
conviction_required = np.array([trigger(r, funds, supply, alpha, beta, rho) for r in requests])
conviction_required_as_share_of_max = conviction_required/max_achievable_conviction
```
## Plot series 1: Examining the Shape of the Trigger Function Compared to Absolute Funds Requested
These plots demonstrate the trigger function shape, showing how the amount of conviction required increases as the amount of requested (absolute) funds increases. These plots are based on alpha, Supply and Funds as initialized above.
```
shape_of_trigger_in_absolute_terms(requests, conviction_required,max_request,
max_achievable_request,max_achievable_conviction,
min_required_conviction)
shape_of_trigger_in_absolute_terms(requests, conviction_required,max_request,
max_achievable_request,max_achievable_conviction,
min_required_conviction,log=True)
```
The above plots look at the shape of the trigger function on a linear and log scale, where you can see conviction required to pass a proposal increase with the absolute amount of funds requested.
## Plot series 2: Examining the Shape of the Trigger Function Compared to Relative Funds Requested
These plots demonstrate the trigger function shape, showing how the amount of conviction required increases as the **proportion** of requested funds (relative to total funds) increases. These plots are based on alpha, Supply and Funds as initialized above.
```
shape_of_trigger_in_relative_terms(requests_as_share_of_funds, conviction_required_as_share_of_max
,max_request, funds, max_achievable_request,
max_achievable_conviction,
min_required_conviction)
shape_of_trigger_in_relative_terms(requests_as_share_of_funds, conviction_required_as_share_of_max
,max_request, funds, max_achievable_request,
max_achievable_conviction,
min_required_conviction,log=True)
```
The above plots look at the shape of the trigger function on a linear and log scale, where you can see conviction required to pass a proposal increase with the percentage of total funds requested. The two green lines intersect at persistent, unanimous support for a proposal, and the maximum that can be requested (in this case) is 15% of the total pool of funds.
## Plot series 3: Heat Maps
The next set of plots show that conviction required increases to a maximum with the proportion of total funds requested, capping out (in this case) at 15% of total funds available. Note that since we are using **relative** funds requested, these plots are invariant to alpha and effective supply. (In other words, once conviction is expressed as a share of the maximum achievable conviction and requests as a share of total funds, the factors involving $\alpha$ and $S$ cancel out, so the required-conviction curve is unchanged.)
```
params
supply_sweep = trigger_sweep('effective_supply',trigger, params, supply)
alpha_sweep = trigger_sweep('alpha',trigger, params, supply)
trigger_grid(supply_sweep, alpha_sweep)
```
## Conclusion
We recommend that implementers carefully consider their choices of the $\beta$ (% of total funds that can be requested) and $\rho$ (scaling factor) parameters, as these have a large impact on the system design. $\alpha$ and other parameters can be derived from these, but matter less when considering proposals that request a relative share of total funds.
To get a feel for how $\beta$ and $\rho$ impact the trigger function, play around with this [desmos graph](https://www.desmos.com/calculator/yxklrjs5m3). (Note: this is just a tool to play with the curve shape, don't be confused by variable names! $\rho$ is w in the calculator, due to the lack of greek characters ;)
```
import dataiku
import numpy as np
from path import Path
import oct2py as op
import pandas as pd
Path("octave-workspace").remove_p()
Path("Detector.mat").remove_p()
Path("Log.txt").remove_p()
model_name = 'one_tree_1_img'
octave_code = 'octave'
```
Compile the **C++** 'MEX' files for Octave/Matlab on the local OS - the commands below may need some massaging depending on the platform
```
%%capture
"""
!cd {octave_code}/toolbox/channels/private/ ; mkoctfile --mex -DMATLAB_MEX_FILE rgbConvertMex.cpp
!cd {octave_code}/toolbox/channels/private/ ; mkoctfile --mex -DMATLAB_MEX_FILE gradientMex.cpp
!cd {octave_code}/toolbox/channels/private/ ; mkoctfile --mex -DMATLAB_MEX_FILE convConst.cpp
!cd {octave_code}/toolbox/channels/private/ ; mkoctfile --mex -DMATLAB_MEX_FILE imPadMex.cpp
!cd {octave_code}/toolbox/channels/private/ ; mkoctfile --mex -DMATLAB_MEX_FILE imResampleMex.cpp
!cd {octave_code}/toolbox/classify/private/ ; mkoctfile --mex -DMATLAB_MEX_FILE binaryTreeTrain1.cpp
!cd {octave_code}/toolbox/classify/private/ ; mkoctfile --mex -DMATLAB_MEX_FILE forestInds.cpp
"""
octave = op.Oct2Py()
octave.restart()
octave.eval('clear all')
%%capture
octave.addpath(octave.genpath(octave_code))
posWinDir = 'images/simple/posWinDir'
negWinDir = 'images/simple/negWinDir'
params = pd.read_csv('train_params.csv')
# params = train_params_df.loc[train_params_df['name'] == model_name]
opts_modelDs_height = params['opts.modelDs.height'].values[0]
opts_modelDs_width = params['opts.modelDs.width'].values[0]
opts_modelDsPad_height = params['opts.modelDsPad.height'].values[0]
opts_modelDsPad_width = params['opts.modelDsPad.width'].values[0]
stride = params['stride'].values[0]
cascThr = params['cascThr'].values[0]
cascCal = params['cascCal'].values[0]
nWeak = params['nWeak'].values[0]
seed = params['seed'].values[0]
nPos = params['nPos'].values[0]
nNeg = params['nNeg'].values[0]
nPerNeg = params['nPerNeg'].values[0]
nAccNeg = params['nAccNeg'].values[0]
winsSave = params['winsSave'].values[0]
%%capture
"""
opts_modelDs_height = 19
opts_modelDs_width = 19
opts_modelDsPad_height = 20
opts_modelDsPad_width = 20
stride = 1
cascThr = -1
cascCal = 0.005
nWeak = 1
seed = 0
nPos = 1
nNeg = 0
nPerNeg = 1
nAccNeg = 1
winsSave = 0
"""
Path("Detector.mat").remove_p()
Path("Log.txt").remove_p()
octave.eval("opts=acfTrain();")
octave.eval("opts.posWinDir = '" + posWinDir + "';")
octave.eval("opts.negWinDir = '" + negWinDir + "';")
octave.eval("opts.modelDs = [" + str(opts_modelDs_height) + " " + str(opts_modelDs_width) + "];")
octave.eval("opts.modelDsPad = [" + str(opts_modelDsPad_height) + " " + str(opts_modelDsPad_width) + "];")
octave.eval("opts.stride = " + str(stride) + ";")
octave.eval("opts.cascThr = " + str(cascThr) + ";")
octave.eval("opts.cascCal = " + str(cascCal) + ";")
octave.eval("opts.nWeak = " + str(nWeak) + ";")
octave.eval("opts.seed = " + str(seed) + ";")
octave.eval("opts.nPos = " + str(nPos) + ";")
octave.eval("opts.nNeg = " + str(nNeg) + ";")
octave.eval("opts.nPerNeg = " + str(nPerNeg) + ";")
octave.eval("opts.nAccNeg = " + str(nAccNeg) + ";")
octave.eval("opts.winsSave = " + str(winsSave) + ";")
octave.eval("opts.nWeak = " + str(1) + ";")
!ls {posWinDir}
!ls {negWinDir}
Path("Detector.mat").remove_p()
Path("Log.txt").remove_p()
octave.eval("model = acfTrain(opts);")
```
Convert **Matlab** classifier object to JSON
```
octave.eval("simple_model_json = savejson('" + model_name + "',model);")
octave.eval("json_file = fopen('" + model_name + ".json" + "', 'w');")
octave.eval("fdisp(json_file, simple_model_json);")
octave.eval("fclose(json_file);")
!echo {model_name}
!ls {model_name}.json
!cat {model_name}.json
Path("octave-workspace").remove_p()
Path("Detector.mat").remove_p()
Path("Log.txt").remove_p()
# !ls {train_model_JSON}
# !head -10 {train_model_JSON}/{model_name}.json
```
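As a follow-on check (not part of the original workflow), the exported JSON can be read back in Python to confirm the detector structure was saved; this sketch assumes the `{model_name}.json` file written above exists in the working directory and contains valid JSON:
```
# Minimal sketch (assumption): load the JSON written by savejson/fdisp above and
# list its top-level keys; savejson should nest the detector under `model_name`.
import json

with open(model_name + ".json") as f:
    exported_model = json.load(f)

print(list(exported_model.keys()))              # expect [model_name]
print(list(exported_model[model_name].keys()))  # fields of the exported detector struct
```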
## Supervised Learning
## Project: Finding Donors for *CharityML*
In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `'TODO'` statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
>**Note:** Please specify WHICH VERSION OF PYTHON you are using when submitting this notebook. Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.
## Getting Started
In this project, you will employ several supervised algorithms of your choice to accurately model individuals' income using data collected from the 1994 U.S. Census. You will then choose the best candidate algorithm from preliminary results and further optimize this algorithm to best model the data. Your goal with this implementation is to construct a model that accurately predicts whether an individual makes more than $50,000. This sort of task can arise in a non-profit setting, where organizations survive on donations. Understanding an individual's income can help a non-profit better understand how large of a donation to request, or whether or not they should reach out to begin with. While it can be difficult to determine an individual's general income bracket directly from public sources, we can (as we will see) infer this value from other publicly available features.
The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Census+Income). The dataset was donated by Ron Kohavi and Barry Becker, after being published in the article _"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"_. You can find the article by Ron Kohavi [online](https://www.aaai.org/Papers/KDD/1996/KDD96-033.pdf). The data we investigate here consists of small changes to the original dataset, such as removing the `'fnlwgt'` feature and records with missing or ill-formatted entries.
----
## Exploring the Data
Run the code cell below to load necessary Python libraries and load the census data. Note that the last column from this dataset, `'income'`, will be our target label (whether an individual makes more than, or at most, $50,000 annually). All other columns are features about each individual in the census database.
```
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualization code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Census dataset
data = pd.read_csv("census.csv")
# Success - Display the first record
display(data.head(n=1))
```
### Implementation: Data Exploration
A cursory investigation of the dataset will determine how many individuals fit into either group, and will tell us about the percentage of these individuals making more than \$50,000. In the code cell below, you will need to compute the following:
- The total number of records, `'n_records'`
- The number of individuals making more than \$50,000 annually, `'n_greater_50k'`.
- The number of individuals making at most \$50,000 annually, `'n_at_most_50k'`.
- The percentage of individuals making more than \$50,000 annually, `'greater_percent'`.
** HINT: ** You may need to look at the table above to understand how the `'income'` entries are formatted.
```
# TODO: Total number of records
#n_records = data.count(axis='rows')
n_records = len(data)
# TODO: Number of records where individual's income is more than $50,000
n_greater_50k = len(data[data.income==">50K"])
# TODO: Number of records where individual's income is at most $50,000
n_at_most_50k = len(data[data.income=="<=50K"])
# TODO: Percentage of individuals whose income is more than $50,000
greater_percent = n_greater_50k/n_records * 100
# Print the results
print("Total number of records: {}".format(n_records))
print("Individuals making more than $50,000: {}".format(n_greater_50k))
print("Individuals making at most $50,000: {}".format(n_at_most_50k))
print("Percentage of individuals making more than $50,000: {}%".format(greater_percent))
```
** Featureset Exploration **
* **age**: continuous.
* **workclass**: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked.
* **education**: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool.
* **education-num**: continuous.
* **marital-status**: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse.
* **occupation**: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces.
* **relationship**: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried.
* **race**: Black, White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other.
* **sex**: Female, Male.
* **capital-gain**: continuous.
* **capital-loss**: continuous.
* **hours-per-week**: continuous.
* **native-country**: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands.
----
## Preparing the Data
Before data can be used as input for machine learning algorithms, it often must be cleaned, formatted, and restructured — this is typically known as **preprocessing**. Fortunately, for this dataset, there are no invalid or missing entries we must deal with, however, there are some qualities about certain features that must be adjusted. This preprocessing can help tremendously with the outcome and predictive power of nearly all learning algorithms.
### Transforming Skewed Continuous Features
A dataset may sometimes contain at least one feature whose values tend to lie near a single number, but will also have a non-trivial number of vastly larger or smaller values than that single number. Algorithms can be sensitive to such distributions of values and can underperform if the range is not properly normalized. With the census dataset, two features fit this description: `'capital-gain'` and `'capital-loss'`.
Run the code cell below to plot a histogram of these two features. Note the range of the values present and how they are distributed.
```
# Split the data into features and target label
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
# Visualize skewed continuous features of original data
vs.distribution(data)
```
For highly-skewed feature distributions such as `'capital-gain'` and `'capital-loss'`, it is common practice to apply a <a href="https://en.wikipedia.org/wiki/Data_transformation_(statistics)">logarithmic transformation</a> on the data so that the very large and very small values do not negatively affect the performance of a learning algorithm. Using a logarithmic transformation significantly reduces the range of values caused by outliers. Care must be taken when applying this transformation, however: the logarithm of `0` is undefined, so we must translate the values by a small amount above `0` to apply the logarithm successfully.
Run the code cell below to perform a transformation on the data and visualize the results. Again, note the range of values and how they are distributed.
```
# Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_log_transformed = pd.DataFrame(data = features_raw)
features_log_transformed[skewed] = features_raw[skewed].apply(lambda x: np.log(x + 1))
# Visualize the new log distributions
vs.distribution(features_log_transformed, transformed = True)
```
### Normalizing Numerical Features
In addition to performing transformations on features that are highly skewed, it is often good practice to perform some type of scaling on numerical features. Applying a scaling to the data does not change the shape of each feature's distribution (such as `'capital-gain'` or `'capital-loss'` above); however, normalization ensures that each feature is treated equally when applying supervised learners. Note that once scaling is applied, observing the data in its raw form will no longer have the same original meaning, as exampled below.
Run the code cell below to normalize each numerical feature. We will use [`sklearn.preprocessing.MinMaxScaler`](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) for this.
```
# Import sklearn.preprocessing.MinMaxScaler
from sklearn.preprocessing import MinMaxScaler
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler() # default=(0, 1)
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_log_minmax_transform = pd.DataFrame(data = features_log_transformed)
features_log_minmax_transform[numerical] = scaler.fit_transform(features_log_transformed[numerical])
# Show an example of a record with scaling applied
display(features_log_minmax_transform.head(n = 5))
```
### Implementation: Data Preprocessing
From the table in **Exploring the Data** above, we can see there are several features for each record that are non-numeric. Typically, learning algorithms expect input to be numeric, which requires that non-numeric features (called *categorical variables*) be converted. One popular way to convert categorical variables is by using the **one-hot encoding** scheme. One-hot encoding creates a _"dummy"_ variable for each possible category of each non-numeric feature. For example, assume `someFeature` has three possible entries: `A`, `B`, or `C`. We then encode this feature into `someFeature_A`, `someFeature_B` and `someFeature_C`.
| | someFeature | | someFeature_A | someFeature_B | someFeature_C |
| :-: | :-: | | :-: | :-: | :-: |
| 0 | B | | 0 | 1 | 0 |
| 1 | C | ----> one-hot encode ----> | 0 | 0 | 1 |
| 2 | A | | 1 | 0 | 0 |
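For illustration, here is a minimal sketch of that toy encoding using `pandas.get_dummies` (the same function used in the implementation below); the `someFeature` column is just the hypothetical example from the table:
```
# Minimal sketch: one-hot encode the toy 'someFeature' example from the table above
toy = pd.DataFrame({'someFeature': ['B', 'C', 'A']})
display(pd.get_dummies(toy))  # produces someFeature_A, someFeature_B, someFeature_C columns
```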
Additionally, as with the non-numeric features, we need to convert the non-numeric target label, `'income'` to numerical values for the learning algorithm to work. Since there are only two possible categories for this label ("<=50K" and ">50K"), we can avoid using one-hot encoding and simply encode these two categories as `0` and `1`, respectively. In code cell below, you will need to implement the following:
- Use [`pandas.get_dummies()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html?highlight=get_dummies#pandas.get_dummies) to perform one-hot encoding on the `'features_log_minmax_transform'` data.
- Convert the target label `'income_raw'` to numerical entries.
- Set records with "<=50K" to `0` and records with ">50K" to `1`.
```
# TODO: One-hot encode the 'features_log_minmax_transform' data using pandas.get_dummies()
features_final = pd.get_dummies(features_log_minmax_transform)
# TODO: Encode the 'income_raw' data to numerical values
income = income_raw.replace(["<=50K",">50K"],[0,1])
# Print the number of features after one-hot encoding
encoded = list(features_final.columns)
print("{} total features after one-hot encoding.".format(len(encoded)))
# Print the encoded feature names
print(encoded)
```
### Shuffle and Split Data
Now all _categorical variables_ have been converted into numerical features, and all numerical features have been normalized. As always, we will now split the data (both features and their labels) into training and test sets. 80% of the data will be used for training and 20% for testing.
Run the code cell below to perform this split.
```
# Import train_test_split (sklearn.cross_validation is deprecated/removed in current scikit-learn)
from sklearn.model_selection import train_test_split
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features_final,
income,
test_size = 0.2,
random_state = 0)
# Show the results of the split
print("Training set has {} samples.".format(X_train.shape[0]))
print("Testing set has {} samples.".format(X_test.shape[0]))
```
----
## Evaluating Model Performance
In this section, we will investigate four different algorithms, and determine which is best at modeling the data. Three of these algorithms will be supervised learners of your choice, and the fourth algorithm is known as a *naive predictor*.
### Metrics and the Naive Predictor
*CharityML*, equipped with their research, knows individuals that make more than \$50,000 are most likely to donate to their charity. Because of this, *CharityML* is particularly interested in predicting who makes more than \$50,000 accurately. It would seem that using **accuracy** as a metric for evaluating a particular model's performance would be appropriate. Additionally, identifying someone that *does not* make more than \$50,000 as someone who does would be detrimental to *CharityML*, since they are looking to find individuals willing to donate. Therefore, a model's ability to precisely predict those that make more than \$50,000 is *more important* than the model's ability to **recall** those individuals. We can use **F-beta score** as a metric that considers both precision and recall:
$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$
In particular, when $\beta = 0.5$, more emphasis is placed on precision. This is called the **F$_{0.5}$ score** (or F-score for simplicity).
Looking at the distribution of classes (those who make at most \$50,000, and those who make more), it's clear most individuals do not make more than \$50,000. This can greatly affect **accuracy**, since we could simply say *"this person does not make more than \$50,000"* and generally be right, without ever looking at the data! Making such a statement would be called **naive**, since we have not considered any information to substantiate the claim. It is always important to consider the *naive prediction* for your data, to help establish a benchmark for whether a model is performing well. That being said, using that prediction would be pointless: If we predicted all people made less than \$50,000, *CharityML* would identify no one as donors.
#### Note: Recap of accuracy, precision, recall
** Accuracy ** measures how often the classifier makes the correct prediction. It’s the ratio of the number of correct predictions to the total number of predictions (the number of test data points).
** Precision ** tells us what proportion of messages we classified as spam, actually were spam.
It is a ratio of true positives (messages classified as spam, and which are actually spam) to all positives (all messages classified as spam, irrespective of whether that was the correct classification); in other words, it is the ratio of
`[True Positives/(True Positives + False Positives)]`
** Recall(sensitivity)** tells us what proportion of messages that actually were spam were classified by us as spam.
It is a ratio of true positives (messages classified as spam, and which are actually spam) to all the messages that were actually spam; in other words, it is the ratio of
`[True Positives/(True Positives + False Negatives)]`
For classification problems that are skewed in their classification distributions like in our case, for example if we had 100 text messages and only 2 were spam and the rest 98 weren't, accuracy by itself is not a very good metric. We could classify 90 messages as not spam (including the 2 that were spam but we classify them as not spam, hence they would be false negatives) and 10 as spam (all 10 false positives) and still get a reasonably good accuracy score. For such cases, precision and recall come in very handy. These two metrics can be combined to get the F1 score, which is the weighted average (harmonic mean) of the precision and recall scores. This score can range from 0 to 1, with 1 being the best possible F1 score (we take the harmonic mean as we are dealing with ratios).
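To make the recap concrete, here is a small numeric sketch (the confusion-matrix counts are hypothetical, not taken from the census data) computing accuracy, precision, recall, and the F$_{0.5}$ score by hand:
```
# Hypothetical confusion-matrix counts, for illustration only
TP, FP, TN, FN = 40, 10, 45, 5

accuracy  = (TP + TN) / (TP + FP + TN + FN)  # 0.85
precision = TP / (TP + FP)                   # 0.80
recall    = TP / (TP + FN)                   # ~0.889

beta = 0.5
f_beta = (1 + beta**2) * (precision * recall) / (beta**2 * precision + recall)
print(accuracy, precision, recall, f_beta)   # f_beta is ~0.816
```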
### Question 1 - Naive Predictor Performance
* If we chose a model that always predicted an individual made more than $50,000, what would that model's accuracy and F-score be on this dataset? You must use the code cell below and assign your results to `'accuracy'` and `'fscore'` to be used later.
** Please note ** that the purpose of generating a naive predictor is simply to show what a base model without any intelligence would look like. In the real world, ideally your base model would be either the results of a previous model or could be based on a research paper upon which you are looking to improve. When there is no benchmark model set, getting a result better than random choice is a place you could start from.
** HINT: **
* When we have a model that always predicts '1' (i.e. the individual makes more than 50k) then our model will have no True Negatives(TN) or False Negatives(FN) as we are not making any negative('0' value) predictions. Therefore our Accuracy in this case becomes the same as our Precision(True Positives/(True Positives + False Positives)) as every prediction that we have made with value '1' that should have '0' becomes a False Positive; therefore our denominator in this case is the total number of records we have in total.
* Our Recall score(True Positives/(True Positives + False Negatives)) in this setting becomes 1 as we have no False Negatives.
```
'''
TP = np.sum(income) # Counting the ones as this is the naive case. Note that 'income' is the 'income_raw' data
encoded to numerical values done in the data preprocessing step.
FP = income.count() - TP # Specific to the naive case
TN = 0 # No predicted negatives in the naive case
FN = 0 # No predicted negatives in the naive case
'''
# TODO: Calculate accuracy, precision and recall
TP = np.sum(income)
FP = income.count() - TP
TN = 0
FN = 0
#print(TP)
#print(FP)
accuracy = TP/(TP+FP)
recall = TP/(TP+FN)
precision = accuracy
# TODO: Calculate F-score using the formula above for beta = 0.5 and correct values for precision and recall.
fscore = (1+0.5**2)*(precision*recall)/(0.5**2*precision+recall)
# Print the results
print("Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore))
```
### Supervised Learning Models
**The following are some of the supervised learning models that are currently available in** [`scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html) **that you may choose from:**
- Gaussian Naive Bayes (GaussianNB)
- Decision Trees
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (KNeighbors)
- Stochastic Gradient Descent Classifier (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression
### Question 2 - Model Application
List three of the supervised learning models above that are appropriate for this problem that you will test on the census data. For each model chosen
- Describe one real-world application in industry where the model can be applied.
- What are the strengths of the model; when does it perform well?
- What are the weaknesses of the model; when does it perform poorly?
- What makes this model a good candidate for the problem, given what you know about the data?
** HINT: **
Structure your answer in the same format as above^, with 4 parts for each of the three models you pick. Please include references with your answer.
Answer:
Random Forest
* A random forest model can be applied in the medical domain to identify a disease based on symptoms. Example: detection of Alzheimer's disease.
* Strengths - works well on large datasets, gives estimates of feature importance, can be trained in parallel to speed up training, and reduces the variance of individual decision trees by combining many of them.
* Weaknesses - relatively high prediction time.
* Candidacy - random forest performs well when there are categorical variables, and with around 45,000 records there is enough data for it to train effectively.
Gradient Boosting
* Gradient boosting can be applied in ranking algorithms, such as the ranking of search results by search engines. Example: McRank: Learning to Rank Using Multiple Classification and Gradient Boosting.
* Strengths - works well on large datasets, reduces both bias and variance, and combines multiple weak predictors to build a strong predictor.
* Weaknesses - relatively high training time, and it can overfit if the data sample is too small.
* Candidacy - the data we have is sufficiently large and clean, so gradient boosting is suitable in this case.
Logistic Regression
* Logistic regression is very widely used for binary classification problems, a very common example being whether a user will buy a product or not.
* Strengths - fast training and prediction times, gives good results when there are relatively few features.
* Weaknesses - assumes a linear decision boundary, so it cannot capture complex relationships between features.
* Candidacy - the problem is a binary classification on clean data, which are favourable conditions for logistic regression.
### Implementation - Creating a Training and Predicting Pipeline
To properly evaluate the performance of each model you've chosen, it's important that you create a training and predicting pipeline that allows you to quickly and effectively train models using various sizes of training data and perform predictions on the testing data. Your implementation here will be used in the following section.
In the code block below, you will need to implement the following:
- Import `fbeta_score` and `accuracy_score` from [`sklearn.metrics`](http://scikit-learn.org/stable/modules/classes.html#sklearn-metrics-metrics).
- Fit the learner to the sampled training data and record the training time.
- Perform predictions on the test data `X_test`, and also on the first 300 training points `X_train[:300]`.
- Record the total prediction time.
- Calculate the accuracy score for both the training subset and testing set.
- Calculate the F-score for both the training subset and testing set.
- Make sure that you set the `beta` parameter!
```
# TODO: Import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score
from sklearn.metrics import accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
results = {}
# TODO: Fit the learner to the training data using slicing with 'sample_size' using .fit(training_features[:], training_labels[:])
start = time() # Get start time
learner.fit(X_train[:sample_size],y_train[:sample_size])
end = time() # Get end time
# TODO: Calculate the training time
results['train_time'] = end-start
# TODO: Get the predictions on the test set(X_test),
# then get predictions on the first 300 training samples(X_train) using .predict()
start = time() # Get start time
predictions_test = learner.predict(X_test)
predictions_train = learner.predict(X_train[:300])
end = time() # Get end time
# TODO: Calculate the total prediction time
results['pred_time'] = end-start
# TODO: Compute accuracy on the first 300 training samples which is y_train[:300]
results['acc_train'] = accuracy_score(y_train[:300],predictions_train)
# TODO: Compute accuracy on test set using accuracy_score()
results['acc_test'] = accuracy_score(y_test,predictions_test)
# TODO: Compute F-score on the the first 300 training samples using fbeta_score()
results['f_train'] = fbeta_score(y_train[:300],predictions_train,beta=0.5)
# TODO: Compute F-score on the test set which is y_test
results['f_test'] = fbeta_score(y_test,predictions_test,beta=0.5)
# Success
print("{} trained on {} samples.".format(learner.__class__.__name__, sample_size))
# Return the results
return results
```
### Implementation: Initial Model Evaluation
In the code cell, you will need to implement the following:
- Import the three supervised learning models you've discussed in the previous section.
- Initialize the three models and store them in `'clf_A'`, `'clf_B'`, and `'clf_C'`.
- Use a `'random_state'` for each model you use, if provided.
- **Note:** Use the default settings for each model — you will tune one specific model in a later section.
- Calculate the number of records equal to 1%, 10%, and 100% of the training data.
- Store those values in `'samples_1'`, `'samples_10'`, and `'samples_100'` respectively.
**Note:** Depending on which algorithms you chose, the following implementation may take some time to run!
```
# TODO: Import the three supervised learning models from sklearn
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
# TODO: Initialize the three models
clf_A = RandomForestClassifier(random_state = 0)
clf_B = GradientBoostingClassifier(random_state = 0)
clf_C = LogisticRegression(random_state = 0)
# TODO: Calculate the number of samples for 1%, 10%, and 100% of the training data
# HINT: samples_100 is the entire training set i.e. len(y_train)
# HINT: samples_10 is 10% of samples_100 (ensure to set the count of the values to be `int` and not `float`)
# HINT: samples_1 is 1% of samples_100 (ensure to set the count of the values to be `int` and not `float`)
samples_100 = len(y_train)
samples_10 = int(len(y_train)*10/100)
samples_1 = int(len(y_train)*1/100)
# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
# Run metrics visualization for the three supervised learning models chosen
vs.evaluate(results, accuracy, fscore)
```
----
## Improving Results
In this final section, you will choose from the three supervised learning models the *best* model to use on the census data. You will then perform a grid search optimization for the model over the entire training set (`X_train` and `y_train`) by tuning at least one parameter to improve upon the untuned model's F-score.
### Question 3 - Choosing the Best Model
* Based on the evaluation you performed earlier, in one to two paragraphs, explain to *CharityML* which of the three models you believe to be most appropriate for the task of identifying individuals that make more than \$50,000.
**HINT:**
Look at the graph at the bottom left of the cell above (the visualization created by `vs.evaluate(results, accuracy, fscore)`) and check the F-score for the testing set when 100% of the training set is used. Which model has the highest score? Your answer should include discussion of the:
* metrics - F-score on the testing set when 100% of the training data is used,
* prediction/training time
* the algorithm's suitability for the data.
Answer:
The Gradient Boosting classifier is the best of the three: when 100% of the training data is used it reaches the highest accuracy and F-score on the testing set, which shows it handles both precision and recall well, and its prediction time of roughly 0.04 s is perfectly acceptable for this task.
### Question 4 - Describing the Model in Layman's Terms
* In one to two paragraphs, explain to *CharityML*, in layman's terms, how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical jargon, such as describing equations.
HINT:
When explaining your model, if using external resources please include all citations.
Answer:
With the Gradient Boosting classifier we want to identify all the people who earn more than \$50K, so that CharityML knows whom to ask for donations and how much to request. The model learns this from selected features of the dataset (such as education, capital gains, and hours worked per week).
During training, the first model usually classifies rather poorly, but the method keeps training new models, each one focused on correcting the mistakes of the ones before it, and combines them all. The combined model is a much better predictor, which lets us classify the people who earn more than \$50K annually and therefore have a higher chance of donating.
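To make the "each new model corrects the previous ones" idea concrete, here is a small optional sketch (not part of the required implementation) that uses `staged_predict` on the `GradientBoostingClassifier` fitted in the evaluation loop above (`clf_B`) to show how test accuracy improves as boosting stages are added:
```
# Optional sketch: watch the boosted ensemble improve stage by stage.
# Assumes `clf_B` is the GradientBoostingClassifier fitted on the full training set above.
from sklearn.metrics import accuracy_score

for stage, stage_pred in enumerate(clf_B.staged_predict(X_test), start=1):
    if stage % 25 == 0:  # report every 25 boosting stages
        print("After {:3d} stages: test accuracy = {:.4f}".format(
            stage, accuracy_score(y_test, stage_pred)))
```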
### Implementation: Model Tuning
Fine tune the chosen model. Use grid search (`GridSearchCV`) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:
- Import [`sklearn.grid_search.GridSearchCV`](http://scikit-learn.org/0.17/modules/generated/sklearn.grid_search.GridSearchCV.html) and [`sklearn.metrics.make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html).
- Initialize the classifier you've chosen and store it in `clf`.
- Set a `random_state` if one is available to the same state you set before.
- Create a dictionary of parameters you wish to tune for the chosen model.
- Example: `parameters = {'parameter' : [list of values]}`.
- **Note:** Avoid tuning the `max_features` parameter of your learner if that parameter is available!
- Use `make_scorer` to create an `fbeta_score` scoring object (with $\beta = 0.5$).
- Perform grid search on the classifier `clf` using the `'scorer'`, and store it in `grid_obj`.
- Fit the grid search object to the training data (`X_train`, `y_train`), and store it in `grid_fit`.
**Note:** Depending on the algorithm chosen and the parameter list, the following implementation may take some time to run!
```
# TODO: Import 'GridSearchCV', 'make_scorer', and any other necessary libraries
# (note: in scikit-learn >= 0.18, GridSearchCV lives in sklearn.model_selection instead)
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import make_scorer
# TODO: Initialize the classifier
clf = GradientBoostingClassifier(random_state=42)
# TODO: Create the parameters list you wish to tune, using a dictionary if needed.
# HINT: parameters = {'parameter_1': [value1, value2], 'parameter_2': [value1, value2]}
parameters = {'n_estimators' : [100,200] , 'learning_rate' : [0.1,2]}
# TODO: Make an fbeta_score scoring object using make_scorer()
scorer = make_scorer(fbeta_score,beta=0.5)
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method using GridSearchCV()
grid_obj = GridSearchCV(clf , parameters , scoring=scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters using fit()
grid_fit = grid_obj.fit(X_train,y_train)
# Get the estimator
best_clf = grid_fit.best_estimator_
# Make predictions using the unoptimized and optimized models
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Report the before-and-after scores
print("Unoptimized model\n------")
print("Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5)))
print("\nOptimized Model\n------")
print("Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)))
print("Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)))
```
### Question 5 - Final Model Evaluation
* What is your optimized model's accuracy and F-score on the testing data?
* Are these scores better or worse than the unoptimized model?
* How do the results from your optimized model compare to the naive predictor benchmarks you found earlier in **Question 1**?
**Note:** Fill in the table below with your results, and then provide discussion in the **Answer** box.
#### Results:
| Metric | Unoptimized Model | Optimized Model |
| :------------: | :---------------: | :-------------: |
| Accuracy Score | 0.8630 | 0.8678 |
| F-score | 0.7395 | 0.7469 |
Answer:
* Yes, both scores are better than those of the unoptimized model (accuracy 0.8678 vs 0.8630, F-score 0.7469 vs 0.7395).
* Compared with the naive predictor benchmark from **Question 1**, accuracy improves by about 0.62.
* Compared with the naive predictor benchmark, the F-score improves by about 0.4552.
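For reference, a quick sketch of how these gains can be computed directly from quantities already defined in the notebook (`accuracy` and `fscore` are the naive-predictor benchmarks, `best_predictions` are the optimized model's test predictions):
```
# Sketch: improvement of the optimized model over the naive predictor,
# using variables defined in earlier cells.
print("Accuracy gain over naive predictor: {:.4f}".format(
    accuracy_score(y_test, best_predictions) - accuracy))
print("F-score gain over naive predictor:  {:.4f}".format(
    fbeta_score(y_test, best_predictions, beta=0.5) - fscore))
```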
----
## Feature Importance
An important task when performing supervised learning on a dataset like the census data we study here is determining which features provide the most predictive power. By focusing on the relationship between only a few crucial features and the target label we simplify our understanding of the phenomenon, which is almost always a useful thing to do. In the case of this project, that means we wish to identify a small number of features that most strongly predict whether an individual makes at most or more than \$50,000.
Choose a scikit-learn classifier (e.g., AdaBoost, random forests) that has a `feature_importances_` attribute, which ranks the importance of each feature according to the chosen classifier. In the next Python cell, fit this classifier to the training set and use this attribute to determine the top 5 most important features for the census dataset.
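As a quick sketch of what that looks like in practice (the classifier here is just an illustrative choice; any estimator exposing `feature_importances_` works), the importances can be paired with the one-hot-encoded column names and the top five printed directly:
```
# Sketch: rank the top 5 features by importance.
# `ranker` is a hypothetical name for whichever fitted estimator you choose.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

ranker = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
top5 = pd.Series(ranker.feature_importances_, index=X_train.columns).nlargest(5)
print(top5)
```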
### Question 6 - Feature Relevance Observation
When **Exploring the Data**, it was shown there are thirteen available features for each individual on record in the census data. Of these thirteen features, which five do you believe to be most important for prediction, in what order would you rank them, and why?
Answer:
* Income: It helps to decide whether a person can donate some amount or not, because someone who is not earning well might not donate.
* capital-gain: If a person is earning a high profit, there is a better chance they are willing to donate.
* capital-loss: If losses have occurred in an individual's profession, it is unlikely that they are up for a donation.
* education: The more and better educated an individual is, the higher the chances that they are open to donating.
* age: It is an important factor, as people of a certain working age are more open to donating than older people.
### Implementation - Extracting Feature Importance
Choose a `scikit-learn` supervised learning algorithm that has a `feature_importances_` attribute available for it. This attribute ranks the importance of each feature when making predictions based on the chosen algorithm.
In the code cell below, you will need to implement the following:
- Import a supervised learning model from sklearn if it is different from the three used earlier.
- Train the supervised model on the entire training set.
- Extract the feature importances using `'.feature_importances_'`.
```
# TODO: Import a supervised learning model that has 'feature_importances_'
# TODO: Train the supervised model on the training set using .fit(X_train, y_train)
model = GradientBoostingClassifier().fit(X_train,y_train)
# TODO: Extract the feature importances using .feature_importances_
importances = model.feature_importances_
# Plot
vs.feature_plot(importances, X_train, y_train)
```
### Question 7 - Extracting Feature Importance
Observe the visualization created above which displays the five most relevant features for predicting if an individual makes at most or above \$50,000.
* How do these five features compare to the five features you discussed in **Question 6**?
* If you were close to the same answer, how does this visualization confirm your thoughts?
* If you were not close, why do you think these features are more relevant?
Answer:
* Four of the five features match the ones I discussed in **Question 6**; the only exception is the marital-status ('married') feature.
* This largely confirms my thinking: picking out a handful of relevant features from the many available ones is both important and necessary, and the visualization makes it clear which features the model actually relies on.
### Feature Selection
How does a model perform if we only use a subset of all the available features in the data? With fewer features required to train, the expectation is that training and prediction time is much lower, at the cost of performance metrics. From the visualization above, we see that the top five most important features contribute more than half of the importance of **all** features present in the data. This hints that we can attempt to *reduce the feature space* and simplify the information required for the model to learn. The code cell below will use the same optimized model you found earlier, and train it on the same training set *with only the top five important features*.
```
# Import functionality for cloning a model
from sklearn.base import clone
# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]
# Train on the "best" model found from grid search earlier
clf = (clone(best_clf)).fit(X_train_reduced, y_train)
# Make new predictions
reduced_predictions = clf.predict(X_test_reduced)
# Report scores from the final model using both versions of data
print("Final Model trained on full data\n------")
print("Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)))
print("\nFinal Model trained on reduced data\n------")
print("Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5)))
```
### Question 8 - Effects of Feature Selection
* How does the final model's F-score and accuracy score on the reduced data using only five features compare to those same scores when all features are used?
* If training time was a factor, would you consider using the reduced data as your training set?
Answer: The accuracy and F-score both drop slightly when only the five most important features are used. Even if training time were a factor, I would not use the reduced data as my training set, because the predictive information in the full feature set matters more to me than the time saved.
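If training time really were the deciding factor, a quick (optional) timing comparison like the sketch below could settle it; it reuses `clone`, `best_clf`, and the full/reduced feature sets defined above:
```
# Sketch: compare training time and F-score for the full vs. reduced feature set.
from time import time
from sklearn.base import clone

for label, Xtr, Xte in [("full", X_train, X_test),
                        ("reduced", X_train_reduced, X_test_reduced)]:
    start = time()
    fitted = clone(best_clf).fit(Xtr, y_train)
    elapsed = time() - start
    score = fbeta_score(y_test, fitted.predict(Xte), beta=0.5)
    print("{:7s} feature set: {:6.1f} s to train, F-score {:.4f}".format(label, elapsed, score))
```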
> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to
**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
## Before You Submit
You will also need to run the following in order to convert the Jupyter notebook into HTML, so that your submission will include both files.
```
!!jupyter nbconvert *.ipynb
```
|
github_jupyter
|
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualization code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Census dataset
data = pd.read_csv("census.csv")
# Success - Display the first record
display(data)
# TODO: Total number of records
#n_records = data.count(axis='rows')
n_records = len(data)
# TODO: Number of records where individual's income is more than $50,000
n_greater_50k = len(data[data.income==">50K"])
# TODO: Number of records where individual's income is at most $50,000
n_at_most_50k = len(data[data.income=="<=50K"])
# TODO: Percentage of individuals whose income is more than $50,000
greater_percent = n_greater_50k/n_records * 100
# Print the results
print("Total number of records: {}".format(n_records))
print("Individuals making more than $50,000: {}".format(n_greater_50k))
print("Individuals making at most $50,000: {}".format(n_at_most_50k))
print("Percentage of individuals making more than $50,000: {}%".format(greater_percent))
# Split the data into features and target label
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
# Visualize skewed continuous features of original data
vs.distribution(data)
# Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_log_transformed = pd.DataFrame(data = features_raw)
features_log_transformed[skewed] = features_raw[skewed].apply(lambda x: np.log(x + 1))
# Visualize the new log distributions
vs.distribution(features_log_transformed, transformed = True)
# Import sklearn.preprocessing.StandardScaler
from sklearn.preprocessing import MinMaxScaler
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler() # default=(0, 1)
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_log_minmax_transform = pd.DataFrame(data = features_log_transformed)
features_log_minmax_transform[numerical] = scaler.fit_transform(features_log_transformed[numerical])
# Show an example of a record with scaling applied
display(features_log_minmax_transform.head(n = 5))
# TODO: One-hot encode the 'features_log_minmax_transform' data using pandas.get_dummies()
features_final = pd.get_dummies(features_log_minmax_transform)
# TODO: Encode the 'income_raw' data to numerical values
income = income_raw.replace(["<=50K",">50K"],[0,1])
# Print the number of features after one-hot encoding
encoded = list(features_final.columns)
print("{} total features after one-hot encoding.".format(len(encoded)))
# Uncomment the following line to see the encoded feature names
print(encoded)
# Import train_test_split
from sklearn.cross_validation import train_test_split
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features_final,
income,
test_size = 0.2,
random_state = 0)
# Show the results of the split
print("Training set has {} samples.".format(X_train.shape[0]))
print("Testing set has {} samples.".format(X_test.shape[0]))
'''
TP = np.sum(income) # Counting the ones as this is the naive case. Note that 'income' is the 'income_raw' data
encoded to numerical values done in the data preprocessing step.
FP = income.count() - TP # Specific to the naive case
TN = 0 # No predicted negatives in the naive case
FN = 0 # No predicted negatives in the naive case
'''
# TODO: Calculate accuracy, precision and recall
TP = np.sum(income)
FP = income.count() - TP
TN = 0
FN = 0
#print(TP)
#print(FP)
accuracy = TP/(TP+FP)
recall = TP/(TP+FN)
precision = accuracy
# TODO: Calculate F-score using the formula above for beta = 0.5 and correct values for precision and recall.
fscore = (1+0.5**2)*(precision*recall)/(0.5**2*precision+recall)
# Print the results
print("Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore))
# TODO: Import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score
from sklearn.metrics import accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
results = {}
# TODO: Fit the learner to the training data using slicing with 'sample_size' using .fit(training_features[:], training_labels[:])
start = time() # Get start time
learner.fit(X_train[:sample_size],y_train[:sample_size])
end = time() # Get end time
# TODO: Calculate the training time
results['train_time'] = end-start
# TODO: Get the predictions on the test set(X_test),
# then get predictions on the first 300 training samples(X_train) using .predict()
start = time() # Get start time
predictions_test = learner.predict(X_test)
predictions_train = learner.predict(X_train[:300])
end = time() # Get end time
# TODO: Calculate the total prediction time
results['pred_time'] = end-start
# TODO: Compute accuracy on the first 300 training samples which is y_train[:300]
results['acc_train'] = accuracy_score(y_train[:300],predictions_train)
# TODO: Compute accuracy on test set using accuracy_score()
results['acc_test'] = accuracy_score(y_test,predictions_test)
# TODO: Compute F-score on the the first 300 training samples using fbeta_score()
results['f_train'] = fbeta_score(y_train[:300],predictions_train,beta=0.5)
# TODO: Compute F-score on the test set which is y_test
results['f_test'] = fbeta_score(y_test,predictions_test,beta=0.5)
# Success
print("{} trained on {} samples.".format(learner.__class__.__name__, sample_size))
# Return the results
return results
# TODO: Import the three supervised learning models from sklearn
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
# TODO: Initialize the three models
clf_A = RandomForestClassifier(random_state = 0)
clf_B = GradientBoostingClassifier(random_state = 0)
clf_C = LogisticRegression(random_state = 0)
# TODO: Calculate the number of samples for 1%, 10%, and 100% of the training data
# HINT: samples_100 is the entire training set i.e. len(y_train)
# HINT: samples_10 is 10% of samples_100 (ensure to set the count of the values to be `int` and not `float`)
# HINT: samples_1 is 1% of samples_100 (ensure to set the count of the values to be `int` and not `float`)
samples_100 = len(y_train)
samples_10 = int(len(y_train)*10/100)
samples_1 = int(len(y_train)*1/100)
# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
# Run metrics visualization for the three supervised learning models chosen
vs.evaluate(results, accuracy, fscore)
# TODO: Import 'GridSearchCV', 'make_scorer', and any other necessary libraries
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import make_scorer
# TODO: Initialize the classifier
clf = GradientBoostingClassifier(random_state=42)
# TODO: Create the parameters list you wish to tune, using a dictionary if needed.
# HINT: parameters = {'parameter_1': [value1, value2], 'parameter_2': [value1, value2]}
parameters = {'n_estimators' : [100,200] , 'learning_rate' : [0.1,2]}
# TODO: Make an fbeta_score scoring object using make_scorer()
scorer = make_scorer(fbeta_score,beta=0.5)
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method using GridSearchCV()
grid_obj = GridSearchCV(clf , parameters , scoring=scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters using fit()
grid_fit = grid_obj.fit(X_train,y_train)
# Get the estimator
best_clf = grid_fit.best_estimator_
# Make predictions using the unoptimized and model
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Report the before-and-afterscores
print("Unoptimized model\n------")
print("Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5)))
print("\nOptimized Model\n------")
print("Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)))
print("Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)))
# TODO: Import a supervised learning model that has 'feature_importances_'
# TODO: Train the supervised model on the training set using .fit(X_train, y_train)
model = GradientBoostingClassifier().fit(X_train,y_train)
# TODO: Extract the feature importances using .feature_importances_
importances = model.feature_importances_
# Plot
vs.feature_plot(importances, X_train, y_train)
# Import functionality for cloning a model
from sklearn.base import clone
# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]
# Train on the "best" model found from grid search earlier
clf = (clone(best_clf)).fit(X_train_reduced, y_train)
# Make new predictions
reduced_predictions = clf.predict(X_test_reduced)
# Report scores from the final model using both versions of data
print("Final Model trained on full data\n------")
print("Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)))
print("\nFinal Model trained on reduced data\n------")
print("Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5)))
!!jupyter nbconvert *.ipynb
| 0.457137 | 0.993248 |
# [ATM 623: Climate Modeling](../index.ipynb)
[Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany
# Sensitivity to height-dependent changes in water vapor
## Warning: content out of date and not maintained
You really should be looking at [The Climate Laboratory book](https://brian-rose.github.io/ClimateLaboratoryBook) by Brian Rose, where all the same content (and more!) is kept up to date.
***Here you are likely to find broken links and broken code.***
This assignment is due by email before class on Thursday March 30, 2017.
### Reading assignment
Here is a classic older paper about the climatic effects of small perturbations of tropospheric water vapor:
[Shine and Sinha (1991): Sensitivity of the Earth's climate to height-dependent changes in the water vapour mixing ratio. Nature 354:382-384](http://www.nature.com/nature/journal/v354/n6352/abs/354382a0.html)
(I will email you a pdf copy since UAlbany does not have electronic access to this article)
Read the article, then write at least one paragraph summary and commentary.
### Computational assignment
Your assignment is to make similar calculations using the RRTMG radiative transfer model in `climlab`. Specifically:
- Set up a Radiative-Convective model with RRTMG radiation, a critical lapse rate of 6.5 K/km, and the Manabe relative humidity profile.
- Tune the model so that, **at radiative-convective equilibrium**, it has both **realistic surface temperature** (near 288 K) and **realistic energy balance** (ASR and OLR both near 239 W/m2). To do this, you will probably need to adjust surface albedo and introduce some clouds into the single-column model. See the lecture notes for examples. Make sure your methodology is explained clearly.
- Now, using your tuned-up model, consider the effects of **small perturbations in absolute specific humidity**:
- Following Shine and Sinha, use a perturbation of 0.001 g/kg (but recall that `climlab` uses units of kg/kg for specific humidity).
- Add the small perturbation to the model specific humidity at one vertical level only.
- Calculate the instantaneous radiative forcing at the top of atmosphere due to this increase.
- Calculate the equilibrium surface warming associated with this increase.
- Repeat these calculations for every vertical level.
- Make plots like Fig. 3 and Fig. 1 of the paper: radiative forcing and surface warming plotted as functions of the vertical level at which the water vapor is added.
- Next, repeat these calculations but instead of using a fixed perturbation, **increase the specific humidity by 10% of its reference value at every level**. Again, plot the radiative forcing and surface warming as functions of the vertical level at which the water vapor is added.
- Finally, **if you added some clouds to your tuned-up reference model**, take a look at **changes in the Cloud Radiative Effect** in some of your perturbations. Are they non-zero? Do you think it is meaningful to call these a *cloud feedback*? Why or why not?
Comment on what you find, including similarities and differences to the Shine and Sinha results. Offer some thoughts about why your results differ from theirs, if applicable. Do your results support the main conclusions in the paper?
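To help you get started, here is a minimal sketch of the reference model setup (this assumes the standard `climlab` components used in the lecture notes; the albedo value is only a placeholder to be tuned, and clouds are not included here):
```
# Minimal sketch of the reference radiative-convective model (placeholder values!)
import climlab

state = climlab.column_state(num_lev=30)
#  Manabe fixed relative humidity profile for water vapor
h2o = climlab.radiation.ManabeWaterVapor(name='WaterVapor', state=state)
#  Convective adjustment to the 6.5 K/km critical lapse rate
conv = climlab.convection.ConvectiveAdjustment(name='Convection', state=state,
                                               adj_lapse_rate=6.5)
#  RRTMG radiation; albedo=0.25 is a placeholder -- tune it (and add clouds)
#  until Ts is near 288 K and ASR, OLR are both near 239 W/m2
rad = climlab.radiation.RRTMG(name='Radiation', state=state,
                              specific_humidity=h2o.q, albedo=0.25)
rcm = climlab.couple([rad, conv, h2o], name='RCM')
rcm.integrate_years(5)
print(rcm.Ts, rcm.ASR, rcm.OLR)
#  Remember: the 0.001 g/kg perturbation is 1e-6 kg/kg in climlab's units.
```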
<div class="alert alert-success">
[Back to ATM 623 notebook home](../index.ipynb)
</div>
____________
## Credits
The author of this notebook is [Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany.
It was developed in support of [ATM 623: Climate Modeling](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/), a graduate-level course in the [Department of Atmospheric and Environmental Sciences](http://www.albany.edu/atmos/index.php).
Development of these notes and the [climlab software](https://github.com/brian-rose/climlab) is partially supported by the National Science Foundation under award AGS-1455071 to Brian Rose. Any opinions, findings, conclusions or recommendations expressed here are mine and do not necessarily reflect the views of the National Science Foundation.
____________
|
github_jupyter
|
| 0.790975 | 0.897111 |
# CirComPara Pipeline
To demonstrate Dugong's effectiveness in distributing and running bioinformatics tools in alternative computational environments, the CirComPara pipeline was implemented in a Dugong container and tested on different operating systems with the aid of virtual machines (VM) or cloud computing servers.
CirComPara is a computational pipeline to detect, quantify, and correlate the expression of linear and circular RNAs from RNA-seq data. It is a highly complex pipeline that employs a series of bioinformatics tools and was originally designed to run on Ubuntu Server 16.04 LTS (x64).
Although the authors provide details regarding the expected version of each tool and its dependency requirements, inexperienced users can still encounter several problems during CirComPara installation.
See documentation for CirComPara installation details: https://github.com/egaffo/CirComPara
-----------------------------------------------------------------------------------------------------------------------
## Pipeline steps
- The test data is already unpacked and available in the path: **/headless/CirComPara/test_circompara/**
- The **meta.csv** and **vars.py** files are already configured to run CirComPara, as documented: https://github.com/egaffo/CirComPara
- Defining the analysis folder for the CirComPara run on the test data provided by the tool's developers:
```
from functools import partial
from os import chdir
chdir('/headless/CirComPara/test_circompara/analysis')
```
- Viewing files from /headless/CirComPara/test_circompara/
```
from IPython.display import FileLinks, FileLink
FileLinks('/headless/CirComPara/test_circompara/')
```
- Viewing the contents of the configuration file: vars.py
```
!cat /headless/CirComPara/test_circompara/analysis/vars.py
```
- Viewing the contents of the configuration file: meta.csv
```
!cat /headless/CirComPara/test_circompara/analysis/meta.csv
```
- Running CirComPara with the test data
```
!../../circompara
```
-----------------------------------------------------------------------------------------------------------------------
## Results:
- Viewing output files after running CirComPara:
```
from IPython.display import FileLinks, FileLink
FileLinks('/headless/CirComPara/test_circompara/analysis/')
```
- Viewing graphic files after running CirComPara:
```
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/corr_density_plot-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/cumulative_expression_box-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/show_circrnas_per_method-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_circrnas_per_gene-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/correlations_box-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_circrnas_2reads_2methods_sample-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_circ_gene_expr-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_gene_expressed_by_sample-1.png")
```
-----------------------------------------------------------------------------------------------------------------------
**NOTE:** This pipeline is just an example of what you can do with Dugong.
|
github_jupyter
|
from functools import partial
from os import chdir
chdir('/headless/CirComPara/test_circompara/analysis')
from IPython.display import FileLinks, FileLink
FileLinks('/headless/CirComPara/test_circompara/')
!cat /headless/CirComPara/test_circompara/analysis/vars.py
!cat /headless/CirComPara/test_circompara/analysis/meta.csv
!../../circompara
from IPython.display import FileLinks, FileLink
FileLinks('/headless/CirComPara/test_circompara/analysis/')
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/corr_density_plot-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/cumulative_expression_box-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/show_circrnas_per_method-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_circrnas_per_gene-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/correlations_box-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_circrnas_2reads_2methods_sample-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_circ_gene_expr-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_gene_expressed_by_sample-1.png")
| 0.237399 | 0.871693 |
```
%%html
<span style="color:red; font-family:Helvetica Neue, Helvetica, Arial, sans-serif; font-size:2em;">An Exception was encountered at 'In [13]'.</span>
%load_ext autoreload
%autoreload 2
import glob
import nibabel as nib
import os
import time
import pandas as pd
import numpy as np
from mricode.utils import log_textfile
from mricode.utils import copy_colab
from mricode.utils import return_iter
from mricode.utils import return_csv
from mricode.models.SimpleCNN import SimpleCNN
from mricode.models.DenseNet import MyDenseNet
import tensorflow as tf
from tensorflow.keras.layers import Conv3D
from tensorflow import nn
from tensorflow.python.ops import nn_ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.keras.engine.base_layer import InputSpec
from tensorflow.python.keras.utils import conv_utils
tf.__version__
tf.test.is_gpu_available()
path_output = './output/'
path_tfrecords = '/data2/res64/down/'
path_csv = '/data2/csv/'
filename_res = {'train': 'intell_residual_train.csv', 'val': 'intell_residual_valid.csv', 'test': 'intell_residual_test.csv'}
filename_final = filename_res
sample_size = 'allimages'
batch_size = 8
onlyt1 = False
modelname = 'runAllImages64_SimpleCNN_T1_'
Model = SimpleCNN
t1_mean=1.3779395849814497
t1_std=3.4895845243139503
t2_mean=2.22435586968901
t2_std=5.07708743178319
ad_mean=1.3008901218593748e-05
ad_std=0.009966655860940228
fa_mean=0.0037552628409334037
fa_std=0.012922319568740915
md_mean=9.827903909139596e-06
md_std=0.009956973204022659
rd_mean=8.237404999587111e-06
rd_std=0.009954672598675338
train_iter, val_iter, test_iter = return_iter(path_tfrecords, sample_size, batch_size, onlyt1=onlyt1)
if False:
t1_mean = 0.
t1_std = 0.
t2_mean = 0.
t2_std = 0.
ad_mean = 0.
ad_std = 0.
fa_mean = 0.
fa_std = 0.
md_mean = 0.
md_std = 0.
rd_mean = 0.
rd_std = 0.
n = 0.
for b in train_iter:
t1_mean += np.mean(b['t1'])
t1_std += np.std(b['t1'])
t2_mean += np.mean(b['t2'])
t2_std += np.std(b['t2'])
a = np.asarray(b['ad'])
a = a.copy()
a[np.isnan(a)] = 0
ad_mean += np.mean(a)
ad_std += np.std(a)
a = np.asarray(b['fa'])
a = a.copy()
a[np.isnan(a)] = 0
fa_mean += np.mean(a)
fa_std += np.std(a)
a = np.asarray(b['md'])
a = a.copy()
a[np.isnan(a)] = 0
md_mean += np.mean(a)
md_std += np.std(a)
a = np.asarray(b['rd'])
a = a.copy()
a[np.isnan(a)] = 0
rd_mean += np.mean(a)
rd_std += np.std(a)
n += np.asarray(b['t1']).shape[0]
t1_mean /= n
t1_std /= n
t2_mean /= n
t2_std /= n
ad_mean /= n
ad_std /= n
fa_mean /= n
fa_std /= n
md_mean /= n
md_std /= n
rd_mean /= n
rd_std /= n
t1_mean, t1_std, t2_mean, t2_std, ad_mean, ad_std, fa_mean, fa_std, md_mean, md_std, rd_mean, rd_std
train_df, val_df, test_df, norm_dict = return_csv(path_csv, filename_final, False)
norm_dict
cat_cols = {'female': 2, 'race.ethnicity': 5, 'high.educ_group': 4, 'income_group': 8, 'married': 6}
num_cols = [x for x in list(val_df.columns) if '_norm' in x]
def calc_loss_acc(out_loss, out_acc, y_true, y_pred, cat_cols, num_cols, norm_dict):
for col in num_cols:
tmp_col = col
tmp_std = norm_dict[tmp_col.replace('_norm','')]['std']
tmp_y_true = tf.cast(y_true[col], tf.float32).numpy()
tmp_y_pred = np.squeeze(y_pred[col].numpy())
if not(tmp_col in out_loss):
out_loss[tmp_col] = np.sum(np.square(tmp_y_true-tmp_y_pred))
else:
out_loss[tmp_col] += np.sum(np.square(tmp_y_true-tmp_y_pred))
if not(tmp_col in out_acc):
out_acc[tmp_col] = np.sum(np.square((tmp_y_true-tmp_y_pred)*tmp_std))
else:
out_acc[tmp_col] += np.sum(np.square((tmp_y_true-tmp_y_pred)*tmp_std))
for col in list(cat_cols.keys()):
tmp_col = col
if not(tmp_col in out_loss):
out_loss[tmp_col] = tf.keras.losses.SparseCategoricalCrossentropy()(tf.squeeze(y_true[col]), tf.squeeze(y_pred[col])).numpy()
else:
out_loss[tmp_col] += tf.keras.losses.SparseCategoricalCrossentropy()(tf.squeeze(y_true[col]), tf.squeeze(y_pred[col])).numpy()
if not(tmp_col in out_acc):
out_acc[tmp_col] = tf.reduce_sum(tf.dtypes.cast((y_true[col] == tf.argmax(y_pred[col], axis=-1)), tf.float32)).numpy()
else:
out_acc[tmp_col] += tf.reduce_sum(tf.dtypes.cast((y_true[col] == tf.argmax(y_pred[col], axis=-1)), tf.float32)).numpy()
return(out_loss, out_acc)
def format_output(out_loss, out_acc, n, cols, print_bl=False):
loss = 0
acc = 0
output = []
for col in cols:
output.append([col, out_loss[col]/n, out_acc[col]/n])
loss += out_loss[col]/n
acc += out_acc[col]/n
df = pd.DataFrame(output)
df.columns = ['name', 'loss', 'acc']
if print_bl:
print(df)
return(loss, acc, df)
@tf.function
def train_step(X, y, model, optimizer, cat_cols, num_cols):
with tf.GradientTape() as tape:
predictions = model(X)
i = 0
loss = tf.keras.losses.MSE(tf.cast(y[num_cols[i]], tf.float32), tf.squeeze(predictions[num_cols[i]]))
for i in range(1,len(num_cols)):
loss += tf.keras.losses.MSE(tf.cast(y[num_cols[i]], tf.float32), tf.squeeze(predictions[num_cols[i]]))
for col in list(cat_cols.keys()):
loss += tf.keras.losses.SparseCategoricalCrossentropy()(tf.squeeze(y[col]), tf.squeeze(predictions[col]))
gradients = tape.gradient(loss, model.trainable_variables)
mean_std = [x.name for x in model.non_trainable_variables if ('batch_norm') in x.name and ('mean' in x.name or 'variance' in x.name)]
with tf.control_dependencies(mean_std):
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
return(y, predictions, loss)
@tf.function
def test_step(X, y, model):
predictions = model(X)
return(y, predictions)
def epoch(data_iter, df, model, optimizer, cat_cols, num_cols, norm_dict):
out_loss = {}
out_acc = {}
n = 0.
n_batch = 0.
total_time_dataload = 0.
total_time_model = 0.
start_time = time.time()
for batch in data_iter:
total_time_dataload += time.time() - start_time
start_time = time.time()
t1 = (tf.cast(batch['t1'], tf.float32)-t1_mean)/t1_std
t2 = (batch['t2']-t2_mean)/t2_std
ad = batch['ad']
ad = tf.where(tf.math.is_nan(ad), tf.zeros_like(ad), ad)
ad = (ad-ad_mean)/ad_std
fa = batch['fa']
fa = tf.where(tf.math.is_nan(fa), tf.zeros_like(fa), fa)
fa = (fa-fa_mean)/fa_std
md = batch['md']
md = tf.where(tf.math.is_nan(md), tf.zeros_like(md), md)
md = (md-md_mean)/md_std
rd = batch['rd']
rd = tf.where(tf.math.is_nan(rd), tf.zeros_like(rd), rd)
rd = (rd-rd_mean)/rd_std
subjectid = decoder(batch['subjectid'])
y = get_labels(df, subjectid, list(cat_cols.keys())+num_cols)
X = tf.concat([t1], axis=4)
#X = tf.concat([t1, t2], axis=4)
if optimizer != None:
y_true, y_pred, loss = train_step(X, y, model, optimizer, cat_cols, num_cols)
else:
y_true, y_pred = test_step(X, y, model)
out_loss, out_acc = calc_loss_acc(out_loss, out_acc, y_true, y_pred, cat_cols, num_cols, norm_dict)
n += X.shape[0]
n_batch += 1
if (n_batch % 10) == 0:
print(n_batch)
total_time_model += time.time() - start_time
start_time = time.time()
return (out_loss, out_acc, n, total_time_model, total_time_dataload)
def get_labels(df, subjectid, cols = ['nihtbx_fluidcomp_uncorrected_norm']):
subjects_df = pd.DataFrame(subjectid)
result_df = pd.merge(subjects_df, df, left_on=0, right_on='subjectkey', how='left')
output = {}
for col in cols:
output[col] = np.asarray(result_df[col].values)
return output
def best_val(df_best, df_val, df_test):
df_best = pd.merge(df_best, df_val, how='left', left_on='name', right_on='name')
df_best = pd.merge(df_best, df_test, how='left', left_on='name', right_on='name')
df_best.loc[df_best['best_loss_val']>=df_best['cur_loss_val'], 'best_loss_test'] = df_best.loc[df_best['best_loss_val']>=df_best['cur_loss_val'], 'cur_loss_test']
df_best.loc[df_best['best_loss_val']>=df_best['cur_loss_val'], 'best_loss_val'] = df_best.loc[df_best['best_loss_val']>=df_best['cur_loss_val'], 'cur_loss_val']
df_best.loc[(df_best['best_acc_val']<=df_best['cur_acc_val'])&(df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'best_acc_test'] = df_best.loc[(df_best['best_acc_val']<=df_best['cur_acc_val'])&(df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'cur_acc_test']
df_best.loc[(df_best['best_acc_val']<=df_best['cur_acc_val'])&(df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'best_acc_val'] = df_best.loc[(df_best['best_acc_val']<=df_best['cur_acc_val'])&(df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'cur_acc_val']
df_best.loc[(df_best['best_acc_val']>=df_best['cur_acc_val'])&(~df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'best_acc_test'] = df_best.loc[(df_best['best_acc_val']>=df_best['cur_acc_val'])&(~df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'cur_acc_test']
df_best.loc[(df_best['best_acc_val']>=df_best['cur_acc_val'])&(~df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'best_acc_val'] = df_best.loc[(df_best['best_acc_val']>=df_best['cur_acc_val'])&(~df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'cur_acc_val']
df_best = df_best.drop(['cur_loss_val', 'cur_acc_val', 'cur_loss_test', 'cur_acc_test'], axis=1)
return(df_best)
decoder = np.vectorize(lambda x: x.decode('UTF-8'))
template = 'Epoch {0}, Loss: {1:.3f}, Accuracy: {2:.3f}, Val Loss: {3:.3f}, Val Accuracy: {4:.3f}, Time Model: {5:.3f}, Time Data: {6:.3f}'
for col in [0]:
log_textfile(path_output + modelname + 'multitask_test' + '.log', cat_cols),
log_textfile(path_output + modelname + 'multitask_test' + '.log', num_cols)
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam(lr = 0.001)
model = Model(cat_cols, num_cols)
df_best = None
for e in range(20):
log_textfile(path_output + modelname + 'multitask_test' + '.log', 'Epochs: ' + str(e))
loss = tf.Variable(0.)
acc = tf.Variable(0.)
val_loss = tf.Variable(0.)
val_acc = tf.Variable(0.)
test_loss = tf.Variable(0.)
test_acc = tf.Variable(0.)
tf.keras.backend.set_learning_phase(True)
train_out_loss, train_out_acc, n, time_model, time_data = epoch(train_iter, train_df, model, optimizer, cat_cols, num_cols, norm_dict)
tf.keras.backend.set_learning_phase(False)
val_out_loss, val_out_acc, n, _, _ = epoch(val_iter, val_df, model, None, cat_cols, num_cols, norm_dict)
test_out_loss, test_out_acc, n, _, _ = epoch(test_iter, test_df, model, None, cat_cols, num_cols, norm_dict)
loss, acc, _ = format_output(train_out_loss, train_out_acc, n, list(cat_cols.keys())+num_cols)
val_loss, val_acc, df_val = format_output(val_out_loss, val_out_acc, n, list(cat_cols.keys())+num_cols, print_bl=False)
test_loss, test_acc, df_test = format_output(test_out_loss, test_out_acc, n, list(cat_cols.keys())+num_cols, print_bl=False)
df_val.columns = ['name', 'cur_loss_val', 'cur_acc_val']
df_test.columns = ['name', 'cur_loss_test', 'cur_acc_test']
if e == 0:
df_best = pd.merge(df_test, df_val, how='left', left_on='name', right_on='name')
df_best.columns = ['name', 'best_loss_test', 'best_acc_test', 'best_loss_val', 'best_acc_val']
df_best = best_val(df_best, df_val, df_test)
print(df_best[['name', 'best_loss_test', 'best_acc_test']])
print(df_best[['name', 'best_loss_val', 'best_acc_val']])
log_textfile(path_output + modelname + 'multitask_test' + '.log', template.format(e, loss, acc, val_loss, val_acc, time_model, time_data))
if e in [13, 16]:
optimizer.lr = optimizer.lr/3
log_textfile(path_output + modelname + 'multitask_test' + '.log', 'Learning rate: ' + str(optimizer.lr))
df_best.to_csv(path_output + modelname + 'multitask_test' + '.csv')
error  # undefined name: raises NameError and stops execution here
batch = next(iter(train_iter))
t1 = (tf.cast(batch['t1'], tf.float32)-t1_mean)/t1_std
t2 = (batch['t2']-t2_mean)/t2_std
ad = batch['ad']
ad = tf.where(tf.math.is_nan(ad), tf.zeros_like(ad), ad)
ad = (ad-ad_mean)/ad_std
fa = batch['fa']
fa = tf.where(tf.math.is_nan(fa), tf.zeros_like(fa), fa)
fa = (fa-fa_mean)/fa_std
md = batch['md']
md = tf.where(tf.math.is_nan(md), tf.zeros_like(md), md)
md = (md-md_mean)/md_std
rd = batch['rd']
rd = tf.where(tf.math.is_nan(rd), tf.zeros_like(rd), rd)
rd = (rd-rd_mean)/rd_std
#subjectid = decoder(batch['subjectid'])
#y = get_labels(df, subjectid, list(cat_cols.keys())+num_cols)
#X = tf.concat([t1, t2, ad, fa, md, rd], axis=4)
X = tf.concat([t1, t2], axis=4)
tf.keras.backend.set_learning_phase(True)
model(X)['female']
tf.keras.backend.set_learning_phase(False)
model(X)['female']
mean_std = [x.name for x in model.non_trainable_variables if ('batch_norm') in x.name and ('mean' in x.name or 'variance' in x.name)]
model = Model(cat_cols, num_cols)
model.non_trainable_variables
```
|
github_jupyter
|
%%html
<span style="color:red; font-family:Helvetica Neue, Helvetica, Arial, sans-serif; font-size:2em;">An Exception was encountered at 'In [13]'.</span>
%load_ext autoreload
%autoreload 2
import glob
import nibabel as nib
import os
import time
import pandas as pd
import numpy as np
from mricode.utils import log_textfile
from mricode.utils import copy_colab
from mricode.utils import return_iter
from mricode.utils import return_csv
from mricode.models.SimpleCNN import SimpleCNN
from mricode.models.DenseNet import MyDenseNet
import tensorflow as tf
from tensorflow.keras.layers import Conv3D
from tensorflow import nn
from tensorflow.python.ops import nn_ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.keras.engine.base_layer import InputSpec
from tensorflow.python.keras.utils import conv_utils
tf.__version__
tf.test.is_gpu_available()
path_output = './output/'
path_tfrecords = '/data2/res64/down/'
path_csv = '/data2/csv/'
filename_res = {'train': 'intell_residual_train.csv', 'val': 'intell_residual_valid.csv', 'test': 'intell_residual_test.csv'}
filename_final = filename_res
sample_size = 'allimages'
batch_size = 8
onlyt1 = False
modelname = 'runAllImages64_SimpleCNN_T1_'
Model = SimpleCNN
t1_mean=1.3779395849814497
t1_std=3.4895845243139503
t2_mean=2.22435586968901
t2_std=5.07708743178319
ad_mean=1.3008901218593748e-05
ad_std=0.009966655860940228
fa_mean=0.0037552628409334037
fa_std=0.012922319568740915
md_mean=9.827903909139596e-06
md_std=0.009956973204022659
rd_mean=8.237404999587111e-06
rd_std=0.009954672598675338
train_iter, val_iter, test_iter = return_iter(path_tfrecords, sample_size, batch_size, onlyt1=onlyt1)
if False:
t1_mean = 0.
t1_std = 0.
t2_mean = 0.
t2_std = 0.
ad_mean = 0.
ad_std = 0.
fa_mean = 0.
fa_std = 0.
md_mean = 0.
md_std = 0.
rd_mean = 0.
rd_std = 0.
n = 0.
for b in train_iter:
t1_mean += np.mean(b['t1'])
t1_std += np.std(b['t1'])
t2_mean += np.mean(b['t2'])
t2_std += np.std(b['t2'])
a = np.asarray(b['ad'])
a = a.copy()
a[np.isnan(a)] = 0
ad_mean += np.mean(a)
ad_std += np.std(a)
a = np.asarray(b['fa'])
a = a.copy()
a[np.isnan(a)] = 0
fa_mean += np.mean(a)
fa_std += np.std(a)
a = np.asarray(b['md'])
a = a.copy()
a[np.isnan(a)] = 0
md_mean += np.mean(a)
md_std += np.std(a)
a = np.asarray(b['rd'])
a = a.copy()
a[np.isnan(a)] = 0
rd_mean += np.mean(a)
rd_std += np.std(a)
n += np.asarray(b['t1']).shape[0]
t1_mean /= n
t1_std /= n
t2_mean /= n
t2_std /= n
ad_mean /= n
ad_std /= n
fa_mean /= n
fa_std /= n
md_mean /= n
md_std /= n
rd_mean /= n
rd_std /= n
t1_mean, t1_std, t2_mean, t2_std, ad_mean, ad_std, fa_mean, fa_std, md_mean, md_std, rd_mean, rd_std
train_df, val_df, test_df, norm_dict = return_csv(path_csv, filename_final, False)
norm_dict
cat_cols = {'female': 2, 'race.ethnicity': 5, 'high.educ_group': 4, 'income_group': 8, 'married': 6}
num_cols = [x for x in list(val_df.columns) if '_norm' in x]
def calc_loss_acc(out_loss, out_acc, y_true, y_pred, cat_cols, num_cols, norm_dict):
for col in num_cols:
tmp_col = col
tmp_std = norm_dict[tmp_col.replace('_norm','')]['std']
tmp_y_true = tf.cast(y_true[col], tf.float32).numpy()
tmp_y_pred = np.squeeze(y_pred[col].numpy())
if not(tmp_col in out_loss):
out_loss[tmp_col] = np.sum(np.square(tmp_y_true-tmp_y_pred))
else:
out_loss[tmp_col] += np.sum(np.square(tmp_y_true-tmp_y_pred))
if not(tmp_col in out_acc):
out_acc[tmp_col] = np.sum(np.square((tmp_y_true-tmp_y_pred)*tmp_std))
else:
out_acc[tmp_col] += np.sum(np.square((tmp_y_true-tmp_y_pred)*tmp_std))
for col in list(cat_cols.keys()):
tmp_col = col
if not(tmp_col in out_loss):
out_loss[tmp_col] = tf.keras.losses.SparseCategoricalCrossentropy()(tf.squeeze(y_true[col]), tf.squeeze(y_pred[col])).numpy()
else:
out_loss[tmp_col] += tf.keras.losses.SparseCategoricalCrossentropy()(tf.squeeze(y_true[col]), tf.squeeze(y_pred[col])).numpy()
if not(tmp_col in out_acc):
out_acc[tmp_col] = tf.reduce_sum(tf.dtypes.cast((y_true[col] == tf.argmax(y_pred[col], axis=-1)), tf.float32)).numpy()
else:
out_acc[tmp_col] += tf.reduce_sum(tf.dtypes.cast((y_true[col] == tf.argmax(y_pred[col], axis=-1)), tf.float32)).numpy()
return(out_loss, out_acc)
def format_output(out_loss, out_acc, n, cols, print_bl=False):
loss = 0
acc = 0
output = []
for col in cols:
output.append([col, out_loss[col]/n, out_acc[col]/n])
loss += out_loss[col]/n
acc += out_acc[col]/n
df = pd.DataFrame(output)
df.columns = ['name', 'loss', 'acc']
if print_bl:
print(df)
return(loss, acc, df)
@tf.function
def train_step(X, y, model, optimizer, cat_cols, num_cols):
with tf.GradientTape() as tape:
predictions = model(X)
i = 0
loss = tf.keras.losses.MSE(tf.cast(y[num_cols[i]], tf.float32), tf.squeeze(predictions[num_cols[i]]))
for i in range(1,len(num_cols)):
loss += tf.keras.losses.MSE(tf.cast(y[num_cols[i]], tf.float32), tf.squeeze(predictions[num_cols[i]]))
for col in list(cat_cols.keys()):
loss += tf.keras.losses.SparseCategoricalCrossentropy()(tf.squeeze(y[col]), tf.squeeze(predictions[col]))
gradients = tape.gradient(loss, model.trainable_variables)
mean_std = [x.name for x in model.non_trainable_variables if ('batch_norm') in x.name and ('mean' in x.name or 'variance' in x.name)]
with tf.control_dependencies(mean_std):
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
return(y, predictions, loss)
@tf.function
def test_step(X, y, model):
predictions = model(X)
return(y, predictions)
def epoch(data_iter, df, model, optimizer, cat_cols, num_cols, norm_dict):
out_loss = {}
out_acc = {}
n = 0.
n_batch = 0.
total_time_dataload = 0.
total_time_model = 0.
start_time = time.time()
for batch in data_iter:
total_time_dataload += time.time() - start_time
start_time = time.time()
t1 = (tf.cast(batch['t1'], tf.float32)-t1_mean)/t1_std
t2 = (batch['t2']-t2_mean)/t2_std
ad = batch['ad']
ad = tf.where(tf.math.is_nan(ad), tf.zeros_like(ad), ad)
ad = (ad-ad_mean)/ad_std
fa = batch['fa']
fa = tf.where(tf.math.is_nan(fa), tf.zeros_like(fa), fa)
fa = (fa-fa_mean)/fa_std
md = batch['md']
md = tf.where(tf.math.is_nan(md), tf.zeros_like(md), md)
md = (md-md_mean)/md_std
rd = batch['rd']
rd = tf.where(tf.math.is_nan(rd), tf.zeros_like(rd), rd)
rd = (rd-rd_mean)/rd_std
subjectid = decoder(batch['subjectid'])
y = get_labels(df, subjectid, list(cat_cols.keys())+num_cols)
X = tf.concat([t1], axis=4)
#X = tf.concat([t1, t2], axis=4)
if optimizer != None:
y_true, y_pred, loss = train_step(X, y, model, optimizer, cat_cols, num_cols)
else:
y_true, y_pred = test_step(X, y, model)
out_loss, out_acc = calc_loss_acc(out_loss, out_acc, y_true, y_pred, cat_cols, num_cols, norm_dict)
n += X.shape[0]
n_batch += 1
if (n_batch % 10) == 0:
print(n_batch)
total_time_model += time.time() - start_time
start_time = time.time()
return (out_loss, out_acc, n, total_time_model, total_time_dataload)
def get_labels(df, subjectid, cols = ['nihtbx_fluidcomp_uncorrected_norm']):
subjects_df = pd.DataFrame(subjectid)
result_df = pd.merge(subjects_df, df, left_on=0, right_on='subjectkey', how='left')
output = {}
for col in cols:
output[col] = np.asarray(result_df[col].values)
return output
def best_val(df_best, df_val, df_test):
df_best = pd.merge(df_best, df_val, how='left', left_on='name', right_on='name')
df_best = pd.merge(df_best, df_test, how='left', left_on='name', right_on='name')
df_best.loc[df_best['best_loss_val']>=df_best['cur_loss_val'], 'best_loss_test'] = df_best.loc[df_best['best_loss_val']>=df_best['cur_loss_val'], 'cur_loss_test']
df_best.loc[df_best['best_loss_val']>=df_best['cur_loss_val'], 'best_loss_val'] = df_best.loc[df_best['best_loss_val']>=df_best['cur_loss_val'], 'cur_loss_val']
df_best.loc[(df_best['best_acc_val']<=df_best['cur_acc_val'])&(df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'best_acc_test'] = df_best.loc[(df_best['best_acc_val']<=df_best['cur_acc_val'])&(df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'cur_acc_test']
df_best.loc[(df_best['best_acc_val']<=df_best['cur_acc_val'])&(df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'best_acc_val'] = df_best.loc[(df_best['best_acc_val']<=df_best['cur_acc_val'])&(df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'cur_acc_val']
df_best.loc[(df_best['best_acc_val']>=df_best['cur_acc_val'])&(~df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'best_acc_test'] = df_best.loc[(df_best['best_acc_val']>=df_best['cur_acc_val'])&(~df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'cur_acc_test']
df_best.loc[(df_best['best_acc_val']>=df_best['cur_acc_val'])&(~df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'best_acc_val'] = df_best.loc[(df_best['best_acc_val']>=df_best['cur_acc_val'])&(~df_best['name'].isin(['female', 'race.ethnicity', 'high.educ_group', 'income_group', 'married'])), 'cur_acc_val']
df_best = df_best.drop(['cur_loss_val', 'cur_acc_val', 'cur_loss_test', 'cur_acc_test'], axis=1)
return(df_best)
decoder = np.vectorize(lambda x: x.decode('UTF-8'))
template = 'Epoch {0}, Loss: {1:.3f}, Accuracy: {2:.3f}, Val Loss: {3:.3f}, Val Accuracy: {4:.3f}, Time Model: {5:.3f}, Time Data: {6:.3f}'
for col in [0]:
log_textfile(path_output + modelname + 'multitask_test' + '.log', cat_cols),
log_textfile(path_output + modelname + 'multitask_test' + '.log', num_cols)
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam(lr = 0.001)
model = Model(cat_cols, num_cols)
df_best = None
for e in range(20):
log_textfile(path_output + modelname + 'multitask_test' + '.log', 'Epochs: ' + str(e))
loss = tf.Variable(0.)
acc = tf.Variable(0.)
val_loss = tf.Variable(0.)
val_acc = tf.Variable(0.)
test_loss = tf.Variable(0.)
test_acc = tf.Variable(0.)
tf.keras.backend.set_learning_phase(True)
train_out_loss, train_out_acc, n, time_model, time_data = epoch(train_iter, train_df, model, optimizer, cat_cols, num_cols, norm_dict)
tf.keras.backend.set_learning_phase(False)
val_out_loss, val_out_acc, n, _, _ = epoch(val_iter, val_df, model, None, cat_cols, num_cols, norm_dict)
test_out_loss, test_out_acc, n, _, _ = epoch(test_iter, test_df, model, None, cat_cols, num_cols, norm_dict)
loss, acc, _ = format_output(train_out_loss, train_out_acc, n, list(cat_cols.keys())+num_cols)
val_loss, val_acc, df_val = format_output(val_out_loss, val_out_acc, n, list(cat_cols.keys())+num_cols, print_bl=False)
test_loss, test_acc, df_test = format_output(test_out_loss, test_out_acc, n, list(cat_cols.keys())+num_cols, print_bl=False)
df_val.columns = ['name', 'cur_loss_val', 'cur_acc_val']
df_test.columns = ['name', 'cur_loss_test', 'cur_acc_test']
if e == 0:
df_best = pd.merge(df_test, df_val, how='left', left_on='name', right_on='name')
df_best.columns = ['name', 'best_loss_test', 'best_acc_test', 'best_loss_val', 'best_acc_val']
df_best = best_val(df_best, df_val, df_test)
print(df_best[['name', 'best_loss_test', 'best_acc_test']])
print(df_best[['name', 'best_loss_val', 'best_acc_val']])
log_textfile(path_output + modelname + 'multitask_test' + '.log', template.format(e, loss, acc, val_loss, val_acc, time_model, time_data))
if e in [13, 16]:
optimizer.lr = optimizer.lr/3
log_textfile(path_output + modelname + 'multitask_test' + '.log', 'Learning rate: ' + str(optimizer.lr))
df_best.to_csv(path_output + modelname + 'multitask_test' + '.csv')
error
batch = next(iter(train_iter))
t1 = (tf.cast(batch['t1'], tf.float32)-t1_mean)/t1_std
t2 = (batch['t2']-t2_mean)/t2_std
ad = batch['ad']
ad = tf.where(tf.math.is_nan(ad), tf.zeros_like(ad), ad)
ad = (ad-ad_mean)/ad_std
fa = batch['fa']
fa = tf.where(tf.math.is_nan(fa), tf.zeros_like(fa), fa)
fa = (fa-fa_mean)/fa_std
md = batch['md']
md = tf.where(tf.math.is_nan(md), tf.zeros_like(md), md)
md = (md-md_mean)/md_std
rd = batch['rd']
rd = tf.where(tf.math.is_nan(rd), tf.zeros_like(rd), rd)
rd = (rd-rd_mean)/rd_std
#subjectid = decoder(batch['subjectid'])
#y = get_labels(df, subjectid, list(cat_cols.keys())+num_cols)
#X = tf.concat([t1, t2, ad, fa, md, rd], axis=4)
X = tf.concat([t1, t2], axis=4)
tf.keras.backend.set_learning_phase(True)
model(X)['female']
tf.keras.backend.set_learning_phase(False)
model(X)['female']
mean_std = [x.name for x in model.non_trainable_variables if ('batch_norm') in x.name and ('mean' in x.name or 'variance' in x.name)]
model = Model(cat_cols, num_cols)
model.non_trainable_variables
| 0.371821 | 0.324102 |
```
# standard library
import sys,os
sys.path.append('..')
from pprint import pprint
# data and nlp
import pandas as pd
import spacy
nlp = spacy.load("en_core_web_sm", disable=["ner"])
# visualisation
import pyLDAvis
pyLDAvis.enable_notebook()
import seaborn as sns
from matplotlib import rcParams
# figure size in inches
rcParams['figure.figsize'] = 20,10
# LDA tools
import gensim
import gensim.corpora as corpora
from gensim.models import CoherenceModel
from utils import lda_utils
# warnings
import logging, warnings
warnings.filterwarnings('ignore')
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.ERROR)
import json
# Unfortunately newlines have been parsed as nothing instead of spaces
# but the script will work just the same
with open('data/all_series_lines.json') as file:
content = file.read()
line_dict = json.loads(content)
line_dict['DS9']['episode 0']['ODO']
episodes = {}
for series_name, series in line_dict.items():
for episode_name, episode in series.items():
episode_string = ''
for character_lines in episode.values():
lines = ' '.join(character_lines)
# Avoid adding just spaces
if len(lines) != 0:
episode_string += ' ' + lines
# Add the string containing all lines from the episode to our dict
episode_key = series_name + '_' + episode_name.split()[1]
episodes[episode_key] = episode_string
# explicitly convert to a list for processing
episode_lines = list(episodes.values())
# Build the bigram and trigram models
bigram = gensim.models.Phrases(episode_lines, min_count=10, threshold=100) # higher threshold fewer phrases.
trigram = gensim.models.Phrases(bigram[episode_lines], threshold=100)
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)
# Tokenize, remove stopwords etc
processed_lines = lda_utils.process_words(episode_lines, nlp, bigram_mod, trigram_mod, allowed_postags=["NOUN"])
# Convert every token to an id
id2word = corpora.Dictionary(processed_lines)
# Count frequencies of the token ids within each episode (bag-of-words)
corpus = [id2word.doc2bow(episode_lines) for episode_lines in processed_lines]
lda_model = gensim.models.LdaMulticore(corpus=corpus,
id2word=id2word,
num_topics=12,
random_state=420,
chunksize=10,
passes=10,
iterations=100,
per_word_topics=True,
minimum_probability=0.0)
# Compute Perplexity
print('\nPerplexity: ', lda_model.log_perplexity(corpus)) # a measure of how good the model is. lower the better.
# Compute Coherence Score
coherence_model_lda = CoherenceModel(model=lda_model,
texts=processed_lines,
dictionary=id2word,
coherence='c_v')
coherence_lda = coherence_model_lda.get_coherence()
print('\nCoherence Score: ', coherence_lda)
pprint(lda_model.print_topics())
model_list, coherence_values = lda_utils.compute_coherence_values(texts=processed_lines,
corpus=corpus,
dictionary=id2word,
start=3,
limit=30,
step=2)
df_topic_keywords = lda_utils.format_topics_sentences(ldamodel=lda_model,
corpus=corpus,
texts=processed_lines)
# Format
df_dominant_topic = df_topic_keywords.reset_index()
df_dominant_topic.columns = ['Document_No', 'Dominant_Topic', 'Topic_Perc_Contrib', 'Keywords', 'Text']
df_dominant_topic.sample(10)
values = list(lda_model.get_document_topics(corpus))
split = []
for entry in values:
topic_prevelance = []
for topic in entry:
topic_prevelance.append(topic[1])
split.append(topic_prevelance)
df = pd.DataFrame(map(list,zip(*split)))
sns.lineplot(data=df.T.rolling(50).mean())
```
|
github_jupyter
|
# standard library
import sys,os
sys.path.append('..')
from pprint import pprint
# data and nlp
import pandas as pd
import spacy
nlp = spacy.load("en_core_web_sm", disable=["ner"])
# visualisation
import pyLDAvis
pyLDAvis.enable_notebook()
import seaborn as sns
from matplotlib import rcParams
# figure size in inches
rcParams['figure.figsize'] = 20,10
# LDA tools
import gensim
import gensim.corpora as corpora
from gensim.models import CoherenceModel
from utils import lda_utils
# warnings
import logging, warnings
warnings.filterwarnings('ignore')
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.ERROR)
import json
# Unfortunately newlines have been parsed as nothing instead of spaces
# but the script will work just the same
with open('data/all_series_lines.json') as file:
content = file.read()
line_dict = json.loads(content)
line_dict['DS9']['episode 0']['ODO']
episodes = {}
for series_name, series in line_dict.items():
for episode_name, episode in series.items():
episode_string = ''
for character_lines in episode.values():
lines = ' '.join(character_lines)
# Avoid adding just spaces
if len(lines) != 0:
episode_string += ' ' + lines
# Add the string containing all lines from the episode to our dict
episode_key = series_name + '_' + episode_name.split()[1]
episodes[episode_key] = episode_string
# explicitly convert to a list for processing
episode_lines = list(episodes.values())
# Build the bigram and trigram models
bigram = gensim.models.Phrases(episode_lines, min_count=10, threshold=100) # higher threshold fewer phrases.
trigram = gensim.models.Phrases(bigram[episode_lines], threshold=100)
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)
# Tokenize, remove stopwords etc
processed_lines = lda_utils.process_words(episode_lines, nlp, bigram_mod, trigram_mod, allowed_postags=["NOUN"])
# Convert every token to an id
id2word = corpora.Dictionary(processed_lines)
# Count frequencies of the token ids within each episode (bag-of-words)
corpus = [id2word.doc2bow(episode_lines) for episode_lines in processed_lines]
lda_model = gensim.models.LdaMulticore(corpus=corpus,
id2word=id2word,
num_topics=12,
random_state=420,
chunksize=10,
passes=10,
iterations=100,
per_word_topics=True,
minimum_probability=0.0)
# Compute Perplexity
print('\nPerplexity: ', lda_model.log_perplexity(corpus)) # a measure of how good the model is. lower the better.
# Compute Coherence Score
coherence_model_lda = CoherenceModel(model=lda_model,
texts=processed_lines,
dictionary=id2word,
coherence='c_v')
coherence_lda = coherence_model_lda.get_coherence()
print('\nCoherence Score: ', coherence_lda)
pprint(lda_model.print_topics())
model_list, coherence_values = lda_utils.compute_coherence_values(texts=processed_lines,
corpus=corpus,
dictionary=id2word,
start=3,
limit=30,
step=2)
df_topic_keywords = lda_utils.format_topics_sentences(ldamodel=lda_model,
corpus=corpus,
texts=processed_lines)
# Format
df_dominant_topic = df_topic_keywords.reset_index()
df_dominant_topic.columns = ['Document_No', 'Dominant_Topic', 'Topic_Perc_Contrib', 'Keywords', 'Text']
df_dominant_topic.sample(10)
values = list(lda_model.get_document_topics(corpus))
split = []
for entry in values:
topic_prevelance = []
for topic in entry:
topic_prevelance.append(topic[1])
split.append(topic_prevelance)
df = pd.DataFrame(map(list,zip(*split)))
sns.lineplot(data=df.T.rolling(50).mean())
| 0.374448 | 0.210381 |
# Collecting and Graphing English Wikipedia page views 2007-2021
#### This code is made available for re-use under an [MIT license](https://opensource.org/licenses/MIT)
# Imports
```
import requests
import json
import pandas as pd
from matplotlib import rcParams
import matplotlib.pyplot as plt
import numpy as np
```
# Data acquisition
The project pulls data from Wikimedia's API. We leverage two endpoints:
1. Legacy Pagecounts API ([documentation](https://wikitech.wikimedia.org/wiki/Analytics/AQS/Legacy_Pagecounts), [endpoint](https://wikimedia.org/api/rest_v1/#/Pagecounts_data_(legacy)/get_metrics_legacy_pagecounts_aggregate_project_access_site_granularity_start_end))
2. PageView API ([documentation](https://wikitech.wikimedia.org/wiki/Analytics/AQS/Pageviews), [endpoint](https://wikimedia.org/api/rest_v1/#/Pageviews_data/get_metrics_pageviews_aggregate_project_access_agent_granularity_start_end))
Visit the hyperlinks to learn more about the API endpoints and their respective parameters.
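As a concrete illustration of these endpoints, the sketch below shows what one fully resolved Legacy Pagecounts request looks like, using the same project, access site, and date range as the calls made later in this notebook (the `User-Agent` value here is just a placeholder).
```
# Illustration: one resolved Legacy Pagecounts request (values match the parameters used below).
import requests

example_url = (
    "https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/"
    "en.wikipedia.org/desktop-site/monthly/2008010100/2016080100"
)
resp = requests.get(example_url, headers={'User-Agent': 'example-user-agent'})  # placeholder header
print(resp.json()['items'][:1])  # each item carries project, access-site, granularity, timestamp and count
```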
### Define Wikimedia endpoints
```
pagecount_endpoint = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access-site}/{granularity}/{start}/{end}'
pageview_endpoint = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
headers = {
'User-Agent': 'https://github.com/savageGrant',
'From':'[email protected]'
}
```
### Define API Parameters
We define two sets of parameters, one for each API we leverage.
We will be making a call for each access type we want to collect information on.
For the Legacy Pagecount API we will collect data on access types:
1. desktop-site
2. mobile-site
For the Pageview API we will collect data on access types:
1. desktop
2. mobile-web
3. mobile-app
```
# Legacy pagecount API params
legacy_pagecount = """{{"project" : "en.wikipedia.org",
"access-site" : "{placeholder}",
"granularity" : "monthly",
"start" : "2008010100",
"end" : "2016080100"
}}"""
legacy_pagecount_access_types = ['desktop-site', 'mobile-site']
# Pageview API params
pageview = """{{"project" : "en.wikipedia.org",
"access" : "{placeholder}",
"agent" : "user",
"granularity" : "monthly",
"start" : "2015070100",
"end" : "2021090100"
}}"""
pageview_access_types = ['desktop', 'mobile-web', 'mobile-app']
```
### Make the API calls for each of the access types and save to json files
```
def api_call(endpoint,parameters):
call = requests.get(endpoint.format(**parameters), headers=headers)
response = call.json()
return response
```
The next cell runs two for loops, one for each API endpoint. For each endpoint it makes a call for each desired access type, saves the output of each call to a .json file in the data_raw folder, and copies the data to the `dataframes` dictionary with the access_type value as the key.
The JSON naming convention used is apiname_accesstype_firstmonth-lastmonth.json
```
dataframes = {}
#loop over each access type and make an api_call. Save output to file and copy to the dataframes dictionary.
#legacy pagecount API calls
for access_type in legacy_pagecount_access_types:
api_params = json.loads(legacy_pagecount.format(placeholder=access_type))
monthly_pagecount = api_call(pagecount_endpoint, api_params)['items']
dataframes[access_type] = pd.DataFrame(monthly_pagecount)
with open('data_raw/pagecounts_{}_20080101-20160801.json'.format(access_type), 'w') as json_file:
json.dump(monthly_pagecount, json_file)
#pageview API calls
for access_type in pageview_access_types:
api_params = json.loads(pageview.format(placeholder=access_type))
monthly_pageview = api_call(pageview_endpoint, api_params)['items']
dataframes[access_type] = pd.DataFrame(monthly_pageview)
    with open('data_raw/pageviews_{}_20150701-20210901.json'.format(access_type), 'w') as json_file:
json.dump(monthly_pageview, json_file)
```
# Data processing
This section cleans the raw data and combines the 5 dataframes that were created in the previous step into one final dataframe.
### Rename columns and combine pageview mobile-web and pageview mobile-app data into single pageview_mobile_views column.
```
#legacy_pagecount desktop-site
dataframes['desktop-site'].rename(columns={'count':'pagecount_desktop_views'}, inplace=True)
# legacy_pagecount mobile-site
dataframes['mobile-site'].rename(columns={'count':'pagecount_mobile_views'}, inplace=True)
# pageview desktop
dataframes['desktop'].rename(columns={'views':'pageview_desktop_views'}, inplace=True)
# pageview mobile combines both mobile-web and mobile-app into a single dataframe
mobile_pageview = pd.concat([dataframes['mobile-web'], dataframes['mobile-app']])\
.groupby('timestamp')['views']\
.sum().reset_index()
mobile_pageview.rename(columns={'views':'pageview_mobile_views'}, inplace=True)
```
### Merge dataframes into a final frame and drop unnecessary columns
```
final_df = mobile_pageview.merge(dataframes['desktop'],how='outer', on='timestamp').merge(dataframes['mobile-site'], how='outer', on='timestamp').merge(dataframes['desktop-site'], how='outer', on='timestamp')
final_df = final_df[['timestamp','pagecount_mobile_views','pagecount_desktop_views','pageview_mobile_views', 'pageview_desktop_views']]
```
### Create two new columns, summing the views across access type for the pagecount and the pageview APIs.
```
final_df['pagecount_all_views'] = final_df['pagecount_mobile_views'].fillna(0) + final_df['pagecount_desktop_views'].fillna(0)
final_df['pageview_all_views'] = final_df['pageview_mobile_views'].fillna(0) + final_df['pageview_desktop_views'].fillna(0)
```
### Extracting year and month from the timestamp, sorting the dataframe, then dropping the timestamp column
```
final_df['year'] = final_df['timestamp'].str[:4]
final_df['month'] = final_df['timestamp'].str[4:6]
final_df = final_df.sort_values(by=['timestamp'])
final_df.drop(columns=['timestamp'],inplace=True)
```
### Save the cleaned data to a CSV.
Note: The CSV has 0 values in rows where data does not exist, while the dataframe keeps np.nan. This is intentional because the NaN values produce a cleaner-looking graph.
```
final_df.fillna(0).to_csv('data_clean/en-wikipedia_traffic_200712-202108.csv')
```
# Analysis
This is a simple plot of the data over time. I leverage matplotlib to generate the graph.
```
# Date column makes plotting significantly easier
final_df['Date'] = pd.to_datetime([f'{y}-{m}-01' for y, m in zip(final_df.year, final_df.month)])
# removes zeros for styling graph
final_df = final_df.replace(0,np.nan)
#plot
rcParams['figure.figsize'] = [20, 10]
plt.figure()
plt.ticklabel_format(style='plain')
plt.plot(final_df.Date, final_df.pagecount_desktop_views/1e6, color = 'black', linestyle = 'dashed', label='pagecount_desktop_views')
plt.plot(final_df.Date, final_df.pagecount_all_views/1e6, color = 'red', linestyle = 'dashed', label='pagecount_all_views')
plt.plot(final_df.Date, final_df.pagecount_mobile_views/1e6, color = 'green', linestyle = 'dashed', label='pagecount_mobile_views')
plt.plot(final_df.Date, final_df.pageview_all_views/1e6, color = 'red', linestyle = 'solid',label='pageview_all_views')
plt.plot(final_df.Date, final_df.pageview_desktop_views/1e6, color = 'black', linestyle = 'solid',label='pageview_desktop_views')
plt.plot(final_df.Date, final_df.pageview_mobile_views/1e6, color = 'green', linestyle = 'solid',label='pageview_mobile_views')
plt.legend(loc = "upper left")
plt.title('Page Views on English Wikipedia (in millions)')
plt.grid(True)
plt.figure(figsize=(12,8))
```
|
github_jupyter
|
import requests
import json
import pandas as pd
from matplotlib import rcParams
import matplotlib.pyplot as plt
import numpy as np
pagecount_endpoint = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access-site}/{granularity}/{start}/{end}'
pageview_endpoint = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
headers = {
'User-Agent': 'https://github.com/savageGrant',
'From':'[email protected]'
}
# Legacy pagecount API params
legacy_pagecount = """{{"project" : "en.wikipedia.org",
"access-site" : "{placeholder}",
"granularity" : "monthly",
"start" : "2008010100",
"end" : "2016080100"
}}"""
legacy_pagecount_access_types = ['desktop-site', 'mobile-site']
# Pageview API params
pageview = """{{"project" : "en.wikipedia.org",
"access" : "{placeholder}",
"agent" : "user",
"granularity" : "monthly",
"start" : "2015070100",
"end" : "2021090100"
}}"""
pageview_access_types = ['desktop', 'mobile-web', 'mobile-app']
def api_call(endpoint,parameters):
call = requests.get(endpoint.format(**parameters), headers=headers)
response = call.json()
return response
dataframes = {}
#loop over each access type and make an api_call. Save output to file and copy to the dataframes dictionary.
#legacy pagecount API calls
for access_type in legacy_pagecount_access_types:
api_params = json.loads(legacy_pagecount.format(placeholder=access_type))
monthly_pagecount = api_call(pagecount_endpoint, api_params)['items']
dataframes[access_type] = pd.DataFrame(monthly_pagecount)
with open('data_raw/pagecounts_{}_20080101-20160801.json'.format(access_type), 'w') as json_file:
json.dump(monthly_pagecount, json_file)
#pageview API calls
for access_type in pageview_access_types:
api_params = json.loads(pageview.format(placeholder=access_type))
monthly_pageview = api_call(pageview_endpoint, api_params)['items']
dataframes[access_type] = pd.DataFrame(monthly_pageview)
    with open('data_raw/pageviews_{}_20150701-20210901.json'.format(access_type), 'w') as json_file:
json.dump(monthly_pageview, json_file)
#legacy_pagecount desktop-site
dataframes['desktop-site'].rename(columns={'count':'pagecount_desktop_views'}, inplace=True)
# legacy_pagecount mobile-site
dataframes['mobile-site'].rename(columns={'count':'pagecount_mobile_views'}, inplace=True)
# pageview desktop
dataframes['desktop'].rename(columns={'views':'pageview_desktop_views'}, inplace=True)
# pageview mobile combines both mobile-web and mobile-app into a single dataframe
mobile_pageview = pd.concat([dataframes['mobile-web'], dataframes['mobile-app']])\
.groupby('timestamp')['views']\
.sum().reset_index()
mobile_pageview.rename(columns={'views':'pageview_mobile_views'}, inplace=True)
final_df = mobile_pageview.merge(dataframes['desktop'],how='outer', on='timestamp').merge(dataframes['mobile-site'], how='outer', on='timestamp').merge(dataframes['desktop-site'], how='outer', on='timestamp')
final_df = final_df[['timestamp','pagecount_mobile_views','pagecount_desktop_views','pageview_mobile_views', 'pageview_desktop_views']]
final_df['pagecount_all_views'] = final_df['pagecount_mobile_views'].fillna(0) + final_df['pagecount_desktop_views'].fillna(0)
final_df['pageview_all_views'] = final_df['pageview_mobile_views'].fillna(0) + final_df['pageview_desktop_views'].fillna(0)
final_df['year'] = final_df['timestamp'].str[:4]
final_df['month'] = final_df['timestamp'].str[4:6]
final_df = final_df.sort_values(by=['timestamp'])
final_df.drop(columns=['timestamp'],inplace=True)
final_df.fillna(0).to_csv('data_clean/en-wikipedia_traffic_200712-202108.csv')
# Date column makes plotting significantly easier
final_df['Date'] = pd.to_datetime([f'{y}-{m}-01' for y, m in zip(final_df.year, final_df.month)])
# removes zeros for styling graph
final_df = final_df.replace(0,np.nan)
#plot
rcParams['figure.figsize'] = [20, 10]
plt.figure()
plt.ticklabel_format(style='plain')
plt.plot(final_df.Date, final_df.pagecount_desktop_views/1e6, color = 'black', linestyle = 'dashed', label='pagecount_desktop_views')
plt.plot(final_df.Date, final_df.pagecount_all_views/1e6, color = 'red', linestyle = 'dashed', label='pagecount_all_views')
plt.plot(final_df.Date, final_df.pagecount_mobile_views/1e6, color = 'green', linestyle = 'dashed', label='pagecount_mobile_views')
plt.plot(final_df.Date, final_df.pageview_all_views/1e6, color = 'red', linestyle = 'solid',label='pageview_all_views')
plt.plot(final_df.Date, final_df.pageview_desktop_views/1e6, color = 'black', linestyle = 'solid',label='pageview_desktop_views')
plt.plot(final_df.Date, final_df.pageview_mobile_views/1e6, color = 'green', linestyle = 'solid',label='pageview_mobile_views')
plt.legend(loc = "upper left")
plt.title('Page Views on English Wikipedia (in millions)')
plt.grid(True)
plt.figure(figsize=(12,8))
| 0.473414 | 0.79158 |
```
import arviz as az
import numpy as np
from generate_data import generate_data
from utils import StanModel_cache
n = 100
Years_indiv, Mean_RT_comp_Indiv, Mean_RT_incomp_Indiv = generate_data(8, n)
y_obs = np.hstack((Mean_RT_comp_Indiv, Mean_RT_incomp_Indiv))
age = np.hstack((Years_indiv, Years_indiv))
condition = np.hstack((np.full(n, 1, dtype=int), np.full(n, 2, dtype=int))) # 1 for comp, 2 for incomp
dims = {"y_obs": ["obs_dim"]}
log_lik_dict = ["log_lik", "log_lik_ex"]
data = {
"n": 2*n,
"y_obs": y_obs,
"condition": condition,
"age": age,
"mean_rt": [Mean_RT_comp_Indiv.mean(), Mean_RT_incomp_Indiv.mean()],
"n_ex": 0,
"age_ex": np.array([], dtype=int),
"y_obs_ex": [],
"condition_ex": np.array([], dtype=int),
}
```
This code is basically the same as the one in the PyStan example (and also equivalent to the PyMC3 one) with two main differences:
* The log likelihood returned is already the sum of the 2 observations that correspond to each subject
* There are also some variables with an `_ex` suffix. These variables will be used to perform exact cross-validation; they hold the data that is not used for fitting but only for cross-validation.
```
loo_obs_code = """
data {
int<lower=0> n;
real y_obs[n];
int condition[n];
int<lower=0> age[n];
real mean_rt[2];
// excluded data
int<lower=0> n_ex;
real y_obs_ex[n_ex];
int condition_ex[n_ex];
int<lower=0> age_ex[n_ex];
}
parameters {
real b;
real<lower=0> sigma;
real<lower=0> a[2];
real g[2];
}
transformed parameters {
real mu[n];
real mu_ex[n_ex];
for (j in 1:n) {
mu[j] = a[condition[j]]*exp(-b*age[j]) + g[condition[j]];
}
for (i in 1:n_ex) {
mu_ex[i] = a[condition_ex[i]]*exp(-b*age_ex[i]) + g[condition_ex[i]];
}
}
model {
a ~ cauchy(0, 5);
b ~ normal(1, 1);
g ~ normal(mean_rt, .5);
sigma ~ normal(0, .2);
y_obs ~ normal(mu, sigma);
}
generated quantities {
real log_lik[n];
real log_lik_ex[n_ex];
for (j in 1:n) {
log_lik[j] = normal_lpdf(y_obs[j] | mu[j], sigma);
}
for (i in 1:n_ex) {
log_lik_ex[i] = normal_lpdf(y_obs_ex[i] | mu_ex[i], sigma);
}
}
"""
stan_model = StanModel_cache(model_code=loo_obs_code)
fit_kwargs = dict(iter=4000, control={"adapt_delta" : 0.9})
fit = stan_model.sampling(data=data, **fit_kwargs)
idata_kwargs = dict(
observed_data=["y_obs", "condition"],
constant_data=["age"],
dims=dims,
log_likelihood=log_lik_dict
)
idata_exp = az.from_pystan(fit, **idata_kwargs)
class ExpWrapper(az.PyStanSamplingWrapper):
def sel_observations(self, idx):
ages = self.idata_orig.constant_data.age.values
y = self.idata_orig.observed_data.y_obs.values
cond = self.idata_orig.observed_data.condition.values
mask = np.full_like(ages, True, dtype=bool)
mask[idx] = False
n_obs = np.sum(mask)
n_ex = np.sum(~mask)
        means = [y[mask & (cond == 1)].mean(), y[mask & (cond == 2)].mean()]  # condition is coded 1/2 above
observations = {
"n": n_obs,
"age": ages[mask],
"y_obs": y[mask],
"condition": cond[mask],
"mean_rt": means,
"n_ex": n_ex,
"age_ex": ages[~mask],
"y_obs_ex": y[~mask],
"condition_ex": cond[~mask]
}
return observations, "log_lik_ex"
idata_exp.sample_stats["log_likelihood"] = idata_exp.log_likelihood.log_lik
loo_psis = az.loo(idata_exp, pointwise=True)
print("(PSIS) Leave one *subject* out cross validation (whole model)\n")
loo_psis
loo_psis.pareto_k[:] = 1.2 # dirty trick: we set all pareto_k values above threshold
# to make reloo perform exact cross validation for us
exp_wrapper = ExpWrapper(
stan_model,
idata_orig=idata_exp,
sample_kwargs=fit_kwargs,
idata_kwargs=idata_kwargs
)
loo_exact = az.reloo(exp_wrapper, loo_orig=loo_psis)
print("(exact) Leave one *subject* out cross validation (whole model)\n")
loo_exact
```
|
github_jupyter
|
import arviz as az
import numpy as np
from generate_data import generate_data
from utils import StanModel_cache
n = 100
Years_indiv, Mean_RT_comp_Indiv, Mean_RT_incomp_Indiv = generate_data(8, n)
y_obs = np.hstack((Mean_RT_comp_Indiv, Mean_RT_incomp_Indiv))
age = np.hstack((Years_indiv, Years_indiv))
condition = np.hstack((np.full(n, 1, dtype=int), np.full(n, 2, dtype=int))) # 1 for comp, 2 for incomp
dims = {"y_obs": ["obs_dim"]}
log_lik_dict = ["log_lik", "log_lik_ex"]
data = {
"n": 2*n,
"y_obs": y_obs,
"condition": condition,
"age": age,
"mean_rt": [Mean_RT_comp_Indiv.mean(), Mean_RT_incomp_Indiv.mean()],
"n_ex": 0,
"age_ex": np.array([], dtype=int),
"y_obs_ex": [],
"condition_ex": np.array([], dtype=int),
}
loo_obs_code = """
data {
int<lower=0> n;
real y_obs[n];
int condition[n];
int<lower=0> age[n];
real mean_rt[2];
// excluded data
int<lower=0> n_ex;
real y_obs_ex[n_ex];
int condition_ex[n_ex];
int<lower=0> age_ex[n_ex];
}
parameters {
real b;
real<lower=0> sigma;
real<lower=0> a[2];
real g[2];
}
transformed parameters {
real mu[n];
real mu_ex[n_ex];
for (j in 1:n) {
mu[j] = a[condition[j]]*exp(-b*age[j]) + g[condition[j]];
}
for (i in 1:n_ex) {
mu_ex[i] = a[condition_ex[i]]*exp(-b*age_ex[i]) + g[condition_ex[i]];
}
}
model {
a ~ cauchy(0, 5);
b ~ normal(1, 1);
g ~ normal(mean_rt, .5);
sigma ~ normal(0, .2);
y_obs ~ normal(mu, sigma);
}
generated quantities {
real log_lik[n];
real log_lik_ex[n_ex];
for (j in 1:n) {
log_lik[j] = normal_lpdf(y_obs[j] | mu[j], sigma);
}
for (i in 1:n_ex) {
log_lik_ex[i] = normal_lpdf(y_obs_ex[i] | mu_ex[i], sigma);
}
}
"""
stan_model = StanModel_cache(model_code=loo_obs_code)
fit_kwargs = dict(iter=4000, control={"adapt_delta" : 0.9})
fit = stan_model.sampling(data=data, **fit_kwargs)
idata_kwargs = dict(
observed_data=["y_obs", "condition"],
constant_data=["age"],
dims=dims,
log_likelihood=log_lik_dict
)
idata_exp = az.from_pystan(fit, **idata_kwargs)
class ExpWrapper(az.PyStanSamplingWrapper):
def sel_observations(self, idx):
ages = self.idata_orig.constant_data.age.values
y = self.idata_orig.observed_data.y_obs.values
cond = self.idata_orig.observed_data.condition.values
mask = np.full_like(ages, True, dtype=bool)
mask[idx] = False
n_obs = np.sum(mask)
n_ex = np.sum(~mask)
        means = [y[mask & (cond == 1)].mean(), y[mask & (cond == 2)].mean()]  # condition is coded 1/2 above
observations = {
"n": n_obs,
"age": ages[mask],
"y_obs": y[mask],
"condition": cond[mask],
"mean_rt": means,
"n_ex": n_ex,
"age_ex": ages[~mask],
"y_obs_ex": y[~mask],
"condition_ex": cond[~mask]
}
return observations, "log_lik_ex"
idata_exp.sample_stats["log_likelihood"] = idata_exp.log_likelihood.log_lik
loo_psis = az.loo(idata_exp, pointwise=True)
print("(PSIS) Leave one *subject* out cross validation (whole model)\n")
loo_psis
loo_psis.pareto_k[:] = 1.2 # dirty trick: we set all pareto_k values above threshold
# to make reloo perform exact cross validation for us
exp_wrapper = ExpWrapper(
stan_model,
idata_orig=idata_exp,
sample_kwargs=fit_kwargs,
idata_kwargs=idata_kwargs
)
loo_exact = az.reloo(exp_wrapper, loo_orig=loo_psis)
print("(exact) Leave one *subject* out cross validation (whole model)\n")
loo_exact
| 0.575588 | 0.740245 |
# From `zipline` to `pyfolio`
[Pyfolio](http://quantopian.github.io/pyfolio/) facilitates the analysis of portfolio performance and risk in-sample and out-of-sample using many standard metrics. It produces tear sheets covering the analysis of returns, positions, and transactions, as well as event risk during periods of market stress using several built-in scenarios, and also includes Bayesian out-of-sample performance analysis.
* Open-source backtester by Quantopian Inc.
* Powers Quantopian.com
* State-of-the-art portfolio and risk analytics
* Various models for transaction costs and slippage.
* Open source and free: Apache v2 license
* Can be used:
- stand alone
- with Zipline
- on Quantopian
To run this notebook, first run the following from the command line to create a `conda` environment with `zipline` and `pyfolio`:
```
conda env create -f environment.yml
```
This assumes you have miniconda3 installed.
## Imports & Settings
```
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
from pathlib import Path
import re
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from pyfolio.utils import extract_rets_pos_txn_from_zipline
from pyfolio.plotting import (plot_perf_stats,
show_perf_stats,
plot_rolling_beta,
plot_rolling_returns,
plot_rolling_sharpe,
plot_drawdown_periods,
plot_drawdown_underwater)
from pyfolio.timeseries import perf_stats, extract_interesting_date_ranges
sns.set_style('whitegrid')
```
## Converting data from zipline to pyfolio
```
with pd.HDFStore('backtests.h5') as store:
backtest = store['backtest/equal_weight']
backtest.info()
```
`pyfolio` relies on portfolio returns and position data, and can also take into account the transaction costs and slippage losses of trading activity. The metrics are computed using the empyrical library that can also be used on a standalone basis. The performance DataFrame produced by the zipline backtesting engine can be translated into the requisite pyfolio input.
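Before the conversion below, here is a minimal sketch of the standalone empyrical usage mentioned above; it runs on a synthetic daily-returns series and assumes only that the `empyrical` package installed alongside `pyfolio` exposes its basic return metrics.
```
# Minimal standalone-empyrical sketch on synthetic daily returns (illustration only).
import numpy as np
import pandas as pd
import empyrical as ep

rets = pd.Series(np.random.default_rng(0).normal(0.0005, 0.01, 252),
                 index=pd.bdate_range('2020-01-01', periods=252))
print(ep.annual_return(rets))   # CAGR implied by the daily returns
print(ep.max_drawdown(rets))    # worst peak-to-trough loss
print(ep.sharpe_ratio(rets))    # annualized Sharpe ratio (risk-free rate of 0 by default)
```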
```
returns, positions, transactions = extract_rets_pos_txn_from_zipline(backtest)
returns.head().append(returns.tail())
positions.info()
positions.columns = [c for c in positions.columns[:-1]] + ['cash']
positions.index = positions.index.normalize()
positions.info()
transactions.symbol = transactions.symbol.apply(lambda x: x.symbol)
transactions.head().append(transactions.tail())
HDF_PATH = Path('..', 'data', 'assets.h5')
```
### Sector Map
```
assets = positions.columns[:-1]
with pd.HDFStore(HDF_PATH) as store:
df = store.get('us_equities/stocks')['sector'].dropna()
df = df[~df.index.duplicated()]
sector_map = df.reindex(assets).fillna('Unknown').to_dict()
```
### Benchmark
```
with pd.HDFStore(HDF_PATH) as store:
benchmark_rets = store['sp500/fred'].close.pct_change()
benchmark_rets.name = 'S&P500'
benchmark_rets = benchmark_rets.tz_localize('UTC').filter(returns.index)
benchmark_rets.tail()
perf_stats(returns=returns,
factor_returns=benchmark_rets)
# positions=positions,
# transactions=transactions)
fig, ax = plt.subplots(figsize=(14, 5))
plot_perf_stats(returns=returns,
factor_returns=benchmark_rets,
ax=ax)
sns.despine()
fig.tight_layout();
```
## Returns Analysis
Testing a trading strategy involves backtesting against historical data to fine-tune alpha factor parameters, as well as forward-testing against new market data to validate that the strategy performs well out of sample or if the parameters are too closely tailored to specific historical circumstances.
Pyfolio allows for the designation of an out-of-sample period to simulate walk-forward testing. There are numerous aspects to take into account when testing a strategy to obtain statistically reliable results, which we will address here.
```
oos_date = '2016-01-01'
show_perf_stats(returns=returns,
factor_returns=benchmark_rets,
positions=positions,
transactions=transactions,
live_start_date=oos_date)
```
### Rolling Returns OOS
The `plot_rolling_returns` function displays cumulative in and out-of-sample returns against a user-defined benchmark (we are using the S&P 500):
```
plot_rolling_returns(returns=returns,
factor_returns=benchmark_rets,
live_start_date=oos_date,
cone_std=(1.0, 1.5, 2.0))
plt.gcf().set_size_inches(14, 8)
sns.despine()
plt.tight_layout();
```
The plot includes a cone that shows expanding confidence intervals to indicate when out-of-sample returns appear unlikely given random-walk assumptions. Here, our strategy did not perform well against the benchmark during the simulated 2017 out-of-sample period.
## Summary Performance Statistics
pyfolio offers several analytic functions and plots. The perf_stats summary displays the annual and cumulative returns, volatility, skew, and kurtosis of returns, and the Sharpe ratio (SR). The following additional metrics (which can also be calculated individually) are most important; a small hand-rolled sketch of a few of them follows the list:
- Max drawdown: Highest percentage loss from the previous peak
- Calmar ratio: Annual portfolio return relative to maximal drawdown
- Omega ratio: The probability-weighted ratio of gains versus losses for a return target, zero per default
- Sortino ratio: Excess return relative to downside standard deviation
- Tail ratio: Size of the right tail (gains, the absolute value of the 95th percentile) relative to the size of the left tail (losses, abs. value of the 5th percentile)
- Daily value at risk (VaR): Loss corresponding to a return two standard deviations below the daily mean
- Alpha: Portfolio return unexplained by the benchmark return
- Beta: Exposure to the benchmark
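To make a few of these definitions concrete, here is a small hand-rolled sketch (an illustration of the definitions above, not the pyfolio implementation) that computes the maximum drawdown, the Sortino ratio, and the daily VaR directly from a daily returns series:
```
# Hand-rolled versions of a few of the metrics listed above (illustration only).
import numpy as np
import pandas as pd

def max_drawdown(rets: pd.Series) -> float:
    wealth = (1 + rets).cumprod()
    return (wealth / wealth.cummax() - 1).min()        # highest % loss from the previous peak

def sortino_ratio(rets: pd.Series, periods: int = 252) -> float:
    downside = rets[rets < 0].std(ddof=1)              # downside standard deviation
    return np.sqrt(periods) * rets.mean() / downside   # mean return relative to downside risk, annualized

def daily_var(rets: pd.Series) -> float:
    return rets.mean() - 2 * rets.std(ddof=1)          # return two standard deviations below the daily mean

sample = pd.Series(np.random.default_rng(1).normal(0.0005, 0.01, 252))
print(max_drawdown(sample), sortino_ratio(sample), daily_var(sample))
```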
### Rolling Sharpe
```
plot_rolling_sharpe(returns=returns)
plt.gcf().set_size_inches(14, 8)
sns.despine()
plt.tight_layout();
```
### Rolling Beta
```
plot_rolling_beta(returns=returns, factor_returns=benchmark_rets)
plt.gcf().set_size_inches(14, 6)
sns.despine()
plt.tight_layout();
```
## Drawdown Periods
The plot_drawdown_periods(returns) function plots the principal drawdown periods for the portfolio, and several other plotting functions show the rolling Sharpe ratio and rolling factor exposures to the market beta or the Fama-French size, growth, and momentum factors:
```
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(16, 10))
axes = ax.flatten()
plot_drawdown_periods(returns=returns, ax=axes[0])
plot_rolling_beta(returns=returns, factor_returns=benchmark_rets, ax=axes[1])
plot_drawdown_underwater(returns=returns, ax=axes[2])
plot_rolling_sharpe(returns=returns)
sns.despine()
plt.tight_layout();
```
This plot, which highlights a subset of the visualization contained in the various tear sheets, illustrates how pyfolio allows us to drill down into the performance characteristics and exposure to fundamental drivers of risk and returns.
## Modeling Event Risk
Pyfolio also includes timelines for various events that you can use to compare the performance of a portfolio to a benchmark during those periods, for example, during the fall 2015 selloff following the Brexit vote.
```
interesting_times = extract_interesting_date_ranges(returns=returns)
(interesting_times['Fall2015']
.to_frame('momentum_equal_weights').join(benchmark_rets)
.add(1).cumprod().sub(1)
.plot(lw=2, figsize=(14, 6), title='Post-Brexit Turmoil'))
sns.despine()
plt.tight_layout();
```
|
github_jupyter
|
conda env create -f environment.yml
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
from pathlib import Path
import re
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from pyfolio.utils import extract_rets_pos_txn_from_zipline
from pyfolio.plotting import (plot_perf_stats,
show_perf_stats,
plot_rolling_beta,
plot_rolling_returns,
plot_rolling_sharpe,
plot_drawdown_periods,
plot_drawdown_underwater)
from pyfolio.timeseries import perf_stats, extract_interesting_date_ranges
sns.set_style('whitegrid')
with pd.HDFStore('backtests.h5') as store:
backtest = store['backtest/equal_weight']
backtest.info()
returns, positions, transactions = extract_rets_pos_txn_from_zipline(backtest)
returns.head().append(returns.tail())
positions.info()
positions.columns = [c for c in positions.columns[:-1]] + ['cash']
positions.index = positions.index.normalize()
positions.info()
transactions.symbol = transactions.symbol.apply(lambda x: x.symbol)
transactions.head().append(transactions.tail())
HDF_PATH = Path('..', 'data', 'assets.h5')
assets = positions.columns[:-1]
with pd.HDFStore(HDF_PATH) as store:
df = store.get('us_equities/stocks')['sector'].dropna()
df = df[~df.index.duplicated()]
sector_map = df.reindex(assets).fillna('Unknown').to_dict()
with pd.HDFStore(HDF_PATH) as store:
benchmark_rets = store['sp500/fred'].close.pct_change()
benchmark_rets.name = 'S&P500'
benchmark_rets = benchmark_rets.tz_localize('UTC').filter(returns.index)
benchmark_rets.tail()
perf_stats(returns=returns,
factor_returns=benchmark_rets)
# positions=positions,
# transactions=transactions)
fig, ax = plt.subplots(figsize=(14, 5))
plot_perf_stats(returns=returns,
factor_returns=benchmark_rets,
ax=ax)
sns.despine()
fig.tight_layout();
oos_date = '2016-01-01'
show_perf_stats(returns=returns,
factor_returns=benchmark_rets,
positions=positions,
transactions=transactions,
live_start_date=oos_date)
plot_rolling_returns(returns=returns,
factor_returns=benchmark_rets,
live_start_date=oos_date,
cone_std=(1.0, 1.5, 2.0))
plt.gcf().set_size_inches(14, 8)
sns.despine()
plt.tight_layout();
plot_rolling_sharpe(returns=returns)
plt.gcf().set_size_inches(14, 8)
sns.despine()
plt.tight_layout();
plot_rolling_beta(returns=returns, factor_returns=benchmark_rets)
plt.gcf().set_size_inches(14, 6)
sns.despine()
plt.tight_layout();
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(16, 10))
axes = ax.flatten()
plot_drawdown_periods(returns=returns, ax=axes[0])
plot_rolling_beta(returns=returns, factor_returns=benchmark_rets, ax=axes[1])
plot_drawdown_underwater(returns=returns, ax=axes[2])
plot_rolling_sharpe(returns=returns)
sns.despine()
plt.tight_layout();
interesting_times = extract_interesting_date_ranges(returns=returns)
(interesting_times['Fall2015']
.to_frame('momentum_equal_weights').join(benchmark_rets)
.add(1).cumprod().sub(1)
.plot(lw=2, figsize=(14, 6), title='Post-Brexit Turmoil'))
sns.despine()
plt.tight_layout();
| 0.600305 | 0.966757 |
# Analysis of Dataset
## Import of Libraries
```
import pandas as pd
import numpy as np
```
## Import of Data
```
# load dataset
df = pd.read_csv('data/kc_house_data.csv')
# read top 5 rows dataset
df.head()
```
### Check Data Types
```
# Check data types of dataset
df.dtypes
```
### Check NA
```
# check NA values
df.isna().sum()
```
## ETL Data
### Check duplicated.
```
# Check duplicated data from ID columns.
df['id'].duplicated().value_counts()
```
**NOTE:** There are 177 duplicated house IDs; this most likely happens because the sale price was updated for some houses, as shown below.
```
# I used group by to list all houses duplicated
pd.concat(cont for i, cont in df.groupby("id") if len(cont) > 1).head(20)
```
**Remove duplicates from the ID column (keeping the last occurrence)**
```
# I used drop duplicates to remove and keep only last ID
df.drop_duplicates(subset='id', keep='last', inplace=True)
#check duplicated again
df.duplicated().value_counts()
```
### Convert data type
```
# use pd.to_datetime to convert the column to a date type.
#df.dtypes
df['date'] = pd.to_datetime( df['date'])
#df.dtypes
df[['id','date']].head(20)
```
## Answering business questions.
### 1 - How many houses are available for purchase?
```
# Used nunique to count the number of unique ID keys
df['id'].nunique()
```
**Answer** There are 21,436 houses available for purchase.
### 2 - How many attributes do the houses have?
```
# use shape to count the number of rows or columns in the dataset
# df.shape[1] counts the columns
# df.shape[0] counts the rows
df.shape[1]
```
**Answer** The houses have 21 attributes.
### 3 - What are the attributes of the houses?
```
# use columns to display the column names of the dataset
df.columns
```
**Answer** The attributes are id, date, price, bedrooms, bathrooms, sqft_living, sqft_lot, floors, waterfront, view,
condition, grade, sqft_above, sqft_basement, yr_built, yr_renovated, zipcode, lat, long, sqft_living15, sqft_lot15
### 4 - Which is the most expensive house (highest sale price)?
```
# use sort_values(by=) to get values ordered by a column
# .rank() could also be used
df.sort_values(by='price',ascending=False).head(1)
```
**Answer** The house with ID 6762700020 has the highest sale price, 7,700,000.0.
### 5 - Which house has the largest number of bedrooms?
```
df.sort_values(by='bedrooms',ascending=False).head(1)
```
**Answer** The house with the largest number of bedrooms is ID 2402100895, with 33 bedrooms.
### 6 - What is the total number of bedrooms in the dataset?
```
df['bedrooms'].sum()
```
**Answer** The total number of bedrooms in the dataset is 72,273.
### 7 - How many houses have 2 bathrooms?
```
# use loc to count how many values in the bathrooms column equal 2
df.loc[df['bathrooms'] == 2].shape[0]
```
**Answer** 1,913 houses have 2 bathrooms.
### 8 - What is the average price of all the houses in the dataset?
```
# use .mean() to calculate the average price
df['price'].mean()
```
**Answer** The average house price is 541,649.96.
### 9 - What is the average price of houses with 2 bathrooms?
```
# use .mean() with .loc to calculate the average price of houses with 2 bathrooms
df[['price','bathrooms']].loc[df['bathrooms'] == 2].mean()
```
**Answer** The average price of houses with 2 bathrooms is 459,307.01.
### 10 - What is the minimum price among houses with 3 bedrooms?
```
df[['price','bedrooms']].loc[df['bedrooms'] == 3].min()
```
**Answer** The minimum price among houses with 3 bedrooms is 89,000.0.
### 11 - How many houses have more than 300 square meters of living room?
To answer this question we need to convert square feet to square meters, which simply means multiplying by 0.0929.
```
# creating a new column to hold the conversion
df['m2_living'] = df['sqft_living'] * 0.092
# checking the result
df.head()
# use .shape[] to count the rows where the m2_living column is greater than 300
df [df ['m2_living'] > 300] .shape[0]
```
**Answer** 2,251 houses have a living room larger than 300 m².
### 12 - How many houses have more than 2 floors?
```
# use .shape to count rows on the floors column
df[df['floors'] > 2] .shape[0]
```
**Answer** 780 houses have more than 2 floors.
### 13 - How many houses have a waterfront view?
```
# use != (not equal) and .shape to count the values
df[df['waterfront'] != 0] .shape[0]
```
**Answer** 163 houses have a waterfront view.
### 14 - Of the waterfront houses, how many have 3 bedrooms?
```
# use & for a logical "and", wrapping each condition in parentheses "()"
df[(df['waterfront'] != 0) & (df['bedrooms'] == 3)] .shape[0]
```
**Answer** 64 waterfront houses have 3 bedrooms.
### 15 - Of the houses with more than 300 square meters of living room, how many have more than 2 bathrooms?
```
# use & for a logical "and", wrapping each condition in parentheses "()"
df[(df['m2_living'] > 300) & (df['bathrooms'] > 2)] . shape[0]
```
**Answer** 2,082 houses have more than 300 m² of living room and more than 2 bathrooms.
```
df[(df['price'] > 3000000) & (df['bathrooms'] > 2)] . shape[0]
```
# Checkpoint: Export Dataset 1.0
**Creating a copy of the modified dataset for visualization in POWER BI**
```
dfv = df
dfv.to_csv('data/kc_house_data_viz.csv', index=False)
dfv.head()
```
|
github_jupyter
|
import pandas as pd
import numpy as np
# load dataset
df = pd.read_csv('data/kc_house_data.csv')
# read top 5 rows dataset
df.head()
# Check data types of dataset
df.dtypes
# check NA values
df.isna().sum()
# Check duplicated data from ID columns.
df['id'].duplicated().value_counts()
# I used group by to list all houses duplicated
pd.concat(cont for i, cont in df.groupby("id") if len(cont) > 1).head(20)
# I used drop duplicates to remove and keep only last ID
df.drop_duplicates(subset='id', keep='last', inplace=True)
#check duplicated again
df.duplicated().value_counts()
# use pd.to_datetime to convert the column to a date type.
#df.dtypes
df['date'] = pd.to_datetime( df['date'])
#df.dtypes
df[['id','date']].head(20)
# Used nunique to count the number of unique ID keys
df['id'].nunique()
# use shape to count the number of rows or columns in the dataset
# df.shape[1] counts the columns
# df.shape[0] counts the rows
df.shape[1]
# use columns to display the column names of the dataset
df.columns
# use sort_values(by=) to get values ordered by a column
# .rank() could also be used
df.sort_values(by='price',ascending=False).head(1)
df.sort_values(by='bedrooms',ascending=False).head(1)
df['bedrooms'].sum()
# use loc to count how many values in the bathrooms column equal 2
df.loc[df['bathrooms'] == 2].shape[0]
# use .mean() to calculate the average price
df['price'].mean()
# use .mean() with .loc to calculate the average price of houses with 2 bathrooms
df[['price','bathrooms']].loc[df['bathrooms'] == 2].mean()
df[['price','bedrooms']].loc[df['bedrooms'] == 3].min()
# creating a new column to hold the conversion
df['m2_living'] = df['sqft_living'] * 0.092
# checking the result
df.head()
# use .shape[] to count the rows where the m2_living column is greater than 300
df [df ['m2_living'] > 300] .shape[0]
# use .shape to count rows on the floors column
df[df['floors'] > 2] .shape[0]
# use != (not equal) and .shape to count the values
df[df['waterfront'] != 0] .shape[0]
# use & for a logical "and", wrapping each condition in parentheses "()"
df[(df['waterfront'] != 0) & (df['bedrooms'] == 3)] .shape[0]
# use & for a logical "and", wrapping each condition in parentheses "()"
df[(df['m2_living'] > 300) & (df['bathrooms'] > 2)] . shape[0]
df[(df['price'] > 3000000) & (df['bathrooms'] > 2)] . shape[0]
dfv = df
dfv.to_csv('data/kc_house_data_viz.csv', index=False)
dfv.head()
| 0.290578 | 0.874023 |
### 1. Loading Libraries
```
# Computation
import numpy as np
import pandas as pd
# Visualization
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
# Statistics
from scipy.stats import shapiro
import scipy.stats as stats
# Utils
import warnings
import os
%matplotlib inline
mpl.style.use("fivethirtyeight")
```
### 2. Loading Data
```
df = pd.read_csv('../data/final_df.csv')
df
```
### 3. Bootstrapping for `retention_1d`
#### 3A. Looking at `retention_1d` between `gate_30` and `gate_40`.
`retention_1` is a flag indicating whether a user came back and played the game one day after installing it. It is an important metric of user engagement and retention.
1. The overall 1-day retention is 44.52%.
2. 1-day retention of `gate_30` is 44.82% and 1-day retention of `gate_40` is 44.23%, which is not very different from each other.
```
df['retention_1'].mean()
df.groupby('version')['retention_1'].mean()
```
#### 3B. Bootstrapping for `retention_1d`
While the difference in 1-day retention between `gate_30` and `gate_40` is very small, it can add up to a big difference across millions of players if the game grows in the future.
I would like to understand whether the difference is significant. In this notebook, I will use bootstrapping: I will resample the dataset with replacement 10,000 times and calculate 1-day retention for each of those samples.
This will give me an idea of how confident I should be about the difference between `gate_30` and `gate_40`, since the procedure shows how uncertain the numbers are.
After bootstrapping, it seems like moving from `gate_30` to `gate_40` has an impact on 1-day retention. The rationale is unclear, and depends on questions like whether a player knows in advance there will be a gate in front of them.
```
# Creating a list with bootstrapped means for each A/B group
bs_1d = []
for i in range(10000):
# Here, we allow sample rows more than once, by setting replace=True
# And make sure in every sample, the size is equal to the original size
# by setting frac = 1
bs_mean = df.sample(frac=1, replace=True).groupby('version')['retention_1'].mean()
bs_1d.append(bs_mean)
fig, axes = plt.subplots(1, 2, figsize = (10, 5))
bs_1d_df = pd.DataFrame(bs_1d, columns=['gate_30', 'gate_40'])
bs_1d_df.plot(kind = 'kde', ax = axes[0])
bs_1d_df['diff'] = ((bs_1d_df['gate_30'] - bs_1d_df['gate_40'])/bs_1d_df['gate_40'])* 100
# Plotting the bootstrapping % difference
bs_1d_df['diff'].plot(kind='kde', ax = axes[1])
plt.suptitle('Bootstrap distribution Density Estimate plot for 1-day retention', fontsize = 20)
axes[0].set_title('Density estimates of gate_30 vs gate_40', fontsize = 15)
axes[1].set_title('Density estimates of relative difference', fontsize = 15)
plt.tight_layout(pad = 2)
fig.savefig('../eda/1_day_ret_bs_density.png', dpi=fig.dpi)
```
#### 3C. Understanding the Difference
Looking at the percentage difference plot, we can see that there seems to be evidence for a difference.
```
bs_1d_df['diff'] = ((bs_1d_df['gate_30'] - bs_1d_df['gate_40'])/bs_1d_df['gate_40'])* 100
# Plotting the bootstrapping % difference
bs_1d_df['diff'].plot(kind='kde')
print('The probability that 1-day retention is greater when the gate is at level 30 is {}%.'.format((bs_1d_df['diff'] > 0).mean()*100))
```
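A complementary way to read the same bootstrap distribution is a percentile interval for the relative difference; the sketch below reuses the `bs_1d_df` frame built above to report a 95% interval.
```
# 95% percentile interval for the bootstrapped relative difference in 1-day retention.
import numpy as np

lower, upper = np.percentile(bs_1d_df['diff'], [2.5, 97.5])
print(f"95% bootstrap interval for the relative difference (gate_30 vs gate_40): [{lower:.2f}%, {upper:.2f}%]")
```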
### 4. Bootstrapping for `retention_7d`
#### 4A. Looking at `retention_7d` between `gate_30` and `gate_40`
The difference seems to be bigger between the control and treatment for `retention_7`.
```
df.groupby('version')['retention_7'].mean()
df['retention_7'].mean()
```
#### 4B. Bootstrapping for `retention_7d`
```
bs_7d = []
for i in range(10000):
# Here, we allow sample rows more than once, by setting replace=True
# And make sure in every sample, the size is equal to the original size
# by setting frac = 1
bs_mean = df.sample(frac=1, replace=True).groupby('version')['retention_7'].mean()
bs_7d.append(bs_mean)
fig, axes = plt.subplots(1, 2, figsize = (10, 5))
bs_7d_df = pd.DataFrame(bs_7d, columns=['gate_30', 'gate_40'])
bs_7d_df.plot(kind = 'kde', ax = axes[0])
bs_7d_df['diff'] = ((bs_7d_df['gate_30'] - bs_7d_df['gate_40'])/bs_7d_df['gate_40'])* 100
# Plotting the bootstrapping % difference
bs_7d_df['diff'].plot(kind='kde', ax = axes[1])
plt.suptitle('Bootstrap distribution Density Estimate plot for 7-day retention', fontsize = 20)
axes[0].set_title('Density estimates of gate_30 vs gate_40', fontsize = 15)
axes[1].set_title('Density estimates of relative difference', fontsize = 15)
plt.tight_layout(pad = 2)
fig.savefig('../eda/7_day_ret_bs_density.png', dpi=fig.dpi)
bs_7d_df = pd.DataFrame(bs_7d, columns=['gate_30', 'gate_40'])
bs_7d_df.plot(kind = 'kde')
```
#### 4C. Understanding the Difference
```
bs_7d_df['diff'] = ((bs_7d_df['gate_30'] - bs_7d_df['gate_40'])/bs_7d_df['gate_40'])* 100
# Plotting the bootstrapping % difference
bs_7d_df['diff'].plot(kind='kde')
print('The probability that 7-day retention is greater when the gate is at level 30 is {}%.'.format((bs_7d_df['diff'] > 0).mean()*100))
```
### 5. Bootstrapping for `sum_gamerounds`
Changing from `gate_30` to `gate_40` doesn't seem to have an impact on game rounds.
```
df.groupby('version')['sum_gamerounds'].mean()
bs_gr = []
for i in range(10000):
# Here, we allow sample rows more than once, by setting replace=True
# And make sure in every sample, the size is equal to the original size
# by setting frac = 1
bs_mean = df.sample(frac=1, replace=True).groupby('version')['sum_gamerounds'].mean()
bs_gr.append(bs_mean)
bs_gr_df = pd.DataFrame(bs_gr, columns=['gate_30', 'gate_40'])
bs_gr_df.plot(kind = 'kde')
bs_gr_df['diff'] = ((bs_gr_df['gate_30'] - bs_gr_df['gate_40'])/bs_gr_df['gate_40'])* 100
# Plotting the bootstrapping % difference
bs_gr_df['diff'].plot(kind='kde')
print('The probability that mean of game rounds is greater when the gate is at level 30 is {}%.'.format((bs_gr_df['diff'] > 0).mean()*100))
```
|
github_jupyter
|
# Computation
import numpy as np
import pandas as pd
# Visualization
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
# Statistics
from scipy.stats import shapiro
import scipy.stats as stats
# Utils
import warnings
import os
%matplotlib inline
mpl.style.use("fivethirtyeight")
df = pd.read_csv('../data/final_df.csv')
df
df['retention_1'].mean()
df.groupby('version')['retention_1'].mean()
# Creating an list with bootstrapped means for each AB-group
bs_1d = []
for i in range(10000):
# Here, we allow sample rows more than once, by setting replace=True
# And make sure in every sample, the size is equal to the original size
# by setting frac = 1
bs_mean = df.sample(frac=1, replace=True).groupby('version')['retention_1'].mean()
bs_1d.append(bs_mean)
fig, axes = plt.subplots(1, 2, figsize = (10, 5))
bs_1d_df = pd.DataFrame(bs_1d, columns=['gate_30', 'gate_40'])
bs_1d_df.plot(kind = 'kde', ax = axes[0])
bs_1d_df['diff'] = ((bs_1d_df['gate_30'] - bs_1d_df['gate_40'])/bs_1d_df['gate_40'])* 100
# Plotting the bootstrapping % difference
bs_1d_df['diff'].plot(kind='kde', ax = axes[1])
plt.suptitle('Bootstrap distribution Density Estimate plot for 1-day retention', fontsize = 20)
axes[0].set_title('Density estimates of gate_30 vs gate_40', fontsize = 15)
axes[1].set_title('Density estimates of relative difference', fontsize = 15)
plt.tight_layout(pad = 2)
fig.savefig('../eda/1_day_ret_bs_density.png', dpi=fig.dpi)
bs_1d_df['diff'] = ((bs_1d_df['gate_30'] - bs_1d_df['gate_40'])/bs_1d_df['gate_40'])* 100
# Plotting the bootstrapping % difference
bs_1d_df['diff'].plot(kind='kde')
print('The probability that 1-day retention is greater when the gate is at level 30 is {}%.'.format((bs_1d_df['diff'] > 0).mean()*100))
df.groupby('version')['retention_7'].mean()
df['retention_7'].mean()
bs_7d = []
for i in range(10000):
# Here, we allow sample rows more than once, by setting replace=True
# And make sure in every sample, the size is equal to the original size
# by setting frac = 1
bs_mean = df.sample(frac=1, replace=True).groupby('version')['retention_7'].mean()
bs_7d.append(bs_mean)
fig, axes = plt.subplots(1, 2, figsize = (10, 5))
bs_7d_df = pd.DataFrame(bs_7d, columns=['gate_30', 'gate_40'])
bs_7d_df.plot(kind = 'kde', ax = axes[0])
bs_7d_df['diff'] = ((bs_7d_df['gate_30'] - bs_7d_df['gate_40'])/bs_7d_df['gate_40'])* 100
# Plotting the bootstrapping % difference
bs_7d_df['diff'].plot(kind='kde', ax = axes[1])
plt.suptitle('Bootstrap distribution Density Estimate plot for 7-day retention', fontsize = 20)
axes[0].set_title('Density estimates of gate_30 vs gate_40', fontsize = 15)
axes[1].set_title('Density estimates of relative difference', fontsize = 15)
plt.tight_layout(pad = 2)
fig.savefig('../eda/7_day_ret_bs_density.png', dpi=fig.dpi)
bs_7d_df = pd.DataFrame(bs_7d, columns=['gate_30', 'gate_40'])
bs_7d_df.plot(kind = 'kde')
bs_7d_df['diff'] = ((bs_7d_df['gate_30'] - bs_7d_df['gate_40'])/bs_7d_df['gate_40'])* 100
# Plotting the bootstrapping % difference
bs_7d_df['diff'].plot(kind='kde')
print('The probability that 7-day retention is greater when the gate is at level 30 is {}%.'.format((bs_7d_df['diff'] > 0).mean()*100))
df.groupby('version')['sum_gamerounds'].mean()
bs_gr = []
for i in range(10000):
# Here, we allow sample rows more than once, by setting replace=True
# And make sure in every sample, the size is equal to the original size
# by setting frac = 1
bs_mean = df.sample(frac=1, replace=True).groupby('version')['sum_gamerounds'].mean()
bs_gr.append(bs_mean)
bs_gr_df = pd.DataFrame(bs_gr, columns=['gate_30', 'gate_40'])
bs_gr_df.plot(kind = 'kde')
bs_gr_df['diff'] = ((bs_gr_df['gate_30'] - bs_gr_df['gate_40'])/bs_gr_df['gate_40'])* 100
# Plotting the bootstrapping % difference
bs_gr_df['diff'].plot(kind='kde')
print('The probability that mean of game rounds is greater when the gate is at level 30 is {}%.'.format((bs_gr_df['diff'] > 0).mean()*100))
| 0.618089 | 0.898722 |
<a href="https://colab.research.google.com/github/kalz2q/mycolabnotebooks/blob/master/colabmarkdown.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# colabmarkdown.ipynb
# Notes
1. Colab's Markdown is basically the same as GitHub's Markdown, but slightly different.
2. The biggest difference is that you cannot paste `html`. `html` cannot be used in Colab Markdown; it sometimes works, but it is not guaranteed. You can use the `%%html` magic in a code cell instead (see the example after this list).
3. The smaller differences go unnoticed while you only use Colab, but if you paste the Markdown into another site the rendering can look wrong.
4. Basically it is a simple Markdown with very few features.
5. Readers of this file are expected to open it in Colab and experiment as they go: click a cell and press its edit button, or double-click the cell to enter edit mode, and edit the text cell with the rendered result shown above/below or side by side. To leave edit mode, press `Esc` or the button that closes the Markdown editor.
6. Quoting with the greater-than sign (>) and code display with backticks (`) also render differently between Colab and other sites. The most reliable way to display a string exactly as written is to put it in a code cell marked with %%script false, so that it is treated as code that is not executed.
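For example (a minimal sketch), HTML can still be rendered from a code cell with the `%%html` cell magic:
```
%%html
<b>Bold text rendered with the %%html cell magic</b>
```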
# Headings
To create a heading, add one to six `#` symbols before the heading text. The number of `#` symbols determines the heading size.
```
# 最大の見出し
## 2番目に大きな見出し
###### 最も小さい見出し
```
In Colab, a table of contents is generated automatically from the number and position of `#` symbols, and you can jump to sections from it, so there is no need to write something like ```### [見出し](#midashi)``` by hand.
Writing `===` or `---` on the line below a heading works as a substitute for `#` and `##`.
# Text
A single line break in Markdown is ignored. Consecutive sentences form one paragraph; two line breaks in a row start a new paragraph.
**太字**、*斜体*、~~取り消し線~~で強調を示すことができる。
```
**太字**、*斜体*、~~取り消し線~~で強調を示すことができる。
```
***太字かつ斜体はこうする。***
```
***太字かつ斜体はこうする。***
```
**太字の中に _斜体_ はこうする。**
```
**太字の中に _斜体_ はこうする。**
```
In general, `_` can be used instead of `*`. Note that `_` may __need surrounding spaces__.
This is partly a matter of taste, but overusing **bold** and *italics* makes the page noisy.
Relatedly, putting spaces around Latin words like alphabet or numbers like 3 makes the text easier to read, but doing so consistently is hard to keep up, so I do not treat it as mandatory; I do not even try as hard as putting spaces around infix operators in code.
This _is_ a pen.
This -is- a pen.
This ~is~ a pen.
***This is a pen.***
### Quotes
Text can be quoted (indented) with `>`.
> アブラハムリンカーンの言葉::
>> フランス語で 失礼する
But wait!!!!
This probably does not have many uses.
In Colab, code goes into separate code cells anyway, so for quoting text the triple-backtick code block is probably more practical. Four leading spaces also work, but three backticks are more reliable; spaces inside are displayed as-is without formatting.
```
フランス語で 失礼する
```
A single backtick lets you `highlight` code or a command inside a sentence.
To format code or text as its own block, use triple backticks. Text inside the backticks is not formatted.
### Links
You can create an inline link by wrapping the link text in brackets `[ ]` and the URL in parentheses `( )`.
But wait!!!!
With this approach the URL itself is hidden. On the other hand, a bare URL automatically becomes a link, so if you want the link target to be visible, just write the link twice.
```
[Colab](https://colab.research.google.com) (https://colab.research.google.com)
```
Result
[Colab](https://colab.research.google.com) (https://colab.research.google.com)
### Lists
You can create an unordered list by putting `-` or `*` before one or more lines.
```
- 織田信長
- 豊臣秀吉
- 徳川家康
```
- 織田信長
- 豊臣秀吉
- 徳川家康
To create an ordered list, put a number before each line. Regardless of which numbers you write, the list is numbered sequentially starting from the first number; always writing 1 may be the simplest.
```
5. 秀忠
2. 家光
1. 家綱
```
5. 秀忠
2. 家光
1. 家綱
### Nested lists
You can create a nested list by indenting one or more list items under other items.
```
1. 最初のアイテム
1. 次のアイテム
1. これはどうだ
1. もとにもどしてみる
1. 最後のアイテム
```
1. 最初のアイテム
1. 次のアイテム
1. これはどうだ
1. もとにもどしてみる
1. 最後のアイテム
### Task lists (checkboxes)
To create a task list, prefix list items with `[ ]` (with a space inside the brackets). To mark a task as completed, use `[x]`.
```
- [x] Finish my changes
- [ ] Push my commits to GitHub
- [ ] Open a pull request
```
* [x] Finish my changes
* [ ] Push my commits to GitHub
* [ ] Open a pull request
### Escaping
You can ignore (escape) Markdown formatting by putting `\` before Markdown characters such as `*` and `_`.
```
`\*新しいプロジェクト\* を \*古いプロジェクト\* にリネームしましょう`
**とか**です。
```
\*新しいプロジェクト\* を \*古いプロジェクト\* にリネームしましょう
**とか**です。
# Math
$y=x^2$
$e^{i\pi} + 1 = 0$
$e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i$
$\frac{n!}{k!(n-k)!} = {n \choose k}$
$A_{m,n} =
\begin{pmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\
a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m,1} & a_{m,2} & \cdots & a_{m,n}
\end{pmatrix}$
$$
\begin{align}
y &= x^2 \\
lkj;lkj &= j
\end{align}
$$
```
%%latex
\begin{align}
y &= x^2 \\
lkj;lkj &= j
\end{align}
%%latex
\begin{array}{rl}
y &= x^2 \\
lkj;lkj &= j
\end{array}
```
# Tables
```
First column name | Second column name
--- | ---
Row 1, Col 1 | Row 1, Col 2
Row 2, Col 1 | Row 2, Col 2
```
First column name | Second column name
--- | ---
Row 1, Col 1 | Row 1, Col 2
Row 2, Col 1 | Row 2, Col 2
A horizontal rule can be drawn with three dashes (\-\-\-):
```
---
```
---
# References
### Reference links
* [Formatting text in Colaboratory: A guide to Colaboratory markdown](https://colab.research.google.com/notebooks/markdown_guide.ipynb) (https://colab.research.google.com/notebooks/markdown_guide.ipynb)
* [GitHub Flavored Markdown specification](https://github.github.com/gfm/) (https://github.github.com/gfm/)
* [About writing and formatting on GitHub](https://docs.github.com/ja/github/writing-on-github/about-writing-and-formatting-on-github) (https://docs.github.com/ja/github/writing-on-github/about-writing-and-formatting-on-github)
* [Basic writing and formatting syntax](https://docs.github.com/ja/github/writing-on-github/basic-writing-and-formatting-syntax) (https://docs.github.com/ja/github/writing-on-github/basic-writing-and-formatting-syntax)
* [Working with advanced formatting](https://docs.github.com/ja/github/writing-on-github/working-with-advanced-formatting) (https://docs.github.com/ja/github/writing-on-github/working-with-advanced-formatting)
* [Mastering Markdown](https://guides.github.com/features/mastering-markdown/) (https://guides.github.com/features/mastering-markdown/)
|
github_jupyter
|
# 最大の見出し
## 2番目に大きな見出し
###### 最も小さい見出し
**太字**、*斜体*、~~取り消し線~~で強調を示すことができる。
***太字かつ斜体はこうする。***
**太字の中に _斜体_ はこうする。**
フランス語で 失礼する
[Colab](https://colab.research.google.com) (https://colab.research.google.com)
- 織田信長
- 豊臣秀吉
- 徳川家康
5. 秀忠
2. 家光
1. 家綱
1. 最初のアイテム
1. 次のアイテム
1. これはどうだ
1. もとにもどしてみる
1. 最後のアイテム
- [x] Finish my changes
- [ ] Push my commits to GitHub
- [ ] Open a pull request
`\*新しいプロジェクト\* を \*古いプロジェクト\* にリネームしましょう`
**とか**です。
%%latex
\begin{align}
y &= x^2 \\
lkj;lkj &= j
\end{align}
%%latex
\begin{array}{rl}
y &= x^2 \\
lkj;lkj &= j
\end{array}
First column name | Second column name
--- | ---
Row 1, Col 1 | Row 1, Col 2
Row 2, Col 1 | Row 2, Col 2
---
| 0.264074 | 0.981823 |
Gaussian discriminant analysis with the same covariance matrix for both class distributions, which yields a linear separator.
```
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import pandas as pd
import numpy as np
import scipy.stats as st
```
Visualization-related settings
```
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
plt.style.use('fivethirtyeight')
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.serif'] = 'Ubuntu'
plt.rcParams['font.monospace'] = 'Ubuntu Mono'
plt.rcParams['font.size'] = 10
plt.rcParams['axes.labelsize'] = 10
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['axes.titlesize'] = 10
plt.rcParams['xtick.labelsize'] = 8
plt.rcParams['ytick.labelsize'] = 8
plt.rcParams['legend.fontsize'] = 10
plt.rcParams['figure.titlesize'] = 12
plt.rcParams['image.cmap'] = 'jet'
plt.rcParams['image.interpolation'] = 'none'
plt.rcParams['figure.figsize'] = (16, 8)
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['lines.markersize'] = 8
colors = ['#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c',
'#137e6d', '#be0119', '#3b638c', '#af6f09', '#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b',
'#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09']
cmap = mcolors.LinearSegmentedColormap.from_list("", ["#82cafc", "#069af3", "#0485d1", colors[0], colors[8]])
```
We read the data from a csv file into a pandas dataframe. Each record has 3 values: the first two are the features and are assigned to the dataframe columns x1 and x2; the third is the target value, assigned to column t. A feature matrix X and a target vector t are then created.
```
# legge i dati in dataframe pandas
data = pd.read_csv("../dataset/ex2data1.txt", header=0, delimiter=',', names=['x1','x2','t'])
# calcola dimensione dei dati
n = len(data)
n0 = len(data[data.t==0])
# calcola dimensionalità delle features
nfeatures = len(data.columns)-1
X = np.array(data[['x1','x2']])
t = np.array(data['t'])
```
Visualize the dataset.
```
fig = plt.figure(figsize=(16,8))
ax = fig.gca()
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, color=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.title('Dataset', fontsize=12)
plt.show()
```
Compute the means of the two class distributions.
```
mu0=np.array(np.mean(data[data.t==0][['x1','x2']]))
mu1=np.array(np.mean(data[data.t==1][['x1','x2']]))
```
We use the same covariance matrix for both class distributions, estimated as the covariance matrix of the whole dataset. We also compute its inverse, which appears in the definition of the Gaussian densities.
```
sigma=np.cov(X.T)
```
We estimate the prior probability of class C0 as the ratio between the number of dataset elements belonging to that class and the total dataset size.
```
prior=float(n0)/n
```
We derive the vector theta of the three coefficients of the hyperplane separating the two classes, using the analytic expressions provided by the model.
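The expressions implemented in the cell below follow from taking the log of the posterior odds of $C_0$ versus $C_1$ (with shared covariance $\Sigma$, class means $\mu_0,\mu_1$ and prior $\pi_0$):
$$\theta_{1:2} = \Sigma^{-1}(\mu_0-\mu_1), \qquad \theta_0 = -\tfrac{1}{2}\mu_0^T\Sigma^{-1}\mu_0 + \tfrac{1}{2}\mu_1^T\Sigma^{-1}\mu_1 + \ln\frac{\pi_0}{1-\pi_0}$$
so that $\theta_0 + \theta_{1:2}^T x = 0$ is the linear decision boundary.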
```
# inversa della matrice di covarianza
sigmainv=np.matrix(sigma).I
# vettori colonna delle medie
m0=np.matrix(mu0).T
m1=np.matrix(mu1).T
# coefficienti associati alle due feature
theta=np.asarray(sigmainv *(m0-m1)).ravel()
# termine noto
theta0=-0.5*m0.T*sigmainv*m0+0.5*mu1.T*sigmainv*m1+np.log(prior)-np.log(1-prior)
# concatenazione del termine noto nel vettore dei coefficienti
theta = np.append(theta0[0,0], theta)
print("theta: [{0:5.3f}, {1:5.3f}, {2:5.3f}]".format(theta[0],theta[1],theta[2]))
```
Define the 100x100 grid used to visualize the various distributions.
```
# insieme delle ascisse dei punti
u = np.linspace(min(X[:,0]), max(X[:,0]), 100)
# insieme delle ordinate dei punti
v = np.linspace(min(X[:,1]), max(X[:,1]), 100)
# deriva i punti della griglia: il punto in posizione i,j nella griglia ha ascissa U(i,j) e ordinata V(i,j)
U, V = np.meshgrid(u, v)
```
Compute, on the grid points, the class-conditional densities $p(x|C_0), p(x|C_1)$ and the posterior class probabilities $p(C_0|x), p(C_1|x)$
```
# funzioni che calcolano le probabilità secondo le distribuzioni delle due classi
vf0=np.vectorize(lambda x,y:st.multivariate_normal.pdf([x,y],mu0,sigma))
vf1=np.vectorize(lambda x,y:st.multivariate_normal.pdf([x,y],mu1,sigma))
# calcola le probabilità delle due distribuzioni sulla griglia
p0=vf0(U,V)
p1=vf1(U,V)
```
Visualization of the distribution $p(x|C_0)$.
```
fig = plt.figure(figsize=(16,8))
ax = fig.gca()
# inserisce una rappresentazione della probabilità della classe C0 sotto forma di heatmap
imshow_handle = plt.imshow(p0, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
plt.contour(U, V, p0, linewidths=[.7], colors=[colors[6]])
# rappresenta i punti del dataset
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
# rappresenta la media della distribuzione
ax.scatter(mu0[0], mu0[1], s=150,c=colors[3], marker='*', alpha=1)
# inserisce titoli, etc.
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title('Distribuzione di $p(x|C_0)$', fontsize=12)
plt.show()
```
Visualization of the distribution $p(x|C_1)$.
```
fig = plt.figure(figsize=(16,8))
ax = fig.gca()
# inserisce una rappresentazione della probabilità della classe C0 sotto forma di heatmap
imshow_handle = plt.imshow(p1, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
plt.contour(U, V, p1, linewidths=[.7], colors=[colors[6]])
# rappresenta i punti del dataset
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
# rappresenta la media della distribuzione
ax.scatter(mu1[0], mu1[1], s=150,c=colors[3], marker='*', alpha=1)
# inserisce titoli, etc.
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title('Distribuzione di $p(x|C_1)$', fontsize=12)
plt.show()
```
We now compute the posterior probabilities of classes $C_0$ and $C_1$ at every grid point by applying Bayes' rule
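Explicitly, at every grid point $x$:
$$p(C_0\mid x) = \frac{p(x\mid C_0)\,\pi_0}{p(x\mid C_0)\,\pi_0 + p(x\mid C_1)\,(1-\pi_0)}, \qquad p(C_1\mid x) = 1 - p(C_0\mid x)$$
where the denominator is the evidence `ev` computed in the cell below.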
```
# calcola il rapporto tra le likelihood delle classi per tutti i punti della griglia
z=p0/p1
# calcola il rapporto tra le probabilità a posteriori delle classi per tutti i punti della griglia
zbayes=p0*prior/(p1*(1-prior))
# calcola evidenza
ev = p0*prior+p1*(1-prior)
# calcola le probabilità a posteriori di C0 e di C1
pp0 = p0*prior/ev
pp1 = p1*(1-prior)/ev
```
Visualization of $p(C_0|x)$
```
fig = plt.figure(figsize=(8,8))
ax = fig.gca()
imshow_handle = plt.imshow(pp0, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
plt.contour(U, V, zbayes, [1.0], colors=[colors[7]],linewidths=[1])
plt.contour(U, V, z, [1.0], colors=[colors[7]],linewidths=[1], linestyles='dashed')
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title("Distribuzione di $p(C_0|x)$", fontsize=12)
plt.show()
```
Visualization of $p(C_1|x)$
```
fig = plt.figure(figsize=(8,8))
ax = fig.gca()
imshow_handle = plt.imshow(pp1, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
plt.contour(U, V, zbayes, [1.0], colors=[colors[7]],linewidths=[1])
plt.contour(U, V, z, [1.0], colors=[colors[7]],linewidths=[1], linestyles='dashed')
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title("Distribuzione di $p(C_1|x)$", fontsize=12)
plt.show()
```
Make predictions for the dataset elements.
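The decision rule implemented below assigns an element to class 1 ($C_1$) when the posterior odds in favour of $C_0$ fall below 1:
$$\hat{t}(x) = \begin{cases}1 & \text{if } \dfrac{p(x\mid C_0)\,\pi_0}{p(x\mid C_1)\,(1-\pi_0)} < 1\\ 0 & \text{otherwise.}\end{cases}$$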
```
# probabilità degli elementi rispetto alla distribuzione di C0
p0_d = vf0(X[:,0],X[:,1])
# probabilità degli elementi rispetto alla distribuzione di C1
p1_d = vf1(X[:,0],X[:,1])
# rapporto tra le probabilità di appartenenza a C0 e C1
z_d = p0_d*prior/(p1_d*(1-prior))
# predizioni del modello
pred = np.where(z_d<1, 1, 0)
# numero di elementi mal classificati
nmc = abs(pred-t).sum()
# accuracy
acc = 1-float(nmc)/n
print("Accuracy: {0:5.3f}".format(acc))
```
|
github_jupyter
|
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import pandas as pd
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
plt.style.use('fivethirtyeight')
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.serif'] = 'Ubuntu'
plt.rcParams['font.monospace'] = 'Ubuntu Mono'
plt.rcParams['font.size'] = 10
plt.rcParams['axes.labelsize'] = 10
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['axes.titlesize'] = 10
plt.rcParams['xtick.labelsize'] = 8
plt.rcParams['ytick.labelsize'] = 8
plt.rcParams['legend.fontsize'] = 10
plt.rcParams['figure.titlesize'] = 12
plt.rcParams['image.cmap'] = 'jet'
plt.rcParams['image.interpolation'] = 'none'
plt.rcParams['figure.figsize'] = (16, 8)
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['lines.markersize'] = 8
colors = ['#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c',
'#137e6d', '#be0119', '#3b638c', '#af6f09', '#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b',
'#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09']
cmap = mcolors.LinearSegmentedColormap.from_list("", ["#82cafc", "#069af3", "#0485d1", colors[0], colors[8]])
# legge i dati in dataframe pandas
data = pd.read_csv("../dataset/ex2data1.txt", header=0, delimiter=',', names=['x1','x2','t'])
# calcola dimensione dei dati
n = len(data)
n0 = len(data[data.t==0])
# calcola dimensionalità delle features
nfeatures = len(data.columns)-1
X = np.array(data[['x1','x2']])
t = np.array(data['t'])
fig = plt.figure(figsize=(16,8))
ax = fig.gca()
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, color=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.title('Dataset', fontsize=12)
plt.show()
mu0=np.array(np.mean(data[data.t==0][['x1','x2']]))
mu1=np.array(np.mean(data[data.t==1][['x1','x2']]))
sigma=np.cov(X.T)
prior=float(n0)/n
# inversa della matrice di covarianza
sigmainv=np.matrix(sigma).I
# vettori colonna delle medie
m0=np.matrix(mu0).T
m1=np.matrix(mu1).T
# coefficienti associati alle due feature
theta=np.asarray(sigmainv *(m0-m1)).ravel()
# termine noto
theta0=-0.5*m0.T*sigmainv*m0+0.5*mu1.T*sigmainv*m1+np.log(prior)-np.log(1-prior)
# concatenazione del termine noto nel vettore dei coefficienti
theta = np.append(theta0[0,0], theta)
print("theta: [{0:5.3f}, {1:5.3f}, {2:5.3f}]".format(theta[0],theta[1],theta[2]))
# insieme delle ascisse dei punti
u = np.linspace(min(X[:,0]), max(X[:,0]), 100)
# insieme delle ordinate dei punti
v = np.linspace(min(X[:,1]), max(X[:,1]), 100)
# deriva i punti della griglia: il punto in posizione i,j nella griglia ha ascissa U(i,j) e ordinata V(i,j)
U, V = np.meshgrid(u, v)
# funzioni che calcolano le probabilità secondo le distribuzioni delle due classi
vf0=np.vectorize(lambda x,y:st.multivariate_normal.pdf([x,y],mu0,sigma))
vf1=np.vectorize(lambda x,y:st.multivariate_normal.pdf([x,y],mu1,sigma))
# calcola le probabilità delle due distribuzioni sulla griglia
p0=vf0(U,V)
p1=vf1(U,V)
fig = plt.figure(figsize=(16,8))
ax = fig.gca()
# inserisce una rappresentazione della probabilità della classe C0 sotto forma di heatmap
imshow_handle = plt.imshow(p0, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
plt.contour(U, V, p0, linewidths=[.7], colors=[colors[6]])
# rappresenta i punti del dataset
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
# rappresenta la media della distribuzione
ax.scatter(mu0[0], mu0[1], s=150,c=colors[3], marker='*', alpha=1)
# inserisce titoli, etc.
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title('Distribuzione di $p(x|C_0)$', fontsize=12)
plt.show()
fig = plt.figure(figsize=(16,8))
ax = fig.gca()
# inserisce una rappresentazione della probabilità della classe C0 sotto forma di heatmap
imshow_handle = plt.imshow(p1, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
plt.contour(U, V, p1, linewidths=[.7], colors=[colors[6]])
# rappresenta i punti del dataset
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
# rappresenta la media della distribuzione
ax.scatter(mu1[0], mu1[1], s=150,c=colors[3], marker='*', alpha=1)
# inserisce titoli, etc.
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title('Distribuzione di $p(x|C_1)$', fontsize=12)
plt.show()
# calcola il rapporto tra le likelihood delle classi per tutti i punti della griglia
z=p0/p1
# calcola il rapporto tra le probabilità a posteriori delle classi per tutti i punti della griglia
zbayes=p0*prior/(p1*(1-prior))
# calcola evidenza
ev = p0*prior+p1*(1-prior)
# calcola le probabilità a posteriori di C0 e di C1
pp0 = p0*prior/ev
pp1 = p1*(1-prior)/ev
fig = plt.figure(figsize=(8,8))
ax = fig.gca()
imshow_handle = plt.imshow(pp0, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
plt.contour(U, V, zbayes, [1.0], colors=[colors[7]],linewidths=[1])
plt.contour(U, V, z, [1.0], colors=[colors[7]],linewidths=[1], linestyles='dashed')
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title("Distribuzione di $p(C_0|x)$", fontsize=12)
plt.show()
fig = plt.figure(figsize=(8,8))
ax = fig.gca()
imshow_handle = plt.imshow(pp1, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)
plt.contour(U, V, zbayes, [1.0], colors=[colors[7]],linewidths=[1])
plt.contour(U, V, z, [1.0], colors=[colors[7]],linewidths=[1], linestyles='dashed')
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.xlim(u.min(), u.max())
plt.ylim(v.min(), v.max())
plt.title("Distribuzione di $p(C_1|x)$", fontsize=12)
plt.show()
# probabilità degli elementi rispetto alla distribuzione di C0
p0_d = vf0(X[:,0],X[:,1])
# probabilità degli elementi rispetto alla distribuzione di C1
p1_d = vf1(X[:,0],X[:,1])
# rapporto tra le probabilità di appartenenza a C0 e C1
z_d = p0_d*prior/(p1_d*(1-prior))
# predizioni del modello
pred = np.where(z_d<1, 1, 0)
# numero di elementi mal classificati
nmc = abs(pred-t).sum()
# accuracy
acc = 1-float(nmc)/n
print("Accuracy: {0:5.3f}".format(acc))
| 0.201499 | 0.913213 |
# Exploring Instrumental Variables with the [HIV Simulator](https://whynot.readthedocs.io/en/latest/simulators.html#adams-hiv-simulator)
This notebook demonstrates how to generate observational datasets with non-trivial confounding and uses these datasets to explore instrumental variables.
```
import whynot as wn
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
```
# Instrumental Variables Background
Suppose that we measure a set of features $X_1,\dots,X_n$, and a target outcome $Y$, for multiple different units. Some fraction of the units receives a treatment; hence, we also have access to a binary variable $T$ which indicates whether the given unit was treated or not.
We are interested in finding the average causal effect of treating a unit. In the language of causality, we want to find
$$\mathbb{E}[Y|\text{do}(T=1)] - \mathbb{E}[Y|\text{do}(T=0)].$$
We assume that the outcome is generated as a linear function of the features and the treatment:
$$Y = \alpha T + \sum_{i=1}^n \beta_i X_i.$$
If the treatment is uncorrelated with the feature variables, ordinary least squares (OLS) yields unbiased results, giving $\alpha$ in expectation. However, the treatment is often correlated with the features; the fact that a unit receives a treatment indicates that a treatment was necessary in the first place.
One way to get around this issue is by using instrumental variables (IVs). A valid instrument $Z$ is a variable which is independent of $X_1,\dots,X_n$, and affects $Y$ only through $T$. Then, one way to estimate $\alpha$ is to first "guess" $T$ from $Z$ (denoted $\hat T$), and then regress $Y$ onto $\{\hat T, X_1,\dots,X_n\}$ (instead of $\{T, X_1,\dots,X_n\}$). When $T$ is continuous, one common approach to estimating $\alpha$ is using two-stage least-squares (2SLS), in which $\hat T$ is obtained by regressing $T$ onto $Z$.
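Before moving to the simulator, here is a standalone toy illustration of the idea (a minimal synthetic sketch, not part of the experiment below; all names and coefficients are made up, and it uses a linear first stage, i.e. textbook 2SLS, whereas the experiment later uses a logistic first stage):
```
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
z = rng.binomial(1, 0.5, n)      # instrument: independent of the confounder
u = rng.normal(size=n)           # unobserved confounder
# treatment depends on both the instrument and the confounder
t = (0.8 * z + 0.8 * u + rng.normal(size=n) > 0.8).astype(float)
# outcome: the true causal effect of t is 2.0, but u also raises y
y = 2.0 * t + 3.0 * u + rng.normal(size=n)

# naive OLS of y on t is biased upward because u drives both t and y
ols = sm.OLS(y, sm.add_constant(t)).fit()

# two-stage estimate: predict t from z, then regress y on the prediction
first_stage = sm.OLS(t, sm.add_constant(z)).fit()
t_hat = first_stage.predict(sm.add_constant(z))
iv = sm.OLS(y, sm.add_constant(t_hat)).fit()

print("OLS estimate: {:.2f}, IV estimate: {:.2f} (truth: 2.00)".format(ols.params[1], iv.params[1]))
```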
# Setting up the simulator
We design an experiment on the [HIV simulator](https://whynot.readthedocs.io/en/latest/simulators.html#adams-hiv-simulator) to demonstrate how to use instrumental variables to solve non-trivial causal inference problems.
We consider an experiment where units (in this case, people) are more likely to receive effective treatment if their indicators of infection are worse. In other words, **treatment status is confounded with indicators of infection.**
First, we write a function to generate the initial state (covariates) for each unit.
```
def initial_covariate_distribution(rng):
"""Sample initial state by randomly perturbing the default state.
Parameters
----------
rng: numpy random number generator.
Return
------
wn.hiv.State: Initial state of the simulator.
"""
state = wn.hiv.State()
state.uninfected_T1 *= rng.uniform(0.45, 2.15)
state.infected_T1 *= rng.uniform(0.45, 2.15)
state.uninfected_T2 *= rng.uniform(0.45, 2.15)
state.infected_T2 *= rng.uniform(0.45, 2.15)
state.free_virus *= rng.uniform(0.45, 2.15)
state.immune_response *= rng.uniform(0.45, 2.15)
# Whether or not the unit is "enrolled in the study"
state.instrument = int(rng.rand() < 0.5)
return state
```
Next, we write a function describing the probability of treatment assignment.
In our model, the probability of treatment is higher if immune response and free virus are above a critical threshold. As an instrument, we suppose each unit is enrolled in the trial with some probability. Only "enrolled" units are actually treated.
```
def treatment_propensity(intervention, untreated_run):
"""Probability of treating each unit.
We are more likely to treat units with high immune response and free virus
at the time of intervention.
Parameters
-----------
intervention: whynot.simulator.hiv.Intervention
untreated_run: whynot.dynamics.run
Rollout of the simulator without treatment.
Returns
-------
treatment_prob: Probability of assigning the unit to treatment.
"""
# Only treat units if they are enrolled in the study
run = untreated_run
if run.initial_state.instrument > 0:
if run[intervention.time].immune_response > 10 and run[intervention.time].free_virus > 1:
return 0.8
return 0.2
return 0.
```
Finally, we put these pieces together into a `DynamicsExperiment`. The covariates we have access to are 6 variables which are indicative of the individual's health, along with the instrument. The target outcome is the amount of infected macrophages (which should be lower after receiving treatment).
For detailed information on the space of configuration and intervention parameters, see [here](https://whynot.readthedocs.io/en/latest/simulator_configs/hiv.html).
```
experiment = wn.DynamicsExperiment(
name="hiv_confounding",
description="Study effect of increasing drug efficacy on infected macrophages (cells/ml) under confounding.",
# Which simulator to use
simulator=wn.hiv,
# Configuration parameters for each rollout. Run for 150 steps.
simulator_config=wn.hiv.Config(epsilon_1=0.1, end_time=150),
# What intervention to perform in the simulator.
# In time step 100, increase drug efficacy from 0.1 to 0.5
intervention=wn.hiv.Intervention(time=100, epsilon_1=0.5),
# Initial distribution over covariates
state_sampler=initial_covariate_distribution,
# Treatment assignment rule
propensity_scorer=treatment_propensity,
# Measured outcome: Infected macrophages (cells/ml) at step 150
outcome_extractor=lambda run: run[149].infected_T2,
# Observed covariates: Covariates of each unit at time of treatment and the instrument
covariate_builder=lambda intervention, run: np.append(run[100].values(), run.initial_state.instrument))
```
## Generating data
We gather data from 500 individuals, who are more likely to receive treatment if they show signs of severe infection.
```
dset = experiment.run(num_samples=500)
```
Since we can simulate counterfactual outcomes, we get the exact causal effect of receiving treatment for each individual, as well as the average causal effect.
```
print("The average causal effect of receiving treatment is: {:.2f}".format(dset.sate))
```
## Estimating treatment effects with OLS
```
# Split into covariates and the instrument
(observations, T, Y) = dset.covariates, dset.treatments, dset.outcomes
X, Z = observations[:, :-1], observations[:, -1:]
```
First we run plain OLS to estimate the average causal effect.
```
ols_predictors = X
ols_predictors = np.concatenate([T.reshape(-1,1), ols_predictors], axis=1)
ols_model = sm.OLS(Y, ols_predictors)
ols_results = ols_model.fit()
est_ols = ols_results.params[0] # treatment is the first predictor
ols_rel_error = np.abs((est_ols - dset.sate) / dset.sate)
print("Relative Error in causal estimate of OLS: {:.2f}".format(ols_rel_error))
```
## Estimating treatment effects with instrumental variables
To eliminate the bias, we turn to instrumental variables. "Enrollment" in the study $Z$ is a valid instrumental variable in this setting. We first predict the treatment indicator $\hat T$ from the instrument $Z$ using logistic regression, and then run OLS to regress $Y$ onto $\hat T$ and the other variables.
```
instrument = Z - np.mean(Z)
logistic_model = LogisticRegression(random_state=0, solver='lbfgs', multi_class='multinomial').fit(instrument.reshape(-1,1),T)
T_hat = logistic_model.predict(instrument.reshape(-1,1))
iv_features = np.concatenate([T_hat.reshape(-1,1), X], axis=1)
iv_model = sm.OLS(Y, iv_features)
iv_results = iv_model.fit()
est_iv = iv_results.params[0]
iv_rel_error = np.abs((est_iv - dset.sate) / dset.sate)
print("Relative Error in causal estimate of IV: {:.5f}".format(iv_rel_error))
```
|
github_jupyter
|
import whynot as wn
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
def initial_covariate_distribution(rng):
"""Sample initial state by randomly perturbing the default state.
Parameters
----------
rng: numpy random number generator.
Return
------
wn.hiv.State: Initial state of the simulator.
"""
state = wn.hiv.State()
state.uninfected_T1 *= rng.uniform(0.45, 2.15)
state.infected_T1 *= rng.uniform(0.45, 2.15)
state.uninfected_T2 *= rng.uniform(0.45, 2.15)
state.infected_T2 *= rng.uniform(0.45, 2.15)
state.free_virus *= rng.uniform(0.45, 2.15)
state.immune_response *= rng.uniform(0.45, 2.15)
# Whether or not the unit is "enrolled in the study"
state.instrument = int(rng.rand() < 0.5)
return state
def treatment_propensity(intervention, untreated_run):
"""Probability of treating each unit.
We are more likely to treat units with high immune response and free virus
at the time of intervention.
Parameters
-----------
intervention: whynot.simulator.hiv.Intervention
untreated_run: whynot.dynamics.run
Rollout of the simulator without treatment.
Returns
-------
treatment_prob: Probability of assigning the unit to treatment.
"""
# Only treat units if they are enrolled in the study
run = untreated_run
if run.initial_state.instrument > 0:
if run[intervention.time].immune_response > 10 and run[intervention.time].free_virus > 1:
return 0.8
return 0.2
return 0.
experiment = wn.DynamicsExperiment(
name="hiv_confounding",
description="Study effect of increasing drug efficacy on infected macrophages (cells/ml) under confounding.",
# Which simulator to use
simulator=wn.hiv,
# Configuration parameters for each rollout. Run for 150 steps.
simulator_config=wn.hiv.Config(epsilon_1=0.1, end_time=150),
# What intervention to perform in the simulator.
# In time step 100, increase drug efficacy from 0.1 to 0.5
intervention=wn.hiv.Intervention(time=100, epsilon_1=0.5),
# Initial distribution over covariates
state_sampler=initial_covariate_distribution,
# Treatment assignment rule
propensity_scorer=treatment_propensity,
# Measured outcome: Infected macrophages (cells/ml) at step 150
outcome_extractor=lambda run: run[149].infected_T2,
# Observed covariates: Covariates of each unit at time of treatment and the instrument
covariate_builder=lambda intervention, run: np.append(run[100].values(), run.initial_state.instrument))
dset = experiment.run(num_samples=500)
print("The average causal effect of receiving treatment is: {:.2f}".format(dset.sate))
# Split into covariates and the instrument
(observations, T, Y) = dset.covariates, dset.treatments, dset.outcomes
X, Z = observations[:, :-1], observations[:, -1:]
ols_predictors = X
ols_predictors = np.concatenate([T.reshape(-1,1), ols_predictors], axis=1)
ols_model = sm.OLS(Y, ols_predictors)
ols_results = ols_model.fit()
est_ols = ols_results.params[0] # treatment is the first predictor
ols_rel_error = np.abs((est_ols - dset.sate) / dset.sate)
print("Relative Error in causal estimate of OLS: {:.2f}".format(ols_rel_error))
instrument = Z - np.mean(Z)
logistic_model = LogisticRegression(random_state=0, solver='lbfgs', multi_class='multinomial').fit(instrument.reshape(-1,1),T)
T_hat = logistic_model.predict(instrument.reshape(-1,1))
iv_features = np.concatenate([T_hat.reshape(-1,1), X], axis=1)
iv_model = sm.OLS(Y, iv_features)
iv_results = iv_model.fit()
est_iv = iv_results.params[0]
iv_rel_error = np.abs((est_iv - dset.sate) / dset.sate)
print("Relative Error in causal estimate of IV: {:.5f}".format(iv_rel_error))
| 0.896719 | 0.995488 |
<br>
# Introduction
Convert the raw zip/csv files to parquet
```
#!pip install pyarrow # Required to use parquet
import os
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
from paths import *
```
<br>
## Convert csv to parquet
The open-data files are huge, and it is not practical to work with them in the formats provided.
Therefore it is necessary to convert them to .parquet or another compressed format.
Below are some snippets that may be useful for the *dtypes*.
```python
types_dict = {'A': int, 'B': float}
types_dict.update({col: str for col in col_names if col not in types_dict})
pd.read_csv('file.csv', dtype=types_dict)
```
<br>
We start by reading the files and defining the output folder and parameters.
The function below reads the zipped file, which contains a *.csv* file inside; it takes the columns and adjusts them, setting the dtype of all of them to string (for now!).
It then defines the *chunk* size and reads chunk by chunk, writing into a *.parquet* file. At the end, that file is saved.
```
def convert_csv2parquet(input_file, output_path, encoding, sep):
try:
# Get File
my_zipfile = os.path.basename(input_file)
# Columns Names from csv file
cols = pd.read_csv(
os.path.join(input_file),
sep=sep,
encoding=encoding,
low_memory=False,
nrows=10,
dtype=str, #TODO: Improve dtypes
).columns
# Set schema from csv file: set all strings
fields = []
for col in list(cols):
col_type = pa.field(col, pa.string()),
fields.append(col_type[0])
my_schema = pa.schema(fields)
# Enumerate chunks to process
df_enum = enumerate(
pd.read_csv(
os.path.join(input_file),
sep=sep,
encoding=encoding,
low_memory=False,
chunksize=10000,
dtype=str,
)
)
# Create Output Directory
os.makedirs(output_path, exist_ok=True)
# Write parquet in chunks
pqwriter = None
for i, df in enumerate(df_enum):
table = pa.Table.from_pandas(
df[-1],
schema=my_schema,
)
# For the first chunk of records
if i == 0:
# Create a parquet write object giving it an output file
pqwriter = pq.ParquetWriter(
os.path.join(output_path, '{}.parquet.gzip'.format(my_zipfile.split('.')[0])),
compression='gzip',
schema=my_schema,
)
pqwriter.write_table(table)
# Close the parquet writer
pqwriter.close()
print('"{}" converter succeed!'.format(my_zipfile))
except Exception as e:
print(e)
# Parameters
#input_file = os.path.join(controle_path, 'controle_mensal_parametros_basicos_2020.zip')
#output_path = os.path.join(input_path_parquet, 'controle')
#encoding = 'ISO-8859-1'
#sep = ';'
#convert_csv2parquet(input_file, output_path, encoding, sep)
#df = pd.read_parquet(os.path.join(output_path, 'controle_mensal_parametros_basicos_2020.parquet.gzip'))
#df.head()
```
<br>
## Convert *csv* to *parquet*
Converts the data obtained in *csv* format (packed inside a *zip* file) to the *parquet* format.<br>
In this first transformation I did not worry about formats, dtypes, renames, etc.
```
# Loop
paths = ['cadastro', 'controle', 'vigilancia']
for path in paths:
# Paths
path_in = os.path.join(bruto_path, path)
path_out = os.path.join(input_path_parquet, path)
# Loop
list_files = os.listdir(path_in)
for file in list_files:
print('\n{}'.format(file))
convert_csv2parquet(
os.path.join(path_in, file),
path_out,
encoding='ISO-8859-1',
sep=';',
)
```
<br>
## Repartition
With the data in *parquet* format, I repartitioned the files so that access is easier.
```
# Parameters
paths = ['cadastro', 'controle', 'vigilancia']
#paths = [path for path in paths if path.startswith('con')]
paths
for path in paths:
# Parameters
path_in = os.path.join(input_path_parquet, path)
path_out = os.path.join(input_path_parquet_partitioned, path)
os.makedirs(path_out, exist_ok=True)
# Loop
list_files = os.listdir(path_in)
print(list_files)
for file in list_files:
file_out = os.path.basename(file).split('.')[0]
print(file_out)
df = pq.read_table(os.path.join(path_in, file))
# Rename Columns
cols = df.column_names
df = df.rename_columns([x.strip().title() for x in cols])
# Save Parquet Partitioned
pq.write_to_dataset(
df,
root_path=os.path.join(path_out, file_out),
partition_cols=['Uf', 'Código Ibge']
)
```
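As a quick check (a sketch; it reuses `path_out` and `file_out` from the last loop iteration, and the `'SP'` value is just a placeholder), a dataset partitioned this way can be read back efficiently by filtering on a partition column:
```
# read only the partitions of a single state from the partitioned dataset
tbl = pq.read_table(os.path.join(path_out, file_out), filters=[('Uf', '=', 'SP')])
df_sp = tbl.to_pandas()
df_sp.head()
```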
|
github_jupyter
|
#!pip install pyarrow # Necessário para usar o parquet
import os
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
from paths import *
types_dict = {'A': int, 'B': float}
types_dict.update({col: str for col in col_names if col not in types_dict})
pd.read_csv('file.csv', dtype=types_dict)
def convert_csv2parquet(input_file, output_path, encoding, sep):
try:
# Get File
my_zipfile = os.path.basename(input_file)
# Columns Names from csv file
cols = pd.read_csv(
os.path.join(input_file),
sep=sep,
encoding=encoding,
low_memory=False,
nrows=10,
dtype=str, #TODO: Improve dtypes
).columns
# Set schema from csv file: set all strings
fields = []
for col in list(cols):
col_type = pa.field(col, pa.string()),
fields.append(col_type[0])
my_schema = pa.schema(fields)
# Enumerate chunks to process
df_enum = enumerate(
pd.read_csv(
os.path.join(input_file),
sep=sep,
encoding=encoding,
low_memory=False,
chunksize=10000,
dtype=str,
)
)
# Create Output Directory
os.makedirs(output_path, exist_ok=True)
# Write parquet in chunks
pqwriter = None
for i, df in enumerate(df_enum):
table = pa.Table.from_pandas(
df[-1],
schema=my_schema,
)
# For the first chunk of records
if i == 0:
# Create a parquet write object giving it an output file
pqwriter = pq.ParquetWriter(
os.path.join(output_path, '{}.parquet.gzip'.format(my_zipfile.split('.')[0])),
compression='gzip',
schema=my_schema,
)
pqwriter.write_table(table)
# Close the parquet writer
pqwriter.close()
print('"{}" converter succeed!'.format(my_zipfile))
except Exception as e:
print(e)
# Parameters
#input_file = os.path.join(controle_path, 'controle_mensal_parametros_basicos_2020.zip')
#output_path = os.path.join(input_path_parquet, 'controle')
#encoding = 'ISO-8859-1'
#sep = ';'
#convert_csv2parquet(input_file, output_path, encoding, sep)
#df = pd.read_parquet(os.path.join(output_path, 'controle_mensal_parametros_basicos_2020.parquet.gzip'))
#df.head()
# Loop
paths = ['cadastro', 'controle', 'vigilancia']
for path in paths:
# Paths
path_in = os.path.join(bruto_path, path)
path_out = os.path.join(input_path_parquet, path)
# Loop
list_files = os.listdir(path_in)
for file in list_files:
print('\n{}'.format(file))
convert_csv2parquet(
os.path.join(path_in, file),
path_out,
encoding='ISO-8859-1',
sep=';',
)
# Parameters
paths = ['cadastro', 'controle', 'vigilancia']
#paths = [path for path in paths if path.startswith('con')]
paths
for path in paths:
# Parameters
path_in = os.path.join(input_path_parquet, path)
path_out = os.path.join(input_path_parquet_partitioned, path)
os.makedirs(path_out, exist_ok=True)
# Loop
list_files = os.listdir(path_in)
print(list_files)
for file in list_files:
file_out = os.path.basename(file).split('.')[0]
print(file_out)
df = pq.read_table(os.path.join(path_in, file))
# Rename Columns
cols = df.column_names
df = df.rename_columns([x.strip().title() for x in cols])
# Save Parquet Partitioned
pq.write_to_dataset(
df,
root_path=os.path.join(path_out, file_out),
partition_cols=['Uf', 'Código Ibge']
)
| 0.233881 | 0.704694 |
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Automated Machine Learning
_**Forecasting using the Energy Demand Dataset**_
## Contents
1. [Introduction](#introduction)
1. [Setup](#setup)
1. [Data and Forecasting Configurations](#data)
1. [Train](#train)
1. [Generate and Evaluate the Forecast](#forecast)
Advanced Forecasting
1. [Advanced Training](#advanced_training)
1. [Advanced Results](#advanced_results)
# Introduction<a id="introduction"></a>
In this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is to predict the energy demand for the next 48 hours based on historic time-series data.
If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.
In this notebook you will learn how to:
1. Create an Experiment using an existing Workspace
1. Configure AutoML using 'AutoMLConfig'
1. Train the model using AmlCompute
1. Explore the engineered features and results
1. Generate the forecast and compute the out-of-sample accuracy metrics
1. Configure and remotely run AutoML for a time-series model with lag and rolling window features
1. Run and explore the forecast with lagging features
# Setup<a id="setup"></a>
```
import logging
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
import warnings
import os
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
import azureml.core
from azureml.core import Experiment, Workspace, Dataset
from azureml.train.automl import AutoMLConfig
from datetime import datetime
```
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
```
print("This notebook was created using version 1.36.0 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
```
As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
```
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = "automl-forecasting-energydemand"
# # project folder
# project_folder = './sample_projects/automl-forecasting-energy-demand'
experiment = Experiment(ws, experiment_name)
output = {}
output["Subscription ID"] = ws.subscription_id
output["Workspace"] = ws.name
output["Resource Group"] = ws.resource_group
output["Location"] = ws.location
output["Run History Name"] = experiment_name
pd.set_option("display.max_colwidth", None)
outputDf = pd.DataFrame(data=output, index=[""])
outputDf.T
```
## Create or Attach existing AmlCompute
A compute target is required to execute a remote Automated ML run.
[Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource.
#### Creation of AmlCompute takes approximately 5 minutes.
If the AmlCompute with that name is already in your workspace this code will skip the creation process.
As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your cluster.
amlcompute_cluster_name = "energy-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print("Found existing cluster, use it.")
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(
vm_size="STANDARD_DS12_V2", max_nodes=6
)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
```
# Data<a id="data"></a>
We will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency.
With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the dataset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasets#dataset-types) to be used for training and prediction.
Let's set up what we know about the dataset.
<b>Target column</b> is what we want to forecast.<br></br>
<b>Time column</b> is the time axis along which to predict.
The other columns, "temp" and "precip", are implicitly designated as features.
```
target_column_name = "demand"
time_column_name = "timeStamp"
dataset = Dataset.Tabular.from_delimited_files(
path="https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv"
).with_timestamp_columns(fine_grain_timestamp=time_column_name)
dataset.take(5).to_pandas_dataframe().reset_index(drop=True)
```
The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset.
```
# Cut off the end of the dataset due to large number of nan values
dataset = dataset.time_before(datetime(2017, 10, 10, 5))
```
## Split the data into train and test sets
The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing.
```
# split into train based on time
train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True)
train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5)
# split into test based on time
test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5))
test.to_pandas_dataframe().reset_index(drop=True).head(5)
```
### Setting the maximum forecast horizon
The forecast horizon is the number of periods into the future that the model should predict. It is generally recommended that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale, for instance as sketched below.
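For instance (a sketch only, not used in this notebook), the hourly series could be aggregated to a daily frequency with pandas before building the tabular dataset:
```
# illustration: aggregate hourly demand to daily means to shorten the horizon
hourly_df = dataset.to_pandas_dataframe()
daily_df = (
    hourly_df.set_index(time_column_name)
    .resample("D")[[target_column_name]]
    .mean()
    .reset_index()
)
daily_df.head()
```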
Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecast#configure-and-run-experiment) guide.
In this example, we set the horizon to 48 hours.
```
forecast_horizon = 48
```
## Forecasting Parameters
To define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.
|Property|Description|
|-|-|
|**time_column_name**|The name of your time column.|
|**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).|
|**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information.|
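As a quick aside (a small sketch), pandas can be used to check that a string such as the hourly alias used below is a valid offset alias:
```
import pandas as pd
# "H" is the pandas offset alias for an hourly frequency
print(pd.tseries.frequencies.to_offset("H"))
```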
# Train<a id="train"></a>
Instantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.
|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>|
|**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).|
|**experiment_timeout_hours**|Maximum amount of time in hours that the experiment can take before it terminates.|
|**training_data**|The training data to be used within the experiment.|
|**label_column_name**|The name of the label column.|
|**compute_target**|The remote compute for training.|
|**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.|
|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|
|**forecasting_parameters**|A class holds all the forecasting related parameters.|
This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results.
```
from azureml.automl.core.forecasting_parameters import ForecastingParameters
forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=forecast_horizon,
freq="H", # Set the forecast frequency to be hourly
)
automl_config = AutoMLConfig(
task="forecasting",
primary_metric="normalized_root_mean_squared_error",
blocked_models=["ExtremeRandomTrees", "AutoArima", "Prophet"],
experiment_timeout_hours=0.3,
training_data=train,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
n_cross_validations=3,
verbosity=logging.INFO,
forecasting_parameters=forecasting_parameters,
)
```
Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.
One may specify `show_output = True` to print currently running iterations to the console.
```
remote_run = experiment.submit(automl_config, show_output=False)
remote_run.wait_for_completion()
```
## Retrieve the Best Model
Below we select the best model from all the training iterations using the get_output method.
```
best_run, fitted_model = remote_run.get_output()
fitted_model.steps
```
## Featurization
You can access the engineered feature names generated in time-series featurization.
```
fitted_model.named_steps["timeseriestransformer"].get_engineered_feature_names()
```
### View featurization summary
You can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:
+ Raw feature name
+ Number of engineered features formed out of this raw feature
+ Type detected
+ If feature was dropped
+ List of feature transformations for the raw feature
```
# Get the featurization summary as a list of JSON
featurization_summary = fitted_model.named_steps[
"timeseriestransformer"
].get_featurization_summary()
# View the featurization summary as a pandas dataframe
pd.DataFrame.from_records(featurization_summary)
```
# Forecasting<a id="forecast"></a>
Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset.
The inference will run on a remote compute. In this example, it will re-use the training compute.
```
test_experiment = Experiment(ws, experiment_name + "_inference")
```
### Retrieving forecasts from the model
We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute.
```
from run_forecast import run_remote_inference
remote_run_infer = run_remote_inference(
test_experiment=test_experiment,
compute_target=compute_target,
train_run=best_run,
test_dataset=test,
target_column_name=target_column_name,
)
remote_run_infer.wait_for_completion(show_output=False)
# download the inference output file to the local machine
remote_run_infer.download_file("outputs/predictions.csv", "predictions.csv")
```
### Evaluate
To evaluate the accuracy of the forecast, we'll compare against the actual demand values for some selected metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).
```
# load forecast data frame
fcst_df = pd.read_csv("predictions.csv", parse_dates=[time_column_name])
fcst_df.head()
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl metrics module
scores = scoring.score_regression(
y_test=fcst_df[target_column_name],
y_pred=fcst_df["predicted"],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET),
)
print("[Test data scores]\n")
for key, value in scores.items():
print("{}: {:.3f}".format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(fcst_df[target_column_name], fcst_df["predicted"], color="b")
test_test = plt.scatter(
fcst_df[target_column_name], fcst_df[target_column_name], color="g"
)
plt.legend(
(test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8
)
plt.show()
```
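As a cross-check on the metrics module (a small sketch using the `fcst_df` frame loaded above), the MAPE mentioned earlier can also be computed directly:
```
# manual mean absolute percentage error on the downloaded predictions
mape = (
    (fcst_df["predicted"] - fcst_df[target_column_name]).abs()
    / fcst_df[target_column_name].abs()
).mean() * 100
print("MAPE: {:.2f}%".format(mape))
```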
# Advanced Training <a id="advanced_training"></a>
We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation.
### Using lags and rolling window features
Now we will configure the target lags, that is, the previous values of the target variable, which means the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.
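To make these features concrete (an illustration only; AutoML constructs the actual features internally during featurization), a lag-12 column and a 4-period trailing window on a toy series look like this:
```
# toy illustration of a lag feature and rolling-window aggregates
toy = pd.Series(range(24), name="demand")
lag_12 = toy.shift(12)                       # value of the series 12 periods earlier
rolling_4 = toy.shift(1).rolling(window=4)   # trailing window excluding the current period
pd.DataFrame(
    {"demand": toy, "lag_12": lag_12, "roll4_max": rolling_4.max(), "roll4_min": rolling_4.min()}
).tail()
```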
This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results.
```
advanced_forecasting_parameters = ForecastingParameters(
time_column_name=time_column_name,
forecast_horizon=forecast_horizon,
target_lags=12,
target_rolling_window_size=4,
)
automl_config = AutoMLConfig(
task="forecasting",
primary_metric="normalized_root_mean_squared_error",
blocked_models=[
"ElasticNet",
"ExtremeRandomTrees",
"GradientBoosting",
"XGBoostRegressor",
"ExtremeRandomTrees",
"AutoArima",
"Prophet",
], # These models are blocked for tutorial purposes, remove this for real use cases.
experiment_timeout_hours=0.3,
training_data=train,
label_column_name=target_column_name,
compute_target=compute_target,
enable_early_stopping=True,
n_cross_validations=3,
verbosity=logging.INFO,
forecasting_parameters=advanced_forecasting_parameters,
)
```
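To make the lag and rolling-window featurization more concrete, here is a toy pandas illustration of the kind of columns these settings produce. This is not AutoML's exact implementation: the column names are made up, and a smaller lag is used than the 12 configured above so the values are visible on a short series.
```
import pandas as pd

s = pd.Series(range(1, 13), name="demand")
toy = pd.DataFrame({
    "demand": s,
    "demand_lag2": s.shift(2),                           # lag feature: value from 2 steps earlier
    "demand_rolling_max4": s.shift(1).rolling(4).max(),  # rolling-window features computed over
    "demand_rolling_min4": s.shift(1).rolling(4).min(),  # the 4 values preceding each row
    "demand_rolling_sum4": s.shift(1).rolling(4).sum(),
})
print(toy)
```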
We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations.
```
advanced_remote_run = experiment.submit(automl_config, show_output=False)
advanced_remote_run.wait_for_completion()
```
### Retrieve the Best Model
```
best_run_lags, fitted_model_lags = advanced_remote_run.get_output()
```
# Advanced Results<a id="advanced_results"></a>
We now run inference with the best advanced model (trained with lag and rolling-window features) on the test set, then evaluate and plot the results following the same procedure as before.
```
test_experiment_advanced = Experiment(ws, experiment_name + "_inference_advanced")
advanced_remote_run_infer = run_remote_inference(
test_experiment=test_experiment_advanced,
compute_target=compute_target,
train_run=best_run_lags,
test_dataset=test,
target_column_name=target_column_name,
inference_folder="./forecast_advanced",
)
advanced_remote_run_infer.wait_for_completion(show_output=False)
# download the inference output file to the local machine
advanced_remote_run_infer.download_file(
"outputs/predictions.csv", "predictions_advanced.csv"
)
fcst_adv_df = pd.read_csv("predictions_advanced.csv", parse_dates=[time_column_name])
fcst_adv_df.head()
from azureml.automl.core.shared import constants
from azureml.automl.runtime.shared.score import scoring
from matplotlib import pyplot as plt
# use automl metrics module
scores = scoring.score_regression(
y_test=fcst_adv_df[target_column_name],
y_pred=fcst_adv_df["predicted"],
metrics=list(constants.Metric.SCALAR_REGRESSION_SET),
)
print("[Test data scores]\n")
for key, value in scores.items():
print("{}: {:.3f}".format(key, value))
# Plot outputs
%matplotlib inline
test_pred = plt.scatter(
fcst_adv_df[target_column_name], fcst_adv_df["predicted"], color="b"
)
test_test = plt.scatter(
fcst_adv_df[target_column_name], fcst_adv_df[target_column_name], color="g"
)
plt.legend(
(test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8
)
plt.show()
```
```
import tensorflow as tf
import keras
from keras.layers import Dense
from keras.layers import Conv2D, AveragePooling2D, Dropout, Flatten
import numpy as np
import matplotlib.pyplot as plt
from deeplearning2020 import helpers
print(tf.config.list_physical_devices('GPU'))
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
%matplotlib inline
```
## Data Preprocessing
```
batch_size = 128
num_classes = 10
epochs = 12
img_rows, img_cols = 28, 28
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
train_images_aug = train_images.reshape(train_images.shape[0], img_rows, img_cols, 1)
test_images_aug = test_images.reshape(test_images.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
train_images_aug = train_images_aug.astype('float32')
test_images_aug = test_images_aug.astype('float32')
train_images_aug = train_images_aug / 255.0
test_images_aug = test_images_aug / 255.0
#train_images_aug = np.pad(train_images_aug, ((0,0),(2,2),(2,2),(0,0)), 'constant')
#test_images_aug = np.pad(test_images_aug, ((0,0),(2,2),(2,2),(0,0)), 'constant')
train_vec_labels = keras.utils.to_categorical(train_labels, num_classes)
test_vec_labels = keras.utils.to_categorical(test_labels, num_classes)
```
## Display Data
```
print(train_vec_labels[50])
```
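A one-hot label vector by itself is not very informative, so the sketch below also renders the corresponding image. The class-name list follows the standard Fashion-MNIST ordering and is included here only for readability.
```
# Display the image whose one-hot label was printed above
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
plt.imshow(train_images[50], cmap='gray')
plt.title(class_names[train_labels[50]])
plt.axis('off')
plt.show()
```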
## Net Architectures
```
def LeNet5():
model = keras.Sequential()
# Convolutional Layer 1
# model.add(Conv2D(filters=6, kernel_size=(5, 5), strides=(1, 1), activation='relu', input_shape=(32,32,1)))
model.add(Conv2D(filters=6, kernel_size=(5, 5), strides=(1, 1), activation='relu', input_shape=(28,28,1)))
model.add(AveragePooling2D())
# Convolutional Layer 2
model.add(Conv2D(filters=6, kernel_size=(5, 5), padding='VALID', activation='relu'))
model.add(AveragePooling2D())
model.add(Flatten())
# Hidden Layer 1
model.add(Dense(units=120, activation='relu'))
# Hidden Layer 2
model.add(Dense(units=84, activation='relu'))
# Output Layer
model.add(Dense(units=10, activation='softmax'))
return model
```
## Compile and Train Network
```
model = LeNet5()
model.summary()
# Note: this SGD optimizer is defined but never used; Adadelta is passed to compile() below.
sgd = keras.optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(
optimizer=keras.optimizers.Adadelta(),
loss=keras.losses.categorical_crossentropy,
metrics=['acc'])
history = model.fit(train_images_aug, train_vec_labels,
batch_size=batch_size,
epochs=epochs,
verbose=True,
validation_data=(test_images_aug, test_vec_labels))
```
## Evaluate Model
```
eval_loss, eval_accuracy = model.evaluate(test_images_aug, test_vec_labels, verbose=False)
print("Model accuracy: %.2f" % eval_accuracy)
# Plot training & validation accuracy values
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.annotate('%0.4f' % history.history['acc'][-1], xy=(1, history.history['acc'][-1]), xytext=(8, 0),
xycoords=('axes fraction', 'data'), textcoords='offset points')
plt.annotate('%0.4f' % history.history['val_acc'][-1], xy=(1, history.history['val_acc'][-1]), xytext=(8, 0),
xycoords=('axes fraction', 'data'), textcoords='offset points')
plt.annotate('%0.4f' % (history.history['acc'][-1] - history.history['val_acc'][-1]) + " diff", xy=(1, (history.history['acc'][-1] + history.history['val_acc'][-1])/2), xytext=(8, 0), xycoords=('axes fraction', 'data'), textcoords='offset points')
plt.show()
# heavy overfitting
# Plot diff between training and validation for accuracy and loss
diff_acc = np.asarray(history.history['acc']) - np.asarray(history.history['val_acc'])
diff_loss = np.asarray(history.history['loss']) - np.asarray(history.history['val_loss'])
plt.plot(diff_acc)
plt.plot(diff_loss)
plt.title('Diffs')
plt.ylabel('Diffs')
plt.xlabel('Epoch')
plt.legend(['Acc', 'Loss'], loc='upper left')
plt.annotate('%0.4f' % diff_acc[-1], xy=(1, diff_acc[-1]), xytext=(8, 0),
xycoords=('axes fraction', 'data'), textcoords='offset points')
plt.annotate('%0.4f' % diff_loss[-1], xy=(1, diff_loss[-1]), xytext=(8, 0),
xycoords=('axes fraction', 'data'), textcoords='offset points')
plt.show()
!pip install --upgrade deeplearning2020
from deeplearning2020 import Submission
Submission('3a850b62ce7875f05c1d5a3465803421', '2', model).submit()
```
Lambda School Data Science
*Unit 2, Sprint 3, Module 4*
---
# Model Interpretation
You will use your portfolio project dataset for all assignments this sprint.
## Assignment
Complete these tasks for your project, and document your work.
- [ ] Continue to iterate on your project: data cleaning, exploratory visualization, feature engineering, modeling.
- [ ] Make at least 1 partial dependence plot to explain your model.
- [ ] Make at least 1 Shapley force plot to explain an individual prediction.
- [ ] **Share at least 1 visualization (of any type) on Slack!**
If you aren't ready to make these plots with your own dataset, you can practice these objectives with any dataset you've worked with previously. Example solutions are available for Partial Dependence Plots with the Tanzania Waterpumps dataset, and Shapley force plots with the Titanic dataset. (These datasets are available in the data directory of this repository.)
Please be aware that **multi-class classification** will result in multiple Partial Dependence Plots (one for each class), and multiple sets of Shapley Values (one for each class).
## Stretch Goals
#### Partial Dependence Plots
- [ ] Make multiple PDPs with 1 feature in isolation.
- [ ] Make multiple PDPs with 2 features in interaction.
- [ ] Use Plotly to make a 3D PDP.
- [ ] Make PDPs with categorical feature(s). Use Ordinal Encoder, outside of a pipeline, to encode your data first. If there is a natural ordering, then take the time to encode it that way, instead of random integers. Then use the encoded data with pdpbox. Get readable category names on your plot, instead of integer category codes.
#### Shap Values
- [ ] Make Shapley force plots to explain at least 4 individual predictions.
- If your project is Binary Classification, you can do a True Positive, True Negative, False Positive, False Negative.
- If your project is Regression, you can do a high prediction with low error, a low prediction with low error, a high prediction with high error, and a low prediction with high error.
- [ ] Use Shapley values to display verbal explanations of individual predictions.
- [ ] Use the SHAP library for other visualization types.
The [SHAP repo](https://github.com/slundberg/shap) has examples for many visualization types, including:
- Force Plot, individual predictions
- Force Plot, multiple predictions
- Dependence Plot
- Summary Plot
- Summary Plot, Bar
- Interaction Values
- Decision Plots
We just did the first type during the lesson. The [Kaggle microcourse](https://www.kaggle.com/dansbecker/advanced-uses-of-shap-values) shows two more. Experiment and see what you can learn!
### Links
#### Partial Dependence Plots
- [Kaggle / Dan Becker: Machine Learning Explainability — Partial Dependence Plots](https://www.kaggle.com/dansbecker/partial-plots)
- [Christoph Molnar: Interpretable Machine Learning — Partial Dependence Plots](https://christophm.github.io/interpretable-ml-book/pdp.html) + [animated explanation](https://twitter.com/ChristophMolnar/status/1066398522608635904)
- [pdpbox repo](https://github.com/SauceCat/PDPbox) & [docs](https://pdpbox.readthedocs.io/en/latest/)
- [Plotly: 3D PDP example](https://plot.ly/scikit-learn/plot-partial-dependence/#partial-dependence-of-house-value-on-median-age-and-average-occupancy)
#### Shapley Values
- [Kaggle / Dan Becker: Machine Learning Explainability — SHAP Values](https://www.kaggle.com/learn/machine-learning-explainability)
- [Christoph Molnar: Interpretable Machine Learning — Shapley Values](https://christophm.github.io/interpretable-ml-book/shapley.html)
- [SHAP repo](https://github.com/slundberg/shap) & [docs](https://shap.readthedocs.io/en/latest/)
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
!pip install category_encoders==2.*
!pip install eli5
!pip install pdpbox
!pip install shap
# If you're working locally:
else:
DATA_PATH = '../data/'
import pandas as pd
import numpy as np
datapath = "../data/"
df = pd.read_csv(datapath+'project-data/LoL-Ranked-Data.csv')
df.set_index('gameId',inplace=True)
target = 'winner'
features = ['gameDuration',
'firstBlood',
'firstTower',
'firstInhibitor',
'firstBaron',
'firstDragon',
'firstRiftHerald']
def WRANGLE(x):
x = x.copy()
x = x[~x.index.duplicated(keep='first')]
return x
# This function should remove duplicate values
ranked_data = WRANGLE(df)
X = ranked_data[features]
y = ranked_data[target]
```
# Establish Baseline
```
y.value_counts(normalize=True)
from sklearn.metrics import accuracy_score
y_pred = [y.mode()] * len(y)
print('Baseline Accuracy:', accuracy_score(y,y_pred))
```
## Split Data
```
from sklearn.model_selection import train_test_split
X_1, X_test, y_1, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_1,y_1, test_size=0.2,random_state=42)
X_train.shape , y_train.shape
```
# Build Model
```
from sklearn.ensemble import GradientBoostingClassifier
model = GradientBoostingClassifier()
model.fit(X_train,y_train)
# Don't need an encoder or imputer: there are no null values and the features are already numerically encoded.
# I know this from exploring the data in past assignments.
```
# Check Metrics
```
print('Training Accuracy:', model.score(X_train,y_train))
print('Validation Accuracy:', model.score(X_val,y_val))
```
# Partial Dependence Plots
```
X_val.columns
from pdpbox.pdp import pdp_isolate, pdp_plot
feature_pdp = 'gameDuration'
isolate = pdp_isolate(
model=model,
dataset=X_val,
model_features=X_val.columns,
feature=feature_pdp
)
pdp_plot(isolate,feature_name=feature_pdp);
from pdpbox.pdp import pdp_interact, pdp_interact_plot
features_pdp = ['firstTower', 'firstDragon']
interact = pdp_interact(
model=model,
dataset=X_val,
model_features=X_val.columns,
features=features_pdp
)
pdp_interact_plot(interact, plot_type = 'grid', feature_names=features_pdp);
```
# Shapley Plots
```
row = X_val.iloc[[0]]
row
import shap
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(row)
shap.initjs()
shap.force_plot(
base_value=explainer.expected_value,
shap_values=shap_values,
features=row
)
y_val.iloc[[0]]
```
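One of the stretch goals asks for verbal explanations of individual predictions. Below is a minimal sketch built from the Shapley values computed above; note that for this gradient boosting model the values are in the model's raw margin (log-odds) units, and the wording is ad hoc.
```
# Turn the Shapley values for this single row into a readable explanation
effects = pd.Series(shap_values[0], index=row.columns)
effects = effects.reindex(effects.abs().sort_values(ascending=False).index)

print("Baseline (expected value):", explainer.expected_value)
for feat, effect in effects.items():
    direction = "pushes the prediction up" if effect > 0 else "pushes the prediction down"
    print(f"{feat} = {row[feat].iloc[0]} {direction} by {abs(effect):.3f}")
```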
# Importing the libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
# Importing and loading the dataset
```
dataset = pd.read_csv('../input/loan-prediction-problem-dataset/train_u6lujuX_CVtuZ9i.csv')
dataset.head() # preview the first few rows of the dataset
```
# Dataset Info
```
dataset.info() #we get detailed info of the dataset
```
# Dataset Shape
```
dataset.shape #no of rows and columns
```
# Dataset Description
```
dataset.describe() # summary statistics for the numerical columns
```
# Checking the missing data
```
dataset.isnull().sum()
```
**Taking care of missing values in 'LoanAmount' and 'Credit_History'**
```
dataset['LoanAmount'] = dataset['LoanAmount'].fillna(dataset['LoanAmount'].mean())
dataset['Credit_History'] = dataset['Credit_History'].fillna(dataset['Credit_History'].median())
```
**Let's confirm whether there are any missing values left in 'LoanAmount' & 'Credit_History'**
```
dataset.isnull().sum()
```
**Now let's drop all the remaining rows with missing values**
```
dataset.dropna(inplace=True)
```
**Let's check the missing values one final time!**
```
dataset.isnull().sum()
```
**This is a common way to handle null values: we delete a row if it has a null value for a particular feature, or drop a column entirely if more than 70-75% of its values are missing. This method is advised only when there are enough samples in the dataset, and we have to make sure that deleting the data does not introduce bias. Removing data also means losing information, which can hurt the quality of the predictions.**
> Let's check our dataset's new shape
```
dataset.shape
```
# Deep dive into the dataset
> Comparison between Genders in getting the Loan:
```
print(pd.crosstab(dataset['Gender'],dataset['Loan_Status']))
sns.countplot(dataset['Gender'],hue=dataset['Loan_Status'])
```
Here we can see that males have a higher chance of getting the loan.
> Comparison between Married Status in getting the Loan:
```
print(pd.crosstab(dataset['Married'],dataset['Loan_Status']))
sns.countplot(dataset['Married'],hue=dataset['Loan_Status'])
```
Here we can see that married people have a greater chance of getting the loan.
> Comparison between Self-Employed or Not in getting the Loan:
```
print(pd.crosstab(dataset['Self_Employed'],dataset['Loan_Status']))
sns.countplot(dataset['Self_Employed'],hue=dataset['Loan_Status'])
```
Here we can see that applicants who are not self-employed have a greater chance of getting the loan.
> Comparison between Property Area for getting the Loan:
```
print(pd.crosstab(dataset['Property_Area'],dataset['Loan_Status']))
sns.countplot(dataset['Property_Area'],hue=dataset['Loan_Status'])
```
The tendency to get a loan varies by property area: semiurban > rural > urban.
# Encoding of non-numerical values
```
dataset['Loan_Status'].replace('Y',1,inplace = True)
dataset['Loan_Status'].replace('N',0,inplace = True)
dataset['Loan_Status'].value_counts()
dataset.Gender=dataset.Gender.map({'Male':1,'Female':0})
dataset['Gender'].value_counts()
dataset.Married=dataset.Married.map({'Yes':1,'No':0})
dataset['Married'].value_counts()
dataset.Dependents=dataset.Dependents.map({'0':0,'1':1,'2':2,'3+':3})
dataset['Dependents'].value_counts()
dataset.Education=dataset.Education.map({'Graduate':1,'Not Graduate':0})
dataset['Education'].value_counts()
dataset.Self_Employed=dataset.Self_Employed.map({'Yes':1,'No':0})
dataset['Self_Employed'].value_counts()
dataset.Property_Area=dataset.Property_Area.map({'Urban':2,'Rural':0,'Semiurban':1})
dataset['Property_Area'].value_counts()
dataset['LoanAmount'].value_counts()
dataset['Loan_Amount_Term'].value_counts()
dataset['Credit_History'].value_counts()
```
# Display the correlation matrix
```
plt.figure(figsize=(16,5))
sns.heatmap(dataset.corr(),annot=True)
plt.title('Correlation Matrix (for Loan Status)')
```
# Our modified dataset
```
dataset.head()
```
# Spliting the dataset into train and test set
```
X = dataset.iloc[:,1:-1].values
y = dataset.iloc[:,-1].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y ,test_size=0.2, random_state=0)
```
# Feature scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
print(X_train)
```
# Creating the ANN Model
**Importing the libraries**
```
import tensorflow as tf
tf.__version__
```
# Initialising the ANN
```
ann = tf.keras.models.Sequential()
```
1. Adding the first input layer and first hidden layer
```
ann.add(tf.keras.layers.Dense(units=6, activation='relu'))
```
2. Creating a second hidden layer
```
ann.add(tf.keras.layers.Dense(units=6, activation='relu'))
```
3. Adding the output layer
```
ann.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
```
# Training the ANN model
1. Compiling the model
```
ann.compile(optimizer='adam', loss='binary_crossentropy',metrics=['accuracy'])
```
2. Training the model
```
ann.fit(X_train, y_train, batch_size =32, epochs =100)
```
# Predicting the test set result
```
y_pred = ann.predict(X_test)
y_pred = (y_pred > 0.5)
print(np.concatenate((y_pred.reshape(len(y_pred),1),y_test.reshape(len(y_test),1)),1))
```
# Making the confusion matrix
```
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
```
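Beyond overall accuracy, the confusion matrix above can be unpacked into precision and recall for the approved class. A minimal sketch, assuming scikit-learn's usual ordering of the 2x2 confusion matrix:
```
# Precision and recall for the positive class (loan approved = 1)
tn, fp, fn, tp = cm.ravel()
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print("Precision: %.3f" % precision)
print("Recall: %.3f" % recall)
```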
# Predict unknown validation or test set data
```
%reload_ext rpy2.ipython
import os
import argparse
import glob
import nibabel as nib
import numpy as np
from tqdm import tqdm_notebook as tqdm
import mxnet as mx
from mxnet import gluon, ndarray as nd
from unet import *
```
***
## Setup hyperparameters
```
args = argparse.Namespace()
args.data_dir = '../brats_2018_4D'
args.weights_dir = '../params/baseline/bagged_ensemble/ensemble'
args.output_dir = '../predictions/val__baseline__bagged_ensemble_soft_prediction_190101'
# Training
args.num_workers = 1
GPU_COUNT = 1
args.ctx = [mx.gpu(i) for i in range(GPU_COUNT)]
# args.ctx = [mx.gpu(1)]
# Unet
args.num_downs = 4 # Number of encoding/downsampling layers
args.classes = 4 # Number of classes for segmentation, including background
args.ngf = 32 # Number of channels in base/outermost layer
args.use_bias = True # For conv blocks
args.use_global_stats = True # For BN blocks
# Pre/post-processing
args.pad_size_val = [240, 240, 160] # Should be input vol dims unless 'crop_size_val' is larger
args.crop_size_val = [240, 240, 160] # Should be divisible by 2^num_downs
args.overlap = 0 # Fractional overlap for val patch prediction, combined with voting
args.output_dims = [240, 240, 155]
```
***
## Setup data loader
```
data = np.load('data/normalization_stats_test.npz')
means_brain = nd.array(data['means_brain'])
stds_brain = nd.array(data['stds_brain'])
testset = MRISegDataset4D(root=args.data_dir, split='test', mode='val', crop_size=args.pad_size_val, transform=brats_transform, means=means_brain, stds=stds_brain)
test_data = gluon.data.DataLoader(testset, batch_size=1, num_workers=args.num_workers, shuffle=False, last_batch='keep')
```
***
## Extract template NifTI header
```
subdir = os.path.normpath(testset.paths()[0])
img_path = os.path.join(subdir, os.listdir(subdir)[0])
hdr = nib.load(img_path).header
```
***
## Setup model and load ensemble weights
```
model = UnetGenerator(num_downs = args.num_downs,
classes = args.classes,
ngf = args.ngf,
use_bias = args.use_bias,
use_global_stats = args.use_global_stats)
model.collect_params().initialize(force_reinit=True, ctx=args.ctx)
model.hybridize()
weights_paths = [os.path.join(args.weights_dir, X) for X in sorted(os.listdir(args.weights_dir))]
```
***
## Predict test data (for each set of model `weights` in ensemble)
Save intermediate output maps with voxelwise softmax class probabilities.
```
def brats_predict(model, data, crop_size, overlap, n_classes, ctx):
    # Note: crop_size, overlap and n_classes are currently unused; the full padded volume is
    # passed through the network and voxelwise softmax class probabilities are returned.
    output = model(data.as_in_context(ctx)).squeeze().softmax(axis = 0).asnumpy()
    return output
def img_unpad(img, dims):
"""Unpad image vol back to original input dimensions"""
pad_dims = img.shape[1:]
xmin, ymin, zmin = 0, 0, 0
if pad_dims[0] > dims[0]:
xmin = (pad_dims[0] - dims[0]) // 2
if pad_dims[1] > dims[1]:
ymin = (pad_dims[1] - dims[1]) // 2
if pad_dims[2] > dims[2]:
zmin = (pad_dims[2] - dims[2]) // 2
return img[:, xmin : xmin + dims[0],
ymin : ymin + dims[1],
zmin : zmin + dims[2]]
if not os.path.exists(args.output_dir):
os.makedirs(args.output_dir)
for weights_path in tqdm(weights_paths):
model.load_parameters(weights_path, ctx=args.ctx[0])
output_dir = os.path.join(args.output_dir, 'runs', os.path.basename(weights_path).split('.params')[0])
if not os.path.exists(output_dir):
os.makedirs(output_dir)
for isub, (data, _) in enumerate(tqdm(test_data)):
subID = os.path.basename(os.path.normpath(testset.paths()[isub]))
mask = brats_predict(model, data, args.crop_size_val, args.overlap, n_classes=args.classes, ctx=args.ctx[0])
mask = img_unpad(mask, args.output_dims) # Crop back to original BraTS dimensions
mask = np.flip(mask, 2) # Flip AP orientation back to original BraTS convention
mask = mask * 1000
mask = mask.transpose((1,2,3,0))
mask = mask.astype(np.int16)
mask_nii = nib.Nifti1Image(mask, None, header=hdr)
mask_nii.to_filename(os.path.join(output_dir, subID + '.nii.gz'))
```
***
## Combine ensemble predictions
* Assign output class to background `0` if predicted probability of background class is > 0.5.
* Otherwise, assign output class to the maximum of the three foreground classes.
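As a toy single-voxel illustration of this rule (the probabilities below are made up and already averaged over the ensemble):
```
import numpy as np

# Averaged class probabilities for one voxel: [background, class 1, class 2, class 3]
voxel_probs = np.array([0.30, 0.10, 0.45, 0.15])
if voxel_probs[0] > 0.5:
    label = 0                             # confident background
else:
    label = voxel_probs[1:].argmax() + 1  # best foreground class
print(label)  # -> 2
```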
```
output_dir = os.path.join(args.output_dir, 'final')
if not os.path.exists(output_dir):
os.makedirs(output_dir)
run_dirs_parent = os.path.join(args.output_dir, 'runs')
run_dirs = [os.path.join(run_dirs_parent, X) for X in os.listdir(run_dirs_parent)]
for isub in tqdm(range(len(testset))):
subID = os.path.basename(os.path.normpath(testset.paths()[isub]))
mask = np.empty(tuple(args.output_dims) + (args.classes,) + (len(run_dirs),))
for irun, run_dir in enumerate(run_dirs):
img_path = os.path.join(run_dir, subID + '.nii.gz')
mask[..., irun] = nib.load(img_path).get_fdata()
    mask_sum = mask.sum(axis = -1)  # sum class probabilities across ensemble runs
    mask_out = mask_sum[..., 1:].argmax(axis = -1) + 1  # most likely foreground class (1-3)
    # Background wins where its summed probability exceeds 0.5 (each run's map was scaled by 1000)
    not_bg = mask_sum[..., 0] < (0.5 * len(run_dirs) * 1000)
    mask_out = mask_out * not_bg
mask_out[mask_out == 3] = 4 # Convert tissue class labels back to original BraTS convention
mask_nii = nib.Nifti1Image(mask_out, None, header=hdr)
mask_nii.to_filename(os.path.join(output_dir, subID + '.nii.gz'))
```
## Simple Selenium ##
It can sometimes be quite tricky just to get Selenium up and running.
First, the commands to run in the terminal to install and fetch the necessary driver:
- brew install wget
- wget https://github.com/mozilla/geckodriver/releases/download/v0.21.0/geckodriver-v0.21.0-linux64.tar.gz
- tar xvfz geckodriver-v0.21.0-linux64.tar.gz
- mv geckodriver ~/.local/bin
Here, I have used version 0.21.0 of geckodriver, and version 61.0.2 of Firefox.
Sometimes, errors just occur because of mismatching versions, so a little trial and error may be required.
A good application of Selenium would be to use it to create a simple Instagram bot.
We first import the main library in Selenium and import Firefox, which basically allows us to control the Firefox browser. We could also use Chrome, but that would mean downloading a different driver (chromedriver) and finding the right combination of Chrome and driver versions that work nicely together.
```
from selenium.webdriver import Firefox
```
And some of the other utilities from the selenium library
```
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
```
And just one more library 'time' to build in some delays.
```
import time
```
We first start a browser session with Firefox. You will see the Firefox browser pop up.
**Head on**
Usually one does this headless, i.e. without the browser appearing, but to let you visualise what is happening, let's first do one with the head on.
```
fieryfox = Firefox()
```
Now, go where you want to go on the interwebs, and wait a while before entering the username and password.
```
fieryfox.get('https://www.instagram.com/accounts/login/')
print(fieryfox.title)
login_wait = WebDriverWait(fieryfox, 10)
elem = login_wait.until(EC.visibility_of_element_located((By.XPATH, ".//input[@name='username']")))
elem.send_keys("enter_your_username")
elem = login_wait.until(EC.visibility_of_element_located((By.XPATH, ".//input[@name='password']")))
elem.send_keys("enter_your_password")
```
Now click on the login button
```
fieryfox.find_element_by_xpath("//button[contains(.,'Log in')]").click()
```
Now you are on the main page after login. It's simple to do a quick check.
```
print(fieryfox.title)
```
Look for the search bar and search for anything
```
search = WebDriverWait(fieryfox, 10).until(
EC.visibility_of_element_located(
(By.XPATH, "//input[@placeholder='Search']")
)
)
search.clear()
search.send_keys('#singapore')
time.sleep(3)
search.send_keys(Keys.ENTER)
time.sleep(1)
search.send_keys(Keys.ENTER)
print(fieryfox.title)
```
We find the posted images by looking for elements with the class name 'v1Nh3' and click on the first one we find.
The image pops up, and we look for the button to like the image and click it.
```
time.sleep(20)
image_links = fieryfox.find_elements_by_class_name('v1Nh3')
image_links[0].click()
time.sleep(20)
like_element = fieryfox.find_element_by_xpath("//button/span[@aria-label='Like']")
like_element.click()
```
**Headless**
Now we do the exact same thing, but headless
```
from selenium.webdriver.firefox.options import Options
opts = Options()
opts.set_headless()
assert opts.headless
fieryfoxy = Firefox(options=opts)
#navigate to the page and log in.
fieryfoxy.get('https://www.instagram.com/accounts/login/')
print(fieryfoxy.title)
login_wait = WebDriverWait(fieryfoxy, 10)
elem = login_wait.until(EC.visibility_of_element_located((By.XPATH, ".//input[@name='username']")))
elem.send_keys("enter_your_username")
elem = login_wait.until(EC.visibility_of_element_located((By.XPATH, ".//input[@name='password']")))
elem.send_keys("enter_your_password")
```
Now login, and check the page title
```
fieryfoxy.find_element_by_xpath("//button[contains(.,'Log in')]").click()
print(fieryfoxy.title)
```
Now repeat the search, and check the title again. We won't repeat the part where we like the first post.
```
search = WebDriverWait(fieryfoxy, 10).until(
EC.visibility_of_element_located(
(By.XPATH, "//input[@placeholder='Search']")
)
)
search.clear()
search.send_keys('#singapore')
time.sleep(3)
search.send_keys(Keys.ENTER)
time.sleep(1)
search.send_keys(Keys.ENTER)
print(fieryfoxy.title)
```
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_04_atari.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 12: Reinforcement Learning**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 12 Video Material
* Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb)
* Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=A3sYFcJY3lA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb)
* Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=qy1SJmsRhvM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb)
* **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=co0SwPWoZh0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb)
* Part 12.5: Application of Reinforcement Learning [[Video]](https://www.youtube.com/watch?v=1jQPP3RfwMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_apply_rl.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow, and has the necessary Python libraries installed.
```
try:
from google.colab import drive
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
if COLAB:
!sudo apt-get install -y xvfb ffmpeg
!pip install -q 'gym==0.10.11'
!pip install -q 'imageio==2.4.0'
!pip install -q PILLOW
!pip install -q 'pyglet==1.3.2'
!pip install -q pyvirtualdisplay
!pip install -q --upgrade tensorflow-probability
!pip install -q tf-agents
```
# Part 12.4: Atari Games with Keras Neural Networks
The Atari 2600 is a home video game console from Atari, Inc., released on September 11, 1977. It is credited with popularizing the use of microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games physically built into the unit. The 2600 was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)).
Atari emulators are popular and allow many of the old Atari video games to be played on modern computers. They are even available as JavaScript.
* [Virtual Atari](http://www.virtualatari.org/listP.html)
Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/). The Atari 2600 is shown in Figure 12.ATARI.
**Figure 12.ATARI: The Atari 2600**

### Actual Atari 2600 Specs
* CPU: 1.19 MHz MOS Technology 6507
* Audio + Video processor: Television Interface Adapter (TIA)
* Playfield resolution: 40 x 192 pixels (NTSC). Uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels.
* Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels that are 1/4 the width of playfield pixels (unless stretched).
* Ball and missile sprites: 1 x 192 pixels (NTSC).
* Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is only somewhat achievable with programming tricks that combine sprite pixels with playfield pixels.
* 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid line is possible but not common due to resource limitations.
* 2 channels of 1-bit monaural sound with 4-bit volume control.
### OpenAI Lab Atari Pong
OpenAI Gym can be used with Windows; however, it requires a special [installation procedure](https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30).
This chapter demonstrates playing [Atari Pong](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py). Pong is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically along the left or right side of the screen and competes against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth. The goal is to reach eleven points before the opponent; a point is earned whenever the opponent fails to return the ball. For the Atari 2600 version of Pong, a computer player (controlled by the 2600) is the opposing player.
This section shows how to adapt TF-Agents to an Atari game. Some changes are necessary compared to the cart-pole game presented earlier in this chapter. You can quickly adapt this example to any Atari game by simply changing the environment name. However, I tuned the code presented here for Pong, and it may not perform as well for other games; some tuning will likely be necessary to produce a good agent for them.
We begin by importing the needed Python packages.
```
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym, suite_atari
from tf_agents.environments import tf_py_environment, batched_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
```
## Hyperparameters
The hyperparameter names are the same as the previous DQN example; however, I tuned the numeric values for the more complex Atari game.
```
num_iterations = 250000
initial_collect_steps = 200
collect_steps_per_iteration = 10
replay_buffer_max_length = 100000
batch_size = 32
learning_rate = 2.5e-3
log_interval = 5000
num_eval_episodes = 5
eval_interval = 25000
```
The algorithm needs more iterations for an Atari game. I also found that increasing the number of collection steps helped the algorithm to train.
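To get a sense of the scale these values imply, we can estimate how many emulator frames a full training run collects (a rough back-of-the-envelope sketch; the frame skip of 4 is defined as `ATARI_FRAME_SKIP` in the next section):
```
# Rough estimate of ALE frames gathered over training:
# each collected step advances the emulator by the frame skip (4 frames, see below).
total_steps = num_iterations * collect_steps_per_iteration
print('{:,} ALE frames collected (approx.)'.format(total_steps * 4))
```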
## Atari Environments
You must handle Atari environments differently than games like cart-pole. Atari games typically use their 2D displays as the environment state. AI Gym represents Atari games either as a 3D (height by width by color) state space based on their screens, or as a vector representing the state of the game's computer RAM. To preprocess Atari games for greater computational efficiency, we generally skip several frames, decrease the resolution, and discard color information. The following code shows how we can set up an Atari environment.
```
#env_name = 'Breakout-v4'
env_name = 'Pong-v0'
#env_name = 'BreakoutDeterministic-v4'
#env = suite_gym.load(env_name)
# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4
max_episode_frames=108000 # ALE frames
env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
#env = batched_py_environment.BatchedPyEnvironment([env])
```
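As noted above, Gym also exposes the console's RAM as an alternative observation. The following optional sketch shows how that variant could be loaded; it assumes the `Pong-ram-v0` id is registered in your Gym installation and is not used in the rest of this example.
```
# Hypothetical alternative: observe the 128 bytes of Atari RAM instead of the screen.
# Shown only to illustrate the RAM-based state representation mentioned above.
ram_env = suite_gym.load('Pong-ram-v0')
print(ram_env.observation_spec())  # a flat uint8 vector rather than stacked image frames
```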
We can now reset the environment and display one step. The following image shows how the Pong game environment appears to a user.
```
env.reset()
PIL.Image.fromarray(env.render())
```
We are now ready to load and wrap the two environments for TF-Agents. The algorithm uses one of these environments for training and the other for evaluation.
```
train_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
eval_py_env = suite_atari.load(
env_name,
max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
```
## Agent
I used the following class, from TF-Agents examples, to wrap the regular Q-network class. The AtariQNetwork class ensures that the pixel values from the Atari screen are divided by 255. This division assists the neural network by normalizing the pixel values to between 0 and 1.
```
class AtariQNetwork(q_network.QNetwork):
"""QNetwork subclass that divides observations by 255."""
def call(self,
observation,
step_type=None,
network_state=(),
training=False):
state = tf.cast(observation, tf.float32)
# We divide the grayscale pixel values by 255 here rather than storing
    # normalized values because uint8s are 4x cheaper to store than float32s.
state = state / 255
return super(AtariQNetwork, self).call(
state, step_type=step_type, network_state=network_state,
training=training)
```
Next, we introduce two hyperparameters that are specific to the neural network we are about to define.
```
fc_layer_params = (512,)
conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1))
q_net = AtariQNetwork(
train_env.observation_spec(),
train_env.action_spec(),
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params)
```
Convolutional neural networks usually are made up of several alternating pairs of convolution and max-pooling layers, ultimately culminating in one or more dense layers. These layers are the same types as previously seen in this course. The QNetwork accepts two parameters that define the convolutional neural network structure.
The simpler of the two parameters is **fc_layer_params**, a tuple that specifies the size of each dense layer.
The second parameter, **conv_layer_params**, is a list of convolution layer parameters, where each item is a length-three tuple of (filters, kernel_size, stride). This implementation of QNetwork supports only convolution layers; if you desire a more complex convolutional neural network, you must define your own variant of the QNetwork.
The QNetwork defined here is not the agent; instead, the DQN agent uses the QNetwork to implement the actual neural network. This design allows flexibility, as you can substitute your own network class if needed.
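For illustration only, a hypothetical deeper configuration could look like the following; the tuples still follow the (filters, kernel_size, stride) convention, and neither variable is used elsewhere in this example.
```
# Hypothetical alternative network shape (not used in this notebook):
# four convolution layers followed by two dense layers.
alt_conv_layer_params = ((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1), (128, (3, 3), 1))
alt_fc_layer_params = (1024, 512)
```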
Next, we define the optimizer. For this example, I used RMSPropOptimizer. However, AdamOptimizer is another popular choice. We also create the DQN agent and reference the Q-network we just created.
```
optimizer = tf.compat.v1.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.95,
momentum=0.0,
epsilon=0.00001,
centered=True)
train_step_counter = tf.Variable(0)
observation_spec = tensor_spec.from_spec(train_env.observation_spec())
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.from_spec(train_env.action_spec())
target_update_period=32000 # ALE frames
update_period=16 # ALE frames
_update_period = update_period / ATARI_FRAME_SKIP
_global_step = tf.compat.v1.train.get_or_create_global_step()
agent = dqn_agent.DqnAgent(
time_step_spec,
action_spec,
q_network=q_net,
optimizer=optimizer,
epsilon_greedy=0.01,
n_step_update=1.0,
target_update_tau=1.0,
target_update_period=(
target_update_period / ATARI_FRAME_SKIP / _update_period),
td_errors_loss_fn=common.element_wise_huber_loss,
gamma=0.99,
reward_scale_factor=1.0,
gradient_clipping=None,
debug_summaries=False,
summarize_grads_and_vars=False,
train_step_counter=_global_step)
agent.initialize()
```
## Metrics and Evaluation
There are many different ways to measure the effectiveness of a model trained with reinforcement learning. The loss function of the internal Q-network is not a good measure of the entire DQN algorithm's overall fitness: it measures how closely the Q-network fits the collected data and does not indicate how effective the DQN is at maximizing rewards. The method used for this example tracks the average reward received over several episodes.
```
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# See also the metrics module for standard implementations of
# different metrics.
# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics
```
## Replay Buffer
DQN works by training a neural network to predict the Q-values for every possible environment state. A neural network needs training data, so the algorithm accumulates this training data as it runs episodes. The replay buffer is where this data is stored. Only the most recent steps are stored; older data rolls off the queue as new data accumulates.
```
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_max_length)
# Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
```
## Random Collection
The algorithm must prime the pump. Training cannot begin on an empty replay buffer. The following code performs a predefined number of steps to generate initial training data.
```
random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
train_env.action_spec())
def collect_step(environment, policy, buffer):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
buffer.add_batch(traj)
def collect_data(env, policy, buffer, steps):
for _ in range(steps):
collect_step(env, policy, buffer)
collect_data(train_env, random_policy, replay_buffer, \
steps=initial_collect_steps)
```
## Training the agent
We are now ready to train the DQN. This process can take many hours, depending on how many iterations you wish to run. As training occurs, this code reports both the loss and the average return. As training becomes more successful, the average return should increase. The losses reported reflect the average loss for individual training batches.
```
iterator = iter(dataset)
# (Optional) Optimize by wrapping some of the code in a graph
# using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, \
num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
for _ in range(collect_steps_per_iteration):
collect_step(train_env, agent.collect_policy, replay_buffer)
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, agent.policy, \
num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return)
```
## Visualization
The notebook can plot the average return over training iterations. The average return should increase as the program performs more training iterations.
```
iterations = range(0, num_iterations + 1, eval_interval)
plt.plot(iterations, returns)
plt.ylabel('Average Return')
plt.xlabel('Iterations')
plt.ylim(top=10)
```
### Videos
We now have a trained model and observed its training progress on a graph. Perhaps the most compelling way to view an Atari game's results is a video that allows us to see the agent play the game. The following functions are defined so that we can watch the agent play the game in the notebook.
```
def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag)
def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):
filename = filename + ".mp4"
with imageio.get_writer(filename, fps=fps) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
return embed_mp4(filename)
```
First, we will observe the trained agent play the game.
```
create_policy_eval_video(agent.policy, "trained-agent")
```
For comparison, we observe a random agent play. While the trained agent is far from perfect, it does outperform the random agent by a considerable amount.
```
create_policy_eval_video(random_policy, "random-agent")
```
```
# default_exp tabular.interpretation
```
# tabular.interpretation
> Useful interpretation functions for tabular, such as Feature Importance
```
#hide
from nbdev.showdoc import *
#export
from fastai2.tabular.all import *
from scipy.cluster import hierarchy as hc
from sklearn import manifold
# Explicit imports for the correlation helpers defined below
import itertools
import scipy.stats
#export
def base_error(err, val): return (err-val)/err
#export
@patch
def feature_importance(x:TabularLearner, df=None, dl=None, perm_func=base_error, metric=accuracy, bs=None, reverse=True, plot=True):
"Calculate and plot the Feature Importance based on `df`"
x.df = df
bs = bs if bs is not None else x.dls.bs
if df is not None:
dl = x.dls.test_dl(df, bs=bs)
else:
dl = x.dls[1]
x_names = x.dls.x_names.filter(lambda x: '_na' not in x)
na = x.dls.x_names.filter(lambda x: '_na' in x)
y = x.dls.y_names
orig_metrics = x.metrics[1:]
x.metrics = [metric]
results = _calc_feat_importance(x, dl, x_names, na, perm_func, reverse)
if plot:
_plot_importance(_ord_dic_to_df(results))
x.metrics = orig_metrics
return results
#export
def _measure_col(learn:TabularLearner, dl:TabDataLoader, name:str, na:list):
"Measures change after column permutation"
col = [name]
if f'{name}_na' in na: col.append(name)
orig = dl.items[col].values
perm = np.random.permutation(len(orig))
dl.items[col] = dl.items[col].values[perm]
with learn.no_bar(), learn.no_logging():
metric = learn.validate(dl=dl)[1]
dl.items[col] = orig
return metric
#export
def _calc_feat_importance(learn:TabularLearner, dl:TabDataLoader, x_names:list, na:list, perm_func=base_error, reverse=True):
"Calculates permutation importance by shuffling a column by `perm_func`"
with learn.no_bar(), learn.no_logging():
base_error = learn.validate(dl=dl)[1]
importance = {}
pbar = progress_bar(x_names)
print("Calculating Permutation Importance")
for col in pbar:
importance[col] = _measure_col(learn, dl, col, na)
for key, value in importance.items():
importance[key] = perm_func(base_error, value)
return OrderedDict(sorted(importance.items(), key=lambda kv: kv[1], reverse=True))
#export
def _ord_dic_to_df(dict:OrderedDict): return pd.DataFrame([[k,v] for k,v in dict.items()], columns=['feature','importance'])
#export
def _plot_importance(df:pd.DataFrame, limit=20, asc=False, **kwargs):
"Plot importance with an optional limit to how many variables shown"
df_copy = df.copy()
df_copy['feature'] = df_copy['feature'].str.slice(0,25)
df_copy = df_copy.sort_values(by='importance', ascending=asc)[:limit].sort_values(by='importance', ascending=not(asc))
ax = df_copy.plot.barh(x='feature', y='importance', sort_columns=True, **kwargs)
for p in ax.patches:
ax.annotate(f'{p.get_width():.4f}', ((p.get_width() * 1.005), p.get_y() * 1.005))
show_doc(TabularLearner.feature_importance)
```
We can pass in sections of a `DataFrame`, but not a `DataLoader`. `perm_func` dictates how the importance score is calculated from the baseline and permuted metrics, and `reverse` determines how the output is sorted.
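For example, a call along these lines is possible (a minimal sketch: `learn` and `df` refer to the objects built in the Example Usage section below, and `relative_change` is a hypothetical alternative to `base_error`):
```
# Hypothetical usage: score only part of the DataFrame with a custom permutation function.
relative_change = lambda err, val: err - val   # hypothetical alternative to base_error
fi_subset = learn.feature_importance(df=df.iloc[:1000], perm_func=relative_change,
                                     metric=accuracy, plot=False)
```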
```
#export
def _get_top_corr(df, matrix, thresh:float=0.8):
corr = np.where(abs(matrix) < thresh,0,matrix)
idxs = []
for i in range(corr.shape[0]):
if (corr[i,:].sum() + corr[:, i].sum() > 2):
idxs.append(i)
cols = df.columns[idxs]
return pd.DataFrame(corr[np.ix_(idxs,idxs)], columns=cols, index=cols)
#export
def _cramers_corrected_stat(cm):
"Calculates Cramers V Statistic for categorical-categorical"
try: chi2 = scipy.stats.chi2_contingency(cm)[0]
except: return 0.0
if chi2 == 0: return 0.0
n = cm.sum().sum()
phi2 = chi2 / n
r,k = cm.shape
phi2corr = max(0, phi2 - ((k-1)*(r-1))/(n-1))
rcorr = r - ((r-1)**2)/(n-1)
kcorr = k - ((k-1)**2)/(n-1)
return np.sqrt(phi2corr/min((kcorr-1), (rcorr-1)))
#export
def _get_cramer_v_matr(dl:TabDataLoader):
"Calculate Cramers V statistic on every pair in `df`'s columns'"
df = dl.xs
cols = list(df.columns)
corrM = np.zeros((len(cols), len(cols)))
for col1, col2 in progress_bar(list(itertools.combinations(cols, 2))):
idx1, idx2 = cols.index(col1), cols.index(col2)
corrM[idx1,idx2] = _cramers_corrected_stat(pd.crosstab(df[col1], df[col2]))
corrM[idx2, idx1] = corrM[idx1, idx2]
np.fill_diagonal(corrM, 1.0)
return corrM
#export
def _get_top_corr_dict_corrs(top_corrs):
cols = top_corrs.columns
top_corrs_np = top_corrs.to_numpy()
corr_dict = {}
for i in range(top_corrs_np.shape[0]):
for j in range(i+1, top_corrs_np.shape[0]):
if top_corrs_np[i,j] > 0:
corr_dict[cols[i] + ' vs ' + cols[j]] = np.round(top_corrs_np[i,j],3)
return OrderedDict(sorted(corr_dict.items(), key=lambda kv: abs(kv[1]), reverse=True))
#export
@patch
def get_top_corr_dict(x:TabularLearner, df:pd.DataFrame, thresh:float=0.8):
"Grabs top pairs of correlation with a given correlation matrix on `df` filtered by `thresh`"
dl = x.dls.test_dl(df)
matrix = _get_cramer_v_matr(dl)
top_corrs = _get_top_corr(df, matrix, thresh=thresh)
return _get_top_corr_dict_corrs(top_corrs)
show_doc(TabularLearner.get_top_corr_dict)
```
This, along with `plot_dendrogram` and the helper functions along the way, is based upon [this code](https://github.com/Pak911/fastai2-tabular-interpretation/blob/master/utils.py) by Pak911 on the fastai forums.
```
#export
@patch
def plot_dendrogram(x:TabularLearner, df:pd.DataFrame, figsize=None, leaf_font_size=16):
"Plots dendrogram for a calculated correlation matrix"
dl = x.dls.test_dl(df)
matrix = _get_cramer_v_matr(dl)
if figsize is None:
figsize = (15, 0.02*leaf_font_size*len(dl.x_names))
corr_condensed = hc.distance.squareform(1-matrix)
z = hc.linkage(corr_condensed, method='average')
fig = plt.figure(figsize=figsize)
dendrogram = hc.dendrogram(z, labels=dl.x_names, orientation='left', leaf_font_size = leaf_font_size)
plt.show()
show_doc(TabularLearner.plot_dendrogram)
```
## Example Usage
We'll run an example on the `ADULT_SAMPLE` dataset
```
from fastai2.tabular.all import *
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
splits = RandomSplitter()(range_of(df))
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
y_names = 'salary'
to = TabularPandas(df, procs=procs, cat_names=cat_names, cont_names=cont_names,
y_names=y_names, splits=splits)
dls = to.dataloaders()
learn = tabular_learner(dls, layers=[200,100], metrics=accuracy)
learn.fit(3)
```
After fitting, let's first calculate the relative feature importance on our `DataFrame`:
```
dl = learn.dls.test_dl(df)
fi = learn.feature_importance(df=df)
```
Next we'll calculate the correlation matrix, and then we will plot its dendrogram:
```
corr_dict = learn.get_top_corr_dict(df, thresh=0.3); corr_dict
learn.plot_dendrogram(df)
```
This allows us to see which families of features are closely related based on our `thresh`, and also (in combination with the feature importance) how our model uses each variable.
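If the reported pairs look too noisy or too sparse, we can simply recompute the dictionary with a different threshold, for example a stricter one (a quick sketch using the objects defined above):
```
# A stricter threshold keeps only the most strongly related pairs.
strict_corr_dict = learn.get_top_corr_dict(df, thresh=0.5); strict_corr_dict
```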
# Notebook A: Generation of omics data for the wild type (WT) strain
This notebook uses the OMG library to create time series of synthetic "experimental" data (transcriptomics, proteomics, metabolomics, fluxomics, cell density, external metabolites) that will be used to demonstrate the use of ICE and EDD. These data will also be the base for creating similar data for bioengineered strains.
Tested using **biodesign_3.7** kernel on jprime.lbl.gov (see github repository for kernel details)
## Inputs and outputs
#### Required file to run this notebook:
- A modified E. coli model with the isoprenol pathway added to it (`iJO1366_MVA.json` file in the `../data/models` directory)
#### Files generated by running this notebook for import into EDD:
- `EDD_experiment_description_file_WT.csv`
- `EDD_OD_WT.csv`
- `EDD_external_metabolites_WT.csv`
- `EDD_transcriptomics_WT.csv`
- `EDD_proteomics_WTSM.csv`
- `EDD_metabolomics_WTSM.csv`
- `EDD_fluxomics_WT.csv`
The files are stored in the user defined directory.
## Setup
Clone the git repository with the `OMG` library:
<!-- `git clone https://github.com/JBEI/OMG.git --branch omgforallhosts --single-branch` -->
`git clone https://github.com/JBEI/OMG.git`
or pull the latest version.
Importing needed libraries:
```
import sys
sys.path.insert(1, '../../OMG')
sys.path.append('../')
import omg
from plot_multiomics import *
import cobra
```
## User parameters
```
user_params = {
'host': 'ecoli', # ecoli or ropacus supported
'modelfile': '../data/models/iJO1366_MVA.json', # GSM host model file location
'cerevisiae_modelfile': '../data/models/iMM904.json', # GSM pathway donor model file location
'timestart': 0.0, # Start and end for time in time series
'timestop': 8.0,
'numtimepoints': 9, # Number of time points
'mapping_file': '../mapping/inchikey_to_cid.txt', # Maps of metabolite inchikey to pubchem compound id (cid)
'output_file_path': '../data/omg_output/', # Folder for output files
'edd_omics_file_path': '../data/omg_output/edd/', # Folder for EDD output files
'numreactions': 8, # Number of total reactions to be bioengineered
'ext_metabolites': { # Initial concentrations (in mMol) of external metabolites
'glc__D_e': 22.203,
'nh4_e': 18.695,
'pi_e': 69.454,
'so4_e': 2.0,
'mg2_e': 2.0,
'k_e': 21.883,
'na1_e': 103.7,
'cl_e': 27.25,
'isoprenol_e': 0.0,
'ac_e': 0.0,
'for_e': 0.0,
'lac__D_e': 0.0,
'etoh_e': 0.0
},
'initial_OD': 0.01,
'BIOMASS_REACTION_ID': 'BIOMASS_Ec_iJO1366_core_53p95M' # Biomass reaction in host GSM
}
```
## Using the OMG library to create synthetic multiomics data
### 1) Getting and preparing the metabolic model
First we obtain the metabolic model:
```
file_name = user_params['modelfile']
model = cobra.io.load_json_model(file_name)
```
We now add minimum flux constraints for production of isoprenol and formate, and we limit oxygen intake:
```
iso = 'EX_isoprenol_e'
iso_cons = model.problem.Constraint(model.reactions.EX_isoprenol_e.flux_expression,lb = 0.20)
model.add_cons_vars(iso_cons)
for_cons = model.problem.Constraint(model.reactions.EX_for_e.flux_expression,lb = 0.10)
model.add_cons_vars(for_cons)
o2_cons = model.problem.Constraint(model.reactions.EX_o2_e.flux_expression,lb = -8.0)
model.add_cons_vars(o2_cons)
```
And then we constrain several central carbon metabolism fluxes to more realistic upper and lower bounds:
```
CC_rxn_names = ['ACCOAC','MDH','PTAr','CS','ACACT1r','PPC','PPCK','PFL']
for reaction in CC_rxn_names:
reaction_constraint = model.problem.Constraint(model.reactions.get_by_id(reaction).flux_expression,lb = -1.0,ub = 1.0)
model.add_cons_vars(reaction_constraint)
```
### 2) Obtaining fluxomics times series
First create time grid for simulation:
```
t0 = user_params['timestart']
tf = user_params['timestop']
points = user_params['numtimepoints']
tspan, delt = np.linspace(t0, tf, points, dtype='float64', retstep=True)
grid = (tspan, delt)
```
We then use this model to obtain the time series for fluxes, OD and external metabolites, by solving the model for each time point:
```
solution_TS, model_TS, cell, Emets, Erxn2Emet = \
omg.get_flux_time_series(model, user_params['ext_metabolites'], grid, user_params)
```
These are the external metabolite concentrations as a function of time:
```
Emets
plot_DO_extmets(cell, Emets[['glc__D_e','isoprenol_e','ac_e','for_e','lac__D_e','etoh_e']])
```
### 3) Use fluxomics data to obtain the rest of multiomics data
We now obtain the multiomics data for each time point:
```
proteomics_timeseries = {}
transcriptomics_timeseries = {}
metabolomics_timeseries = {}
metabolomics_oldids_timeseries = {}
fluxomics_timeseries = {}
# By setting the old_ids flag to True, we get two time series for metabolomics data: one with PubChem CIDs and one with BIGG ids.
# Setting the old_ids flag to False returns only three dictionaries: proteomics, transcriptomics, metabolomics.
for t in tspan:
fluxomics_timeseries[t] = solution_TS[t].fluxes.to_dict()
(proteomics_timeseries[t], transcriptomics_timeseries[t],
metabolomics_timeseries[t], metabolomics_oldids_timeseries[t]) = omg.get_multiomics(model,
solution_TS[t],
user_params['mapping_file'],
old_ids=True)
```
### 4) Write the multiomics, cell concentration and external metabolites data into output files
#### EDD data output
First write the experiment description file needed for input (the `label` argument appends a suffix to the end of the file name):
```
omg.write_experiment_description_file(user_params['edd_omics_file_path'], line_name='WT', label='_WT')
```
Write OD data:
```
omg.write_OD_data(cell, user_params['edd_omics_file_path'], line_name='WT', label='_WT')
```
Write external metabolites:
```
omg.write_external_metabolite(Emets, user_params['edd_omics_file_path'], line_name='WT', label='_WT')
```
Write multiomics data:
```
omg.write_omics_files(fluxomics_timeseries, 'fluxomics', user_params, line_name='WT', label='_WT')
omg.write_omics_files(proteomics_timeseries, 'proteomics', user_params, line_name='WT', label='_WT')
omg.write_omics_files(transcriptomics_timeseries, 'transcriptomics', user_params, line_name='WT', label='_WT')
omg.write_omics_files(metabolomics_timeseries, 'metabolomics', user_params, line_name='WT', label='_WT')
```
We will also write a small version of the multiomics data with a subset of proteins, transcripts and metabolites:
```
genesSM = ['b0180','b2708','b3197','b1094','b2224','b3256','b2316','b3255','b0185','b1101']
proteinsSM = ['P17115','P76461','P0ABD5','P00893','P15639','P0AC44','P0A6I6','P0A9M8']
metabolitesSM = ['CID:1549101','CID:175','CID:164533','CID:15938965','CID:21604863','CID:15939608','CID:27284','CID:1038','CID:16741146','CID:1778309']
transcriptomics_timeseriesSM ={}
proteomics_timeseriesSM ={}
metabolomics_timeseriesSM ={}
for t in tspan:
transcriptomics_timeseriesSM[t] = {gene: transcriptomics_timeseries[t][gene] for gene in genesSM}
proteomics_timeseriesSM[t] = {protein: proteomics_timeseries[t][protein] for protein in proteinsSM}
metabolomics_timeseriesSM[t] = {metab: metabolomics_timeseries[t][metab] for metab in metabolitesSM}
omg.write_omics_files(proteomics_timeseriesSM, 'proteomics' , user_params, line_name='WT', label='_WTSM')
omg.write_omics_files(transcriptomics_timeseriesSM,'transcriptomics', user_params, line_name='WT', label='_WTSM')
omg.write_omics_files(metabolomics_timeseriesSM, 'metabolomics' , user_params, line_name='WT', label='_WTSM')
```
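As an optional quick check that the EDD files listed at the top of this notebook were actually written, we can list the output folder (a small sketch; it assumes the notebook has been run and the folder exists):
```
import os
sorted(os.listdir(user_params['edd_omics_file_path']))
```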
```
# Exercise 1, WEIGHT: 1
# Define all pairs of integers (a,b), both greater than or equal to 1, such that their sum is less than or
# equal to 63 and their product is a multiple of 3
coppie = [(a,b) for a in range(1,64) for b in range(1,64) if a+b<=63 and (a*b)%3==0]
# Exercise 2, WEIGHT: 1.5
# Define a function that takes as input a list of integers <lista> and an integer <n> and returns
# True if there is a pair of numbers in the list whose sum equals <n>, False otherwise. If the list is
# empty the function returns False for any <n>
# Examples:
# esercizio2([1,4,5,-2,2],0) -> True (because -2+2 = 0)
# esercizio2([1,4,5,-2,2],1) -> False (no pair of values sums to 1)
# esercizio2([],3) -> False
def esercizio2(lista,n):
    for i in range(len(lista)):
        for j in range(i,len(lista)):
            if lista[i]+lista[j]==n:
                return True
    return False
# Exercise 3, WEIGHT: 3
# Define a function that takes as input two dictionaries (representing two objects).
# Both dictionaries have a 'nome' (name) key and a 'costo' (cost) key.
# The first key stores a word (a string) and the second a positive number.
# The function must internally build a new dictionary (a new object) of the same type (i.e. with the same keys):
# the 'nome' key stores the concatenation of the names of the two dictionaries, and the 'costo' key the product
# of the two positive numbers. The function then returns as output a string of the form:
# 'il nuovo oggetto con nome <name of the new object> è molto costoso' if the cost is greater than or equal to 100,
# 'il nuovo oggetto con nome <name of the new object> è poco costoso' otherwise.
# Examples:
# Case 1
# oggetto1 = {'nome': 'letto', 'costo':20 }
# oggetto2 = {'nome': 'pinocchio', 'costo':10 }
# esercizio3(oggetto1,oggetto2) -> 'il nuovo oggetto con nome lettopinocchio è molto costoso'
# Case 2
# oggetto1 = {'nome': 'busta', 'costo':1 }
# oggetto2 = {'nome': 'panno', 'costo':10 }
# esercizio3(oggetto1,oggetto2) -> 'il nuovo oggetto con nome bustapanno è poco costoso'
def esercizio3(left,right):
    new={'nome':left['nome']+right['nome'],'costo':left['costo']*right['costo']}
    if new['costo']>=100:
        return ('il nuovo oggetto con nome {} è molto costoso'.format(new['nome']))
    else:
        return ('il nuovo oggetto con nome {} è poco costoso'.format(new['nome']))
oggetto1 = {'nome': 'letto', 'costo':20 }
oggetto2 = {'nome': 'pinocchio', 'costo':10 }
print(esercizio3(oggetto1,oggetto2))
# Exercise 4, WEIGHT: 2
# Define a recursive function that implements the following sequence:
# a_0 = 1
# a_1 = 1
# a_n = 3*a_{n-1} + 2*a_{n-2}
def esercizio4(n):
    if n<=1:
        return 1
    return 3*esercizio4(n-1) + 2*esercizio4(n-2)
# Exercise 5, WEIGHT: 2
# Define a function that takes as input a positive integer (greater than 1) <n> and returns a list of n+1 integers
# [x_0,x_1,x_2,...,x_n] such that x_i is a random number between 0 and i inclusive.
# Examples:
# esercizio5(4) -> [0,0,2,1,3]
# esercizio5(4) -> [0,1,2,0,4]
# esercizio5(4) -> [0,1,1,3,2]
# esercizio5(1) -> [0,1] (for example)
import random
def esercizio5(n):
    return [random.randint(0,i) for i in range(n+1)]
# Exercise 6, WEIGHT: 2
# Define a function that takes as input two lists of integers of the same length and returns as output a
# new list whose i-th element is the maximum of the two elements at the same i-th position in the two lists.
# If both lists are empty, return the empty list.
# Examples:
# esercizio6([1,4,5,6],[4,1,2,6]) -> [4,4,5,6]
# esercizio6([],[]) -> []
massimo = lambda x,y : x if x>y else y
def esercizio6(left,right):
    return [massimo(left[i],right[i]) for i in range(len(left))]
# Exercise 7, WEIGHT: 4
# Define a function esercizio7(left,right) that takes as input two lists of integers <left> and <right> and returns
# the string "MINORE" if left is less than right, "MAGGIORE" if left is greater than right, and "INCOMPARABILI/UGUALI" otherwise.
# Here "less than" means either that <left> has fewer elements than <right> (whatever the values of the elements),
# or, if the lengths are the same, that every element of <left> is strictly less than the corresponding element of <right>
# (the comparison is made position by position; for example [1,2,3] is less than [2,4,6] because 1<2, 2<4 and 3<6).
# "Greater than" means either that <left> has more elements than <right> (whatever their values),
# or, if the lengths are the same, that every element of <left> is strictly greater than the corresponding element of <right>
# (again comparing position by position).
# In every other case the function must return "INCOMPARABILI/UGUALI".
# Examples:
# esercizio7([1,2,3],[1,1,1,1]) -> "MINORE"
# esercizio7([1,2,3,7],[100,100,100]) -> "MAGGIORE"
# esercizio7([0,1,1,3],[1,2,3,4]) -> "MINORE"
# esercizio7([1,2,3,4],[2,1,1,3]) -> "INCOMPARABILI/UGUALI"
# esercizio7([],[]) -> "INCOMPARABILI/UGUALI"
# esercizio7([1,2,1,4],[0,1,1,3]) -> "INCOMPARABILI/UGUALI"
# esercizio7([1,2,3,4],[0,1,1,3]) -> "MAGGIORE"
def esercizio7(left,right):
    if not left and not right:
        return "INCOMPARABILI/UGUALI"
    if len(left)<len(right):
        return "MINORE"
    if len(left)>len(right):
        return "MAGGIORE"
    if left[0]<right[0]:
        # "less than" requires every element to be strictly smaller
        for i in range(len(left)):
            if left[i]>=right[i]:
                return "INCOMPARABILI/UGUALI"
        return "MINORE"
    if left[0]>right[0]:
        # "greater than" requires every element to be strictly larger
        for i in range(len(left)):
            if left[i]<=right[i]:
                return "INCOMPARABILI/UGUALI"
        return "MAGGIORE"
    # first elements are equal, so neither strict ordering can hold
    return "INCOMPARABILI/UGUALI"
```
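As a quick sanity check, the expected outputs stated in the comments above can be turned into assertions (a minimal sketch that simply restates those examples):
```
# Quick checks against the examples stated in the exercise comments above.
assert esercizio2([1, 4, 5, -2, 2], 0)
assert not esercizio2([1, 4, 5, -2, 2], 1)
assert not esercizio2([], 3)
assert esercizio4(0) == 1 and esercizio4(1) == 1 and esercizio4(2) == 5  # 3*1 + 2*1
assert esercizio6([1, 4, 5, 6], [4, 1, 2, 6]) == [4, 4, 5, 6]
assert esercizio6([], []) == []
assert esercizio7([1, 2, 3], [1, 1, 1, 1]) == "MINORE"
assert esercizio7([1, 2, 1, 4], [0, 1, 1, 3]) == "INCOMPARABILI/UGUALI"
assert esercizio7([], []) == "INCOMPARABILI/UGUALI"
print("All example checks passed")
```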
|
github_jupyter
|
# Esercizio 1, PESO: 1
#Definire tutte le coppie di numeri (a,b) interi maggiori uguali a 1 tali che la loro somma è minore o uguale
#a 63 ed il loro prodotto è multiplo di 3
coppie = [(a,b) for a in range(1,64) for b in range(1,64) if a+b<=63 and (a*b)%3==0]
# Esercizio2, PESO: 1.5
# definire una funzione che prende in input una lista di numeri interi <lista> e un numero intero <n> e ritorna
# True se esiste una coppia di numeri nella lista che ha somma uguale a <n>, False altrimenti. Se la lista è
# vuota la funzione ritorna False per qualsiasi <n>
#Esempi:
#esercizio2([1,4,5,-2,2],0) -> True (perché -2+2 = 0)
#esercizio2([1,4,5,-2,2],1) -> False (non esiste nessuna coppia di valori che sommati danno 1)
#esercizio2([],3) -> False
def esercizio2(lista,n):
for i in range(len(lista)):
for j in range(i,len(lista)):
if lista[i]+lista[j]==n:
return True
return False
#Esercizio3, PESO: 3
#definire una funzione che prende in input due dizionari (che rappresentano due oggetti).
#I due dizionari hanno entrambi una chiave 'nome' e una chiave 'costo'.
#La prima chiave memorizza una parola (una stringa) e la seconda un numero positivo.
#La funzione deve generare internamente un nuovo dizionario (un nuovo oggetto) dello stesso tipo (ovvero con le stesse chiavi),
# nella chiave 'nome' è memorizzata la concatenazione dei nomi dei due dizionari, e nella chiave 'costo' il prodotto
# dei due interi positivi. La funzione a questo punto ritorna come output una stringa del tipo:
# 'il nuovo oggetto con nome <nome del nuovo oggetto> è molto costoso' se il costo è maggiore uguale a 100,
# 'il nuovo oggetto con nome <nome del nuovo oggetto> è poco costoso' altrimenti.
# Esempi:
#Caso 1
# oggetto1 = {'nome': 'letto', 'costo':20 }
# oggetto2 = {'nome': 'pinocchio', 'costo':10 }
#esercizio3(oggetto1,oggetto2) -> 'il nuovo oggetto con nome lettopinocchio è molto costoso'
#Caso 2
# oggetto1 = {'nome': 'busta', 'costo':1 }
# oggetto2 = {'nome': 'panno', 'costo':10 }
#esercizio3(oggetto1,oggetto2) -> 'il nuovo oggetto con nome bustapanno è poco costoso'
def esercizio3(left,right):
new={'nome':left['nome']+right['nome'],'costo':left['costo']*right['costo']}
if(new['costo'])>=100:
return ('il nuovo oggetto con nome {} è molto costoso'.format(new['nome']))
else:
return ('il nuovo oggetto con nome {} è poco costoso'.format(new['nome']))
oggetto1 = {'nome': 'letto', 'costo':20 }
oggetto2 = {'nome': 'pinocchio', 'costo':10 }
print(esercizio3(oggetto1,oggetto2))
#Esercizio4, PESO: 2
#definire una funzione ricorsiva che implementi la seguente successione:
# a_0 = 1
# a_1 = 1
# a_n = 3*a_{n-1} +2*a_{n-2}
def esercizio4(n):
if n<=1:
return 1
return 3*esercizio4(n-1) +2*esercizio4(n-2)
#Esercizio5, PESO: 2
#definire una funzione che prende in input un numero intero positivo (maggiore di 1) <n> e ritorna una lista di n+1 numeri interi
#[x_0,x_1,x_2,...,x_n] tali che x_i è un numero randomico compreso tra 0 ed i incluso.
#esempi:
#esercizio5(4) -> [0,0,2,1,3]
#esercizio5(4) -> [0,1,2,0,4]
#esercizio5(4) -> [0,1,1,3,2]
#esercizio5(1) -> [0]
import random
def esercizio5(n):
return [random.randint(0,i) for i in range(n+1)]
#Esercizio6, PESO: 2
#definire una funzione che prende in input due liste di interi della medesima lunghezza, e ritorna come output una
# una nuova lista che ha come elemento i-esimo il valore massimo tra i due elementi delle due liste della medesima posizione
#i-esima. Se le liste sono entrambe vuote ritorno la lista vuota.
#Esempi:
#esercizio6([1,4,5,6],[4,1,2,6]) -> [4,4,5,6]
#esercizio6([],[]) -> []
massimo = lambda x,y : x if x>y else y
def esercizio6(left,right):
return [massimo(left[i],right[i]) for i in range(len(left))]
#Esercizio7, WEIGHT: 4
#Define a function esercizio7(left,right) that takes as input two lists of integers <left> and <right> and returns
#the string "MINORE" if left is smaller than right, "MAGGIORE" if left is greater than right,
#and "INCOMPARABILI/UGUALI" otherwise.
#"Smaller" means that either <left> has fewer elements than <right> (whatever the values of the elements are),
#or, if the lengths are equal, every element of <left> must be strictly smaller than the corresponding element of <right>
#(the comparison is position-wise; for example [1,2,3] is smaller than [2,4,6] because 1 < 2, 2 < 4 and 3 < 6).
#"Greater" means that either <left> has more elements than <right> (whatever their values are),
#or, if the lengths are equal, every element of <left> must be strictly greater than the corresponding element of <right>
#(position-wise comparison).
#In every other case the function must return "INCOMPARABILI/UGUALI".
#Examples:
# esercizio7([1,2,3],[1,1,1,1]) -> "MINORE"
# esercizio7([1,2,3,7],[100,100,100]) -> "MAGGIORE"
# esercizio7([0,1,1,3],[1,2,3,4]) -> "MINORE"
# esercizio7([1,2,3,4],[2,1,1,3]) -> "INCOMPARABILI/UGUALI"
# esercizio7([],[]) -> "INCOMPARABILI/UGUALI"
# esercizio7([1,2,1,4],[0,1,1,3]) -> "INCOMPARABILI/UGUALI"
# esercizio7([1,2,3,4],[0,1,1,3]) -> "MAGGIORE"
def esercizio7(left,right):
    # A shorter list is always "MINORE", a longer one always "MAGGIORE".
    if len(left) < len(right):
        return "MINORE"
    if len(left) > len(right):
        return "MAGGIORE"
    # Same length: two empty lists are equal.
    if not left:
        return "INCOMPARABILI/UGUALI"
    # Same length: compare element by element (strict inequalities, position-wise).
    if all(l < r for l, r in zip(left, right)):
        return "MINORE"
    if all(l > r for l, r in zip(left, right)):
        return "MAGGIORE"
    return "INCOMPARABILI/UGUALI"
# Topic Modeling Assessment Project
A dataset of over 400,000 Quora questions that have no labeled category; the goal is to find 20 categories to assign these questions to. The .csv file of these text questions can be found under the Topic-Modeling folder.
#### Import pandas and read in the quora_questions.csv file.
```
import pandas as pd
import numpy as np
data = pd.read_csv('quora_questions.csv',sep = ',')
data.columns
data.info()
data.head()
```
# Preprocessing
#### Use TF-IDF Vectorization to create a vectorized document term matrix. You may want to explore the max_df and min_df parameters.
```
from sklearn.feature_extraction.text import TfidfVectorizer
tv = TfidfVectorizer(max_df=0.95,min_df=2, stop_words='english')
data_tran = tv.fit_transform(data['Question'])
data_tran
```
# Non-negative Matrix Factorization
#### Using Scikit-Learn create an instance of NMF with 20 expected components. (Use random_state=42)..
```
from sklearn.decomposition import NMF
# 20 components = the 20 topics we are looking for; use a variable name that does not shadow the NMF class
nmf_model = NMF(n_components=20, random_state=42)
nmf_model.fit(data_tran)
```
#### Print out the top 15 most common words for each of the 20 topics.
```
for index, topic in enumerate(nmf_model.components_):
    print(f'\nTHE TOP 15 WORDS FOR TOPIC #{index}')
    print([tv.get_feature_names()[i] for i in topic.argsort()[-15:]])
print('\n')
```
#### Add a new column to the original quora dataframe that labels each question into one of the 20 topic categories.
```
data.head()
nmf_tran = nmf_model.transform(data_tran)
nmf_tran[0].argmax()
topic = nmf_tran.argmax(axis = 1)
topic[:10]
data['Topic'] = topic
data.head()
```
# Trying vector on Topic 11
```
word_list=pd.Series( ['money', 'modi', 'currency', 'economy', 'think', 'government', 'ban', 'banning', 'black', 'indian', 'rupee', 'rs', '1000', 'notes', '500'])
import spacy, nltk
import en_core_web_md
#nltk.download('vader_lexicon')
nlp = en_core_web_md.load()
from scipy import spatial
def exp(c):
    # Map each word in the pandas Series to its spaCy vocabulary vector
    return c.apply(lambda x : nlp.vocab[x].vector)
def vector_math(words):
    computed_similarities = []
    cosine_similarity = lambda x, y: 1 - spatial.distance.cosine(x, y)
    word_vec = exp(words)    # one vector per input word
    new_vec = sum(word_vec)  # sum them into a single "topic" vector
for word in nlp.vocab:
# Ignore words without vectors and mixed-case words:
if word.has_vector:
if word.is_lower:
if word.is_alpha:
similarity = cosine_similarity(new_vec, word.vector)
computed_similarities.append((word, similarity))
computed_similarities = sorted(computed_similarities, key=lambda item: -item[1])
result = [w[0].text for w in computed_similarities[:10]]
#print(result)
return result
vector_math(word_list)
topicword = []
for topic in nmf_model.components_[:3]:
data_word =pd.Series([tv.get_feature_names()[i] for i in topic.argsort()[-50:]])
print(data_word)
topicword.append(vector_math(data_word))
topicword
```
```
%run dataset.ipynb
# The abalone dataset
class AbaloneDataset(Dataset):
def __init__(self):
super(AbaloneDataset, self).__init__('abalone', 'regression')
rows, _ = load_csv("data\\abalone.csv")
xs = np.zeros([len(rows), 10])
ys = np.zeros([len(rows), 1])
for n, row in enumerate(rows):
if row[0] == 'I':
xs[n, 0] = 1
if row[0] == 'M':
xs[n, 1] = 1
if row[0] == 'F':
xs[n, 2] = 1
xs[n, 3:] = row[1:-1]
ys[n, :] = row[-1:]
self.shuffle_data(xs, ys, 0.8)
def visualize(self, xs, estimates, answers):
for n in range(len(xs)):
x, est, ans = xs[n], estimates[n], answers[n]
xstr = vector_to_str(x, '%4.2f')
            print('{} => 추정 {:4.1f} : 정답 {:4.1f}'.format(xstr, est[0], ans[0]))  # 추정 = estimate, 정답 = answer
# The pulsar dataset
class PulsarDataset(Dataset):
def __init__(self):
super(PulsarDataset, self).__init__('pulsar', 'binary')
rows, _ = load_csv("data\\pulsar_stars.csv")
data = np.asarray(rows, dtype='float32')
self.shuffle_data(data[:, :-1], data[:, -1:], 0.8)
        self.target_names = ['별', '펄서']  # '별' = star, '펄서' = pulsar
def visualize(self, xs, estimates, answers):
for n in range(len(xs)):
x, est, ans = xs[n], estimates[n], answers[n]
xstr = vector_to_str(x, '%5.1f', 3)
estr = self.target_names[int(round(est[0]))]
astr = self.target_names[int(round(ans[0]))]
rstr = 'O'
if estr != astr:
rstr = 'X'
            print('{} => 추정 {}(확률 {:4.2f}) : 정답 {} => {}'.format(xstr, estr, est[0], astr, rstr))  # 추정 = estimate, 확률 = probability, 정답 = answer
# The steel dataset
class SteelDataset(Dataset):
def __init__(self):
super(SteelDataset, self).__init__('steel', 'select')
rows, headers = load_csv("data\\faults.csv")
data = np.asarray(rows, dtype='float32')
self.shuffle_data(data[:, :-7], data[:, -7:], 0.8)
self.target_names = headers[-7:]
    def visualize(self, xs, estimates, answers):
        # show_select_results reports on all samples at once, so call it a single time
        show_select_results(estimates, answers, self.target_names)
# The pulsar select dataset
class PulsarSelectDataset(Dataset):
def __init__(self):
super(PulsarSelectDataset, self).__init__('pulsarselect', 'select')
rows, _ = load_csv("data\\pulsar_stars.csv")
data = np.asarray(rows, dtype='float32')
self.shuffle_data(data[:, :-1], onehot(data[:, -1], 2), 0.8)
        self.target_names = ['별', '펄서']  # '별' = star, '펄서' = pulsar
def visualize(self, xs, estimates, answers):
show_select_results(estimates, answers, self.target_names)
```
# Measurements in objects in tiled images
For some specific image analysis tasks it might be possible to overcome limitations such as when applying connected component labeling.
For example, when measuring the size of objects and if these objects are limited in size, it is not necessary to combine intermediate image processing results in big images.
We could just measure object properties for all objects in tiles and then combine the result of the quantification.
```
import numpy as np
import dask
import dask.array as da
from skimage.data import cells3d
from skimage.io import imread
import pyclesperanto_prototype as cle
from pyclesperanto_prototype import imshow
```
Our starting point is again a binary image showing segmented objects.
```
image = imread("../../data/blobs.tif") > 128
imshow(image)
```
This time, we would like to measure the size of the objects and visualize that in a parametric image. For demonstration purposes, we execute that operation first on the whole example image.
```
def area_map(image):
"""
Label objects in a binary image and produce a pixel-count-map image.
"""
labels = cle.connected_components_labeling_box(image)
result = cle.pixel_count_map(labels)
return np.asarray(result)
reference = area_map(image)
cle.imshow(reference, colorbar=True)
```
If we process the same image in tiles, we will get slightly wrong results because of the tiled connected-component-labeling issue demonstrated earlier.
```
# tile the image
tiles = da.from_array(image, chunks=(128, 128))
# setup the operation we want to apply
procedure = area_map
# setup the tiling
tile_map = da.map_blocks(procedure, tiles)
# compute result
result = tile_map.compute()
# visualize
imshow(result, colorbar=True)
```
Again, the errors are visible at the border and we can visualize that by direct comparison:
```
absolute_error = cle.absolute_difference(result, reference)
cle.imshow(absolute_error, colorbar=True)
```
To prevent this error, we need to think again about processing the image tiles with an overlap. In this particular example, we are not executing any operation that takes neighboring pixels into account. Hence, we cannot estimate the necessary overlap from such parameters. We need to take the maximum size (diameter) of the objects into account. We could also do this empirically, as before. Therefore, let's compute the mean squared error, first of the two example results above:
```
cle.mean_squared_error(result, reference)
```
And next, we can compute that error in a loop varying the overlap using [dask.array.map_overlay](https://docs.dask.org/en/stable/array-overlap.html) size while processing the image in tiles. Note that we're setting `boundary=0` here, because otherwise objects would extend in the binary image and size measurements would be wrong.
```
for overlap_width in range(0, 30, 5):
print("Overlap width", overlap_width)
tile_map = da.map_overlap(procedure, tiles, depth=overlap_width, boundary=0)
result = tile_map.compute()
print("mean squared error", cle.mean_squared_error(result, reference))
print("-----------------------------------")
```
The empirically determined overlap where this error becomes 0 is an optimistic estimate. When using this method on your own data, make sure you apply an overlap that is larger than the determined value.
**Note:** The `compute` and `imshow` functions may not work on big datasets as the images may not fit in computer memory. We are using it here for demonstration purposes.
```
overlap_width = 30
tile_map = da.map_overlap(procedure, tiles, depth=overlap_width, boundary=0)
result = tile_map.compute()
cle.imshow(tile_map, colorbar=True)
```
TSG077 - Kibana logs
====================
Steps
-----
### Parameters
```
import re
tail_lines = 500
pod = None # All
container = "kibana"
log_files = [ "/var/log/supervisor/log/kibana*.log" ]
expressions_to_analyze = []
log_analyzer_rules = []
```
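The two lists above are empty by default. For illustration only (the regular expression and the rule below are hypothetical placeholders, not shipped rules), entries could look like the following: `expressions_to_analyze` holds compiled regular expressions that select log lines of interest, and each `log_analyzer_rules` entry is read by the analysis cell at the end as `rule[0]` (text to find), `rule[2]` (linked notebook title) and `rule[3]` (notebook path).
```
# Hypothetical examples only - replace with real rules for your environment.
expressions_to_analyze = [
    re.compile(r".*\[error\].*")   # keep only lines tagged as errors
]
log_analyzer_rules = [
    # [text to find, unused, linked notebook title, linked notebook path]
    ["Unable to connect to the server", None,
     "TSG010 - Get configuration contexts",
     "../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb"]
]
```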
### Instantiate Kubernetes client
```
# Instantiate the Python Kubernetes client into 'api' variable
import os
from IPython.display import Markdown
try:
from kubernetes import client, config
from kubernetes.stream import stream
except ImportError:
# Install the Kubernetes module
import sys
!{sys.executable} -m pip install kubernetes
try:
from kubernetes import client, config
from kubernetes.stream import stream
except ImportError:
display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
raise
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
try:
config.load_kube_config()
except:
display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))
raise
api = client.CoreV1Api()
print('Kubernetes client instantiated')
```
### Get the namespace for the big data cluster
Get the namespace of the Big Data Cluster from the Kubernetes API.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
except IndexError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print('The kubernetes namespace for your big data cluster is: ' + namespace)
```
### Get tail for log
```
# Display the last 'tail_lines' of files in 'log_files' list
pods = api.list_namespaced_pod(namespace)
entries_for_analysis = []
for p in pods.items:
if pod is None or p.metadata.name == pod:
for c in p.spec.containers:
if container is None or c.name == container:
for log_file in log_files:
print (f"- LOGS: '{log_file}' for CONTAINER: '{c.name}' in POD: '{p.metadata.name}'")
try:
output = stream(api.connect_get_namespaced_pod_exec, p.metadata.name, namespace, command=['/bin/sh', '-c', f'tail -n {tail_lines} {log_file}'], container=c.name, stderr=True, stdout=True)
except Exception:
print (f"FAILED to get LOGS for CONTAINER: {c.name} in POD: {p.metadata.name}")
else:
for line in output.split('\n'):
for expression in expressions_to_analyze:
if expression.match(line):
entries_for_analysis.append(line)
print(line)
print("")
print(f"{len(entries_for_analysis)} log entries found for further analysis.")
```
### Analyze log entries and suggest relevant Troubleshooting Guides
```
# Analyze log entries and suggest further relevant troubleshooting guides
from IPython.display import Markdown
print(f"Applying the following {len(log_analyzer_rules)} rules to {len(entries_for_analysis)} log entries for analysis, looking for HINTs to further troubleshooting.")
print(log_analyzer_rules)
hints = 0
if len(log_analyzer_rules) > 0:
for entry in entries_for_analysis:
for rule in log_analyzer_rules:
if entry.find(rule[0]) != -1:
print (entry)
display(Markdown(f'HINT: Use [{rule[2]}]({rule[3]}) to resolve this issue.'))
hints = hints + 1
print("")
print(f"{len(entries_for_analysis)} log entries analyzed (using {len(log_analyzer_rules)} rules). {hints} further troubleshooting hints made inline.")
print("Notebook execution is complete.")
```
```
import numpy as np
import matplotlib as mpl
#mpl.use('pdf')
import matplotlib.pyplot as plt
import numpy as np
plt.rc('font', family='serif', serif='Times')
plt.rc('text', usetex=True)
plt.rc('xtick', labelsize=6)
plt.rc('ytick', labelsize=6)
plt.rc('axes', labelsize=6)
#axes.linewidth : 0.5
plt.rc('axes', linewidth=0.5)
#ytick.major.width : 0.5
plt.rc('ytick.major', width=0.5)
plt.rcParams['xtick.direction'] = 'in'
plt.rcParams['ytick.direction'] = 'in'
plt.rc('ytick.minor', visible=True)
#plt.style.use(r"..\..\styles\infocom.mplstyle") # Insert your save location here
# width as measured in inkscape
fig_width = 3.487
#height = width / 1.618 / 2
fig_height = fig_width / 1.3 / 2
#cc_folder_list = ["SF_new_results/", "capacity_results/", "BF_new_results/"]
nc_folder_list = ["SF_new_results_NC/", "capacity_resultsNC/", "BF_new_results_NC/"]
#cc_folder_list = ["failure20stages-new-rounding/" + e for e in cc_folder_list]
nc_folder_list = ["num-reconfig/" + e for e in nc_folder_list]
file_list = ["LimitedReconfig120.csv", "Any-reconfig120.csv"]
#print(cc_folder_list)
print(nc_folder_list)
nc_node_data = np.full((3, 3), 0)
max_stage = 20
selected_stage = 20
for i in range(3):
for j in range(2):
with open(nc_folder_list[i]+file_list[j], "r") as f:
if j != 0:
f1 = f.readlines()
start_line = 0
for line in f1:
if line.find("%Stage") >= 0:
break
else:
start_line = start_line + 1
#print(start_line)
#print(len(f1))
line = f1[selected_stage+start_line]
line = line.split(",")
#print(line)
nc_node_data[2, i] = float(line[4])
else:
f1 = f.readlines()
start_line = 0
start_line1 = 0
for line in f1:
if line.find("%Stage") >= 0:
break
else:
start_line = start_line + 1
for index in range(start_line+max_stage+1, len(f1)):
if f1[index].find("%Stage") >= 0:
start_line1 = index
print("OK")
break
else:
start_line1 = start_line1 + 1
print(start_line, start_line1)
line = f1[selected_stage+start_line]
line = line.split(",")
print(line)
nc_node_data[0, i] = float(line[4])
#mesh3data[2, index] = int(line[1])
line = f1[selected_stage+start_line1]
line = line.split(",")
print(line)
nc_node_data[1, i] = float(line[4])
print(nc_node_data)
nc_dc_data = np.full((3, 3), 0)
max_stage = 20
selected_stage = 20
for i in range(3):
for j in range(2):
with open(nc_folder_list[i]+file_list[j], "r") as f:
if j != 0:
f1 = f.readlines()
start_line = 0
for line in f1:
if line.find("%Stage") >= 0:
break
else:
start_line = start_line + 1
#print(start_line)
#print(len(f1))
line = f1[selected_stage+start_line]
line = line.split(",")
nc_dc_data[2, i] = float(line[6])
else:
f1 = f.readlines()
start_line = 0
start_line1 = 0
for line in f1:
if line.find("%Stage") >= 0:
break
else:
start_line = start_line + 1
for index in range(start_line+max_stage+1, len(f1)):
if f1[index].find("%Stage") >= 0:
start_line1 = index
break
else:
start_line1 = start_line1 + 1
line = f1[selected_stage+start_line]
line = line.split(",")
nc_dc_data[0, i] = float(line[6])
#mesh3data[2, index] = int(line[1])
line = f1[selected_stage+start_line1]
print(start_line, start_line1)
line = line.split(",")
nc_dc_data[1, i] = float(line[6])
print(nc_dc_data)
import numpy as np
N = 3
ind = np.arange(N)
width = 1 / 4
x = [0, '20', '30', '40']
x_tick_label_list = ['20', '30', '40']
#colors = ['green', 'red', 'purple']
colors = ['C2', 'C3', 'C4']
fig, (ax1, ax2) = plt.subplots(1, 2)
#ax1.bar(x, objective)
#ax1.bar(x, objective[0])
#label_list = ['Lim-rec(5, 0)', 'Lim-rec(5, 2)', 'Any-rec']
label_list = ['Any-rem(5, 0)', 'Any-rem(5, 2)', 'Any-rem']
patterns = ('////////','\\\\\\\\','----', 'ooo', 'xxx', '\\', '\\\\','++', '*', 'O', '.')
plt.rcParams['hatch.linewidth'] = 0.25 # previous pdf hatch linewidth
#plt.rcParams['hatch.linewidth'] = 1.0 # previous svg hatch linewidth
#plt.rcParams['hatch.color'] = 'r'
for i in range(3):
ax1.bar(ind + width * (i-1), nc_node_data[i], width, label=label_list[i],
#alpha=0.7)
color=colors[i],
hatch=patterns[i+2], alpha=0.7)
#yerr=error[i], ecolor='black', capsize=1)
ax1.grid(lw = 0.25)
ax2.grid(lw = 0.25)
ax1.set_xticklabels(x)
ax1.set_ylabel('Reconfigurations of \n normal nodes')
ax1.set_xlabel('Percentage of substrate failures (\%)')
#ax1.set_ylabel('Objective value')
#ax1.set_xlabel('Recovery Scenarios')
ax1.xaxis.set_label_coords(0.5,-0.17)
ax1.yaxis.set_label_coords(-0.17,0.5)
#ax1.legend(loc='upper center', bbox_to_anchor=(0.5, 1.05),
# ncol=3, fancybox=True, shadow=True, fontsize='small')
for i in range(3):
ax2.bar(ind + width * (i-1), nc_dc_data[i], width, label=label_list[i],
color=colors[i],
#alpha=0.7)
hatch=patterns[i+2], alpha=0.7)
ax2.set_xticklabels(x)
ax2.set_ylabel('Reconfigurations of \n DC nodes')
ax2.set_xlabel('Percentage of substrate failures (\%)')
ax2.xaxis.set_label_coords(0.5,-0.17)
ax2.yaxis.set_label_coords(-0.17,0.5)
ax1.legend(loc='upper center', bbox_to_anchor=(1.16, 1.2),
ncol=5, prop={'size': 5}, handletextpad=0.2)
fig.set_size_inches(fig_width, fig_height)
mpl.pyplot.subplots_adjust(wspace = 0.35)
#fig.subplots_adjust(left=.125, bottom=.235, right=.975, top=.88)
#fig.subplots_adjust(left=.125, bottom=.235, right=.97, top=.85)
#ax1.grid(color='b', ls = '-.', lw = 0.25)
ax1.set_title('(a)', y=-0.45, fontsize=7)
ax2.set_title('(b)', y=-0.45, fontsize=7)
fig.subplots_adjust(left=.10, bottom=.235, right=.97, top=.85)
plt.show()
fig.savefig('test-heuristic-num-reconfig.pdf')
```
```
import os
from os import listdir, makedirs
from os.path import join, exists, expanduser
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential, Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
import tensorflow as tf
```
## Download the dataset.
### The original dataset can be downloaded from https://github.com/Horea94/Fruit-Images-Dataset
### Reference: Horea Muresan, Mihai Oltean, Fruit recognition from images using deep learning, Acta Univ. Sapientiae, Informatica Vol. 10, Issue 1, pp. 26-42, 2018.
```
!wget https://www.dropbox.com/s/l1525goi53teden/fruits-360.zip?dl=0
!mv fruits-360.zip\?dl\=0 fruits-360.zip
!unzip fruits-360.zip
!rm fruits-360.zip
# dimensions of our images.
img_width, img_height = 224, 224
train_data_dir = './train/'
validation_data_dir = './valid/'
nb_train_samples = 31688
nb_validation_samples = 10657
batch_size = 64
train_datagen = ImageDataGenerator(
rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
validation_data_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='categorical')
```
### An alternative way to split the train data folder into train and validation sets is given below.
### This is useful when you just have two folders for Train and Test.
```
# total_datagen = ImageDataGenerator(
# rescale=1. / 255,
# shear_range=0.2,
# zoom_range=0.2,
# horizontal_flip=True,
# validation_split=0.2)
# test_datagen = ImageDataGenerator(rescale=1. / 255)
# train_generator = total_datagen.flow_from_directory(
# train_data_dir,
# target_size=(img_height, img_width),
# batch_size=batch_size,
# class_mode='categorical',
# subset="training")
# validation_generator = total_datagen.flow_from_directory(
# validation_data_dir,
# target_size=(img_height, img_width),
# batch_size=batch_size,
# class_mode='categorical',
# subset="validation")
```
## Create the ResNet50 Model for transfer learning
```
resnet_base = applications.ResNet50(weights='imagenet', include_top=False)
```
### We load the pre-trained ResNet50 network from disk. Do notice how we have
### included the parameter include_top=False – supplying this value indicates
### that the final fully-connected layers should not be included in the architecture.
### Therefore, when forward propagating an image through the network, we’ll obtain the
### feature values after the final POOL layer rather than the probabilities produced by
### the softmax classifier in the FC layers.
```
x = resnet_base.output
x = GlobalAveragePooling2D()(x)
x = Dense(512, activation='relu')(x)
predictions = Dense(81, activation='softmax')(x)  # 81 output classes, one per fruit category in this dataset snapshot
resnet_transfer = Model(inputs=resnet_base.input, outputs=predictions)
resnet_transfer.compile(loss='categorical_crossentropy',
                        optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
                        metrics=['accuracy'])
resnet_transfer.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=5, shuffle=True, verbose=1,
    max_queue_size=10,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
```
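Optionally, a common transfer-learning variant (not done above) is to first freeze the pre-trained convolutional base so that only the new Dense head is trained. A minimal sketch, assuming the objects defined in the previous cell:
```
# Freeze every layer of the pre-trained base so its weights are not updated during training.
for layer in resnet_base.layers:
    layer.trainable = False

# Re-compile after changing the trainable flags, then train as above.
resnet_transfer.compile(loss='categorical_crossentropy',
                        optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
                        metrics=['accuracy'])
```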
# Convolutional Neural Networks for Visual Recognition
## What do we want?

In the previous lecture, we covered how to calculate the score function, the SVM loss, and the full loss (data loss + regularization).
Now we want to find the parameters W that correspond to the lowest loss.
**Why?**
Because we want to minimize the loss function, and we have a preference for simpler models for better generalization.
We can achieve this with **optimization**
We can compute the gradient with
- numerical gradient method
- slow, approximate, easy to write
- analytic gradient
- fast, exact, error-prone
In practice, derive analytic gradient then check the implementation with numerical gradient
## Computational graphs

**What is a computation graph?**
We can use this kind of graph in order to represent any function where the nodes of the graph are steps of computation that we go through.
The nodes of the graph represent the individual steps of the computation.
This example is a linear classifier.
inputs: X and W
multiplication node: represents the matrix multiplication
vector of scores: multiplication of parameters W and data X
[hinge loss](https://www.notion.so/modulabs/Lec03-1-Loss-Function-5e775d9c809a433eb243956d23bd379b#2839e4efd7854a9399f56e0f0fec5bb2): data loss term.
Total Loss: Sum of regularization term and the data term
**Advantage**
we can use backpropagation!
- To compute the gradients, we apply the chain rule recursively to every variable in the computational graph.
- really useful when working with complex functions
- Convolutional network (AlexNet)
- Neural Turing Machine
Computing these gradients by hand for models like these would be madness.
## Backpropagation
### example1

Backpropagation is a recursive application of the chain rule.
Because of the chain rule we start from the end of the graph, so the gradients are computed from back to front.
y and f are not directly connected, so we use the chain rule.
The derivative of f with respect to y can be written as the product of the derivative of f with respect to q
and the derivative of q with respect to y.
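Written out, with q the intermediate node between y and f (for instance q = x + y when f(x, y, z) = (x + y)·z):

$$\frac{\partial f}{\partial y} = \frac{\partial f}{\partial q}\cdot\frac{\partial q}{\partial y}$$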
### example2

If we look at what we did from a different perspective, as nodes, we see the loss value L coming back through backpropagation. We use the chain rule to multiply the local gradient by the upstream gradient coming down, in order to get the gradient with respect to the input.
### example3

We can define the computational nodes at any granularity we want.
In practice, we can group some of the nodes together as long as we can write down the local gradient for the function.
For example, we can use the sigmoid function to shorten the nodes.
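For reference, the sigmoid $\sigma(x) = 1/(1+e^{-x})$ has the convenient local gradient $\frac{d\sigma}{dx} = (1-\sigma(x))\,\sigma(x)$, which is why collapsing those nodes into a single sigmoid gate is attractive.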
**Trade off**
how much math you do by hand (to get a simpler graph) vs. how simple you want each local gradient to be
## patterns in backward flow

## Gradients for vectorized code
The equations stay the same; the only difference is that the local gradient is now a Jacobian matrix: the derivative of each element of z with respect to each element of x.


## implementation



## summary
- neural nets will be very large: impractical to write down gradient formula by hand for all parameters
- backpropagation = recursive application of the chain rule along a computational graph to compute the gradients of all inputs/parameters/intermediates
- implementations maintain a graph structure, where the nodes implement the forward() / backward() API
- forward: compute result of an operation and save any intermediates needed for gradient computation in memory
- backward: apply the chain rule to compute the gradient of the loss function with respect to the inputs
# neural networks

## activation functions

## summary
- We arrange neurons into fully-connected layers
- The abstraction of a layer has the nice property that it allows us to use efficient vectorized code (e.g. matrix multiplies)
- Neural networks are not really neural
- Next time: Convolutional Neural Networks
<b>Why do we need to analyze the data ?</b>
- to understand our client's behaviour
- to make business decisions based on data
- to validate our business decisions
<b>What tools are you using to analyze the data ?</b>
- excel, sql, programming languages (java, python, etc), big data tools
<b>What does Big Data mean for you ?</b>
- Big Data can mean a large volume of data which cannot be stored and processed efficiently by traditional data management tools.
<b>Why SQL on Big Data ? </b>
- SQL is one of the most common skills. Almost any developer knows how to write a simple SQL query.
- If a big data framework supports SQL, all of a sudden everyone can do analysis on big data! <br>
There are many SQL tools for big data: Amazon Athena and Redshift from Amazon, BigQuery from Google, and Hive and Spark SQL as open source tools.
<img src='https://github.com/tlapusan/itdays-2019/blob/master/bigdata/resources/images/spark_logo.png?raw=true' />
Most of the time we are used to coding/working on a single machine (laptop). But there are moments in a developer's life when a single machine is not powerful enough, especially when we are dealing with the processing of a large volume of data. <br>
One idea would be to use a cluster of machines and use all their resources (CPU, RAM, HDD). But if we are talking about a cluster of machines that communicate with each other, that means we would need networking, multithreading, etc. skills...scary.
An ideal scenario would be to have a framework that handles this hard work and gives us the impression that we are still working on a single machine. This is what Apache Spark does!
Apache Spark is a distributed computing engine. It is able to process a large volume of data, for tasks like batch or streaming processing, SQL, machine learning and graph processing. wow!
How can we deploy/use it ? <br>
Even if Spark looks like a very big framework, we can install it easily on our laptops and just start coding.
The best part...the code that we write on our laptops can be deployed and run on a cluster with hundreds of servers, without any changes ;) <br>
Supported programming languages : Java, Scala, Python, R.
# Install Spark
We can install Spark for Python (pyspark) using pip package manager.
```
#!pip install pyspark
```
# Imports
```
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
import matplotlib.pyplot as plt
```
# Init SparkSession
SparkSession is the entry point for each Spark application. <br>
When we instantiate a SparkSession, we create a driver process from where we can execute user-defined code on our big datasets.
<img src='https://github.com/tlapusan/itdays-2019/blob/master/bigdata/resources/images/spark_application_architecture.png?raw=true'/>
```
spark = SparkSession.builder.\
master("local[4]").\
appName("Spark-SQL").\
getOrCreate()
spark
```
# Read data
Spark can read/write data in a variety of data formats, like csv, json, parquet, jdbc
Dataset description : The data is related to the direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact to the same client was required, in order to assess whether the product (bank term deposit) would be subscribed ('yes') or not ('no'). <br>
```
client_bank = spark.read.parquet("../resources/data/parquet/client_bank/")
client_campaign = spark.read.parquet("../resources/data/parquet/client_campaing/")
type(client_bank)
```
# Dataframe
Dataframe is an immutable distributed table-like collection of data. It has a schema which defines the column names and data types.
<img src='https://github.com/tlapusan/itdays-2019/blob/master/bigdata/resources/images/dataframe_structure.png?raw=true' width='70%'/>
TODO
- the role of immutability for dataframe ?
- penalty of using python in spark
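To make the definition above concrete, here is a tiny DataFrame built in memory (the data is made up for illustration and is not part of the bank dataset):
```
# Build a small DataFrame from Python objects; the column names are given explicitly, the types are inferred.
people = spark.createDataFrame([("Ana", 34), ("Ion", 28)], ["name", "age"])
people.printSchema()

# DataFrames are immutable: transformations return a new DataFrame, 'people' itself is unchanged.
adults = people.where(F.col("age") > 30)
adults.show()
```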
```
client_bank.show(10)
client_bank.schema
client_bank.printSchema()
```
Data related with bank client information : <br>
<b>id</b> - phone call id <br>
<b>age</b> - client age <br>
<b>job</b> - client job <br>
<b>marital</b> - marital status (categorical: 'divorced','married','single','unknown'; note: 'divorced' means divorced or widowed) <br>
<b>education</b> - client education 'basic.4y','basic.6y','basic.9y','high.school','illiterate','professional.course','university.degree','unknown' <br>
<b>default</b> - has credit in default? (categorical: 'no','yes','unknown'), default is failure to meet the legal obligations (or conditions) of a loan <br>
<b>housing</b> - has housing loan? (categorical: 'no','yes','unknown') <br>
<b>loan</b> - has personal loan? (categorical: 'no','yes','unknown') <br>
<b>subscribed</b> - if the client subscribed to the bank term deposit (categorical: 'no','yes')
```
# Data related with the phone call contact
client_campaign.show(10)
client_campaign.printSchema()
```
<b>id</b> phone call id <br>
<b>contact</b> - contact communication type (categorical: 'cellular','telephone') <br>
<b>month</b> - contact month of year (categorical: 'jan', 'feb', 'mar', ..., 'nov', 'dec') <br>
<b>day_of_week</b> - contact day of the week (categorical: 'mon','tue','wed','thu','fri')<br>
<b>duration</b> - contact duration, in seconds (numeric). <br>
# Data analysis
Spark offers multiple ways to analyze data. The most commonly used are DataFrame API and Spark SQL.
## Dataframe API
### Column rename
https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.withColumnRenamed
```
client_bank = client_bank.\
withColumnRenamed("default", "default_credit").\
withColumnRenamed("housing", "housing_loan").\
withColumnRenamed("loan", "personal_loan")
client_bank.printSchema()
# Exercise
# rename column 'contact' into 'contact_type' for the client_campaign dataframe
client_campaign = client_campaign.withColumnRenamed("contact", "contact_type")
client_campaign.printSchema()
```
### select columns
https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.select
```
client_bank.\
select("*").\
show()
client_bank.\
select(["age", "job", "education", "subscribed"]).\
show()
```
### where
https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.where
```
client_bank.\
where(F.col("age") == 23).\
show()
client_bank.\
where((F.col("age") == 23) & (F.col("subscribed") == "yes")).\
show()
```
### group by
```
# how many phone calls were successful/unsuccessful ?
client_bank.\
groupBy(F.col("subscribed")).\
count().\
show()
client_bank.\
groupBy(F.col("subscribed")).\
agg(
F.count("*").alias("call_count"),
F.min("age").alias("min_age"),
F.max("age").alias("max_age"),
F.round(F.mean("age"),2).alias("mean_age")
).\
show()
```
### order by
https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.orderBy
```
client_bank.\
orderBy(F.col("age"), ascending=True).\
show()
```
### join
https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.join
```
client_campaign.printSchema()
client_bank.\
join(client_campaign, ["id"], how="inner").\
show(10)
client_bank.\
join(client_campaign, ["id"], how="inner").\
select(["age", "education", "contact", "month"]).\
where(F.col("contact") == "cellular").\
show()
```
## Spark SQL
To run any type of SQL query, we first need to create a temporary view.
```
client_bank.createOrReplaceTempView("client_bank")
client_campaign.createOrReplaceTempView("client_campaign")
```
### select
```
spark.sql("""SELECT * FROM client_bank""").show()
spark.sql("""SELECT age, job FROM client_bank""").show()
```
### where
```
spark.sql("""SELECT *
FROM client_bank
WHERE age == 34 AND housing_loan == 'yes'""").show()
```
### group by
```
spark.sql("""SELECT job, count(*) AS count
FROM client_bank
WHERE AGE == 21
GROUP BY job""").show()
spark.sql("""SELECT job, count(*) AS count, round(mean(age), 3) AS mean_age
FROM client_bank
WHERE subscribed == 'no'
GROUP BY job""").show()
```
### order by
```
spark.sql("""SELECT *
FROM client_bank
ORDER BY age ASC""").show()
```
### join
```
spark.sql("""SELECT cb.id, cc.id, cb.age, cb.education, cc.contact
FROM client_bank AS cb INNER JOIN client_campaign AS cc ON cb.id == cc.id""").show()
```
### subselect
```
spark.sql("""SELECT age, education, count(*) as count
FROM (
SELECT cb.id, cc.id, cb.age, cb.education, cc.contact
FROM client_bank AS cb INNER JOIN client_campaign AS cc ON cb.id == cc.id
)
GROUP BY age, education
ORDER BY age DESC""").show()
```
# Combine SQL with Dataframe API
```
spark.sql("""SELECT age, job, education
FROM client_bank
WHERE job == 'services'""").\
where(F.col("age").between(20,50)).\
where(F.col("education") == "high.school").show()
```
# Spark SQL clients
https://spark.apache.org/docs/latest/sql-distributed-sql-engine.html
- SparkSession
- JDBC/ODBC ([BI tools](https://docs.databricks.com/bi/index.html))
- command-line
# How Spark can run SQL code ?
An SQL query declares our intentions, but it does not specify the exact execution flow. Spark needs to convert the SQL into a query plan, which is a set of execution steps. <br>
This process was not invented by Spark; it happens in every SQL engine.
<img src='https://github.com/tlapusan/itdays-2019/blob/master/bigdata/resources/images/sql_catalyst.png?raw=true'/>
The <b>parsed logical plan</b> checks the code syntax. <br>
The <b>analyzed logical plan</b> checks that the referenced tables and columns exist. <br>
The <b>optimized logical plan</b> tries to apply optimisations to the logical plan, like pushing down predicates or column selections. <br>
The <b>physical plan</b> specifies exactly how the plan will be executed on the cluster.
<img src='https://github.com/tlapusan/itdays-2019/blob/master/bigdata/resources/images/query_plan_states.png?raw=true'/>
```
# https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.streaming.StreamingQuery.explain
spark.sql("SELECT id, age, education FROM client_bank").explain(extended=True)
```
## check for parsed logical plan
```
spark.sql("SELECT id, age, education FROM2 client_bank").explain()
```
## check for analyzed logical plan
```
spark.sql("SELECT id2, age, education FROM client_bank").explain()
```
## optimized logical plan
```
spark.sql("""SELECT id, age, education
FROM client_bank
WHERE age == 3""").explain(extended=True)
```
## API vs SQL query plan
```
client_bank.select(["age","job","marital","education"]).\
where(F.col("age") > 30).\
where(F.col("job") == "management").explain()
spark.sql("""
SELECT age, job, marital, education
FROM client_bank
WHERE age > 30 AND job =='management'""").explain()
```
# Spark SQL vs RDBMS
Why can't we use databases with lots of disks and CPUs to do large-scale analytics ? <br>
The answer comes from a trend in disk drives : seek time is improving more slowly than transfer rate.
If the data access pattern is dominated by seeks, reading or writing the data takes longer than streaming through it, and vice versa.
In many ways, Spark SQL and an RDBMS complement each other: Spark SQL is very good at analysing the whole dataset (ad hoc queries), while an RDBMS is good at point queries and updates.
# Spark operations
In Spark we have two types of operations : transformations and actions. <br>
Transformations are the operations used to express the business logic of a Spark application. <br>
Actions are the operations used to trigger the execution of a pipeline of transformations.
Lazy evaluation : Spark computes the transformations only at the last minute, when you actually call an action on them. In this way, Spark can look at the whole set of transformations and try to apply optimisations on it.
```
# transformation
%timeit -r1 -n1 print(client_bank.\
where(F.col("age")==24))
# action
%timeit -r1 -n1 print(client_bank.\
where(F.col("age")==24).\
count())
```
# Spark SQL functions
https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#module-pyspark.sql.functions
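The `pyspark.sql.functions` module also covers string, conditional, date, and math helpers. As a small sketch (the column names come from `client_bank` above; the derived column aliases are made up for illustration):
```
client_bank.\
    select(
        "age",
        "job",
        F.upper(F.col("education")).alias("education_upper"),
        F.when(F.col("age") < 30, "young").otherwise("adult").alias("age_group")
    ).\
    show(5)
```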
# Spark SQL resources
Book - https://www.amazon.com/Spark-Definitive-Guide-Processing-Simple/dp/1491912219 <br>
Videos :
- https://databricks.com/sparkaisummit/north-america/sessions
- https://databricks.com/session/from-basic-to-advanced-aggregate-operators-in-apache-spark-sql-2-2-by-examples-and-their-catalyst-optimizations-continues
Projection pushdown, filter pushdown, and partition pruning : https://drill.apache.org/docs/parquet-filter-pushdown/
## Extract app-related data from an app's page on the Google Play store
```
from bs4 import BeautifulSoup as bs
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import time
import pandas as pd
delimiter = "\t"
data = pd.read_csv("app_details.csv", sep = "\t")
def write_app_details(text):
    # Append one tab-delimited row to the output CSV
    with open("app_details_final.csv", "a+") as app_urls:
        app_urls.write(text)
def get_app_details_row(output):
return (output["app_id"] + delimiter + output["app_title"] + delimiter + output["rating"] +
delimiter + output["price"] + delimiter +
output["no_of_ratings"] + delimiter + output["size"] + delimiter + output["in_app_products"] +
delimiter + output["installs"] + delimiter + output["ratings_distribution_5"] + delimiter +
output["ratings_distribution_4"] + delimiter + output["ratings_distribution_3"] + delimiter +
output["ratings_distribution_2"] + delimiter + output["ratings_distribution_1"] + delimiter +
output["is_editors_choice"] + delimiter + output["genre"] + delimiter + output["age_rating"] +
delimiter + output["app_url"] + "\n")
chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
def get_app_details(row):
    # Launch a headless Chrome session and load the app's Google Play page
    driver = webdriver.Chrome(options = chrome_options)
    driver.get(data.iloc[row]["app_url"])
    assert "Google Play" in driver.title
    # Parse the rendered page source
    html = driver.page_source
    soup = bs(html, 'html.parser')
ratings_div_list = []
for div in soup.find_all("div", class_ = "mMF0fd"):
ratings_div_list.append(div)
ratings_distribution = {"1" : "0", "2" : "0", "3" : "0", "4" : "0", "5" : "0"}
for div in ratings_div_list:
rating_no = 0
current_no_of_ratings = 0
for span in div.find_all("span", class_ = "Gn2mNd"):
rating_no = span.text
for span in div.find_all("span", class_ = "L2o20d"):
span_text = str(span)
title_value = span_text[span_text.index("title=\"") + 7 :]
title_value = title_value[:title_value.index("\">")]
current_no_of_ratings = title_value
ratings_distribution[rating_no] = current_no_of_ratings
genre = ""
for a in soup.find_all("a", class_ = "hrTbp R8zArc"):
if("genre" in str(a)):
genre = a.text
break
output = {}
output["app_id"] = str(data.iloc[row]["app_id"])
output["app_title"] = str(data.iloc[row]["app_title"])
output["rating"] = str(data.iloc[row]["rating"])
output["price"] = str(data.iloc[row]["price"])
output["no_of_ratings"] = str(data.iloc[row]["no_of_ratings"])
output["size"] = str(data.iloc[row]["size"])
output["in_app_products"] = str(data.iloc[row]["in_app_products"])
output["installs"] = str(data.iloc[row]["installs"])
output["ratings_distribution_5"] = str(ratings_distribution["5"])
output["ratings_distribution_4"] = str(ratings_distribution["4"])
output["ratings_distribution_3"] = str(ratings_distribution["3"])
output["ratings_distribution_2"] = str(ratings_distribution["2"])
output["ratings_distribution_1"] = str(ratings_distribution["1"])
output["is_editors_choice"] = str(data.iloc[row]["is_editors_choice"])
output["genre"] = str(genre)
output["age_rating"] = str(data.iloc[row]["age_rating"])
output["app_url"] = str(data.iloc[row]["app_url"])
write_app_details(get_app_details_row(output))
driver.quit()
return output
## Example: resume scraping from row 6851, skipping any page that fails to parse
url_counter = 6851
for row in range(6851, len(data)):
print(data.iloc[row]["app_url"])
print("Getting details for url #" + str(url_counter + 1))
url_counter = url_counter + 1
try:
get_app_details(row)
except Exception:
continue
```
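As a quick sanity check (a sketch, assuming the scrape above has produced `app_details_final.csv`), the output can be reloaded with pandas; the column order matches `get_app_details_row`:
```
import pandas as pd

columns = ["app_id", "app_title", "rating", "price", "no_of_ratings", "size",
           "in_app_products", "installs", "ratings_distribution_5", "ratings_distribution_4",
           "ratings_distribution_3", "ratings_distribution_2", "ratings_distribution_1",
           "is_editors_choice", "genre", "age_rating", "app_url"]
results = pd.read_csv("app_details_final.csv", sep="\t", names=columns)
print(results.head())
```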
## Deep Learning Platforms in Python
1. Keras
2. TensorFlow
3. PyTorch
4. Caffe
5. Theano
6. CNTK
7. MXNet
## Why we use Keras in DS 2.2 ?
- A focus on user experience, easy to build and train a deep learning model
- Easy to learn and easy to use
- Large adoption in the industry and research community
- Multi-backend, multi-platform
- Easy productization of models
<img src="why_keras.png" width="300" height="300">
## Keras has two API Styles
### The Sequential API
- Dead simple
- Only for single-input, single-output, sequential layer stacks
- Good for 70+% of use cases
<img src="keras_sequential_api_2.png" width="500" height="500">
### The functional API
- Like playing with Lego bricks
- Multi-input, multi-output, arbitrary static graph topologies
- Good for 95% of use cases
- Great if we want to have access to hidden layers or if we want to do branching
<img src="keras_functional_api_2.png" width="500" height="500">
## Activity: Apply NN with Keras on iris data
- Use the Sequential API for Keras
- Use 70 percent of the data for training
- Use one-hot encoding for the labels with `from keras.utils import np_utils`
- Define a two-layer fully connected network with 16 neurons in the hidden layer
- Define `categorical_crossentropy` as the loss (cost) function
```
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.utils import np_utils
from sklearn.preprocessing import LabelEncoder
from sklearn import datasets
from sklearn.model_selection import train_test_split
iris = datasets.load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
y_train_one_hot = np_utils.to_categorical(y_train)
y_test_one_hot = np_utils.to_categorical(y_test)
# print(y_one_hot)
model = Sequential()
model.add(Dense(16, input_shape=(4,)))
model.add(Activation('sigmoid'))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=["accuracy"])
model.fit(X_train, y_train_one_hot, epochs=100, batch_size=1, verbose=0);
loss, accuracy = model.evaluate(X_test, y_test_one_hot, verbose=0)
print("Accuracy = {:.2f}".format(accuracy))
```
## Activity: Remove the hidden layer and train the new model with iris data
```
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.utils import np_utils
from sklearn.preprocessing import LabelEncoder
from sklearn import datasets
from sklearn.model_selection import train_test_split
iris = datasets.load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
y_train_one_hot = np_utils.to_categorical(y_train)
y_test_one_hot = np_utils.to_categorical(y_test)
# print(y_one_hot)
model = Sequential()
model.add(Dense(3, input_shape=(4,)))
model.add(Activation('softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=["accuracy"])
model.fit(X_train, y_train_one_hot, epochs=100, batch_size=1, verbose=1);
loss, accuracy = model.evaluate(X_test, y_test_one_hot, verbose=0)
print("Accuracy = {:.2f}".format(accuracy))
```
## Activity: Apply NN with Keras on iris data with Functional API
```
from keras.layers import Input, Dense
from keras.models import Model
from keras.utils import np_utils
from sklearn.preprocessing import LabelEncoder
from sklearn import datasets
from sklearn.model_selection import train_test_split
iris = datasets.load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
y_train_one_hot = np_utils.to_categorical(y_train)
y_test_one_hot = np_utils.to_categorical(y_test)
# print(y_one_hot)
inp = Input(shape=(4,))
x = Dense(16, activation='sigmoid')(inp)
out = Dense(3, activation='softmax')(x)
model = Model(inputs=inp, outputs= out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=["accuracy"])
model.fit(X_train, y_train_one_hot, epochs=100, batch_size=1, verbose=0);
loss, accuracy = model.evaluate(X_test, y_test_one_hot, verbose=0)
print("Accuracy = {:.2f}".format(accuracy))
```
## Appropriate Loss Function
- When we have a two-class classification problem
    - The loss function should be `binary_crossentropy`
    - We need one output neuron
    - The activation function of the last layer would be `sigmoid`
- When we have a multi-class classification problem
    - The loss function should be `categorical_crossentropy`
    - We need N output neurons, where N is the number of classes we have
    - The activation function of the last layer would be `softmax`
- When we have a regression problem
    - The loss function should be `mse` or `mae`
    - We need one output neuron
    - The activation function of the last layer would be `linear`
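As a minimal sketch of these pairings (hypothetical layer sizes and a 4-feature input, not tied to the iris activities above):
```
from keras.models import Sequential
from keras.layers.core import Dense

# Two-class classification: one output neuron, sigmoid activation, binary_crossentropy loss
binary_model = Sequential()
binary_model.add(Dense(16, input_shape=(4,), activation='sigmoid'))
binary_model.add(Dense(1, activation='sigmoid'))
binary_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Regression: one output neuron, linear activation, mse (or mae) loss
regression_model = Sequential()
regression_model.add(Dense(16, input_shape=(4,), activation='sigmoid'))
regression_model.add(Dense(1, activation='linear'))
regression_model.compile(optimizer='adam', loss='mse', metrics=['mae'])
```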
# Training Neural Networks
The network we built in the previous part isn't so smart; it doesn't know anything about our handwritten digits. Neural networks with non-linear activations work like universal function approximators. There is some function that maps your input to the output. For example, images of handwritten digits to class probabilities. The power of neural networks is that we can train them to approximate this function, and basically any function, given enough data and compute time.
<img src="assets/function_approx.png" width=500px>
At first the network is naive, it doesn't know the function mapping the inputs to the outputs. We train the network by showing it examples of real data, then adjusting the network parameters such that it approximates this function.
To find these parameters, we need to know how poorly the network is predicting the real outputs. For this we calculate a **loss function** (also called the cost), a measure of our prediction error. For example, the mean squared loss is often used in regression and binary classification problems
$$
\large \ell = \frac{1}{2n}\sum_i^n{\left(y_i - \hat{y}_i\right)^2}
$$
where $n$ is the number of training examples, $y_i$ are the true labels, and $\hat{y}_i$ are the predicted labels.
By minimizing this loss with respect to the network parameters, we can find configurations where the loss is at a minimum and the network is able to predict the correct labels with high accuracy. We find this minimum using a process called **gradient descent**. The gradient is the slope of the loss function and points in the direction of fastest change. To get to the minimum in the least amount of time, we then want to follow the gradient (downwards). You can think of this like descending a mountain by following the steepest slope to the base.
<img src='assets/gradient_descent.png' width=350px>
## Backpropagation
For single layer networks, gradient descent is straightforward to implement. However, it's more complicated for deeper, multilayer neural networks like the one we've built. Complicated enough that it took about 30 years before researchers figured out how to train multilayer networks.
Training multilayer networks is done through **backpropagation** which is really just an application of the chain rule from calculus. It's easiest to understand if we convert a two layer network into a graph representation.
<img src='assets/backprop_diagram.png' width=550px>
In the forward pass through the network, our data and operations go from bottom to top here. We pass the input $x$ through a linear transformation $L_1$ with weights $W_1$ and biases $b_1$. The output then goes through the sigmoid operation $S$ and another linear transformation $L_2$. Finally we calculate the loss $\ell$. We use the loss as a measure of how bad the network's predictions are. The goal then is to adjust the weights and biases to minimize the loss.
To train the weights with gradient descent, we propagate the gradient of the loss backwards through the network. Each operation has some gradient between the inputs and outputs. As we send the gradients backwards, we multiply the incoming gradient with the gradient for the operation. Mathematically, this is really just calculating the gradient of the loss with respect to the weights using the chain rule.
$$
\large \frac{\partial \ell}{\partial W_1} = \frac{\partial L_1}{\partial W_1} \frac{\partial S}{\partial L_1} \frac{\partial L_2}{\partial S} \frac{\partial \ell}{\partial L_2}
$$
**Note:** I'm glossing over a few details here that require some knowledge of vector calculus, but they aren't necessary to understand what's going on.
We update our weights using this gradient with some learning rate $\alpha$.
$$
\large W^\prime_1 = W_1 - \alpha \frac{\partial \ell}{\partial W_1}
$$
The learning rate $\alpha$ is set such that the weight update steps are small enough that the iterative method settles in a minimum.
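Before handing this update rule off to an optimizer (which we'll do below), here is a minimal sketch of what it looks like written out by hand, using a made-up tensor rather than the MNIST network:
```
import torch

# A single weight vector that PyTorch tracks gradients for
W = torch.randn(3, requires_grad=True)
x = torch.randn(3)
y_true = torch.tensor(1.0)

# Forward pass and a simple squared-error loss
y_pred = (W * x).sum()
loss = (y_true - y_pred)**2

# Backward pass fills in W.grad with the gradient of the loss w.r.t. W
loss.backward()

# Manual gradient descent step: W' = W - alpha * dloss/dW
alpha = 0.1
with torch.no_grad():
    W -= alpha * W.grad
    W.grad.zero_()
```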
## Losses in PyTorch
Let's start by seeing how we calculate the loss with PyTorch. Through the `nn` module, PyTorch provides losses such as the cross-entropy loss (`nn.CrossEntropyLoss`). You'll usually see the loss assigned to `criterion`. As noted in the last part, with a classification problem such as MNIST, we're using the softmax function to predict class probabilities. With a softmax output, you want to use cross-entropy as the loss. To actually calculate the loss, you first define the criterion then pass in the output of your network and the correct labels.
Something really important to note here. Looking at [the documentation for `nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss),
> This criterion combines `nn.LogSoftmax()` and `nn.NLLLoss()` in one single class.
>
> The input is expected to contain scores for each class.
This means we need to pass in the raw output of our network into the loss, not the output of the softmax function. This raw output is usually called the *logits* or *scores*. We use the logits because softmax gives you probabilities which will often be very close to zero or one but floating-point numbers can't accurately represent values near zero or one ([read more here](https://docs.python.org/3/tutorial/floatingpoint.html)). It's usually best to avoid doing calculations with probabilities, typically we use log-probabilities.
```
# The MNIST datasets are hosted on yann.lecun.com that has moved under CloudFlare protection
# Run this script to enable the datasets download
# Reference: https://github.com/pytorch/vision/issues/1938
from six.moves import urllib
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
urllib.request.install_opener(opener)
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10))
# Define the loss
criterion = nn.CrossEntropyLoss()
# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# Forward pass, get our logits
logits = model(images)
# Calculate the loss with the logits and the labels
loss = criterion(logits, labels)
print(loss)
```
In my experience it's more convenient to build the model with a log-softmax output using `nn.LogSoftmax` or `F.log_softmax` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LogSoftmax)). Then you can get the actual probabilities by taking the exponential `torch.exp(output)`. With a log-softmax output, you want to use the negative log likelihood loss, `nn.NLLLoss` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.NLLLoss)).
>**Exercise:** Build a model that returns the log-softmax as the output and calculate the loss using the negative log likelihood loss.
```
## Solution
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
# Define the loss
criterion = nn.NLLLoss()
# Get our data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# Forward pass, get our log-probabilities
logps = model(images)
# Calculate the loss with the logps and the labels
loss = criterion(logps, labels)
print(loss)
```
## Autograd
Now that we know how to calculate a loss, how do we use it to perform backpropagation? Torch provides a module, `autograd`, for automatically calculating the gradients of tensors. We can use it to calculate the gradients of all our parameters with respect to the loss. Autograd works by keeping track of operations performed on tensors, then going backwards through those operations, calculating gradients along the way. To make sure PyTorch keeps track of operations on a tensor and calculates the gradients, you need to set `requires_grad = True` on a tensor. You can do this at creation with the `requires_grad` keyword, or at any time with `x.requires_grad_(True)`.
You can turn off gradients for a block of code with the `torch.no_grad()` context:
```python
x = torch.zeros(1, requires_grad=True)
>>> with torch.no_grad():
... y = x * 2
>>> y.requires_grad
False
```
Also, you can turn on or off gradients altogether with `torch.set_grad_enabled(True|False)`.
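For example (a small sketch along the same lines as the snippet above), the global switch works like this:
```python
a = torch.zeros(1, requires_grad=True)

torch.set_grad_enabled(False)   # gradients off for everything that follows
b = a * 2
print(b.requires_grad)          # False

torch.set_grad_enabled(True)    # gradients back on
c = a * 2
print(c.requires_grad)          # True
```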
The gradients are computed with respect to some variable `z` with `z.backward()`. This does a backward pass through the operations that created `z`.
```
x = torch.randn(2,2, requires_grad=True)
print(x)
y = x**2
print(y)
```
Below we can see the operation that created `y`, a power operation `PowBackward0`.
```
## grad_fn shows the function that generated this variable
print(y.grad_fn)
```
The autograd module keeps track of these operations and knows how to calculate the gradient for each one. In this way, it's able to calculate the gradients for a chain of operations, with respect to any one tensor. Let's reduce the tensor `y` to a scalar value, the mean.
```
z = y.mean()
print(z)
```
You can check the gradients for `x` and `y` but they are empty currently.
```
print(x.grad)
```
To calculate the gradients, you need to run the `.backward` method on a Variable, `z` for example. This will calculate the gradient for `z` with respect to `x`
$$
\frac{\partial z}{\partial x} = \frac{\partial}{\partial x}\left[\frac{1}{n}\sum_i^n x_i^2\right] = \frac{x}{2}
$$
```
z.backward()
print(x.grad)
print(x/2)
```
These gradient calculations are particularly useful for neural networks. For training we need the gradients of the weights with respect to the cost. With PyTorch, we run data forward through the network to calculate the loss, then go backwards to calculate the gradients with respect to the loss. Once we have the gradients we can make a gradient descent step.
## Loss and Autograd together
When we create a network with PyTorch, all of the parameters are initialized with `requires_grad = True`. This means that when we calculate the loss and call `loss.backward()`, the gradients for the parameters are calculated. These gradients are used to update the weights with gradient descent. Below you can see an example of calculating the gradients using a backwards pass.
```
# Build a feed-forward network
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
images, labels = next(iter(trainloader))
images = images.view(images.shape[0], -1)
logps = model(images)
loss = criterion(logps, labels)
print('Before backward pass: \n', model[0].weight.grad)
loss.backward()
print('After backward pass: \n', model[0].weight.grad)
```
## Training the network!
There's one last piece we need to start training, an optimizer that we'll use to update the weights with the gradients. We get these from PyTorch's [`optim` package](https://pytorch.org/docs/stable/optim.html). For example we can use stochastic gradient descent with `optim.SGD`. You can see how to define an optimizer below.
```
from torch import optim
# Optimizers require the parameters to optimize and a learning rate
optimizer = optim.SGD(model.parameters(), lr=0.01)
```
Now we know how to use all the individual parts so it's time to see how they work together. Let's consider just one learning step before looping through all the data. The general process with PyTorch:
* Make a forward pass through the network
* Use the network output to calculate the loss
* Perform a backward pass through the network with `loss.backward()` to calculate the gradients
* Take a step with the optimizer to update the weights
Below I'll go through one training step and print out the weights and gradients so you can see how they change. Note that I have a line of code `optimizer.zero_grad()`. When you do multiple backwards passes with the same parameters, the gradients are accumulated. This means that you need to zero the gradients on each training pass or you'll retain gradients from previous training batches.
```
print('Initial weights - ', model[0].weight)
images, labels = next(iter(trainloader))
images.resize_(64, 784)
# Clear the gradients, do this because gradients are accumulated
optimizer.zero_grad()
# Forward pass, then backward pass, then update weights
output = model(images)
loss = criterion(output, labels)
loss.backward()
print('Gradient -', model[0].weight.grad)
# Take an update step and view the new weights
optimizer.step()
print('Updated weights - ', model[0].weight)
```
### Training for real
Now we'll put this algorithm into a loop so we can go through all the images. Some nomenclature: one pass through the entire dataset is called an *epoch*. So here we're going to loop through `trainloader` to get our training batches. For each batch, we'll do a training pass where we calculate the loss, do a backwards pass, and update the weights.
> **Exercise:** Implement the training pass for our network. If you implemented it correctly, you should see the training loss drop with each epoch.
```
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)
epochs = 5
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
# Flatten MNIST images into a 784 long vector
images = images.view(images.shape[0], -1)
# TODO: Training pass
optimizer.zero_grad()
output = model(images)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
print(f"Training loss: {running_loss/len(trainloader)}")
```
With the network trained, we can check out its predictions.
```
%matplotlib inline
import helper
images, labels = next(iter(trainloader))
img = images[0].view(1, 784)
# Turn off gradients to speed up this part
with torch.no_grad():
logps = model(img)
# Output of the network are log-probabilities, need to take exponential for probabilities
ps = torch.exp(logps)
helper.view_classify(img.view(1, 28, 28), ps)
```
Now our network is brilliant. It can accurately predict the digits in our images. Next up you'll write the code for training a neural network on a more complex dataset.
Before reading this Jupyter Notebook, it may be helpful to review [spherical and cylindrical coordinates](Spherical_Cylindrical_Coordinates.ipynb) and the concept of a [midplane](Midplane.ipynb).
# Physical Properties of Synestias
You'll notice that synestias are very large planetary objects. The synestias shown here and throughout the notebooks are Earth-mass synestias. Their widths span about 200,000 km (124,000 miles) across -- that's almost 16 Earths (or 2,000,000 soccer fields)!
The synestia shown in this chapter was formed as a result of a potential-Moon-forming giant impact (see B.1 Synestia Case 1 in [Synestia_Moon_Ex_Cases.ipynb](Synestia_Moon_Ex_Cases.ipynb)). Giant impacts deposit enough energy into the impacted material to vaporize rock. In a giant impact between two impactors with a total mass equal to that of Earth, the heat energy ([Carter et al., 2020](https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019JE006042)) is comparable to the amount of energy required to power a house for every inhabitant on Earth for the next 70 billion years ([U.S. Energy Information Administration, 2015](https://www.eia.gov/consumption/residential/data/2015/c&e/pdf/ce1.1.pdf)). Synestias have a large (10-20\% of total mass) vapor component which makes them very hot, extended and flared.
## Temperature Profile of a Synestia
What does "very hot" mean? What is the hottest phenomenon you can think of, and how does it compare to the maximum temperature within the interior of the Earth-mass synestia in the temperature plots below? What are the temperature ranges for each portion of this synestia (e.g. disk-like region, mantle, and core)? How does temperature change between the planet-disk and core-mantle boundaries?
### How to Use These Interactive Plots
You may have to be patient while the plots load. If there is output but no images, re-run the notebook. If the plots do not load after 1-2 minutes, restart the notebook.
Use the sliders to explore the thermal structure of a synestia via spatial slices of various parameter profiles (temperature, pressure, and density): the middle value (0) on both sliders is a slice of the profile at the center of the synestia. The slider values indicate the distance from the center of the synestia (higher values = greater distance from the center). The sign indicates the direction from the center. Positive distance from the rotational axis (+y) is closer to the observer, while negative distance from the rotational axis (-y) is farther away from the observer (on the other side of the center). Distance from the midplane is positive (+z) when the slice is at the top of the synestia and negative (-z) when the slice is at the bottom. The 3D orientation of the slice (a cross-section of the synestia) is shown to the right. The rotational axis lies along the line where x = 0 and y = 0, and the midplane is the plane at z = 0. There are two plots with different views of the example synestia: side (the slice is a y plane; the midplane appears as a line at z = 0) and bird's eye (the slice is a z plane; looking down along the rotational axis, which is a point at x = 0, y = 0 at the center).
As you slide back and forth, notice when your slice enters the disk-like region (no planet-disk boundary shown on plot as black, dashed ellipse) versus the planet-like region. The dashed, black line indicates the boundary between the planet-like region and the disk-like region (where the planet-like region is interior to the planet-disk boundary). When your slice cuts through the planet-like region, you should be able to notice when your slice only cuts through the mantle (no core-mantle boundary shown on plot in red). The solid red line indicates the boundary between the mantle and core within the planet-like region (where the core is interior to the core-mantle boundary).
The plots shown in this notebook use data output from giant impact simulations, which model continuous fluids in synestias using particles with fixed mass but varying size and density to approximate reality. It's easier to get a sense of the whole structure when looking at how the individual particles behave. Think of these particles as having volume -- like a blob of gas. The overlap between particles is smoothed to accommodate tremendous density differences between the particles, hence the name for this type of computer modeling, <i>smoothed-particle hydrodynamics</i> (SPH).
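To make the smoothing idea concrete, here is a toy sketch (with made-up particle positions and a simple Gaussian stand-in for an SPH kernel, not the simulation data loaded below) of how a field like temperature can be estimated at any point as a kernel-weighted average over nearby particles:
```
import numpy as np
from scipy.spatial import cKDTree

# Made-up particles: positions (m), masses (kg), densities (kg/m^3), temperatures (K)
rng = np.random.default_rng(42)
pos = rng.uniform(-1e7, 1e7, size=(1000, 3))
mass = np.full(1000, 1e20)
rho = rng.uniform(1.0, 10.0, size=1000)
temp = rng.uniform(2000.0, 6000.0, size=1000)

tree = cKDTree(pos)  # sort particles into a tree for fast neighbor searches

def sph_estimate(point, h=2e6, k=32):
    """Kernel-weighted average of temperature at `point` using the k nearest particles."""
    dist, idx = tree.query(point, k=k)
    w = np.exp(-(dist / h)**2)     # smoothing kernel: nearby particles count more
    w *= mass[idx] / rho[idx]      # weight by each particle's volume m_j / rho_j
    return np.sum(w * temp[idx]) / np.sum(w)

print(sph_estimate(np.array([0.0, 0.0, 0.0])))
```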
### Temperatures in an Earth-mass Synestia
```{margin} Running a code cell
Access interactive features by 'Launch CoLab' or 'Launch Binder' from the rocket logo at the top of the page. When the interactive environment is ready, place your cursor in the code cell and press shift-return to execute the code. If using CoLab (loads faster), you need to edit the code cell as directed to gather data files.
```
Click the + symbol to see the code that generates the next interactive feature.
```
# Dear Reader, if you are using this notebook in CoLab, you need to fetch data files from github
# uncomment the lines below and shift-return to execute this cell
# you can check that the files downloaded by hitting the folder refresh button in co-lab
#import os
#os.system('wget https://github.com/ststewart/synestiabook2/blob/master/synestia-book/docs/TE_Example01_Cool05_snapshot_4096_long?raw=true -O TE_Example01_Cool05_snapshot_4096_long')
#os.system('wget https://github.com/ststewart/synestiabook2/blob/master/synestia-book/docs/TE_Example03_Cool01_snapshot_10500_long?raw=true -O TE_Example03_Cool01_snapshot_10500_long')
# STSM modified to remove use of module syndef and embed necessary functions into this cell
# from syndef import synfits
import numpy as np
import struct
import urllib.request
G=6.674e-11 #SI
class GadgetHeader:
"""Class for Gadget snapshot header."""
def __init__(self, t=0, nfiles=1, ent=1):
self.npart = np.zeros(6)
self.mass = np.zeros(6)
self.time = t
self.redshift = 0
self.flag_sfr = 0
self.flagfeedbacktp = 0
self.npartTotal = np.zeros(6)
self.flag_cooling = 0
self.num_files = nfiles
self.BoxSize = 0
self.Omega0 = 0
self.OmegaLambda = 0
self.HubbleParam = 1
self.flag_stellarage = 0
self.flag_metals = 0
self.nallhw = np.zeros(6)
self.flag_entr_ics = ent
#
class Snapshot:
"""Gadget snapshot class
Includes header and gas particle data, with functions for reading and writing snapshots.
load() -- load Gadget snapshot data
remove() -- remove particle from snapshot
write() -- save snapshot
identify() -- determine material types
calc_vap_frac() -- calculate vapour fractions of particles
#GOH 01/15/2020
-- fit midplane density profile
-- fit midplane entropy profile
-- fit midplane pressure profile
-- fit midplane temperature profile
-- fit midplane velocity profile
-- fit midplane sound speed profile
-- fit scale height for density
-- fit scale height for entropy
-- fit scale height for pressure
-- fit scale height for temperature
-- fit scale height for velocity profile
-- fit scale height for sound speed profile
"""
def __init__(self):
self.header = GadgetHeader()
self.N = 0
self.pos = np.zeros(3)
self.vel = np.zeros(3)
self.id = 0
self.m = 0
self.S = 0
self.rho = 0
self.hsml = 0
self.pot = 0
self.P = 0
self.T = 0
self.U = 0
self.cs = 0
#self.accel = 0
#self.dt = 0
#self.vapfrac = 0
self.omg_z = 0
self.J2Ma2 = 0
self.g = 0
self.ind_outer_mid_spl = 0
self.pmidfit = 0
self.rhomidfit = 0,0,0
#
def load(self, fname, thermo=False):
f = open(fname, 'rb')
struct.unpack('i', f.read(4))
#HEADER
self.header.npart = np.array(struct.unpack('iiiiii', f.read(24)))
self.header.mass = np.array(struct.unpack('dddddd', f.read(48)))
(self.header.time, self.header.redshift, self.header.flag_sfr,
self.header.flag_feedbacktp) = struct.unpack('ddii', f.read(24))
self.header.npartTotal = np.array(struct.unpack('iiiiii', f.read(24)))
(self.header.flag_cooling, self.header.num_files, self.header.Boxsize,
self.header.Omega0, self.header.OmegaLambda, self.header.HubbleParam,
self.header.flag_stellarage,
self.flag_metals) = struct.unpack('iiddddii', f.read(48))
#print(self.header.Boxsize,self.header.flag_stellarage,self.flag_metals)
self.header.nallhw = np.array(struct.unpack('iiiiii', f.read(24)))
self.header.flag_entr_ics = struct.unpack('i', f.read(4))
struct.unpack('60x', f.read(60))
struct.unpack('i', f.read(4))
if self.header.num_files != 1:
print("WARNING! Number of files:", self.header.num_files,
", not currently supported.\n")
self.N = self.header.npart[0]
count=str(self.N)
count3=str(3*self.N)
#PARTICLE DATA
struct.unpack('i', f.read(4))
self.pos = struct.unpack(count3 + 'f', f.read(3*self.N*4))
struct.unpack('i', f.read(4))
struct.unpack('i', f.read(4))
self.vel = struct.unpack(count3 + 'f', f.read(3*self.N*4))
struct.unpack('i', f.read(4))
struct.unpack('i', f.read(4))
self.id = np.array(struct.unpack(count + 'i', f.read(self.N*4)))
struct.unpack('i', f.read(4))
struct.unpack('i', f.read(4))
self.m = np.array(struct.unpack(count + 'f', f.read(self.N*4)))
struct.unpack('i', f.read(4))
struct.unpack('i', f.read(4))
self.S = np.array(struct.unpack(count + 'f', f.read(self.N*4)))
struct.unpack('i', f.read(4))
struct.unpack('i', f.read(4))
self.rho = np.array(struct.unpack(count + 'f', f.read(self.N*4)))
struct.unpack('i', f.read(4))
struct.unpack('i', f.read(4))
self.hsml = np.array(struct.unpack(count + 'f', f.read(self.N*4)))
struct.unpack('i', f.read(4))
struct.unpack('i', f.read(4))
self.pot = np.array(struct.unpack(count + 'f', f.read(self.N*4)))
struct.unpack('i', f.read(4))
if thermo:
struct.unpack('i', f.read(4))
self.P = np.array(struct.unpack(count + 'f', f.read(self.N*4)))
struct.unpack('i', f.read(4))
struct.unpack('i', f.read(4))
self.T = np.array(struct.unpack(count + 'f', f.read(self.N*4)))
struct.unpack('i', f.read(4))
if len(f.read(4)) == 4:
self.U = np.array(struct.unpack(count + 'f', f.read(self.N*4)))
struct.unpack('i', f.read(4))
if len(f.read(4)) == 4:
self.cs = np.array(struct.unpack(count + 'f', f.read(self.N*4)))
struct.unpack('i', f.read(4))
f.close()
#REARRANGE
self.pos = np.array(self.pos).reshape((self.N, 3))*(1e-2) #m
self.x = self.pos.T[0]
self.y = self.pos.T[1]
self.z = self.pos.T[2]
self.vel = np.array(self.vel).reshape((self.N, 3))*(1e-2) #m/s
self.vx = self.vel.T[0]
self.vy = self.vel.T[1]
self.vz = self.vel.T[2]
#print("Read %d" % self.N, "particles from %s" % fname)
#CALCULATE CENTER OF MASS
N=25
temp=np.argsort(self.pot)
xcm=np.mean(self.x[temp[0:N]])
ycm=np.mean(self.y[temp[0:N]])
zcm=np.mean(self.z[temp[0:N]])
vxcm=np.mean(self.vx[temp[0:N]])
vycm=np.mean(self.vy[temp[0:N]])
vzcm=np.mean(self.vz[temp[0:N]])
#MOVE ONTO CENTRAL FRAME
self.x=self.x-xcm
self.y=self.y-ycm
self.z=self.z-zcm
self.vx=self.vx-vxcm
self.vy=self.vy-vycm
self.vz=self.vz-vzcm
#CALCULATE BOUND MASS
self.m = self.m*(1e-3) #kg
#bndm=self.m[temp[0]]
#G=6.67408E-11 #mks
#oldm=bndm/10.
#tol=1E-5
#while (np.abs(oldm-bndm)>tol):
# oldm=bndm
# v2=np.add(np.add(np.power(self.vx,2.0),np.power(self.vy,2.0))np.power(self.vz,2.0))
# r=np.sqrt(np.add(np.add(np.power(self.x,2.0),np.power(self.y,2.0))np.power(self.z,2.0)))
# KE=0.5*np.multiply(self.m,v2)
# PE=-G*bndm*np.divide(self.m,r)
# bndm=np.sum(self.m[np.where((KE+PE)<0.)[0]])
#CONVERT REST OF UNITS TO MKS
self.rho = self.rho*(1e3) #kg/m3
self.P = self.P*1e9 #Pa
self.S = self.S*(1e-4) #J/K/kg
self.pot = self.pot*(1e-4) #J/kg
self.U = self.U*(1e-4) #J/kg
self.cs = self.cs*(1e-2) #m/s
self.rxy = np.add(np.power(self.x, 2), np.power(self.y, 2)) #m2
radius2 = np.add(self.rxy,np.power(self.z,2)) #m2
self.rxy = np.sqrt(self.rxy) #m
self.omg_z = (self.vx**2 + self.vy**2)**0.5/self.rxy
self.J2Ma2 = -np.sum(0.5*np.multiply(self.m,radius2)*(3.0*np.divide(np.power(self.z,2),radius2) - 1.0)) #kg m2
self.g = np.zeros((self.N, 3))
self.g_x = self.g.T[0]
self.g_y = self.g.T[1]
self.g_z = self.g.T[2]
self.g_x = (G*np.sum(self.m)*self.x/((np.sqrt(self.rxy**2 + self.z**2))**3)) - (3.*G*self.J2Ma2*((self.rxy**2 + self.z**2)**-2.5)*self.x*(2.5*((self.z**2)/(self.rxy**2 + self.z**2)) - 1.5))
self.g_y = (G*np.sum(self.m)*self.y/((np.sqrt(self.rxy**2 + self.z**2))**3)) - (3.*G*self.J2Ma2*((self.rxy**2 + self.z**2)**-2.5)*self.y*(2.5*((self.z**2)/(self.rxy**2 + self.z**2)) - 1.5))
self.g_z = (G*np.sum(self.m)*self.z/((np.sqrt(self.rxy**2 + self.z**2))**3)) - (3.*G*self.J2Ma2*((self.rxy**2 + self.z**2)**-2.5)*self.z*(2.5*((self.z**2)/(self.rxy**2 + self.z**2)) - 1.5))
#print("Centered bound mass.\n")
#
def indices(self,zmid,zmax,rxymin,rxymax,rxymida,rxymidb):
#DETERMINE OUTER REGION PARTICLES (truncated at rxymin and rxymax)
self.ind_outer=np.where((self.rxy >= rxymin) & (self.rxy <= rxymax) & (np.abs(self.z) <= zmax))
self.ind_outer_1=np.where((self.rxy >= rxymin) & (self.rxy < rxymida) & (np.abs(self.z) <= zmax))
self.ind_outer_2=np.where((self.rxy > rxymidb) & (self.rxy <= rxymax) & (np.abs(self.z) <= zmax))
self.ind_outer_S=np.where(self.rxy >= rxymin)
#DETERMINE MIDPLANE OUTER REGION PARTICLES
self.ind_outer_mid=np.where((self.rxy >= rxymida) & (np.abs(self.z) <= zmid) & (self.rxy <= rxymidb))
self.ind_outer_mid_spl = np.where((np.abs(self.z) <= zmid) & (self.rxy <= rxymax) & (self.rxy >= rxymin))
self.ind_outer_mid_lsq=np.where((np.abs(self.z) <= zmid) & (self.rxy >= 9.4e6))
#DETERMINE MIDPLANE PARTICLES
self.ind_mid=np.where(np.abs(self.z) <= zmid)
#
def fit_Pmid(self,knots,extra=None):
#DETERMINE SPLINE FIT TO MIDPLANE PRESSURE CURVE
ind_outer_mid_spl=np.where((np.abs(SNAP.z) <= zmid) & (SNAP.rxy <= rxymax) & (SNAP.rxy >= rxymin))
indsort=np.argsort(SNAP.rxy[ind_outer_mid_spl])
SPHrxyMm = SNAP.rxy[ind_outer_mid_spl][indsort]/1e6
SPHplog = np.log10(SNAP.P[ind_outer_mid_spl][indsort])
pknots=[*knots]
self.pLSQUSPL = LSQUnivariateSpline(SPHrxyMm, SPHplog, t=pknots, k=3)
if extra:
print('knots for midplane pressure curve are rxy = {}'.format(pLSQUSPL.get_knots()))
print('coefficients for midplane pressure curve are {}'.format(pLSQUSPL.get_coeffs()))
def fit_rhomid(self,extra=None):
#DETERMINE LEAST-SQUARES FIT TO RESIDUAL OF MIDPLANE RHO S-CURVE 1
params_guess=np.ones(4)
res_lsq = least_squares(resfunc, params_guess, loss='soft_l1', f_scale=0.001,
args=(np.log10(self.rxy[self.ind_outer_2]/1e6), np.log10(self.rho[self.ind_outer_2])))
#DETERMINE LEAST-SQUARES FIT TO RESIDUAL OF MIDPLANE RHO S-CURVE 2
params_guess_spl=np.array([150.,1.4,16.,-5.7])
res_lsq_spl = least_squares(resfuncspl, params_guess_spl, loss='soft_l1', f_scale=0.001,
args=(np.log10(self.rxy[self.ind_outer_mid]/1e6), np.log10(self.rho[self.ind_outer_mid])))
#DETERMINE LEAST-SQUARES FIT TO RESIDUAL OF MIDPLANE RHO LINE
params_guess_lin=np.ones(2)
res_lsq_lin = least_squares(resfunclin, params_guess_lin, loss='soft_l1', f_scale=0.001,
args=(np.log10(self.rxy[self.ind_outer_1]/1e6), np.log10(self.rho[self.ind_outer_1])))
if extra:
print('Least Squares Fit to Midplane Density - S-curve \n')
print(res_lsq)
print('\n Least Squares Fit to Midplane Density - Spline \n')
print(res_lsq_spl)
print('\n Least Squares Fit to Midplane Density - Linear \n')
print(res_lsq_lin)
print('\n Params for midplane density:\n fit 0 {}\n fit 1 {}\n fit 2 {}\n Linear interpolation points are (x1_lim, y1_lim) = ({}, {}) and (x2_lim, y2_lim) = ({}, {})'.format(res_lsq_lin.x,res_lsq_spl.x,res_lsq.x,x1int,y1int,x2int,y2int))
self.rhomidfit = res_lsq_lin.x,res_lsq_spl.x,res_lsq.x
#
def fit_Tmid(self,extra=None):
params_guess_T=np.asarray([4.e12,-1.66,2.5])
res_lsq_pow = least_squares(resfuncpow, params_guess_T, ftol=1e-10, xtol=1e-11, loss='soft_l1',
args=(SNAP.rxy[ind_outer_mid_lsq], SNAP.T[ind_outer_mid_lsq]/1.e3))
if extra:
print('\n Least Squares Fit to Midplane Temperature - Power Law \n')
print(res_lsq_pow)
#
def fit_smid(self,extra=None):
params_guess_S = np.ones(5)
res_lsq_lp = least_squares(resfunclinpiece, params_guess_S, ftol=1e-8, xtol=1e-8, loss='soft_l1',
args=(SNAP.rxy[ind_outer_mid_spl]/1e6, SNAP.S[ind_outer_mid_spl]))
if extra:
print('\n Least Squares Fit to Midplane Entropy - Linear Piecewise \n')
print(res_lsq_lp)
#
def fit_zs_rho(self,extra=None):
#SCALE HEIGHT FIT
#bin by rxy, fit each bin's rho(z) profile and find z where rho/rho_mid=1/e
ind_outer_offmid = np.where((SNAP.rxy >= 7.e6) & (np.abs(SNAP.z) > 1.e6))
bins = np.arange(7.e6,np.amax(SNAP.rxy[ind_outer_offmid])+1.e6,1.e6)
#bins_S = np.arange(7.e6,np.amax(SNAP.rxy[ind_outer_S])+1.e6,1.e6)
ind_bin = np.digitize(SNAP.rxy[ind_outer_offmid],bins,right=False)
#ind_bin_S = np.digitize(SNAP.rxy[ind_outer_S],bins_S,right=False)
bins=(bins/1e6)-0.5 #convert to Mm
#bins_S=(bins_S/1e6)-0.5
params_guess_exp = 1.
#def resfuncpieceS(params,x,y,z):
#x is rxy, y is z, z is S
# f1 = params[0]
# f2 = params[1]
# f3 = lambda y: params[2]*y**2 + params[3]*y + params[4]
# return np.select([(y>params[5]),(y<=params[5])*(y>=params[6]),(y<params[6])*(x<10.)], [f1,f2,f3]) - z
#params_guess_Spiece = np.asarray([4500.,8000.,1.,1.,4000.,15.,1.])
res_lsq_exp = []
#res_lsq_Spiece = []
for i in range(len(bins)):
ind_rxy = np.where(ind_bin == i)
SNAP_rhodiv_offmid = np.log(SNAP.rho[ind_outer_offmid][ind_rxy]*(
10**(-piece(np.log10(SNAP.rxy[ind_outer_offmid][ind_rxy]/1.e6),res_lsq_lin.x,res_lsq_spl.x,res_lsq.x))))
reslsqexp = least_squares(resfuncexp, params_guess_exp, bounds=(1,np.inf), loss='soft_l1', f_scale=0.001,
args=(np.abs(SNAP.z[ind_outer_offmid][ind_rxy]/1e6),SNAP_rhodiv_offmid))
if reslsqexp.active_mask[0] == -1:
res_lsq_exp.append(np.nan)
else:
res_lsq_exp.append(reslsqexp.x[0])
#for i in range(len(bins_S)):
# ind_rxy_S = np.where(ind_bin_S == i)
# print(ind_rxy_S.active_mask)
# if ind_rxy_S.active_mask == -1:
# res_lsq_Spiece.append(np.nan)
# else:
# reslsqSpiece = least_squares(resfuncpieceS, params_guess_Spiece, loss='soft_l1', f_scale=0.001, args=(SNAP.rxy[ind_outer_S][ind_rxy_S]/1.e6,np.abs(SNAP.z[ind_outer_S][ind_rxy_S])/1e6,SNAP.S[ind_outer_S][ind_rxy_S]))
# res_lsq_Spiece.append(reslsqSpiece.x)
res_lsq_exp = np.asarray(res_lsq_exp) #Mm
#res_lsq_Spiece = np.asarray(res_lsq_Spiece)
print('\n Binned Rxy Scale Height Fits')
print(res_lsq_exp)
#print(res_lsq_Spiece)
#MASKING SCALE HEIGHT FITS FOR NAN'S AND Z_S > 100 Mm
res_lsq_exp_mask = np.ma.fix_invalid(res_lsq_exp)
res_lsq_exp_compress = res_lsq_exp_mask.compressed()
bins_mask = np.ma.masked_array(bins, mask=res_lsq_exp_mask.mask)
bins_compress = bins_mask.compressed()
res_lsq_exp_compress_mask = np.ma.masked_greater(res_lsq_exp_compress,100.)
res_lsq_exp_compress2 = res_lsq_exp_compress_mask.compressed()
bins_compress_mask = np.ma.masked_array(bins_compress, mask=res_lsq_exp_compress_mask.mask)
bins_compress2 = bins_compress_mask.compressed()
print('\n Masked Rxy Scale Heights')
print(list(zip(bins_compress2,res_lsq_exp_compress2))) #list() so the pairs are actually printed, not a zip object
knots=[25.5,30.5,32.5,39.5,45.5,57.5]
LSQUSPL=LSQUnivariateSpline(bins_compress2, res_lsq_exp_compress2, t=knots, k=3)
if extra:
bknot = LSQUSPL.get_knots()
bcoef = LSQUSPL.get_coeffs()
print('\n LSQ Univariate Spline Fit to Scale Heights \n')
print('knots are rxy = {} (Mm)'.format(bknot))
print('coefficients are {}'.format(bcoef))
SNAP_CukStewart=Snapshot()
SNAP_CukStewart.load('TE_Example03_Cool01_snapshot_10500_long',thermo=True) #Cuk & Stewart 2012 style giant impact
#import numpy as np # loaded above
from scipy.spatial import cKDTree
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from ipywidgets import interact,FloatSlider,fixed
import warnings
warnings.simplefilter("ignore") #catch_warnings() only has an effect as a context manager, so a plain simplefilter is used to hide benign runtime warnings below
#define gridded arrays where want x,y,z points on plots
n = 256 #number of gridded points for xyz arrays
x_absmax = 30e6 #m
z_absmax = 20e6 #m
#semi-major and -minor axes of ellipsoid defining planetary boundaries
#planet-like region boundary with disk-like region
a_mantle = 10000. #km #a_mantle=b_mantle axisymmetric
c_mantle = 7000. #km
#core-mantle boundary within planet-like region
a_core = 3900. #km #a_core=b_core axisymmetric
c_core = 3500. #km
#assign particle information to single variable & convert units
# original gohollyday
#T_unfilt=synfits.SNAP_CukStewart.T #K
#x_unfilt=synfits.SNAP_CukStewart.x/1e3 #km
#y_unfilt=synfits.SNAP_CukStewart.y/1e3 #km
#z_unfilt=synfits.SNAP_CukStewart.z/1e3 #km
#S_unfilt=synfits.SNAP_CukStewart.S #J/K/kg
# stsm modified
T_unfilt=SNAP_CukStewart.T #K
x_unfilt=SNAP_CukStewart.x/1e3 #km
y_unfilt=SNAP_CukStewart.y/1e3 #km
z_unfilt=SNAP_CukStewart.z/1e3 #km
S_unfilt=SNAP_CukStewart.S #J/K/kg
filt=~((np.abs(z_unfilt)>30000.*(T_unfilt-5000.)**(-1./12.))&(T_unfilt>5000.))
T=T_unfilt[filt]
x=x_unfilt[filt]
y=y_unfilt[filt]
z=z_unfilt[filt]
S=S_unfilt[filt]
#sort data into kdtree
xyz = np.vstack((x,y,z)).transpose() #km
tree = cKDTree(xyz) #make tree, sort particles into leafs
#create x,y,z arrays and turn them into 2-D arrays
xarr = np.linspace(-x_absmax,x_absmax,n)/1e3 #km
zarr = np.linspace(-z_absmax,z_absmax,n)/1e3 #km
Xarr,Zarr = np.meshgrid(xarr,zarr) #km, km
yarr = np.linspace(-x_absmax,x_absmax,n)/1e3 #km
Xarr2,Yarr = np.meshgrid(xarr,yarr) #km, km
#function that gets nearest neighbor information for gridded points
#and plots their physical property (temperature) value using pcolormesh
#slice through synestia showing side view
def temperature_xz(yvalue,Xarr,Zarr,T):
Yarr = np.ones_like(Xarr)*yvalue #km
XYZ = np.vstack((Xarr.flatten(),Yarr.flatten(),Zarr.flatten())).transpose() #km
d,ind = tree.query(XYZ) #find nearest neighbor to use for temperature at X,Y,Z points
temp = T[ind].reshape(Xarr.shape)
#dtest,indtest = tree.query(XYZ,k=3) #find nearest 3 neighbors
#temp_k = (((18./11.)*T[indtest[:,0]] + (9./11.)*T[indtest[:,1]] + (6./11.)*T[indtest[:,2]])/3).reshape(Xarr.shape) #weighted mean
#ellipses (surface of ellipsoids) defining planetary boundaries
v = np.linspace(0.,2.*np.pi,80) #radians
u_mantle = np.arcsin(yvalue/(a_mantle*np.sin(v))) #radians
x_mantle = a_mantle*np.cos(u_mantle)*np.sin(v) #km
z_mantle = c_mantle*np.cos(v) #km
u_core = np.arcsin(yvalue/(a_core*np.sin(v))) #radians
x_core = a_core*np.cos(u_core)*np.sin(v) #km
z_core = c_core*np.cos(v) #km
#u = np.linspace(0.,2.5*np.pi,25) #radians
#x_ep = a_ep*np.cos(u)*np.sin(v) #km
#y_ep = a_ep*np.sin(u)*np.sin(v) #km
#z_ep = c_ep*np.cos(v) #km
#y_ep2 = a_ep*np.sin(u)*np.sin(v2) #km
#X_ep_temp, Y_ep_temp = np.meshgrid(x_ep, y_ep) #km
#Z_ep_temp = c_ep*(1. - (X_ep_temp**2)/(a_ep**2) - (Y_ep_temp**2)/(b_ep**2))**0.5 #km
#X_ep = np.tile(X_ep_temp, 2)
#Y_ep = np.tile(Y_ep_temp, 2)
#Z_ep = np.tile(Z_ep_temp, 2)
#zlen = np.shape(Z_ep_temp)[0]
#Z_ep[:,zlen:] *= -1
#arrays for plane showing slice through synestia in 3D
xarr = np.linspace(-30000,30000,3) #km
zarr = np.linspace(-20000,20000,3) #km
xarr2d, zarr2d = np.meshgrid(xarr, zarr) #km
yarr2d = np.ones_like(xarr2d)*yvalue #km
fig = plt.figure(figsize=(13.5,5))
ax = fig.add_subplot(121)
plt.title('Temperature Profile: Side View')
plt.xlabel('x (km)')
plt.ylabel('z (km)')
ax.set_aspect(aspect=1, adjustable='box', anchor='C')
plt.pcolormesh(Xarr,Zarr,temp,vmin=np.amin(T),vmax=15000)
#plt.pcolormesh(Xarr,Zarr,temp_k,vmin=np.amin(T),vmax=15000)
plt.colorbar(label='temperature (K)')
plt.plot(x_mantle,z_mantle,ls='--',lw=2,color='k',label='Planet-Disk')
plt.plot(x_core,z_core,lw=2,color='r',label='Core-Mantle')
plt.legend(loc=3)
ax2 = fig.add_subplot(122, projection='3d')
plt.title('Position of Slice in 3D')
ax2.plot_surface(xarr2d, yarr2d, zarr2d)
#ax2.plot(x_mantle,z_mantle,zs=yvalue,zdir='y',color='white')
#ax2.plot(x_core,z_core,zs=yvalue,zdir='y',color='r')
plt.xlabel('x (km)')
plt.xlim([-30000, 30000])
ax2.tick_params(axis='x', labelsize=8)
plt.ylabel('y (km)')
plt.ylim([-30000, 30000])
ax2.tick_params(axis='y', labelsize=8)
ax2.set_zlabel('z (km)')
ax2.set_zlim(-20000, 20000)
ax2.tick_params(axis='z', labelsize=8)
plt.show()
plt.close()
#function that gets nearest neighbor information for gridded points
#and plots their physical property (temperature) value using pcolormesh
#slice through synestia showing bird's eye view
def temperature_xy(zvalue,Xarr,Yarr,T):
warnings.catch_warnings()
warnings.simplefilter("ignore") #hide warning for clipped (not real) pressure values
Zarr = np.ones_like(Xarr)*zvalue #km
XYZ = np.vstack((Xarr.flatten(),Yarr.flatten(),Zarr.flatten())).transpose() #km
d,ind = tree.query(XYZ) #find nearest neighbor to use for temperature at X,Y,Z points
#dtest,indtest = tree.query(XYZ,k=3) #find nearest 3 neighbors
#temp_k = ((T[indtest[:,0]] + 0.5*T[indtest[:,1]] + (1./3.)*T[indtest[:,2]])/3).reshape(Xarr.shape) #weighted mean
temp = T[ind].reshape(Xarr.shape)
#ellipses (surface of ellipsoids) defining planetary boundaries
u = np.linspace(0.,2.*np.pi,80) #radians
v_mantle = np.arccos(zvalue/c_mantle) #radians
x_mantle = a_mantle*np.cos(u)*np.sin(v_mantle) #km
y_mantle = a_mantle*np.sin(u)*np.sin(v_mantle) #km
v_core = np.arccos(zvalue/c_core) #radians
x_core = a_core*np.cos(u)*np.sin(v_core) #km
y_core = a_core*np.sin(u)*np.sin(v_core) #km
#arrays for plane showing slice through synestia in 3D
xarr = np.linspace(-30000,30000,3)
yarr = np.linspace(-30000,30000,3)
xarr2d, yarr2d = np.meshgrid(xarr, yarr)
zarr2d = np.ones_like(xarr2d)*zvalue
fig = plt.figure(figsize=(13.5,5))
ax = fig.add_subplot(121)
plt.title('Temperature Profile: Bird\'s Eye View')
plt.xlabel('x (km)')
plt.ylabel('y (km)')
plt.axis('equal')
plt.pcolormesh(Xarr,Yarr,temp,vmin=np.amin(T),vmax=15000)
plt.colorbar(label='temperature (K)')
plt.plot(x_mantle,y_mantle,ls='--',lw=2,color='k',label='Planet-Disk')
plt.plot(x_core,y_core,lw=2,color='r',label='Core-Mantle')
plt.legend(loc=3)
ax2 = fig.add_subplot(122, projection='3d')
plt.title('Position of Slice in 3D')
ax2.plot_surface(xarr2d, yarr2d, zarr2d)
plt.xlabel('x (km)')
plt.xlim([-30000, 30000])
ax2.tick_params(axis='x', labelsize=8)
plt.ylabel('y (km)')
plt.ylim([-30000, 30000])
ax2.tick_params(axis='y', labelsize=8)
ax2.set_zlabel('z (km)')
ax2.set_zlim(-20000, 20000)
ax2.tick_params(axis='z', labelsize=8)
plt.show()
plt.close()
style = {'description_width': 'initial'}
layout = {'width': '400px'}
interact(temperature_xz,yvalue = FloatSlider(value=0, min=-30e3, max=30e3, step=2e3, description='Distance from Rotational Axis (km)',
continuous_update=True, readout=True, readout_format='.1e', style=style, layout=layout),
Xarr=fixed(Xarr), Zarr=fixed(Zarr), T=fixed(T)
)
interact(temperature_xy,zvalue = FloatSlider(value=0, min=-20e3, max=20e3, step=2e3, description='Distance from Midplane (km)',
continuous_update=True, readout=True, readout_format='.1e', style=style, layout=layout),
Xarr=fixed(Xarr2), Yarr=fixed(Yarr), T=fixed(T)
)
```
<i>Caption</i>. Temperatures are high everywhere in a synestia. The minimum temperature is 2,000 K, but temperatures are as high as 15,000 K. Set the distances to zero. The outer yellow temperature contour (x = 10,000 km) represents the transition between the disk-like region and the planet-like region. The layers of the planet-like region, from the center outwards, are inner core (green, x $<$ 3,000 km), outer core (yellow, 3,000 km $<$ x $<$ 4,000 km), lower mantle (blue-purple, 4,000 km $<$ x $<$ 5,000 km), and upper mantle (green, 5,000 km $<$ x $<$ 10,000 km). The upper mantle and outer core are much hotter than their adjacent interior layers (lower mantle and inner core, respectively).
"Very hot" is pretty darn hot! If teleportation is ever discovered, don't ask to be sent into a synestia. There is no solid surface upon which we could stand, but if we somehow were floating in the moon-forming region of a synestia, it would feel like a (burning) hot, heavy atmospheric blanket. Imagine a volcanic eruption, but 2 to 15 times hotter.
The temperature profile of a synestia is different from that of a planet.
A planet, Earth for example, is mostly a solid body, so temperature increases with depth towards the center of the planet. Pressure also increases with depth. If a planet is approximated as a series of nested spherical shells (or layers), each with a given mass, then the weight of the outer layers presses the inner layers towards the center of the body. The core is at the highest pressure because it has the most layers on top. In a planet, pressure and temperature thus both increase together with depth.
In contrast, a synestia has multiple temperature inversions in the planet-like region where temperature decreases with depth in the mantle then sharply increases where the core (iron) meets the mantle (silicate). Both iron and silicate in a synestia's planet-like region are in their liquid phases and can be redistributed within the interior. Since iron is denser than silicate, the iron settles into a sphere at the center of the body under the influence of gravity. The silicate liquid forms a layer on top of the iron. Under the influence of gravity, the colder, denser liquid silicate and iron in the mantle and the core, respectively, settle to deeper depths while the hotter, less dense liquid buoys up to shallower depths. In the plot above, the outermost yellow ring indicates hot vapor (like an atmosphere) against the boundary of the mantle, while the innermost yellow ring marks the outer core. This is most easily seen when the distances are set to zero.
## Pressure Profile of a Synestia
Synestias are flared due to the large volume of gas in their disk-like regions. As a result, pressures in the moon-forming region are higher than expected for an equivalent traditional planet-disk system. What are the pressures at the moon-forming region (r$_{xy}$ = 20,000 km, z = 0 km) in the Earth-mass synestia in the pressure plots below? What is the range of pressures in the disk-like region in this Earth-mass synestia? How does it compare to Earth's present-day atmosphere? Do you notice a difference in the magnitude of pressures in the disk-like region versus the planet-like region?
Click the + symbol to see the code that generates the next interactive feature.
```
#do same thing for pressure
#P=synfits.SNAP_CukStewart.P #Pa
P=SNAP_CukStewart.P #Pa
#function that gets nearest neighbor information for gridded points
#and plots their physical property (pressure) value using pcolormesh
#slice through synestia showing side view
def pressure_xz(yvalue,Xarr,Zarr,P):
#yvalue,Xarr,Zarr are in km; P is in Pa
Yarr = np.ones_like(Xarr)*yvalue #km
XYZ = np.vstack((Xarr.flatten(),Yarr.flatten(),Zarr.flatten())).transpose() #km
d,ind = tree.query(XYZ) #find nearest neighbor to use for temperature at X,Y,Z points
#dtest,indtest = tree.query(XYZ,k=3) #find nearest 3 neighbors
#temp_k = ((T[indtest[:,0]] + 0.5*T[indtest[:,1]] + (1./3.)*T[indtest[:,2]])/3).reshape(Xarr.shape) #weighted mean
press = np.log10(P[ind].reshape(Xarr.shape)/101325.) #atm
#ellipses (surface of ellipsoid) defining planetary boundaries
v = np.linspace(0.,2.*np.pi,80) #radians
u_mantle = np.arcsin(yvalue/(a_mantle*np.sin(v))) #radians
x_mantle = a_mantle*np.cos(u_mantle)*np.sin(v) #km
z_mantle = c_mantle*np.cos(v) #km
u_core = np.arcsin(yvalue/(a_core*np.sin(v))) #radians
x_core = a_core*np.cos(u_core)*np.sin(v) #km
z_core = c_core*np.cos(v) #km
#arrays for plane showing slice through synestia in 3D
xarr = np.linspace(-30000,30000,3)
zarr = np.linspace(-20000,20000,3)
xarr2d, zarr2d = np.meshgrid(xarr, zarr)
yarr2d = np.ones_like(xarr2d)*yvalue
fig = plt.figure(figsize=(13.5,5))
ax = fig.add_subplot(121)
plt.title('Pressure Profile: Side View')
plt.xlabel('x (km)')
plt.ylabel('z (km)')
ax.set_aspect(aspect=1, adjustable='box', anchor='C')
plt.pcolormesh(Xarr,Zarr,press,vmin=np.amin(np.log10(P/101325.)),vmax=np.amax(np.log10(P/101325.)))
plt.colorbar(label='log$_{10}$(pressure) (atm)')
plt.plot(x_mantle,z_mantle,ls='--',lw=2,color='k',label='Planet-Disk')
plt.plot(x_core,z_core,lw=2,color='r',label='Core-Mantle')
plt.legend(loc=3)
ax2 = fig.add_subplot(122, projection='3d')
plt.title('Position of Slice in 3D')
ax2.plot_surface(xarr2d, yarr2d, zarr2d)
plt.xlabel('x (km)')
plt.xlim([-30000, 30000])
ax2.tick_params(axis='x', labelsize=8)
plt.ylabel('y (km)')
plt.ylim([-30000, 30000])
ax2.tick_params(axis='y', labelsize=8)
ax2.set_zlabel('z (km)')
ax2.set_zlim(-20000, 20000)
ax2.tick_params(axis='z', labelsize=8)
#ax2.dist = 10.5
plt.show()
plt.close()
#function that gets nearest neighbor information for gridded points
#and plots their physical property (pressure) value using pcolormesh
#slice through synestia showing bird's eye view
def pressure_xy(zvalue,Xarr,Yarr,P):
#zvalue,Xarr,Yarr are in km; P is in Pa
Zarr = np.ones_like(Xarr)*zvalue #km
XYZ = np.vstack((Xarr.flatten(),Yarr.flatten(),Zarr.flatten())).transpose() #km
d,ind = tree.query(XYZ) #find nearest neighbor to use for temperature at X,Y,Z points
#dtest,indtest = tree.query(XYZ,k=3) #find nearest 3 neighbors
#temp_k = ((T[indtest[:,0]] + 0.5*T[indtest[:,1]] + (1./3.)*T[indtest[:,2]])/3).reshape(Xarr.shape) #weighted mean
press = np.log10(P[ind].reshape(Xarr.shape)/101325.) #atm
#ellipses (surface of ellipsoids) defining planetary boundaries
u = np.linspace(0.,2.*np.pi,80) #radians
v_mantle = np.arccos(zvalue/c_mantle) #radians
x_mantle = a_mantle*np.cos(u)*np.sin(v_mantle) #km
y_mantle = a_mantle*np.sin(u)*np.sin(v_mantle) #km
v_core = np.arccos(zvalue/c_core) #radians
x_core = a_core*np.cos(u)*np.sin(v_core) #km
y_core = a_core*np.sin(u)*np.sin(v_core) #km
#arrays for plane showing slice through synestia in 3D
xarr = np.linspace(-30000,30000,3)
yarr = np.linspace(-30000,30000,3)
xarr2d, yarr2d = np.meshgrid(xarr, yarr)
zarr2d = np.ones_like(xarr2d)*zvalue
fig = plt.figure(figsize=(13.5,5))
ax = fig.add_subplot(121)
plt.title('Pressure Profile: Bird\'s Eye View')
plt.xlabel('x (km)')
plt.ylabel('y (km)')
plt.axis('equal')
plt.pcolormesh(Xarr,Yarr,press,vmin=np.amin(np.log10(P/101325.)),vmax=np.amax(np.log10(P/101325.)))
plt.colorbar(label='log$_{10}$(pressure) (atm)')
plt.plot(x_mantle,y_mantle,ls='--',lw=2,color='k',label='Planet-Disk')
plt.plot(x_core,y_core,lw=2,color='r',label='Core-Mantle')
plt.legend(loc=3)
ax2 = fig.add_subplot(122, projection='3d')
plt.title('Position of Slice in 3D')
ax2.plot_surface(xarr2d, yarr2d, zarr2d)
plt.xlabel('x (km)')
plt.xlim([-30000, 30000])
ax2.tick_params(axis='x', labelsize=8)
plt.ylabel('y (km)')
plt.ylim([-30000, 30000])
ax2.tick_params(axis='y', labelsize=8)
ax2.set_zlabel('z (km)')
ax2.set_zlim(-20000, 20000)
ax2.tick_params(axis='z', labelsize=8)
plt.show()
plt.close()
interact(pressure_xz,yvalue = FloatSlider(value=0, min=-30e3, max=30e3, step=2e3, description='Distance from Rotational Axis (km)',
continuous_update=True, readout=True, readout_format='.1e', style=style, layout=layout),
Xarr=fixed(Xarr), Zarr=fixed(Zarr), P=fixed(P)
)
interact(pressure_xy,zvalue = FloatSlider(value=0, min=-20e3, max=20e3, step=2e3, description='Distance from Midplane (km)',
continuous_update=True, readout=True, readout_format='.1e', style=style, layout=layout),
Xarr=fixed(Xarr2), Yarr=fixed(Yarr), P=fixed(P)
)
```
<i>Caption</i>. Pressures tend to be higher along the midplane and near the center of the body (x, y, z) = (0, 0, 0). There is high-pressure (100 atm) gas at large radii in the midplane (e.g. the moon-forming region). Extremely high pressures (10$^4$ to 10$^6$ atm) exist in the planet-like region, while a broader range of pressures (10$^{-4}$ to 10$^4$ atm) exists in the disk-like region.
Earth's atmosphere exerts about 1 atm at sea level, so imagine how it would feel to have the weight of tens or hundreds of Earth atmospheres surrounding you! That is what the pressure of the gas in a synestia's moon-forming region would feel like.
If you have ever been swimming, the pressure you feel (most noticeably in your ears) at a depth of 10 meters (33 feet) of water is 2 atm. The pressure felt by a synestia's vapor in its moon-forming region will be at least five times that. For the synestia in the pressure plots above, the pressure at the boundary of the moon-forming region is 100 atm, but it can be as low as 10 atm for other synestias.
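For reference, the 2 atm figure quoted above follows from hydrostatic pressure (1 atm of air above the water plus the weight of a 10 m water column):
$$P = P_0 + \rho g h \approx 101{,}325\ \text{Pa} + (1000\ \text{kg/m}^3)(9.8\ \text{m/s}^2)(10\ \text{m}) \approx 2\ \text{atm}$$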
In the mantle of this synestia, the pressure can be about 10$^4$ atm, give or take an order of magnitude. In the core of a synestia, pressures range from 10$^5$ atm to 10$^6$ atm. That's a tremendous amount of pressure!
The high pressures in a synestia are the reason why gas has such a strong effect on the dynamics within a synestia. Pressure acts as an additional significant force that allows a synestia to be very flared and large, and it emplaces more material in the moon-forming region than a traditional moon-forming disk would. High pressures also facilitate chemical equilibration of the material (making rocky material in a synestia as "homogeneous" as possible) from which the moon can form. Pressures in a synestia span an enormous range: in this case, from 0.0001 atm (gas essentially leaking out into the vacuum of space) to millions of atm (in the core).
## Density Profile of a Synestia
Now that you have an idea of how heavy a synestia's gas would feel, how thick would the gas be in various parts of a synestia (e.g. disk-like region, mantle, and core)? In other words, how dense would it be? For comparison, under standard conditions (at sea level and 15$^{\circ}$C), our air's density is 1.225 kg/m$^3$, liquid water on Earth has a density of about 1,000 kg/m$^3$, and solid iron has a density of about 10,000 kg/m$^3$.
Click the + symbol to see the code that generates the next interactive feature.
```
#do same thing for density
#rho=synfits.SNAP_CukStewart.rho #kg/m^3
rho=SNAP_CukStewart.rho #kg/m^3
#function that gets nearest neighbor information for gridded points
#and plots their physical property (density) value using pcolormesh
#slice through synestia showing side view
def density_xz(yvalue,Xarr,Zarr,rho):
#yvalue,Xarr,Zarr are in km; rho is in kg/m^3
Yarr = np.ones_like(Xarr)*yvalue #km
XYZ = np.vstack((Xarr.flatten(),Yarr.flatten(),Zarr.flatten())).transpose() #km
d,ind = tree.query(XYZ) #find nearest neighbor to use for temperature at X,Y,Z points
#dtest,indtest = tree.query(XYZ,k=3) #find nearest 3 neighbors
#temp_k = ((T[indtest[:,0]] + 0.5*T[indtest[:,1]] + (1./3.)*T[indtest[:,2]])/3).reshape(Xarr.shape) #weighted mean
dens = np.log10(rho[ind].reshape(Xarr.shape)) #kg/m^3
#ellipses (surface of ellipsoid) defining planetary boundaries
v = np.linspace(0.,2.*np.pi,80) #radians
u_mantle = np.arcsin(yvalue/(a_mantle*np.sin(v))) #radians
x_mantle = a_mantle*np.cos(u_mantle)*np.sin(v) #km
z_mantle = c_mantle*np.cos(v) #km
u_core = np.arcsin(yvalue/(a_core*np.sin(v))) #radians
x_core = a_core*np.cos(u_core)*np.sin(v) #km
z_core = c_core*np.cos(v) #km
#arrays for plane showing slice through synestia in 3D
xarr = np.linspace(-30000,30000,3)
zarr = np.linspace(-20000,20000,3)
xarr2d, zarr2d = np.meshgrid(xarr, zarr)
yarr2d = np.ones_like(xarr2d)*yvalue
fig = plt.figure(figsize=(13.5,5))
ax = fig.add_subplot(121)
plt.title('Density Profile: Side View')
plt.xlabel('x (km)')
plt.ylabel('z (km)')
plt.axis('equal')
ax.set_aspect(aspect=1, adjustable='box', anchor='C')
plt.pcolormesh(Xarr,Zarr,dens,vmin=np.amin(np.log10(rho)),vmax=np.amax(np.log10(rho)))
plt.colorbar(label='log$_{10}$(density) (kg/m$^3$)')
plt.plot(x_mantle,z_mantle,ls='--',lw=2,color='k',label='Planet-Disk')
plt.plot(x_core,z_core,lw=2,color='r',label='Core-Mantle')
plt.legend(loc=3)
ax2 = fig.add_subplot(122, projection='3d')
plt.title('Position of Slice in 3D')
ax2.plot_surface(xarr2d, yarr2d, zarr2d)
plt.xlabel('x (km)')
plt.xlim([-30000, 30000])
ax2.tick_params(axis='x', labelsize=8)
plt.ylabel('y (km)')
plt.ylim([-30000, 30000])
ax2.tick_params(axis='y', labelsize=8)
ax2.set_zlabel('z (km)')
ax2.set_zlim(-20000, 20000)
ax2.tick_params(axis='z', labelsize=8)
#ax2.dist = 10.5
plt.show()
plt.close()
#function that gets nearest neighbor information for gridded points
#and plots their physical property (density) value using pcolormesh
#slice through synestia showing bird's eye view
def density_xy(zvalue,Xarr,Yarr,rho):
#zvalue,Xarr,Yarr are in km; rho is in kg/m^3
Zarr = np.ones_like(Xarr)*zvalue #km
XYZ = np.vstack((Xarr.flatten(),Yarr.flatten(),Zarr.flatten())).transpose() #km
d,ind = tree.query(XYZ) #find nearest neighbor to use for temperature at X,Y,Z points
#dtest,indtest = tree.query(XYZ,k=3) #find nearest 3 neighbors
#temp_k = ((T[indtest[:,0]] + 0.5*T[indtest[:,1]] + (1./3.)*T[indtest[:,2]])/3).reshape(Xarr.shape) #weighted mean
dens = np.log10(rho[ind].reshape(Xarr.shape)) #kg/m^3
#ellipses (surface of ellipsoids) defining planetary boundaries
u = np.linspace(0.,2.*np.pi,80) #radians
v_mantle = np.arccos(zvalue/c_mantle) #radians
x_mantle = a_mantle*np.cos(u)*np.sin(v_mantle) #km
y_mantle = a_mantle*np.sin(u)*np.sin(v_mantle) #km
v_core = np.arccos(zvalue/c_core) #radians
x_core = a_core*np.cos(u)*np.sin(v_core) #km
y_core = a_core*np.sin(u)*np.sin(v_core) #km
#arrays for plane showing slice through synestia in 3D
xarr = np.linspace(-30000,30000,3)
yarr = np.linspace(-30000,30000,3)
xarr2d, yarr2d = np.meshgrid(xarr, yarr)
zarr2d = np.ones_like(xarr2d)*zvalue
fig = plt.figure(figsize=(13.5,5))
ax = fig.add_subplot(121)
plt.title('Density Profile: Bird\'s Eye View')
plt.xlabel('x (km)')
plt.ylabel('y (km)')
plt.axis('equal')
plt.pcolormesh(Xarr,Yarr,dens,vmin=np.amin(np.log10(rho)),vmax=np.amax(np.log10(rho)))
plt.colorbar(label='log$_{10}$(density) (kg/m$^3$)')
plt.plot(x_mantle,y_mantle,ls='--',lw=2,color='k',label='Planet-Disk')
plt.plot(x_core,y_core,lw=2,color='r',label='Core-Mantle')
plt.legend(loc=3)
ax2 = fig.add_subplot(122, projection='3d')
plt.title('Position of Slice in 3D')
ax2.plot_surface(xarr2d, yarr2d, zarr2d)
plt.xlabel('x (km)')
plt.xlim([-30000, 30000])
ax2.tick_params(axis='x', labelsize=8)
plt.ylabel('y (km)')
plt.ylim([-30000, 30000])
ax2.tick_params(axis='y', labelsize=8)
ax2.set_zlabel('z (km)')
ax2.set_zlim(-20000, 20000)
ax2.tick_params(axis='z', labelsize=8)
plt.show()
plt.close()
interact(density_xz,yvalue = FloatSlider(value=0, min=-30e3, max=30e3, step=2e3, description='Distance from Rotational Axis (km)',
continuous_update=True, readout=True, readout_format='.1e', style=style, layout=layout),
Xarr=fixed(Xarr), Zarr=fixed(Zarr), rho=fixed(rho)
)
interact(density_xy,zvalue = FloatSlider(value=0, min=-20e3, max=20e3, step=2e3, description='Distance from Midplane (km)',
continuous_update=True, readout=True, readout_format='.1e', style=style, layout=layout),
Xarr=fixed(Xarr2), Yarr=fixed(Yarr), rho=fixed(rho)
)
```
<i>Caption</i>. Densities tend to be higher along the midplane and near the center of the body (x, y, z) = (0, 0, 0). Lower densities typically indicate regions with higher gas concentrations, whereas higher densities typically indicate increasingly-liquid-dominated regions (near the center). Densities in the planet-like region (10$^{3}$ to 10$^{4}$ kg/m$^3$) are higher than those in the disk-like region (10$^{-4}$ to 10$^{2}$ kg/m$^3$).
Density varies considerably within a synestia. In the density plots above, the vapor at the farthest radii is the least dense, with densities as low as 0.0001 kg/m$^3$, 1/10,000th the density of air at standard conditions (1.225 kg/m$^3$). The core is the most dense, with densities up to 10,000 kg/m$^3$, about as dense as iron or 10 times the density of liquid water at standard conditions. Density ranges from 10$^{-4}$ to 10$^2$ kg/m$^3$ in the disk-like region and from 10$^{3}$ to 10$^{4}$ kg/m$^3$ in the planet-like region. Since the disk-like region has a higher proportion of vapor than the planet-like region, it is natural that densities in the planet-like region be at least an order of magnitude higher than in the disk-like region.
The density profile within a synestia is dependent on the temperature and pressure regime and the mass distribution. You have explored the temperature and pressure profiles of this Earth-mass synestia in the previous plots. Notice how the separate layers within the mantle (hotter at shallower depths, colder at deeper depths) and the boundary between the disk-like region and the planet-like region are easier to distinguish in the density and pressure profiles compared to the temperature profile.
Let's take a look at the gravity field to get a better sense of the mass distribution within a synestia.
## Gravity Field of a Synestia
The gravity field within a body reflects the spatial distribution of mass inside it. Physicists and geologists think of the gravity at a given location inside a body as the sum of the contributions from all of the body's individual parts surrounding that point. So, if we were to break up a planet into lots of chunks, each chunk would have its own mass and position relative to the center of the body. If each chunk has the same mass, a chunk closer to the center of the planet contributes more to the magnitude of the gravity field outside the planet than a chunk farther from the center. If a chunk has more mass because its material is denser, then the gravity at the location of that chunk is stronger than it would be for a less dense chunk at the same location.
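To make the "sum of chunks" idea concrete, here is a minimal sketch (not part of the original notebook) that approximates the gravitational potential at an observation point by summing the contributions of discrete point masses. The particle positions, masses, and observation point are made-up illustrative values.
```
import numpy as np

G = 6.67408e-11 #m^3 kg^-1 s^-2, gravitational constant

def potential_from_chunks(obs_point, chunk_positions, chunk_masses):
    """Gravitational potential (J/kg) at obs_point from a cloud of point masses."""
    #distance from each chunk to the observation point
    r = np.linalg.norm(chunk_positions - obs_point, axis=1) #m
    #superposition: U = -G * sum(m_i / r_i)
    return -G * np.sum(chunk_masses / r)

#illustrative example: split an Earth mass evenly among 1000 chunks scattered in a cloud
np.random.seed(42)
positions = np.random.normal(scale=3.e6, size=(1000, 3)) #m
masses = np.full(1000, 5.97e24/1000.) #kg

print(potential_from_chunks(np.array([1.e7, 0., 0.]), positions, masses)) #J/kg
```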
If we apply this thought process to a synestia, then we can represent a synestia, a continuous body of fluid (gas in particular), as a cloud of gas particles, where each particle has its own mass, size, position, and velocity. Let's take a look at the plot below, where we compare an Earth-mass synestia with a uniform sphere of radius R$_{Earth}$ of the same total mass as the synestia.
Click the + symbol to see the code that generates the next interactive feature.
```
# stsm load canup 2012 results
SNAP_Canup=Snapshot()
SNAP_Canup.load('TE_Example01_Cool05_snapshot_4096_long',thermo=True) #Canup 2012 style giant impact
G = 6.67408e-11 #mks #gravitational constant
#M_Earth = np.sum(synfits.SNAP_Canup.m) #kg #Earth-mass synestia
#U_syn = synfits.SNAP_Canup.pot/1e3 #kJ #gravitational potential energy of synestia point cloud
#r_syn = np.sqrt(synfits.SNAP_Canup.x**2 + synfits.SNAP_Canup.y**2 + synfits.SNAP_Canup.z**2) #m
# stsm
M_Earth = np.sum(SNAP_Canup.m) #kg #Earth-mass synestia
U_syn = SNAP_Canup.pot/1e3 #kJ #gravitational potential energy of synestia point cloud
r_syn = np.sqrt(SNAP_Canup.x**2 + SNAP_Canup.y**2 + SNAP_Canup.z**2) #m
R_Earth = 6378137. #m #equatorial radius of present-day Earth
n = len(r_syn)
U_sphere = np.empty(n)
for i in range(n):
if r_syn[i] < R_Earth:
U_sphere[i] = 0.5*.001*G*M_Earth*(r_syn[i]**2 - 3.*(R_Earth**2))/(R_Earth**3) #kJ
else:
U_sphere[i] = -G*M_Earth/(r_syn[i]*1e3) #kJ
U_diff = U_sphere - U_syn #kJ
fig = plt.figure(figsize=(8,10))
ax = fig.add_subplot(211)
plt.plot(r_syn/1e3, U_syn, 'c.', markersize=1, label='Example Earth-mass Synestia')
plt.plot(r_syn/1e3, U_sphere, 'k.', markersize=1, label='Uniform Sphere with Radius R$_{Earth}$')
plt.ylabel('Gravitational Potential U (kJ)', fontsize=15)
plt.ylim(ymax=0)
plt.xlim([0, 6e4])
ax.tick_params(axis='x', bottom=False, labelbottom=False)
plt.legend(loc=0, fontsize=12, markerscale=10)
plt.grid()
ax2 = fig.add_subplot(212, sharex = ax)
plt.plot(r_syn/1e3, U_diff, 'r.', markersize=1)
plt.xlabel('Radius r (km)', fontsize=15)
plt.ylabel('U Difference (Sphere - Synestia) (kJ)', fontsize=15)
plt.ylim([-8e3, 8e3])
plt.grid()
plt.subplots_adjust(hspace=0)
plt.show()
plt.close()
```
<i>Caption</i>. The difference in the gravitational potential energy profile between an Earth-mass synestia (cyan) and a perfect sphere of radius R$_{Earth}$ (black) with the same mass. A synestia has weaker (less negative) gravitational potential energy than a uniform sphere at greater radii (r $>$ 3,000 km, or outside the synestia's core), while the converse is true at lower radii.
You'll notice there are subtle deviations between the gravitational potential field of a synestia and that of a sphere, both outside the body and within the planet-like region. In the disk-like region of a synestia, at larger radii, the gravitational potential energy of a perfect sphere is slightly greater in magnitude (more negative) than that of a synestia. The disk-like region, where the angular velocity profile is no longer constant with radius, lies at radii r $>$ 10,000 km in this example case. A synestia has mass distributed throughout its disk-like region, whereas an equivalent sphere does not. For a sphere, all the mass is interior to the disk-like region (between the observed location and the center of the body), because this region lies outside the body of the sphere; at a location in the disk-like region there is only an inward gravitational attraction towards the center of the sphere, where all its mass is concentrated. In a synestia, a point in the disk-like region is still within the body, so there is mass both interior and exterior to it, and hence both inward- (toward the center of the body) and outward- (away from the center of the body) directed gravitational attraction. There is less exterior mass, so the net gravitational force still points inwards, but because mass is spread out and more sparsely concentrated in the disk-like region, the magnitude of the gravitational potential energy there is lower than near the center of a synestia.
In the planet-like region of a synestia, the gravitational potential energy of a perfect sphere is weaker (less negative) than that of a synestia at the smaller radii spanning a synestia's core (r $<$ 3,000 km in the plot above). There is a substantial difference in the gravitational potential energy at the center of the bodies: a synestia's dense core has a greater gravitational pull than that of an equivalent sphere. At the larger radii spanning the mantle region of the planet-like region (3,000 km $<$ r $<$ 10,000 km in the plot above), a sphere's gravitational potential energy is greater (more negative) than that of a synestia. The difference is substantial at the transition from a synestia's upper mantle to its lower mantle (about 5,000 km in the plot above). The mantle of a synestia is oblate due to the synestia's rapid rotation, so it is less dense. Because mass is more sparsely concentrated in the mantle of a synestia, the gravitational pull there is not as strong as in the same region of a sphere.
For a synestia, gravitational acceleration is much stronger along the midplane than it is off the midplane. The extra gravitational pull arises from the oblate structure of a synestia, which places more mass on the midplane than at the poles. A synestia is very flared and has a disk-like shape away from its center. Its interior is spinning very rapidly, causing mass along the midplane to bulge out. The contribution this bulge, or oblateness, makes to the gravitational field is called the second-order gravity term, or <i>J$_2$ term</i>.
The first-order term (1/r) of the gravitational acceleration that a particular body exerts on other objects is largely affected by the radius (how far the mass extends out from its center) of that body. At any point inside the sphere, the gravitational pull depends on how much mass is at larger radii compared to how much mass is at smaller radii. However, the second-order term (1/r$^3$) is largely affected by the mass distribution within that body, namely, along the midplane (z = 0, i.e. equator if you're thinking about Earth). Let's take a look below at the equation for the gravitational potential energy U of a synestia:
$$U(r) = -\frac{GM}{r}\left(1 - \frac{J_2a_{eq}^2}{2r^2}\left(\frac{3z^2}{r^2} - 1\right)\right)$$
For comparison, for a uniform sphere of radius R and mass M, the gravitational potential energy U is:
$$U(r) =
\begin{cases}
-\frac{GM}{r}, & r\geq R\\
-\frac{GM}{2R^3}(3R^2 - r^2), & r < R\\
\end{cases}
$$
The J$_2$ term will be strongest near the midplane (z = 0) and at cylindrical radii where r$_{xy}$/z $> \sqrt{2}$. G is the gravitational constant, M is the total mass of the body, r is the distance between the origin and the position of a particle in 3-D (xyz) space, a$_{eq}$ is the equatorial radius of the body, and J$_2$ is a unitless number from 0 to 1 that depends on how spherical the mass distribution is. The more spherical the body is, the smaller J$_2$ will be. If the body is more squashed, bulging, or oblate, J$_2$ will be larger. For reference, Earth is nearly spherical with a J$_2$ of 0.001083; Earth's equatorial radius is ever so slightly larger than its polar radius due to its rotation.
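To see how these two formulas behave, here is a small sketch (not part of the original notebook) that evaluates the J$_2$ potential along the midplane and compares it to the uniform-sphere potential. The values chosen for M, a$_{eq}$, and J$_2$ are placeholders for illustration, not fitted synestia parameters.
```
import numpy as np

G = 6.67408e-11 #m^3 kg^-1 s^-2, gravitational constant
M = 5.97e24 #kg, roughly an Earth mass (placeholder)
R = 6378137. #m, equatorial radius of present-day Earth
a_eq = 2.e7 #m, assumed equatorial radius of the oblate body (placeholder)
J2 = 0.5 #unitless oblateness term (placeholder; Earth's is ~0.001)

def U_J2(r, z):
    """Gravitational potential including the second-order (J2) correction."""
    return -(G*M/r)*(1. - (J2*a_eq**2/(2.*r**2))*(3.*z**2/r**2 - 1.))

def U_sphere(r):
    """Potential of a uniform sphere of radius R and mass M."""
    return np.where(r >= R, -G*M/r, -G*M*(3.*R**2 - r**2)/(2.*R**3))

r = np.linspace(7.e6, 6.e7, 5) #m, sample radii outside the sphere
print(U_J2(r, 0.)) #midplane (z = 0), where the J2 term is strongest
print(U_sphere(r))
```
On the midplane, the J$_2$ factor makes the potential more negative than $-GM/r$, which is the extra pull toward the midplane described above.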
However, changing the mass distribution within a body will also affect J$_2$. It is possible to have two bodies with the same oblateness, but different J$_2$ values (see image below). Say there is one body of uniform density and the other body is split into a denser central region (like a core) and a less dense outer region (like a mantle). The uniform density spheroid will have a larger J$_2$ than the body with a varying density distribution (more mass concentrated near the center of the body).

<i>Caption</i>. The J$_2$ gravity term reflects both the oblateness and the mass distribution of a synestia. A body with a given mass, rotation, and mean density can be oblate but either have most of its mass concentrated at its center (e.g., a dense core and a low-density atmosphere), giving a low J$_2$ (left), or have a uniform density throughout so that the extended parts of the body carry more mass, giving a higher J$_2$ (right). Credit: G. O. Hollyday.
## Takeaways
Synestias exist in a temperature, pressure, and density regime that we are unfamiliar with. Synestias are very hot, and their liquid-vapor interiors experience a wide range of pressures and densities. Rocky materials in this extreme thermal regime behave as continuous fluids: as liquids they experience tremendous gas drag, and as vapor they are pressure supported. The thermodynamics of a traditional planet-disk system do not apply to a synestia. The hot, turbulent thermal history of a synestia will contribute to the evolution of the interior dynamics of its resultant planet.
Due to the rapid rotation of the planet-like region within a synestia, a synestia is very oblate. A synestia is axisymmetric about its rotational axis. A synestia's mass distribution is different from that of a planet; more mass exists at the equator, far from the center of the synestia (less mass is concentrated towards the center of the body). This aids moon formation in a synestia because it supplies more material to the moon-forming region. The oblate gravity field of a synestia also affects the orbits of rain and moonlets inside a synestia, which can aid or hinder lunar accretion (to be explored in Jupyter Notebook 5: Forces Acting Within Synestias).
# Chinese Word Analysis
## Introduction
Learning a language is a laborious task because there are so many words to learn. However, a small set of frequent words covers a high percentage of the text we actually use.
In this document, we want to analyze how much this percentage increases as we add words to our knowledge.
For this analysis, the Chinese language has been selected. It is a particularly interesting language for this study because it allows us to analyze the language not only in terms of words but also in terms of Chinese characters. Words are formed by one or more characters, and each character can belong to one or more words.
## The data
The data has been obtained from https://invokeit.wordpress.com/frequency-word-lists/. It contains a list of words and their number of occurrences, ordered from the most frequent to the least frequent.
Executing the following script downloads the data and cleans it (removing non-Chinese words). The last command processes the data to obtain a list of occurrences of characters (not words): every word is split into its characters and the number of occurrences is recalculated (a sketch of this step is shown after the script).
```
options(repr.plot.width=15, repr.plot.height=10)
#unzip data
system('sh get_preprocess_data.sh')
```
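The character-level list is produced inside the shell script above, which is not shown here. As a rough illustration of that step (splitting each word into its characters and re-accumulating the counts), here is a small sketch; it uses Python rather than the shell/R tooling of this notebook, purely for illustration.
```
from collections import Counter

#illustrative sketch (not the notebook's actual preprocessing script):
#read "word count" lines and re-accumulate the counts per character
char_counts = Counter()
with open("data/zh_cn_clean.txt", encoding="utf-8") as f:
    for line in f:
        word, count = line.split()
        for ch in word: #every character of the word
            char_counts[ch] += int(count) #inherits the word's occurrence count

#write characters sorted by descending frequency, in the same two-column format
with open("data/zh_cn_characters.txt", "w", encoding="utf-8") as out:
    for ch, count in char_counts.most_common():
        out.write("{} {}\n".format(ch, count))
```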
We have now two data sets:
* zh_cn_clean.txt: Frequency list of Chinese words, in descending order.
* zh_cn_characters.txt: Frequency list of Chinese characters, in descending order.
We are loading these datasets in order to perform our analysis.
```
#load data
data.words<-read.table('data/zh_cn_clean.txt')
data.chars<-read.table('data/zh_cn_characters.txt',quote="")
```
## Analysis
In this document, we are going to analyse the number of words needed to cover the Chinese language.
First, we calculate the probability (in percentage) of each word/character appearing in the Chinese text. Then we perform a cumulative summation and plot it.
```
frequencies.words<-data.words[,2]
total.words<-sum(frequencies.words)
percent.words<-(frequencies.words/total.words)*100
acumPercent.words<-cumsum(percent.words)
frequencies.chars<-data.chars[,2]
total.chars<-sum(frequencies.chars)
percent.chars<-(frequencies.chars/total.chars)*100
acumPercent.chars<-cumsum(percent.chars)
#plot acumulates
par(mfrow=c(2,1))
plot(acumPercent.words,ylim=c(0,100),type="l",xlab="Words known",ylab="Percentage covered")
plot(acumPercent.chars,ylim=c(0,100),col="orange",type="l",xlab="Characters known",ylab="Percentage covered")
```
The plots show the number of words (top, black line) and characters (bottom, orange line) needed to cover a given percentage of Chinese text.
As we can see, a relatively small number of the most frequent characters covers almost 90% of Chinese text. This means that only a modest number of words needs to be learned in order to understand most of a text.
We may want to plot only the first 3000 words (black line) and the first 3000 characters (orange line).
```
plot(acumPercent.words[1:3000],ylim=c(0,100),type="l",xlab="Words (black)/Characters (orange)",ylab="Percentage covered")
lines(acumPercent.chars[1:3000],col="orange")
```
Now we have an idea of how many words or characters are needed to understand a Chinese text.
## Summary
In summary, we note the following:
* About the words:
```
print( paste("100 words cover",toString(round(acumPercent.words[100],1)),"% of the language" ,sep=" ") )
print( paste("500 words cover",toString(round(acumPercent.words[500],1)),"% of the language" ,sep=" ") )
print( paste("1000 words cover",toString(round(acumPercent.words[1000],1)),"% of the language" ,sep=" ") )
print( paste("3000 words cover",toString(round(acumPercent.words[3000],1)),"% of the language" ,sep=" ") )
print( paste("5000 words cover",toString(round(acumPercent.words[5000],1)),"% of the language" ,sep=" ") )
```
* About the characters:
```
print( paste("100 characters cover",toString(round(acumPercent.chars[100],1)),"% of the language" ,sep=" ") )
print( paste("500 characters cover",toString(round(acumPercent.chars[500],1)),"% of the language" ,sep=" ") )
print( paste("1000 characters cover",toString(round(acumPercent.chars[1000],1)),"% of the language" ,sep=" ") )
print( paste("3000 characters cover",toString(round(acumPercent.chars[3000],1)),"% of the language" ,sep=" ") )
print( paste("5000 characters cover",toString(round(acumPercent.chars[5000],1)),"% of the language" ,sep=" ") )
```
* Words/character needed:
```
print( paste("To cover 50% of the language",toString(which.min(abs(acumPercent.words - 50))),"words are needed (or",which.min(abs(acumPercent.chars - 50)),"characters)." ,sep=" ") )
print( paste("To cover 60% of the language",toString(which.min(abs(acumPercent.words - 60))),"words are needed (or",which.min(abs(acumPercent.chars - 60)),"characters)." ,sep=" ") )
print( paste("To cover 80% of the language",toString(which.min(abs(acumPercent.words - 80))),"words are needed (or",which.min(abs(acumPercent.chars - 80)),"characters)." ,sep=" ") )
print( paste("To cover 90% of the language",toString(which.min(abs(acumPercent.words - 90))),"words are needed (or",which.min(abs(acumPercent.chars - 90)),"characters)." ,sep=" ") )
```
Similar patterns are also true for other languages.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import seaborn as sns
%matplotlib inline
np.random.seed(2)
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import itertools
from keras.utils.np_utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, Activation
from keras.layers.normalization import BatchNormalization
from keras.optimizers import RMSprop
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ReduceLROnPlateau
sns.set(style='white', context='notebook', palette='deep')
# Load data
train = pd.read_csv(r'E:\Open Source Dataset Code\Dataset\MNIST\train.csv')
test = pd.read_csv(r'E:\Open Source Dataset Code\Dataset\MNIST\test.csv')
Y_train = train['label']
# Drop 'label' column
X_train = train.drop(labels=['label'], axis=1)
# free some space
del train
g = sns.countplot(Y_train)
Y_train.value_counts()
# Check the data
X_train.isnull().any().describe()
test.isnull().any().describe()
# Normalization
X_train /= 255.0
test /= 255.0
X_train = X_train.values.reshape(-1, 28, 28, 1)
test = test.values.reshape(-1, 28, 28, 1)
# Encode labels to one hot vectors (ex : 2 -> [0,0,1,0,0,0,0,0,0,0])
Y_train = to_categorical(Y_train, num_classes=10)
# Set the random seed
random_seed = 2
# Split the train and the validation set for the fitting
X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size=0.1, random_state=random_seed)
# Some examples
g = plt.imshow(X_train[0][:,:,0])
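# CNN architecture defined below (two convolutional blocks followed by a dense head):
# [Conv2D(32, 5x5) -> ReLU] x2 -> MaxPool(2x2) -> Dropout(0.25)
# [Conv2D(64, 3x3) -> ReLU] x2 -> MaxPool(2x2, stride 2) -> Dropout(0.25)
# Flatten -> Dense(256, ReLU) -> Dropout(0.5) -> Dense(10, softmax)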
model = Sequential()
model.add(Conv2D(32, (5, 5), padding='Same', input_shape=(28, 28, 1)))
# model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Conv2D(32, (5, 5), padding='same'))
# model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPool2D((2,2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='Same'))
# model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3), padding='Same'))
# model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPool2D((2, 2), (2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
# Define the optimizer and compile the model
optimizer = RMSprop()
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
lr_reduction = ReduceLROnPlateau(monitor='val_acc', patience=3, verbose=1, factor=0.5, min_lr=0.00001)
epochs = 30
batch_size = 128
# With data augmentation to prevent overfitting
data_gen = ImageDataGenerator(
featurewise_center=False,
samplewise_center=False,
featurewise_std_normalization=False,
samplewise_std_normalization=False,
zca_whitening=False,
rotation_range=10,
zoom_range=0.1,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=False,
vertical_flip=False
)
data_gen.fit(X_train)
# Fit the model
history = model.fit_generator(data_gen.flow(X_train, Y_train, batch_size=batch_size),
epochs=epochs, validation_data=(X_val, Y_val),
verbose=2, steps_per_epoch=X_train.shape[0]//batch_size,
callbacks=[lr_reduction])
```
# Evaluate the model
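Before looking at the per-class breakdown, a single aggregate score on the held-out split is a useful sanity check. The snippet below is a minimal sketch, assuming the `model`, `X_val` and `Y_val` objects from the training cell above are still in memory; it only reports the overall loss and accuracy that Keras already tracks during training.
```
# Aggregate loss/accuracy on the 10% validation split
# (sketch: assumes `model`, `X_val`, `Y_val` from the training cell are defined)
val_loss, val_acc = model.evaluate(X_val, Y_val, verbose=0)
print("Validation loss: {:.4f} - accuracy: {:.4f}".format(val_loss, val_acc))
```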
## Training and validation curves
```
# Plot the loss and accuracy curves for training and validation
fig, ax = plt.subplots(2,1)
ax[0].plot(history.history['loss'], color='b', label="Training loss")
ax[0].plot(history.history['val_loss'], color='r', label="Validation loss")
legend0 = ax[0].legend(loc='best', shadow=True)
ax[1].plot(history.history['acc'], color='b', label="Training accuracy")
ax[1].plot(history.history['val_acc'], color='r',label="Validation accuracy")
legend1 = ax[1].legend(loc='best', shadow=True)
def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`
"""
    # Normalize first (if requested) so the image and the cell annotations agree
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    thresh = cm.max() / 2
for i,j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(
j, i, cm[i,j],
horizontalalignment='center',
color='white' if cm[i,j] > thresh else 'black'
)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Predict the values from the validation dataset
Y_pred = model.predict(X_val)
# Convert prediction probabilities to predicted class labels
Y_pred_classes = np.argmax(Y_pred, axis=1)
# Convert one-hot validation labels back to class labels
Y_true = np.argmax(Y_val, axis=1)
# compute the confusion matrix
confusion_mtx = confusion_matrix(Y_true, Y_pred_classes)
# plot the confusion matrix
plot_confusion_matrix(confusion_mtx, classes=range(10))
Y_pred.shape
Y_true[0]
# Errors are the cases where the predicted label differs from the true label
errors = (Y_pred_classes != Y_true)
# Select those wrong predictions
Y_pred_classes_errors = Y_pred_classes[errors]
Y_pred_errors = Y_pred[errors]
Y_true_errors = Y_true[errors]
X_val_errors = X_val[errors]
Y_pred_errors[0]
def display_errors(errors_index,img_errors,pred_errors, obs_errors):
""" This function shows 6 images with their predicted and real labels"""
n = 0
nrows = 2
ncols = 3
fig, ax = plt.subplots(nrows,ncols,sharex=True,sharey=True)
for row in range(nrows):
for col in range(ncols):
error = errors_index[n]
ax[row,col].imshow((img_errors[error]).reshape((28,28)))
ax[row,col].set_title("Predicted label :{}\nTrue label :{}".format(pred_errors[error],obs_errors[error]))
n += 1
# Probabilities of the wrong predicted numbers
Y_pred_errors_prob = np.max(Y_pred_errors,axis = 1)
# Predicted probabilities of the true values in the error set
true_prob_errors = np.diagonal(np.take(Y_pred_errors, Y_true_errors, axis=1))
# Difference between the probability of the predicted label and the true label
delta_pred_true_errors = Y_pred_errors_prob - true_prob_errors
# Sort the errors by how confidently wrong the prediction was
sorted_delta_errors = np.argsort(delta_pred_true_errors)
# The 6 largest deltas, i.e. the most confidently wrong predictions
most_important_errors = sorted_delta_errors[-6:]
# Show the top 6 errors
display_errors(most_important_errors, X_val_errors, Y_pred_classes_errors, Y_true_errors)
X_train.shape
# Predict results
results = model.predict(test)
results = np.argmax(results, axis=1)
results = pd.Series(results, name='Label')
submission = pd.concat([pd.Series(range(1, len(results) + 1), name='ImageId'), results], axis=1)
submission.to_csv('LeNet_MNIST_datagen.csv', index=False)
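# Optionally persist the trained network for reuse (the filename is illustrative,
# not part of the original notebook):
# model.save('mnist_cnn_datagen.h5')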
```
|
github_jupyter
| 0.908208 | 0.667215 |