Alban repo/Homework 2/Homework 2 submission files/concrete_compressive_strength_data_curation.ipynb
|
###Markdown
2.s986, Wed 17 September, 2020, Alban Cobi. A useful resource for learning Jupyter Notebook functionality:
###Code
%%HTML
<iframe src="https://www.youtube.com/embed/HW29067qVWk" title="Jupyter Notebook Tutorial: Introduction, Setup, and Walkthrough"></iframe>
###Output
_____no_output_____
###Markdown
Notes to self: To run bash commands, prefix the command with an exclamation mark in a code cell: !pwd \To list magic commands, type %lsmagic \To run commands in other languages, use the cell magic commands (e.g. %%latex) \To view user-defined variables, type the magic command %who \
###Code
!pwd
%lsmagic
%%latex
\noindent
\newline
\bf{Hello World}
###Output
_____no_output_____
###Markdown
a) The code below imports the packages/libraries used for managing the data; the pandas library is for working with data sets
###Code
import pandas as pd #package for working with data sets
from matplotlib import pyplot as plt #'from library import function', package for plotting/visualizing data
import numpy as np #package for scientific/numerical computing
import seaborn as sb #package for statistical graphics
from sklearn import preprocessing #for normalizing dataset
###Output
_____no_output_____
###Markdown
b) Code below reads the data and stores it in a pandas DataFrame called 'data'
###Code
data = pd.read_excel('Concrete_Data.xls') #This line creates a DataFrame object and stores it in variable 'data'
type(data) #type() spits out the data type such as string, list, etc.
###Output
_____no_output_____
###Markdown
Code below displays the columns of the data
###Code
data.columns
###Output
_____no_output_____
###Markdown
Code below checks the number of rows and columns
###Code
len(data) #number of rows
len(data.columns) #number of columns
###Output
_____no_output_____
###Markdown
c) Code below checks how many columns are input variables and output variables
###Code
%%bash
ls
cat Concrete_Readme.txt | grep 'input\|output'
###Output
concrete_compressive_strength_data_curation.ipynb
Concrete_Data.xls
Concrete_Readme.txt
Attribute breakdown: 8 quantitative input variables, and 1 quantitative output variable
###Markdown
d) Code below eliminates 'Fine Aggregate' from data set
###Code
#del data["Fine Aggregate (component 7)(kg in a m^3 mixture)"]
#data.pop('Fine Aggregate (component 7)(kg in a m^3 mixture)')
###Output
_____no_output_____
###Markdown
I couldn't get the above commands to work so I tried a different way, below.
###Code
data_columns = ['Cement (component 1)(kg in a m^3 mixture)',
'Blast Furnace Slag (component 2)(kg in a m^3 mixture)',
'Fly Ash (component 3)(kg in a m^3 mixture)',
'Water (component 4)(kg in a m^3 mixture)',
'Superplasticizer (component 5)(kg in a m^3 mixture)',
'Coarse Aggregate (component 6)(kg in a m^3 mixture)',
'Fine Aggregate (component 7)(kg in a m^3 mixture)', 'Age (day)',
'Concrete compressive strength(MPa, megapascals) ']
del data_columns[6]  # remove 'Fine Aggregate' (index 6) from the list of columns to keep
data = data[data_columns]  # keep only the remaining columns
print(data.columns)
###Output
Index(['Cement (component 1)(kg in a m^3 mixture)',
'Blast Furnace Slag (component 2)(kg in a m^3 mixture)',
'Fly Ash (component 3)(kg in a m^3 mixture)',
'Water (component 4)(kg in a m^3 mixture)',
'Superplasticizer (component 5)(kg in a m^3 mixture)',
'Coarse Aggregate (component 6)(kg in a m^3 mixture)', 'Age (day)',
'Concrete compressive strength(MPa, megapascals) '],
dtype='object')
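###Markdown
An alternative worth noting: pandas' DataFrame.drop removes a column directly. The cell below is a minimal sketch of that approach; it re-reads the spreadsheet so it does not depend on the cells above (and assumes the column name matches the spreadsheet header exactly).
###Code
# Sketch of the drop() approach, starting from a fresh read of the spreadsheet
data_alt = pd.read_excel('Concrete_Data.xls').drop(
columns=['Fine Aggregate (component 7)(kg in a m^3 mixture)'])
print(data_alt.columns)
###Output
_____no_output_____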
###Markdown
e) Normalize the data to mean 0 and standard deviation 1, and check that it is normalized
###Code
%who
data.head()  # shows the first five rows of the raw data, before normalization
data_normalized = preprocessing.scale(data)  # standardizes each column to mean 0 and standard deviation 1
data_normalized[:10]  # like the bash command head: shows only the first 10 rows (the number after the ":")
###Output
_____no_output_____
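###Markdown
To confirm the scaling worked, each column of data_normalized should now have mean approximately 0 and standard deviation approximately 1; the quick check below prints both, column by column.
###Code
# Check the normalization column by column (values should be ~0 and ~1 up to floating-point error)
print(data_normalized.mean(axis=0))
print(data_normalized.std(axis=0))
###Output
_____no_output_____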
###Markdown
Everything from here on is for my own intellectual enlightenment. Visualizing the data before and after normalization was initially throwing errors at me; the fix was to index the normalized NumPy array by column rather than by row.
###Code
type(data)
type(data_normalized)  # preprocessing.scale returns a NumPy array rather than a DataFrame
plt.title("Data Visualization Demo")
plt.xlabel("input (first column, normalized)")
plt.ylabel("output (compressive strength, normalized)")
plt.xlim(-3,3)  # normalized columns have standard deviation 1, so +/-3 captures most points
plt.ylim(-3,3)
plt.plot(data_normalized[:,0], data_normalized[:,-1], '.')  # first input column vs the last column (the response)
plt.show()
###Output
_____no_output_____
###Markdown
I used the source code from class materials to create scatter plots of the raw data
###Code
fs = 20
n_col = 4 # num of columns per row in the figure
y_indx = -1 #Compressive Strength [MPa] # Choose which column is the response variable
color_indx = -2 #Age [days] # Choose which column is shown as the superimposed color
column_names = data.columns.values
cmap = plt.get_cmap('coolwarm', 10)
print("The color is for variable", column_names[color_indx], "with red as the highest and blue as the lowest")
for n in np.arange(0, 8, n_col):
fig,axes = plt.subplots(1, n_col, figsize=(18, 3.5), sharey = False)
for i in np.arange(n_col):
#print(n)
if n< len(column_names)-1:
im = axes[i].scatter(data.iloc[:,n],data.iloc[:,y_indx],
c=data.iloc[:,color_indx], s = 20, cmap=cmap, alpha =0.5, edgecolors = 'face')
axes[i].set_xlabel(column_names[n])
else:
axes[i].axis("off")
#axes[i].set_title(sf_cols[n])
n = n+1
axes[0].set_ylabel(column_names[y_indx])
for i in range(len(axes)):
axes[i].tick_params(direction='in', length=5, width=1, labelsize = fs*.8, grid_alpha = 0.5)
axes[i].grid(True, linestyle='-.')
plt.show()
###Output
The color is for variable Age (day) with red as the highest and blue as the lowest
###Markdown
normalized data
###Code
for n in np.arange(0, 8, n_col):
fig,axes = plt.subplots(1, n_col, figsize=(18, 3.5), sharey = False)
for i in np.arange(n_col):
#print(n)
if n< len(column_names)-1:
im = axes[i].scatter(data_normalized[:,n],data_normalized[:,y_indx],
c=data_normalized[:,color_indx], s = 20, cmap=cmap, alpha =0.5, edgecolors = 'face')
axes[i].set_xlabel(column_names[n])
else:
axes[i].axis("off")
#axes[i].set_title(sf_cols[n])
n = n+1
axes[0].set_ylabel(column_names[y_indx])
for i in range(len(axes)):
axes[i].tick_params(direction='in', length=5, width=1, labelsize = fs*.8, grid_alpha = 0.5)
axes[i].grid(True, linestyle='-.')
plt.show()
###Output
_____no_output_____
|
REMARKs/KrusellSmith/KrusellSmith.ipynb
|
###Markdown
[Krusell Smith (1998)](https://www.journals.uchicago.edu/doi/pdf/10.1086/250034)- Original version by Tim Munday - Comments and extensions by Tao Wang- Further edits by Chris Carroll [](https://mybinder.org/v2/gh/econ-ark/DemARK/master?filepath=notebooks%2FKrusellSmith.ipynb) OverviewThe benchmark Krusell-Smith model has the following broad features: * The aggregate state switches between "good" and "bad" with known probabilities * All consumers experience the same aggregate state for the economy (good or bad) * _ex ante_ there is only one type of consumer, which is infinitely lived * _ex post_ heterogeneity arises from uninsurable idiosyncratic income shocks * Specifically, individuals are at risk of spells of unemployment * In a spell of unemployment, their income is zero Thus, each agent faces two types of uncertainty: About their employment state, and about the income they will earn when employed. And the values of income and unemployment risk depend on the aggregate state. Details IdiosyncraticEach agent _attempts_ to supply an amount of productive labor $\ell$ in each period. (Here and below we mostly follow the notation of Krusell and Smith (1998)).However, whether they _succeed_ in supplying that labor (and earning a corresponding wage) is governed by the realization of the stochastic variable $\epsilon$. If the agent is unlucky, $\epsilon$ is zero and the agent is unemployed. The amount of labor they succeed in supplying is thus $\epsilon\ell$. AggregateAggregate output ($\bar{y}$) is produced from capital and labor using a Cobb-Douglas production function. (Bars over variables indicate the aggregate value of a variable that has different values across different idiosyncratic consumers).$z$ denotes the aggregate shock to productivity. $z$ can take two values, either $z_g$ -- the "good" state, or $z_b < z_g$ -- the "bad" state. Consumers gain income from providing labor, and from the rental return on any capital they own. Labor and capital markets are perfectly efficient, so both factors are paid their marginal products.The agent can choose to save by buying capital $k$, which is bounded below at the borrowing constraint of 0.Putting all of this together, aggregate output is given by: \begin{eqnarray}\bar{y} & = & z\bar{k}^\alpha \bar{\ell}^{1-\alpha}\end{eqnarray} The aggregate shocks $z$ follow first-order Markov chains with the transition probability of moving from state $s$ to state $s'$ denoted by $\pi_{ss'}$. The aggregate shocks and individual shocks are correlated: The probability of being unemployed is higher in bad times, when aggregate productivity is low, than in good times, when aggregate productivity is high. Idiosyncratic and Aggregate TogetherThe individual shocks satisfy the law of large numbers, and the model is constructed so that the number of agents who are unemployed in the good state always equals $u_g$, and is always $u_b$ in the bad state. Given the aggregate state, individual shocks are independent of each other.For the individual, the probability of moving from a good state and employment to a bad state and unemployment is denoted $\pi_{gb10}$, with similar notation for the other transition probabilities.(Krusell and Smith allow for serially correlated unemployment at the idiosyncratic level. Here we will simplify this and have unemployment be serially uncorrelated.) Finally, $\Gamma$ denotes the current distribution of consumers over capital and employment status, and $H$ denotes the law of motion of this distribution. 
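As a quick numerical aside on the Cobb-Douglas technology above: the factor prices are the marginal products $r = \alpha z (\bar{k}/\bar{\ell})^{\alpha-1}$ and $w = (1-\alpha) z (\bar{k}/\bar{\ell})^{\alpha}$, and the sketch below evaluates them for purely illustrative values of $\alpha$, $z$, $\bar{k}$, and $\bar{\ell}$ (placeholders, not the calibration used later in the notebook).
###Code
# Illustrative marginal products from the Cobb-Douglas technology (placeholder values)
alpha = 0.36   # capital share (illustrative)
z = 1.01       # aggregate productivity (illustrative)
kbar = 40.0    # aggregate capital (illustrative)
lbar = 1.0     # aggregate effective labor (illustrative)
r = alpha * z * (kbar / lbar) ** (alpha - 1)     # rental rate of capital
w = (1 - alpha) * z * (kbar / lbar) ** alpha     # wage
print(r, w)
###Output
_____no_output_____
###Markdown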
The Idiosyncratic Individual's Problem Given the Aggregate StateThe individual's problem is:\begin{eqnarray*}V(k, \epsilon; \Gamma, z) &=& \max_{k'}\{U(c) + \beta \mathbb{E}[V(k' ,\epsilon'; \Gamma', z')|z, \epsilon]\} \\c + k' &=& r(\bar{k}, \bar{\ell}, z)k + w(\bar{k}, \bar{\ell}, z)\ell\epsilon + (1-\delta)k \\\Gamma' &=& H(\Gamma, z, z') \\k' &\geq& 0 \\\end{eqnarray*} Krusell and Smith define an equilibrium as a law of motion $H$, a value function $V$, a rule for updating capital $f$ and pricing functions $r$ and $w$, such that $V$ and $f$ solve the consumer's problem, $r$ and $w$ denote the marginal products of capital and labour, and $H$ is consistent with $f$ (i.e. if we add up all of the individual agents' capital choices we get the correct distribution of capital). Discussion of the KS AlgorithmIn principle, $\Gamma$ is a high-dimensional object because it includes the whole distribution of individuals' wealth and employment states. Because the optimal amount to save is a nonlinear function of the level of idiosyncratic $k$, next period's aggregate capital stock $\bar{k}'$ depends on the distribution of the holdings of idiosyncratic $k$ across the population of consumers. Therefore the law of motion $H$ is not a trivial function of $\Gamma$. KS simplified this problem by noting the following. 1. The agent cares about the future aggregate state only insofar as that state affects their own personal value of $c$1. Future values of $c$ depend on the aggregate state only through the budget constraint1. The channels by which the budget constraint depends on the aggregate state are: * The probability distributions of $\epsilon$ and $z$ are affected by the aggregate state * Interest rates and wages depend on the future values of $\bar{k}$ and $\bar{\ell}$1. The probability distributions for the future values of $\{\epsilon, z\}$ are known * They are fully determined by the Markov transition matrices1. But the values of $r$ and $w$ are both determined by the future value of $\bar{k}$ (in combination with the exogenous value of $\bar{\ell}$) * So the only _endogenous_ object that the agent needs to form expectations about, in order to have a complete rational expectation about everything affecting them, is $\bar{k}'$The key result in Krusell and Smith is the discovery that a very simple linear rule does an extraordinarily good job (though not quite perfect) in forecasting $\bar{k}'$. They then argue that, since rationality is surely bounded to some degree, the solution that an agent obtains using a good forecasting rule for $\bar{k}'$ is "good enough" to compute an "approximate" solution to the consumer's optimization problem.They define a generic algorithm to find a forecasting rule for $\bar{k}$ as follows: 1. Choose the number of moments $n$ of the distribution of $k$ to be included in the set of variables to forecast $\bar{k}'$. In the simplest case, $n=1$, the only forecasting variable for next period's $\bar{k}'$ is the mean (the first moment) of current capital, $\bar{k}$.2. Each individual adopts the same belief about the law of motion of these moments, $H_I$, and finds the optimal decision policy, $f_I$, contingent on that guess.3. Use the optimal policy to simulate a history of aggregate capital with a large number of agents. 4. Characterize the realized law of motion using the same number of moments $n$ 5. Compare it with $H_I$, which is taken as given by individuals. 6. Iterate until the two converge. 
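To make step 4 of the algorithm concrete, the sketch below fits the log-linear law of motion $\log \bar{k}' = a_0 + a_1 \log \bar{k}$ by least squares, one regression per aggregate state. The capital history used here is synthetic (randomly generated) purely to show the mechanics; in the actual algorithm it would come from the simulation in step 3.
###Code
# Sketch: estimate the state-dependent log-linear forecasting rule from a (synthetic) history
import numpy as np
rng = np.random.default_rng(0)
T = 1000
z_hist = rng.integers(0, 2, size=T)                      # 0 = good state, 1 = bad state (arbitrary coding)
logk = np.log(40.0) + np.cumsum(rng.normal(0, 0.01, T))  # fake path of log aggregate capital
for z in (0, 1):
    idx = np.where(z_hist[:-1] == z)[0]                  # periods that start in aggregate state z
    X = np.column_stack([np.ones(len(idx)), logk[idx]])  # regressors: constant and log k_t
    coefs, *_ = np.linalg.lstsq(X, logk[idx + 1], rcond=None)
    print("state", z, ": intercept =", round(float(coefs[0]), 4), " slope =", round(float(coefs[1]), 4))
###Output
_____no_output_____
###Markdown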
In the end, the solution to the original problem is well approximated by the following simplified problem:\begin{eqnarray*}V(k, \epsilon; \bar k, z) &=& \max_{c, k'}\{U(c) + \beta \mathbb{E}[V(k' ,\epsilon'; \bar k', z')|z, \epsilon]\} \\c + k' &=& r(\bar{k}, \bar{\ell}, z)k + w(\bar{k}, \bar{\ell}, z)\ell\epsilon + (1-\delta)k \\\text{When }~ z=z_g, \quad \mathbb{E}[\log\bar{k}'] & = & a_0 + a_1 \log\bar k \\\text{When }~ z=z_b, \quad \mathbb{E}[\log\bar{k}'] & = & b_0 + b_1 \log\bar k \\k' &\geq& 0 \\\end{eqnarray*} Implementation Using the HARK Toolkit The Consumer
###Code
# Import generic setup tools
# This is a jupytext paired notebook that autogenerates KrusellSmith.py
# which can be executed from a terminal command line via "ipython KrusellSmith.py"
# But a terminal does not permit inline figures, so we need to test jupyter vs terminal
# Google "how can I check if code is executed in the ipython notebook"
# Determine whether to make the figures inline (for spyder or jupyter)
# vs whatever is the automatic setting that will apply if run from the terminal
import remark # 20191113 CDC to Seb: Where do you propose that this module should go (permanently?)
# in the /binder folder, where it could be installed by postBuild (unix) or postBuild.bat?
# Import the plot-figure library matplotlib, plus numpy and deepcopy (used below)
import matplotlib.pyplot as plt
import numpy as np
from copy import deepcopy
from HARK.utilities import plotFuncs, plotFuncsDer
# Import components of HARK needed for solving the KS model
from HARK.ConsumptionSaving.ConsAggShockModel import *
import HARK.ConsumptionSaving.ConsumerParameters as Params
# Markov consumer type that allows aggregate shocks (redundant but instructive)
from HARK.ConsumptionSaving.ConsAggShockModel import AggShockMarkovConsumerType
# Define a dictionary to make an 'instance' of our Krusell-Smith consumer.
# The folded dictionary below contains many parameters to the
# AggShockMarkovConsumerType agent that are not needed for the KS model
KSAgentDictionary = {
"CRRA": 1.0, # Coefficient of relative risk aversion
"DiscFac": 0.99, # Intertemporal discount factor
"LivPrb" : [1.0], # Survival probability
"AgentCount" : 10000, # Number of agents of this type (only matters for simulation)
"aNrmInitMean" : 0.0, # Mean of log initial assets (only matters for simulation)
"aNrmInitStd" : 0.0, # Standard deviation of log initial assets (only for simulation)
"pLvlInitMean" : 0.0, # Mean of log initial permanent income (only matters for simulation)
"pLvlInitStd" : 0.0, # Standard deviation of log initial permanent income (only matters for simulation)
"PermGroFacAgg" : 1.0, # Aggregate permanent income growth factor (only matters for simulation)
"T_age" : None, # Age after which simulated agents are automatically killed
"T_cycle" : 1, # Number of periods in the cycle for this agent type
# Parameters for constructing the "assets above minimum" grid
"aXtraMin" : 0.001, # Minimum end-of-period "assets above minimum" value
"aXtraMax" : 20, # Maximum end-of-period "assets above minimum" value
"aXtraExtra" : [None], # Some other value of "assets above minimum" to add to the grid
"aXtraNestFac" : 3, # Exponential nesting factor when constructing "assets above minimum" grid
"aXtraCount" : 24, # Number of points in the grid of "assets above minimum"
# Parameters describing the income process
"PermShkCount" : 1, # Number of points in discrete approximation to permanent income shocks - no shocks of this kind!
"TranShkCount" : 1, # Number of points in discrete approximation to transitory income shocks - no shocks of this kind!
"PermShkStd" : [0.], # Standard deviation of log permanent income shocks - no shocks of this kind!
"TranShkStd" : [0.], # Standard deviation of log transitory income shocks - no shocks of this kind!
"UnempPrb" : 0.0, # Probability of unemployment while working - no shocks of this kind!
"UnempPrbRet" : 0.00, # Probability of "unemployment" while retired - no shocks of this kind!
"IncUnemp" : 0.0, # Unemployment benefits replacement rate
"IncUnempRet" : 0.0, # "Unemployment" benefits when retired
"tax_rate" : 0.0, # Flat income tax rate
"T_retire" : 0, # Period of retirement (0 --> no retirement)
"BoroCnstArt" : 0.0, # Artificial borrowing constraint; imposed minimum level of end-of period assets
"cycles": 0, # Consumer is infinitely lived
"PermGroFac" : [1.0], # Permanent income growth factor
# New Parameters that we need now
'MgridBase': np.array([0.1,0.3,0.6,
0.8,0.9,0.98,
1.0,1.02,1.1,
1.2,1.6,2.0,
3.0]), # Grid of capital-to-labor-ratios (factors)
'MrkvArray': np.array([[0.875,0.125],
[0.125,0.875]]), # Transition probabilities for macroecon. [i,j] is probability of being in state j next
# period conditional on being in state i this period.
'PermShkAggStd' : [0.0,0.0], # Standard deviation of log aggregate permanent shocks by state. No continuous shocks in a state.
'TranShkAggStd' : [0.0,0.0], # Standard deviation of log aggregate transitory shocks by state. No continuous shocks in a state.
'PermGroFacAgg' : 1.0
}
# Here we restate just the "interesting" parts of the consumer's specification
KSAgentDictionary['CRRA'] = 1.0 # Relative risk aversion
KSAgentDictionary['DiscFac'] = 0.99 # Intertemporal discount factor
KSAgentDictionary['cycles'] = 0 # cycles=0 means consumer is infinitely lived
# KS assume that 'good' and 'bad' times are of equal expected duration
# The probability of a change in the aggregate state is p_change=0.125
p_change=0.125
p_remain=1-p_change
# Now we define macro transition probabilities for AggShockMarkovConsumerType
# [i,j] is probability of being in state j next period conditional on being in state i this period.
# In both states, there is 0.875 chance of staying, 0.125 chance of switching
AggMrkvArray = \
np.array([[p_remain,p_change], # Probabilities of states 0 and 1 next period if in state 0
[p_change,p_remain]]) # Probabilities of states 0 and 1 next period if in state 1
KSAgentDictionary['MrkvArray'] = AggMrkvArray
# Create the Krusell-Smith agent as an instance of AggShockMarkovConsumerType
KSAgent = AggShockMarkovConsumerType(**KSAgentDictionary)
###Output
_____no_output_____
###Markdown
Now we need to specify the income distribution. The HARK toolkit allows for two components of labor income: Persistent (or permanent), and transitory. Using the KS notation above, a HARK consumer's income is\begin{eqnarray}y & = & w p \ell \epsilon \end{eqnarray}where $p$ is the persistent component of income. Krusell and Smith did not incorporate a persistent component of income, however, so we will simply calibrate $p=1$ for all states.For each of the two aggregate states we need to specify: * The _proportion_ of consumers in the $e$ and the $u$ states * The level of persistent/permanent productivity $p$ (always 1) * The ratio of actual to permanent productivity in each state $\{e,u\}$ * In the KS notation, this is $\epsilon\ell$
###Code
# Construct the income distribution for the Krusell-Smith agent
prb_eg = 0.96 # Probability of employment in the good state
prb_ug = 1-prb_eg # Probability of unemployment in the good state
prb_eb = 0.90 # Probability of employment in the bad state
prb_ub = 1-prb_eb # Probability of unemployment in the bad state
p_ind = 1 # Persistent component of income is always 1
ell_ug = ell_ub = 0 # Labor supply is zero for unemployed consumers in either agg state
ell_eg = 1.0/prb_eg # Labor supply for employed consumer in good state
ell_eb = 1.0/prb_eb # Labor supply for employed consumer in bad state; normalization: prb_eg*ell_eg + prb_ug*ell_ug = prb_eb*ell_eb + prb_ub*ell_ub = 1
# IncomeDstn is a list of lists, one for each aggregate Markov state
# Each contains three arrays of floats, representing a discrete approximation to the income process.
# Order:
# state probabilities
# idiosyncratic persistent income level by state (KS have no persistent shocks p_ind is always 1.0)
# idiosyncratic transitory income level by state
KSAgent.IncomeDstn[0] = \
[[np.array([prb_eg,prb_ug]),np.array([p_ind,p_ind]),np.array([ell_eg,ell_ug])], # Agg state good
[np.array([prb_eb,prb_ub]),np.array([p_ind,p_ind]),np.array([ell_eb,ell_ub])] # Agg state bad
]
###Output
_____no_output_____
###Markdown
Up to this point, individual agents do not have enough information to solve their decision problem yet. What is missing are beliefs about the endogenous macro variables $r$ and $w$, both of which are functions of $\bar{k}$. The Aggregate Economy
###Code
from HARK.ConsumptionSaving.ConsAggShockModel import CobbDouglasMarkovEconomy
KSEconomyDictionary = {
'PermShkAggCount': 1,
'TranShkAggCount': 1,
'PermShkAggStd': [0.0,0.0],
'TranShkAggStd': [0.0,0.0],
'DeprFac': 0.025, # Depreciation factor
'CapShare': 0.36, # Share of capital income in cobb-douglas production function
'DiscFac': 0.99,
'CRRA': 1.0,
'PermGroFacAgg': [1.0,1.0],
'AggregateL':1.0, # Fix aggregate labor supply at 1.0 - makes interpretation of z easier
'act_T':1200, # Number of periods for economy to run in simulation
'intercept_prev': [0.0,0.0], # Make some initial guesses at linear savings rule intercepts for each state
'slope_prev': [1.0,1.0], # Make some initial guesses at linear savings rule slopes for each state
'MrkvArray': np.array([[0.875,0.125],
[0.125,0.875]]), # Transition probabilities
'MrkvNow_init': 0 # Pick a state to start in (we pick the first state)
}
# The 'interesting' parts of the CobbDouglasMarkovEconomy
KSEconomyDictionary['CapShare'] = 0.36
KSEconomyDictionary['MrkvArray'] = AggMrkvArray
KSEconomy = CobbDouglasMarkovEconomy(agents = [KSAgent], **KSEconomyDictionary) # Combine production and consumption sides into an "Economy"
###Output
_____no_output_____
###Markdown
We have now populated the $\texttt{KSEconomy}$ with the $\texttt{KSAgent}$ defined before. This basically tells the agents to take the macro state from the $\texttt{KSEconomy}$. Now we construct the $\texttt{AggShkDstn}$ that specifies the dynamics of the $\texttt{KSEconomy}$. The structure of the inputs for $\texttt{AggShkDstn}$ follows the same logic as for $\texttt{IncomeDstn}$, but because the KS aggregate states are very simple, each aggregate state has only one possible outcome, which occurs with probability 1.
###Code
# Calibrate the magnitude of the aggregate shocks
Tran_g = 1.01 # Productivity z in the good aggregate state
Tran_b = 0.99 # and the bad state
# The HARK framework allows permanent shocks
Perm_g = Perm_b = 1.0 # KS assume there are no aggregate permanent shocks
# Aggregate productivity shock distribution by state.
# First element is probabilities of different outcomes, given the state you are in.
# Second element is agg permanent shocks (here we don't have any, so they are just 1.).
# Third element is agg transitory shocks, which are calibrated the same as in Krusell Smith.
KSAggShkDstn = [
[np.array([1.0]),np.array([Perm_g]),np.array([Tran_g])], # Aggregate good
[np.array([1.0]),np.array([Perm_b]),np.array([Tran_b])] # Aggregate bad
]
KSEconomy.AggShkDstn = KSAggShkDstn
###Output
_____no_output_____
###Markdown
Summing UpThe combined idiosyncratic and aggregate assumptions can be summarized mathematically as follows. $\forall (s,s') \in \{g,b\}\times\{g,b\}$, the following two conditions hold:$$\underbrace{\pi_{ss'01}}_{p(s \rightarrow s',u \rightarrow e)}+\underbrace{\pi_{ss'00}}_{p(s \rightarrow s', u \rightarrow u)} = \underbrace{\pi_{ss'11}}_{p(s\rightarrow s', e \rightarrow e) } + \underbrace{\pi_{ss'10}}_{p(s \rightarrow s', e \rightarrow u)} = \underbrace{\pi_{ss'}}_{p(s\rightarrow s')}$$$$u_s \frac{\pi_{ss'00}}{\pi_{ss'}}+ (1-u_s) \frac{\pi_{ss'10}}{\pi_{ss'}} = u_{s'}$$ Solving the ModelNow that we have fully defined all of the elements of the macroeconomy, we are in position to construct an object that represents the economy and to compute a rational expectations equilibrium.
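Before solving the model, the cell below gives a minimal numerical check of the second condition, using the simplification adopted in this notebook that unemployment is serially uncorrelated: an individual's employment outcome next period then depends only on next period's aggregate state, so $\pi_{ss'00}/\pi_{ss'} = \pi_{ss'10}/\pi_{ss'} = u_{s'}$ and the left-hand side reduces to $u_{s'}$ for every $(s,s')$ pair.
###Code
# Check u_s*(pi_ss'00/pi_ss') + (1-u_s)*(pi_ss'10/pi_ss') = u_s' under i.i.d. employment draws
u = {'g': 0.04, 'b': 0.10}       # unemployment rates by aggregate state (= 1-prb_eg and 1-prb_eb above)
for s in ('g', 'b'):
    for s_next in ('g', 'b'):
        p_u_to_u = u[s_next]     # pi_{ss'00}/pi_{ss'}: unemployed -> unemployed
        p_e_to_u = u[s_next]     # pi_{ss'10}/pi_{ss'}: employed -> unemployed
        lhs = u[s]*p_u_to_u + (1 - u[s])*p_e_to_u
        print(s, "->", s_next, ":", round(lhs, 6), "==", u[s_next])
###Output
_____no_output_____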
###Code
# Construct the economy, make an initial history, then solve
KSAgent.getEconomyData(KSEconomy) # Makes attributes of the economy, attributes of the agent
KSEconomy.makeAggShkHist() # Make a simulated history of the economy
# Set tolerance level.
KSEconomy.tolerance = 0.01
# Solve macro problem by finding a fixed point for beliefs
KSEconomy.solve() # Solve the economy using the market method.
# i.e. guess the saving function, and iterate until a fixed point
###Output
intercept=[-0.2391625043170693, -0.23612394642292522], slope=[1.0589379022340042, 1.0583972665830461], r-sq=[0.9999007630097778, 0.9998101850187024]
intercept=[-0.23284348000117538, -0.23005037314000176], slope=[1.0437242579937263, 1.0430192001137852], r-sq=[0.999597842568587, 0.9995228787583207]
intercept=[-0.14179377913302404, -0.1397845879435921], slope=[1.019522234277403, 1.0190138495183052], r-sq=[0.9999999957550034, 0.9999999943223266]
intercept=[-0.16165570894331788, -0.1596785727367067], slope=[1.024322321505227, 1.023796249802059], r-sq=[0.9999999330730815, 0.9999998121130941]
intercept=[-0.15367980777948007, -0.1523709066299711], slope=[1.0228330019333458, 1.0224883026048857], r-sq=[0.9999999953306511, 0.9999999958583536]
###Markdown
The last line above is the converged aggregate saving rule for good and bad times, respectively.
###Code
# Plot some key results
print('Aggregate savings as a function of aggregate market resources:')
fig = plt.figure()
bottom = 0.1
top = 2*KSEconomy.kSS
x = np.linspace(bottom,top,1000,endpoint=True)
print(KSEconomy.AFunc)
y0 = KSEconomy.AFunc[0](x)
y1 = KSEconomy.AFunc[1](x)
plt.plot(x,y0)
plt.plot(x,y1)
plt.xlim([bottom, top])
remark.show('aggregate_savings')
print('Consumption function at each aggregate market resources gridpoint (in general equilibrium):')
KSAgent.unpackcFunc()
m_grid = np.linspace(0,10,200)
KSAgent.unpackcFunc()
for M in KSAgent.Mgrid:
c_at_this_M = KSAgent.solution[0].cFunc[0](m_grid,M*np.ones_like(m_grid)) #Have two consumption functions, check this
plt.plot(m_grid,c_at_this_M)
remark.show('consumption_function')
print('Savings at each individual market resources gridpoint (in general equilibrium):')
fig = plt.figure()
KSAgent.unpackcFunc()
m_grid = np.linspace(0,10,200)
KSAgent.unpackcFunc()
for M in KSAgent.Mgrid:
s_at_this_M = m_grid-KSAgent.solution[0].cFunc[1](m_grid,M*np.ones_like(m_grid))
c_at_this_M = KSAgent.solution[0].cFunc[1](m_grid,M*np.ones_like(m_grid)) #Have two consumption functions, check this
plt.plot(m_grid,s_at_this_M)
remark.show('savings_function')
###Output
Aggregate savings as a function of aggregate market resources:
[<HARK.ConsumptionSaving.ConsAggShockModel.AggregateSavingRule object at 0x1c17b079e8>, <HARK.ConsumptionSaving.ConsAggShockModel.AggregateSavingRule object at 0x1c17b07940>]
Saving figure aggregate_savings in Figures
###Markdown
The Wealth Distribution in KS Benchmark Model
###Code
sim_wealth = KSEconomy.aLvlNow[0]
print("The mean of individual wealth is "+ str(sim_wealth.mean()) + ";\n the standard deviation is "
+ str(sim_wealth.std())+";\n the median is " + str(np.median(sim_wealth)) +".")
# Get some tools for plotting simulated vs actual wealth distributions
from HARK.utilities import getLorenzShares, getPercentiles
# The cstwMPC model conveniently has data on the wealth distribution
# from the U.S. Survey of Consumer Finances
from HARK.cstwMPC.SetupParamsCSTW import SCF_wealth, SCF_weights
# Construct the Lorenz curves and plot them
pctiles = np.linspace(0.001,0.999,15)
SCF_Lorenz_points = getLorenzShares(SCF_wealth,weights=SCF_weights,percentiles=pctiles)
sim_Lorenz_points = getLorenzShares(sim_wealth,percentiles=pctiles)
# Plot
plt.figure(figsize=(5,5))
plt.title('Wealth Distribution')
plt.plot(pctiles,SCF_Lorenz_points,'--k',label='SCF')
plt.plot(pctiles,sim_Lorenz_points,'-b',label='Benchmark KS')
plt.plot(pctiles,pctiles,'g-.',label='45 Degree')
plt.xlabel('Percentile of net worth')
plt.ylabel('Cumulative share of wealth')
plt.legend(loc=2)
plt.ylim([0,1])
remark.show('wealth_distribution_1')
# Calculate a measure of the difference between the simulated and empirical distributions
lorenz_distance = np.sqrt(np.sum((SCF_Lorenz_points - sim_Lorenz_points)**2))
print("The Euclidean distance between simulated wealth distribution and the estimates from the SCF data is "+str(lorenz_distance) )
###Output
The Euclidean distance between simulated wealth distribution and the estimates from the SCF data is 1.461381224774914
###Markdown
Heterogeneous Time Preference RatesAs the figures show, the distribution of wealth that the baseline KS model produces is very far from matching the empirical degree of inequality in the US data.This could matter for macroeconomic purposes. For example, the SCF data indicate that many agents are concentrated at low values of wealth where the MPC is very large. We might expect, therefore, that a fiscal policy "stimulus" that gives a fixed amount of money to every agent would have a large effect on the consumption of the low-wealth households who have a high Marginal Propensity to Consume.KS attempt to address this problem by assuming that an individual agent's time preference rate can change over time.The rationale is that this represents a generational transition: The "agent" is really a "dynasty" and the time preference rate of the "child" dynast may differ from that of the "parent."Specifically, KS assume that $\beta$ can take on three values, 0.9858, 0.9894, and 0.9930, and that the transition probabilities are such that - The invariant distribution for $\beta$’s has 80 percent of the population at the middle $\beta$ and 10 percent at each of the other $\beta$’s.- Immediate transitions between the extreme values of $\beta$ occur with probability zero. - The average duration of the highest and lowest $\beta$’s is 50 years. The HARK toolkit is not natively set up to accommodate stochastic time preference factors (though an extension to accommodate this would be easy). Here, instead, we assume that different agents have different values of $\beta$ that are uniformly distributed over some range. We approximate the uniform distribution by three points. The agents are heterogeneous _ex ante_ (and permanently).
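For intuition, the stochastic-$\beta$ process KS describe can be written as a three-state Markov matrix; the sketch below constructs one matrix consistent with the restrictions listed above, assuming a quarterly model period (so a 50-year average spell at an extreme $\beta$ means an expected duration of 200 periods), and verifies that its invariant distribution is $(0.1, 0.8, 0.1)$. The ex-ante-heterogeneity approach actually used here follows in the next cell.
###Code
# Sketch: 3-state Markov matrix for beta = (low, middle, high) consistent with the KS restrictions
# (the quarterly period, hence 50 years ~ 200 periods, is an assumption)
import numpy as np
p_stay_ext = 1 - 1/200                    # extreme betas persist with this probability
p_mid_out = 0.1*(1 - p_stay_ext)/0.8      # middle -> each extreme, implied by stationarity of (0.1, 0.8, 0.1)
P = np.array([[p_stay_ext, 1 - p_stay_ext, 0.0],          # low beta: no direct jump to high
              [p_mid_out, 1 - 2*p_mid_out, p_mid_out],    # middle beta
              [0.0, 1 - p_stay_ext, p_stay_ext]])         # high beta: no direct jump to low
dstn = np.array([1/3, 1/3, 1/3])
for _ in range(100000):                   # iterate the chain to its invariant distribution
    dstn = dstn @ P
print(dstn)                               # approximately [0.1, 0.8, 0.1]
###Output
_____no_output_____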
###Code
# Construct the distribution of types
from HARK.utilities import approxUniform
# Specify the distribution of the discount factor
num_types = 3 # number of types we want;
DiscFac_mean = 0.9858 # center of beta distribution
DiscFac_spread = 0.0085 # spread of beta distribution
DiscFac_dstn = approxUniform(num_types, DiscFac_mean-DiscFac_spread, DiscFac_mean+DiscFac_spread)[1]
BaselineType = deepcopy(KSAgent)
MyTypes = [] # initialize an empty list to hold our consumer types
for nn in range(len(DiscFac_dstn)):
# Now create the types, and append them to the list MyTypes
NewType = deepcopy(BaselineType)
NewType.DiscFac = DiscFac_dstn[nn]
NewType.seed = nn # give each consumer type a different RNG seed
MyTypes.append(NewType)
# Put all agents into the economy
KSEconomy_sim = CobbDouglasMarkovEconomy(agents = MyTypes, **KSEconomyDictionary)
KSEconomy_sim.AggShkDstn = KSAggShkDstn # Agg shocks are the same as defined earlier
for ThisType in MyTypes:
ThisType.getEconomyData(KSEconomy_sim) # Makes attributes of the economy, attributes of the agent
KSEconomy_sim.makeAggShkHist() # Make a simulated prehistory of the economy
KSEconomy_sim.solve() # Solve macro problem by getting a fixed point dynamic rule
# Get the level of end-of-period assets a for all types of consumers
aLvl_all = np.concatenate([KSEconomy_sim.aLvlNow[i] for i in range(len(MyTypes))])
print('Aggregate capital to income ratio is ' + str(np.mean(aLvl_all)))
# Plot the distribution of wealth across all agent types
sim_3beta_wealth = aLvl_all
pctiles = np.linspace(0.001,0.999,15)
sim_Lorenz_points = getLorenzShares(sim_wealth,percentiles=pctiles)
SCF_Lorenz_points = getLorenzShares(SCF_wealth,weights=SCF_weights,percentiles=pctiles)
sim_3beta_Lorenz_points = getLorenzShares(sim_3beta_wealth,percentiles=pctiles)
## Plot
plt.figure(figsize=(5,5))
plt.title('Wealth Distribution')
plt.plot(pctiles,SCF_Lorenz_points,'--k',label='SCF')
plt.plot(pctiles,sim_Lorenz_points,'-b',label='Benchmark KS')
plt.plot(pctiles,sim_3beta_Lorenz_points,'-*r',label='3 Types')
plt.plot(pctiles,pctiles,'g-.',label='45 Degree')
plt.xlabel('Percentile of net worth')
plt.ylabel('Cumulative share of wealth')
plt.legend(loc=2)
plt.ylim([0,1])
remark.show('wealth_distribution_2')
# The mean levels of wealth for the three types of consumer are
[np.mean(KSEconomy_sim.aLvlNow[0]),np.mean(KSEconomy_sim.aLvlNow[1]),np.mean(KSEconomy_sim.aLvlNow[2])]
fig = plt.figure()
# Plot the distribution of wealth
for i in range(len(MyTypes)):
if i<=2:
plt.hist(np.log(KSEconomy_sim.aLvlNow[i])\
,label=r'$\beta$='+str(round(DiscFac_dstn[i],4))\
,bins=np.arange(-2.,np.log(max(aLvl_all)),0.05))
plt.yticks([])
plt.legend(loc=2)
plt.title('Log Wealth Distribution of 3 Types')
remark.show('log_wealth_3_types')
fig = plt.figure()
# Distribution of wealth in original model with one type
plt.hist(np.log(sim_wealth),bins=np.arange(-2.,np.log(max(aLvl_all)),0.05))
plt.yticks([])
plt.title('Log Wealth Distribution of Original Model with One Type')
remark.show('log_wealth_1')
###Output
Saving figure log_wealth_1 in Figures
###Markdown
Target Wealth is Nonlinear in Time Preference RateNote the nonlinear relationship between wealth and time preference in the economy with three types. Although the three groups are uniformly spaced in $\beta$ values, there is a lot of overlap in the distribution of wealth of the two impatient types, who are both separated from the most patient type by a large gap. A model of buffer stock saving that has been simplified enough to be [tractable](http://econ.jhu.edu/people/ccarroll/public/lecturenotes/Consumption/TractableBufferStock) yields some insight. If $\sigma$ is a measure of income risk, $r$ is the interest rate, and $\theta$ is the time preference rate, then for an 'impatient' consumer (for whom $\theta > r$), in the logarithmic utility case an approximate formula for the target level of wealth is:\begin{eqnarray} a & \approx & \left(\frac{1}{ \theta(1+(\theta-r)/\sigma)-r}\right)\end{eqnarray}Conceptually, this reflects the fact that the only reason any of these agents holds positive wealth is the precautionary motive. (If there is no uncertainty, $\sigma=0$ and thus $a=0$). For positive uncertainty $\sigma>0$, as the degree of impatience (given by $\theta-r$) approaches zero, the target level of wealth approaches infinity. A plot of $a$ as a function of $\theta$ for a particular parameterization is shown below.
###Code
# Plot target wealth as a function of time preference rate for calibrated tractable model
fig = plt.figure()
ax = plt.axes()
sigma = 0.01
r = 0.02
theta = np.linspace(0.023,0.10,100)
plt.plot(theta,1/(theta*(1+(theta-r)/sigma)-r))
plt.xlabel(r'$\theta$')
plt.ylabel('Target wealth')
remark.show('target_wealth')
###Output
Saving figure target_wealth in Figures
###Markdown
[Krusell Smith (1998)](https://www.journals.uchicago.edu/doi/pdf/10.1086/250034)- Original version by Tim Munday - Comments and extensions by Tao Wang- Further edits by Chris Carroll [](https://mybinder.org/v2/gh/econ-ark/DemARK/master?filepath=notebooks%2FKrusellSmith.ipynb) OverviewThe benchmark Krusell-Smith model has the following broad features: * The aggregate state switches between "good" and "bad" with known probabilities * All consumers experience the same aggregate state for the economy (good or bad) * _ex ante_ there is only one type of consumer, which is infinitely lived * _ex post_ heterogeneity arises from uninsurable idiosyncratic income shocks * Specifically, individuals are at risk of spells of unemployment * In a spell of unemployment, their income is zero Thus, each agent faces two types of uncertainty: About their employment state, and about the income they will earn when employed. And the values of income and unemployment risk depend on the aggregate state. Details IdiosyncraticEach agent _attempts_ to supply an amount of productive labor $\ell$ in each period. (Here and below we mostly follow the notation of Krusell and Smith (1998)).However, whether they _succeed_ in supplying that labor (and earning a corresponding wage) is governed by the realization of the stochastic variable $\epsilon$. If the agent is unlucky, $\epsilon$ is zero and the agent is unemployed. The amount of labor they succeed in supplying is thus $\epsilon\ell$. AggregateAggregate output ($\bar{y}$) is produced from capital and labor using a Cobb-Douglas production function. (Bars over variables indicate the aggregate value of a variable that has different values across different idiosyncratic consumers).$z$ denotes the aggregate shock to productivity. $z$ can take two values, either $z_g$ -- the "good" state, or $z_b < z_g$ -- the "bad" state. Consumers gain income from providing labor, and from the rental return on any capital they own. Labor and capital markets are perfectly efficient, so both factors are paid their marginal products.The agent can choose to save by buying capital $k$, which is bounded below at the borrowing constraint of 0.Putting all of this together, aggregate output is given by: \begin{eqnarray}\bar{y} & = & z\bar{k}^\alpha \bar{\ell}^{1-\alpha}\end{eqnarray} The aggregate shocks $z$ follow first-order Markov chains with the transition probability of moving from state $s$ to state $s'$ denoted by $\pi_{ss'}$. The aggregate shocks and individual shocks are correlated: The probability of being unemployed is higher in bad times, when aggregate productivity is low, than in good times, when aggregate productivity is high. Idiosyncratic and Aggregate TogetherThe individual shocks satisfy the law of large numbers, and the model is constructed so that the number of agents who are unemployed in the good state always equals $u_g$, and is always $u_b$ in the bad state. Given the aggregate state, individual shocks are independent of each other.For the individual, the probability of moving from a good state and employment to a bad state and unemployment is denoted $\pi_{gb10}$, with similar notation for the other transition probabilities.(Krusell and Smith allow for serially correlated unemployment at the idiosyncratic level. Here we will simplify this and have unemployment be serially uncorrelated.) Finally, $\Gamma$ denotes the current distribution of consumers over capital and employment status, and $H$ denotes the law of motion of this distribution. 
The Idiosyncratic Individual's Problem Given the Aggregate StateThe individual's problem is:\begin{eqnarray*}V(k, \epsilon; \Gamma, z) &=& \max_{k'}\{U(c) + \beta \mathbb{E}[V(k' ,\epsilon'; \Gamma', z')|z, \epsilon]\} \\c + k' &=& r(\bar{k}, \bar{\ell}, z)k + w(\bar{k}, \bar{\ell}, z)\ell\epsilon + (1-\delta)k \\\Gamma' &=& H(\Gamma, z, z') \\k' &\geq& 0 \\\end{eqnarray*} Krusell and Smith define an equilibrium as a law of motion $H$, a value function $V$, a rule for updating capital $f$ and pricing functions $r$ and $w$, such that $V$ and $f$ solve the consumer's problem, $r$ and $w$ denote the marginal products of capital and labour, and $H$ is consistent with $f$ (i.e. if we add up all of the individual agents' capital choices we get the correct distribution of capital). Discussion of the KS AlgorithmIn principle, $\Gamma$ is a high-dimensional object because it includes the whole distribution of individuals' wealth and employment states. Because the optimal amount to save is a nonlinear function of the level of idiosyncratic $k$, next period's aggregate capital stock $\bar{k}'$ depends on the distribution of the holdings of idiosyncratic $k$ across the population of consumers. Therefore the law of motion $H$ is not a trivial function of $\Gamma$. KS simplified this problem by noting the following. 1. The agent cares about the future aggregate state only insofar as that state affects their own personal value of $c$1. Future values of $c$ depend on the aggregate state only through the budget constraint1. The channels by which the budget constraint depends on the aggregate state are: * The probability distributions of $\epsilon$ and $z$ are affected by the aggregate state * Interest rates and wages depend on the future values of $\bar{k}$ and $\bar{\ell}$1. The probability distributions for the future values of $\{\epsilon, z\}$ are known * They are fully determined by the Markov transition matrices1. But the values of $r$ and $w$ are both determined by the future value of $\bar{k}$ (in combination with the exogenous value of $\bar{\ell}$) * So the only _endogenous_ object that the agent needs to form expectations about, in order to have a complete rational expectation about everything affecting them, is $\bar{k}'$The key result in Krusell and Smith is the discovery that a very simple linear rule does an extraordinarily good job (though not quite perfect) in forecasting $\bar{k}'$. They then argue that, since rationality is surely bounded to some degree, the solution that an agent obtains using a good forecasting rule for $\bar{k}'$ is "good enough" to compute an "approximate" solution to the consumer's optimization problem.They define a generic algorithm to find a forecasting rule for $\bar{k}$ as follows: 1. Choose the number of moments $n$ of the distribution of $k$ to be included in the set of variables to forecast $\bar{k}'$. In the simplest case, $n=1$, the only forecasting variable for next period's $\bar{k}'$ is the mean (the first moment) of current capital, $\bar{k}$.2. Each individual adopts the same belief about the law of motion of these moments, $H_I$, and finds the optimal decision policy, $f_I$, contingent on that guess.3. Use the optimal policy to simulate a history of aggregate capital with a large number of agents. 4. Characterize the realized law of motion using the same number of moments $n$ 5. Compare it with $H_I$, which is taken as given by individuals. 6. Iterate until the two converge. 
In the end, the solution to the original problem is well approximated by the following simplified problem:\begin{eqnarray*}V(k, \epsilon; \bar k, z) &=& \max_{c, k'}\{U(c) + \beta \mathbb{E}[V(k' ,\epsilon'; \bar k', z')|z, \epsilon]\} \\c + k' &=& r(\bar{k}, \bar{\ell}, z)k + w(\bar{k}, \bar{\ell}, z)\ell\epsilon + (1-\delta)k \\\text{When }~ z=z_g, \quad \mathbb{E}[\log\bar{k}'] & = & a_0 + a_1 \log\bar k \\\text{When }~ z=z_b, \quad \mathbb{E}[\log\bar{k}'] & = & b_0 + b_1 \log\bar k \\k' &\geq& 0 \\\end{eqnarray*} Implementation Using the HARK Toolkit The Consumer
###Code
# Import generic setup tools
# This is a jupytext paired notebook that autogenerates KrusellSmith.py
# which can be executed from a terminal command line via "ipython KrusellSmith.py"
# But a terminal does not permit inline figures, so we need to test jupyter vs terminal
# Google "how can I check if code is executed in the ipython notebook"
def in_ipynb():
try:
if str(type(get_ipython())) == "<class 'ipykernel.zmqshell.ZMQInteractiveShell'>":
return True
else:
return False
except NameError:
return False
# Determine whether to make the figures inline (for spyder or jupyter)
# vs whatever is the automatic setting that will apply if run from the terminal
if in_ipynb():
# %matplotlib inline generates a syntax error when run from the shell
# so do this instead
get_ipython().run_line_magic('matplotlib', 'inline')
else:
get_ipython().run_line_magic('matplotlib', 'auto')
# Import the plot-figure library matplotlib
import matplotlib.pyplot as plt
import numpy as np
from copy import deepcopy
from HARK.utilities import plotFuncs, plotFuncsDer, make_figs
# Markov consumer type that allows aggregate shocks
from HARK.ConsumptionSaving.ConsAggShockModel import AggShockMarkovConsumerType
# Define a dictionary to make an 'instance' of our Krusell-Smith consumer.
# The folded dictionary below contains many parameters to the
# AggShockMarkovConsumerType agent that are not needed for the KS model
KSAgentDictionary = {
"LivPrb" : [1.0], # Survival probability
"AgentCount" : 10000, # Number of agents of this type (only matters for simulation)
"aNrmInitMean" : 0.0, # Mean of log initial assets (only matters for simulation)
"aNrmInitStd" : 0.0, # Standard deviation of log initial assets (only for simulation)
"pLvlInitMean" : 0.0, # Mean of log initial permanent income (only matters for simulation)
"pLvlInitStd" : 0.0, # Standard deviation of log initial permanent income (only matters for simulation)
"PermGroFacAgg" : 1.0, # Aggregate permanent income growth factor (only matters for simulation)
"T_age" : None, # Age after which simulated agents are automatically killed
"T_cycle" : 1, # Number of periods in the cycle for this agent type
# Parameters for constructing the "assets above minimum" grid
"aXtraMin" : 0.001, # Minimum end-of-period "assets above minimum" value
"aXtraMax" : 20, # Maximum end-of-period "assets above minimum" value
"aXtraExtra" : [None], # Some other value of "assets above minimum" to add to the grid
"aXtraNestFac" : 3, # Exponential nesting factor when constructing "assets above minimum" grid
"aXtraCount" : 24, # Number of points in the grid of "assets above minimum"
# Parameters describing the income process
"PermShkCount" : 1, # Number of points in discrete approximation to permanent income shocks - no shocks of this kind!
"TranShkCount" : 1, # Number of points in discrete approximation to transitory income shocks - no shocks of this kind!
"PermShkStd" : [0.], # Standard deviation of log permanent income shocks - no shocks of this kind!
"TranShkStd" : [0.], # Standard deviation of log transitory income shocks - no shocks of this kind!
"UnempPrb" : 0.0, # Probability of unemployment while working - no shocks of this kind!
"UnempPrbRet" : 0.00, # Probability of "unemployment" while retired - no shocks of this kind!
"IncUnemp" : 0.0, # Unemployment benefits replacement rate
"IncUnempRet" : 0.0, # "Unemployment" benefits when retired
"tax_rate" : 0.0, # Flat income tax rate
"T_retire" : 0, # Period of retirement (0 --> no retirement)
"BoroCnstArt" : 0.0, # Artificial borrowing constraint; imposed minimum level of end-of period assets
"PermGroFac" : [1.0], # Permanent income growth factor
# New Parameters that we need now
'MgridBase': np.array([0.1,0.3,0.6,
0.8,0.9,0.98,
1.0,1.02,1.1,
1.2,1.6,2.0,
3.0]), # Grid of capital-to-labor-ratios (factors)
'PermShkAggStd' : [0.0,0.0], # Standard deviation of log aggregate permanent shocks by state. No continuous shocks in a state.
'TranShkAggStd' : [0.0,0.0], # Standard deviation of log aggregate transitory shocks by state. No continuous shocks in a state.
'PermGroFacAgg' : 1.0
}
# Here we state just the "interesting" parts of the consumer's specification
KSAgentDictionary['CRRA'] = 1.0 # Relative risk aversion
KSAgentDictionary['DiscFac'] = 0.99 # Intertemporal discount factor
KSAgentDictionary['cycles'] = 0 # cycles=0 means consumer is infinitely lived
# KS assume that 'good' and 'bad' times are of equal expected duration
# The probability of a change in the aggregate state is p_change=0.125
p_change=0.125
p_remain=1-p_change
# Now we define macro transition probabilities for AggShockMarkovConsumerType
# [i,j] is probability of being in state j next period conditional on being in state i this period.
# In both states, there is 0.875 chance of staying, 0.125 chance of switching
AggMrkvArray = \
np.array([[p_remain,p_change], # Probabilities of states 0 and 1 next period if in state 0
[p_change,p_remain]]) # Probabilities of states 0 and 1 next period if in state 1
KSAgentDictionary['MrkvArray'] = AggMrkvArray
# Create the Krusell-Smith agent as an instance of AggShockMarkovConsumerType
KSAgent = AggShockMarkovConsumerType(**KSAgentDictionary)
###Output
_____no_output_____
###Markdown
Now we need to specify the income distribution. The HARK toolkit allows for two components of labor income: Persistent (or permanent), and transitory. Using the KS notation above, a HARK consumer's income is\begin{eqnarray}y & = & w p \ell \epsilon \end{eqnarray}where $p$ is the persistent component of income. Krusell and Smith did not incorporate a persistent component of income, however, so we will simply calibrate $p=1$ for all states.For each of the two aggregate states we need to specify: * The _proportion_ of consumers in the $e$ and the $u$ states * The level of persistent/permanent productivity $p$ (always 1) * The ratio of actual to permanent productivity in each state $\{e,u\}$ * In the KS notation, this is $\epsilon\ell$
###Code
# Construct the income distribution for the Krusell-Smith agent
prb_eg = 0.96 # Probability of employment in the good state
prb_ug = 1-prb_eg # Probability of unemployment in the good state
prb_eb = 0.90 # Probability of employment in the bad state
prb_ub = 1-prb_eb # Probability of unemployment in the bad state
p_ind = 1 # Persistent component of income is always 1
ell_ug = ell_ub = 0 # Labor supply is zero for unemployed consumers in either agg state
ell_eg = 1.0/prb_eg # Labor supply for employed consumer in good state
ell_eb = 1.0/prb_eb # Labor supply for employed consumer in bad state; normalization: prb_eg*ell_eg + prb_ug*ell_ug = prb_eb*ell_eb + prb_ub*ell_ub = 1
# IncomeDstn is a list of lists, one for each aggregate Markov state
# Each contains three arrays of floats, representing a discrete approximation to the income process.
# Order:
# state probabilities
# idiosyncratic persistent income level by state (KS have no persistent shocks p_ind is always 1.0)
# idiosyncratic transitory income level by state
KSAgent.IncomeDstn[0] = \
[[np.array([prb_eg,prb_ug]),np.array([p_ind,p_ind]),np.array([ell_eg,ell_ug])], # Agg state good
[np.array([prb_eb,prb_ub]),np.array([p_ind,p_ind]),np.array([ell_eb,ell_ub])] # Agg state bad
]
###Output
_____no_output_____
###Markdown
Up to this point, individual agents do not have enough information to solve their decision problem yet. What is missing are beliefs about the endogenous macro variables $r$ and $w$, both of which are functions of $\bar{k}$. The Aggregate Economy
###Code
from HARK.ConsumptionSaving.ConsAggShockModel import CobbDouglasMarkovEconomy
KSEconomyDictionary = {
'PermShkAggCount': 1,
'TranShkAggCount': 1,
'PermShkAggStd': [0.0,0.0],
'TranShkAggStd': [0.0,0.0],
'DeprFac': 0.025, # Depreciation factor
'DiscFac': 0.99,
'CRRA': 1.0,
'PermGroFacAgg': [1.0,1.0],
'AggregateL':1.0, # Fix aggregate labor supply at 1.0 - makes interpretation of z easier
'act_T':1200, # Number of periods for economy to run in simulation
'intercept_prev': [0.0,0.0], # Make some initial guesses at linear savings rule intercepts for each state
'slope_prev': [1.0,1.0], # Make some initial guesses at linear savings rule slopes for each state
'MrkvNow_init': 0 # Pick a state to start in (we pick the first state)
}
# The 'interesting' parts of the CobbDouglasMarkovEconomy
KSEconomyDictionary['CapShare'] = 0.36
KSEconomyDictionary['MrkvArray'] = AggMrkvArray
KSEconomy = CobbDouglasMarkovEconomy(agents = [KSAgent], **KSEconomyDictionary) # Combine production and consumption sides into an "Economy"
###Output
_____no_output_____
###Markdown
We have now populated the $\texttt{KSEconomy}$ with the $\texttt{KSAgent}$ defined before. This basically tells the agents to take the macro state from the $\texttt{KSEconomy}$. Now we construct the $\texttt{AggShkDstn}$ that specifies the dynamics of the $\texttt{KSEconomy}$. The structure of the inputs for $\texttt{AggShkDstn}$ follows the same logic as for $\texttt{IncomeDstn}$, but because the KS aggregate states are very simple, each aggregate state has only one possible outcome, which occurs with probability 1.
###Code
# Calibrate the magnitude of the aggregate shocks
Tran_g = 1.01 # Productivity z in the good aggregate state
Tran_b = 0.99 # and the bad state
# The HARK framework allows permanent shocks
Perm_g = Perm_b = 1.0 # KS assume there are no aggregate permanent shocks
# Aggregate productivity shock distribution by state.
# First element is probabilities of different outcomes, given the state you are in.
# Second element is agg permanent shocks (here we don't have any, so they are just 1.).
# Third element is agg transitory shocks, which are calibrated the same as in Krusell Smith.
KSAggShkDstn = [
[np.array([1.0]),np.array([Perm_g]),np.array([Tran_g])], # Aggregate good
[np.array([1.0]),np.array([Perm_b]),np.array([Tran_b])] # Aggregate bad
]
KSEconomy.AggShkDstn = KSAggShkDstn
###Output
_____no_output_____
###Markdown
Summing UpThe combined idiosyncratic and aggregate assumptions can be summarized mathematically as follows. $\forall (s,s') \in \{g,b\}\times\{g,b\}$, the following two conditions hold:$$\underbrace{\pi_{ss'01}}_{p(s \rightarrow s',u \rightarrow e)}+\underbrace{\pi_{ss'00}}_{p(s \rightarrow s', u \rightarrow u)} = \underbrace{\pi_{ss'11}}_{p(s\rightarrow s', e \rightarrow e) } + \underbrace{\pi_{ss'10}}_{p(s \rightarrow s', e \rightarrow u)} = \underbrace{\pi_{ss'}}_{p(s\rightarrow s')}$$$$u_s \frac{\pi_{ss'00}}{\pi_{ss'}}+ (1-u_s) \frac{\pi_{ss'10}}{\pi_{ss'}} = u_{s'}$$ Solving the ModelNow that we have fully defined all of the elements of the macroeconomy, we are in position to construct an object that represents the economy and to compute a rational expectations equilibrium.
###Code
# Construct the economy, make an initial history, then solve
KSAgent.getEconomyData(KSEconomy) # Makes attributes of the economy, attributes of the agent
KSEconomy.makeAggShkHist() # Make a simulated history of the economy
# Set tolerance level.
KSEconomy.tolerance = 0.01
# Solve macro problem by finding a fixed point for beliefs
KSEconomy.solve() # Solve the economy using the market method.
# i.e. guess the saving function, and iterate until a fixed point
###Output
intercept=[-0.2391625043170693, -0.23612394642292522], slope=[1.0589379022340042, 1.0583972665830461], r-sq=[0.9999007630097778, 0.9998101850187024]
intercept=[-0.23284348000117538, -0.23005037314000176], slope=[1.0437242579937263, 1.0430192001137852], r-sq=[0.999597842568587, 0.9995228787583207]
intercept=[-0.14179377913302404, -0.1397845879435921], slope=[1.019522234277403, 1.0190138495183052], r-sq=[0.9999999957550034, 0.9999999943223266]
intercept=[-0.16165570894331788, -0.1596785727367067], slope=[1.024322321505227, 1.023796249802059], r-sq=[0.9999999330730815, 0.9999998121130941]
intercept=[-0.15367980777948007, -0.1523709066299711], slope=[1.0228330019333458, 1.0224883026048857], r-sq=[0.9999999953306511, 0.9999999958583536]
###Markdown
The last line above is the converged aggregate saving rule for good and bad times, respectively.
###Code
# Plot some key results
print('Aggregate savings as a function of aggregate market resources:')
bottom = 0.1
top = 2 * KSEconomy.kSS
x = np.linspace(bottom, top, 1000, endpoint=True)
y0 = KSEconomy.AFunc[0](x)
y1 = KSEconomy.AFunc[1](x)
plt.plot(x, y0)
plt.plot(x, y1)
plt.xlim([bottom, top])
make_figs('aggregate_savings', True, False)
plt.show()
plt.clf()
print('Consumption function at each aggregate market resources gridpoint (in general equilibrium):')
KSAgent.unpackcFunc()
m_grid = np.linspace(0,10,200)
KSAgent.unpackcFunc()
for M in KSAgent.Mgrid:
c_at_this_M = KSAgent.solution[0].cFunc[0](m_grid,M*np.ones_like(m_grid)) #Have two consumption functions, check this
plt.plot(m_grid,c_at_this_M)
make_figs('consumption_function', True, False)
plt.show()
plt.clf()
print('Savings at each individual market resources gridpoint (in general equilibrium):')
KSAgent.unpackcFunc()
m_grid = np.linspace(0,10,200)
KSAgent.unpackcFunc()
for M in KSAgent.Mgrid:
s_at_this_M = m_grid-KSAgent.solution[0].cFunc[1](m_grid,M*np.ones_like(m_grid))
c_at_this_M = KSAgent.solution[0].cFunc[1](m_grid,M*np.ones_like(m_grid)) #Have two consumption functions, check this
plt.plot(m_grid,s_at_this_M)
make_figs('savings_function', True, False)
plt.show()
###Output
Aggregate savings as a function of aggregate market resources:
Saving figure aggregate_savings in Figures
###Markdown
The Wealth Distribution in KS Benchmark Model
###Code
sim_wealth = KSEconomy.aLvlNow[0]
print("The mean of individual wealth is "+ str(sim_wealth.mean()) + ";\n the standard deviation is "
+ str(sim_wealth.std())+";\n the median is " + str(np.median(sim_wealth)) +".")
# Get some tools for plotting simulated vs actual wealth distributions
from HARK.utilities import getLorenzShares, getPercentiles
# The cstwMPC model conveniently has data on the wealth distribution
# from the U.S. Survey of Consumer Finances
from HARK.cstwMPC.SetupParamsCSTW import SCF_wealth, SCF_weights
# Construct the Lorenz curves and plot them
pctiles = np.linspace(0.001,0.999,15)
SCF_Lorenz_points = getLorenzShares(SCF_wealth,weights=SCF_weights,percentiles=pctiles)
sim_Lorenz_points = getLorenzShares(sim_wealth,percentiles=pctiles)
# Plot
plt.figure(figsize=(5,5))
plt.title('Wealth Distribution')
plt.plot(pctiles,SCF_Lorenz_points,'--k',label='SCF')
plt.plot(pctiles,sim_Lorenz_points,'-b',label='Benchmark KS')
plt.plot(pctiles,pctiles,'g-.',label='45 Degree')
plt.xlabel('Percentile of net worth')
plt.ylabel('Cumulative share of wealth')
plt.legend(loc=2)
plt.ylim([0,1])
make_figs('wealth_distribution_1', True, False)
plt.show()
# Calculate a measure of the difference between the simulated and empirical distributions
lorenz_distance = np.sqrt(np.sum((SCF_Lorenz_points - sim_Lorenz_points)**2))
print("The Euclidean distance between simulated wealth distribution and the estimates from the SCF data is "+str(lorenz_distance) )
###Output
The Euclidean distance between simulated wealth distribution and the estimates from the SCF data is 1.461381224774914
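###Markdown
For intuition about the Lorenz points plotted above: the Lorenz share at percentile p is simply the fraction of total wealth held by the poorest fraction p of the population. The cell below is an added pure-numpy sketch on made-up data (it does not use the HARK helper, and the toy numbers are hypothetical).
###Code
import numpy as np
def lorenz_share(wealth, p):
    # Share of total wealth held by the poorest fraction p of observations (equal weights assumed)
    w = np.sort(np.asarray(wealth, dtype=float))
    n_bottom = int(np.floor(p * len(w))) # number of observations in the bottom p of the population
    return w[:n_bottom].sum() / w.sum()
toy_wealth = np.array([1., 2., 3., 4., 90.]) # hypothetical, highly unequal distribution
print(lorenz_share(toy_wealth, 0.8)) # the bottom 80% hold only 10% of total wealth here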
###Markdown
Heterogeneous Time Preference RatesAs the figures show, the distribution of wealth that the baseline KS model produces is very far from matching the empirical degree of inequality in the US data.This could matter for macroeconomic purposes. For example, the SCF data indicate that many agents are concentrated at low values of wealth where the MPC is very large. We might expect, therefore, that a fiscal policy "stimulus" that gives a fixed amount of money to every agent would have a large effect on the consumption of the low-wealth households who have a high Marginal Propensity to Consume.KS attempt to address this problem by assuming that an individual agent's time preference rate can change over time.The rationale is that this represents a generational transition: The "agent" is really a "dynasty" and the time preference rate of the "child" dynast may differ from that of the "parent."Specifically, KS assume that $\beta$ can take on three values, 0.9858, 0.9894, and 0.9930, and that the transition probabilities are such that - The invariant distribution for $\beta$’s has 80 percent of the population at the middle $\beta$ and 10 percent at each of the other $\beta$’s.- Immediate transitions between the extreme values of $\beta$ occur with probability zero. - The average duration of the highest and lowest $\beta$’s is 50 years. The HARK toolkit is not natively set up to accommodate stochastic time preference factors (though an extension to accommodate this would be easy). Here, instead, we assume that different agents have different values of $\beta$ that are uniformly distributed over some range. We approximate the uniform distribution by three points. The agents are heterogeneous _ex ante_ (and permanently).
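###Markdown
As an aside, the three KS assumptions listed above pin down a simple 3-state Markov matrix for $\beta$. The cell below is an added sketch of one such matrix: it treats the 50-year duration as 50 model periods purely for illustration (the period length is an assumption here), rules out direct jumps between the extreme $\beta$'s, and checks that the stationary distribution is (0.1, 0.8, 0.1). The rest of this notebook instead uses the simpler _ex ante_ heterogeneity described in the last paragraph.
###Code
import numpy as np
# States: low beta, middle beta, high beta
stay_extreme = 1 - 1/50 # expected duration of an extreme beta is 50 periods (illustrative period length)
p_mid_to_ext = 0.1*(1 - stay_extreme)/0.8 # chosen so that (0.1, 0.8, 0.1) is the stationary distribution
beta_mrkv = np.array([
    [stay_extreme, 1 - stay_extreme, 0.0],            # low  -> (low, mid, high): no direct jump to high
    [p_mid_to_ext, 1 - 2*p_mid_to_ext, p_mid_to_ext], # mid  -> (low, mid, high)
    [0.0, 1 - stay_extreme, stay_extreme]])           # high -> (low, mid, high): no direct jump to low
# Stationary distribution: eigenvector of the transpose associated with the unit eigenvalue
eigvals, eigvecs = np.linalg.eig(beta_mrkv.T)
stationary = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
stationary = stationary/stationary.sum()
print(beta_mrkv)
print(stationary) # approximately [0.1, 0.8, 0.1]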
###Code
# Construct the distribution of types
from HARK.utilities import approxUniform
# Specify the distribution of the discount factor
num_types = 3 # number of types we want;
DiscFac_mean = 0.9858 # center of beta distribution
DiscFac_spread = 0.0085 # spread of beta distribution
DiscFac_dstn = approxUniform(num_types, DiscFac_mean-DiscFac_spread, DiscFac_mean+DiscFac_spread)[1]
BaselineType = deepcopy(KSAgent)
MyTypes = [] # initialize an empty list to hold our consumer types
for nn in range(len(DiscFac_dstn)):
# Now create the types, and append them to the list MyTypes
NewType = deepcopy(BaselineType)
NewType.DiscFac = DiscFac_dstn[nn]
NewType.seed = nn # give each consumer type a different RNG seed
MyTypes.append(NewType)
# Put all agents into the economy
KSEconomy_sim = CobbDouglasMarkovEconomy(agents = MyTypes, **KSEconomyDictionary)
KSEconomy_sim.AggShkDstn = KSAggShkDstn # Agg shocks are the same as defined earlier
for ThisType in MyTypes:
ThisType.getEconomyData(KSEconomy_sim) # Makes attributes of the economy, attributes of the agent
KSEconomy_sim.makeAggShkHist() # Make a simulated prehistory of the economy
KSEconomy_sim.solve() # Solve macro problem by getting a fixed point dynamic rule
# Get the level of end-of-period assets a for all types of consumers
aLvl_all = np.concatenate([KSEconomy_sim.aLvlNow[i] for i in range(len(MyTypes))])
print('Aggregate capital to income ratio is ' + str(np.mean(aLvl_all)))
# Plot the distribution of wealth across all agent types
sim_3beta_wealth = aLvl_all
pctiles = np.linspace(0.001,0.999,15)
sim_Lorenz_points = getLorenzShares(sim_wealth,percentiles=pctiles)
SCF_Lorenz_points = getLorenzShares(SCF_wealth,weights=SCF_weights,percentiles=pctiles)
sim_3beta_Lorenz_points = getLorenzShares(sim_3beta_wealth,percentiles=pctiles)
## Plot
plt.figure(figsize=(5,5))
plt.title('Wealth Distribution')
plt.plot(pctiles,SCF_Lorenz_points,'--k',label='SCF')
plt.plot(pctiles,sim_Lorenz_points,'-b',label='Benchmark KS')
plt.plot(pctiles,sim_3beta_Lorenz_points,'-*r',label='3 Types')
plt.plot(pctiles,pctiles,'g-.',label='45 Degree')
plt.xlabel('Percentile of net worth')
plt.ylabel('Cumulative share of wealth')
plt.legend(loc=2)
plt.ylim([0,1])
make_figs('wealth_distribution_2', True, False)
plt.show()
# The mean levels of wealth for the three types of consumer are
[np.mean(KSEconomy_sim.aLvlNow[0]),np.mean(KSEconomy_sim.aLvlNow[1]),np.mean(KSEconomy_sim.aLvlNow[2])]
# Plot the distribution of wealth
for i in range(len(MyTypes)):
if i<=2:
plt.hist(np.log(KSEconomy_sim.aLvlNow[i])\
,label=r'$\beta$='+str(round(DiscFac_dstn[i],4))\
,bins=np.arange(-2.,np.log(max(aLvl_all)),0.05))
plt.yticks([])
plt.legend(loc=2)
plt.title('Log Wealth Distribution of 3 Types')
make_figs('log_wealth_3_types', True, False)
plt.show()
# Distribution of wealth in original model with one type
plt.hist(np.log(sim_wealth),bins=np.arange(-2.,np.log(max(aLvl_all)),0.05))
plt.yticks([])
plt.title('Log Wealth Distribution of Original Model with One Type')
make_figs('log_wealth_1', True, False)
plt.show()
###Output
Saving figure log_wealth_1 in Figures
###Markdown
Target Wealth is Nonlinear in Time Preference RateNote the nonlinear relationship between wealth and time preference in the economy with three types. Although the three groups are uniformly spaced in $\beta$ values, there is a lot of overlap in the distribution of wealth of the two impatient types, who are both separated from the most patient type by a large gap. A model of buffer stock saving that has simplified enough to be [tractable](http://econ.jhu.edu/people/ccarroll/public/lecturenotes/Consumption/TractableBufferStock) yields some insight. If $\sigma$ is a measure of income risk, $r$ is the interest rate, and $\theta$ is the time preference rate, then for an 'impatient' consumer (for whom $\theta > r$), in the logarithmic utility case an approximate formula for the target level of wealth is:\begin{eqnarray} a & \approx & \left(\frac{1}{ \theta(1+(\theta-r)/\sigma)-r}\right)\end{eqnarray}Conceptually, this reflects the fact that the only reason any of these agents holds positive wealth is the precautionary motive. (If there is no uncertainty, $\sigma=0$ and thus $a=0$). For positive uncertainty $\sigma>0$, as the degree of impatience (given by $\theta-r$) approaches zero, the target level of wealth approaches infinity. A plot of $a$ as a function of $\theta$ for a particular parameterization is shown below.
###Code
# Plot target wealth as a function of time preference rate for calibrated tractable model
fig = plt.figure()
ax = plt.axes()
sigma = 0.01
r = 0.02
theta = np.linspace(0.023,0.10,100)
plt.plot(theta,1/(theta*(1+(theta-r)/sigma)-r))
plt.xlabel(r'$\theta$')
plt.ylabel('Target wealth')
make_figs('target_wealth', True, False)
plt.show()
###Output
Saving figure target_wealth in Figures
###Markdown
[Krusell Smith (1998)](https://www.journals.uchicago.edu/doi/pdf/10.1086/250034)- Original version by Tim Munday - Comments and extensions by Tao Wang- Further edits by Chris Carroll [](https://mybinder.org/v2/gh/econ-ark/DemARK/master?filepath=notebooks%2FKrusellSmith.ipynb) OverviewThe benchmark Krusell-Smith model has the following broad features: * The aggregate state switches between "good" and "bad" with known probabilities * All consumers experience the same aggregate state for the economy (good or bad) * _ex ante_ there is only one type of consumer, which is infinitely lived * _ex post_ heterogeneity arises from uninsurable idiosyncratic income shocks * Specifically, individuals are at risk of spells of unemployment * In a spell of unemployment, their income is zero Thus, each agent faces two types of uncertainty: About their employment state, and about the income they will earn when employed. And the values of income and unemployment risk depend on the aggregate state. Details IdiosyncraticEach agent _attempts_ to supply an amount of productive labor $\ell$ in each period. (Here and below we mostly follow the notation of Krusell and Smith (1998)).However, whether they _succeed_ in supplying that labor (and earning a corresponding wage) is governed by the realization of the stochastic variable $\epsilon$. If the agent is unlucky, $\epsilon$ is zero and the agent is unemployed. The amount of labor they succeed in supplying is thus $\epsilon\ell$. AggregateAggregate output ($\bar{y}$) is produced using a Cobb-Douglas production function using capital and labor. (Bars over variables indicate the aggregate value of a variable that has different values across different idiosyncratic consumers).$z$ denotes the aggregate shock to productivity. $z$ can take two values, either $z_g$ -- the "good" state, or $z_b < z_g$ -- the "bad" state. Consumers gain income from providing labor, and from the rental return on any capital they own. Labor and capital markets are perfectly efficient so both factors are both paid their marginal products.The agent can choose to save by buying capital $k$ which is bounded below at the borrowing constraint of 0.Putting all of this together, aggregate output is given by: \begin{eqnarray}\bar{y} & = & z\bar{k}^\alpha \bar{\ell}^{1-\alpha}\end{eqnarray} The aggregate shocks $z$ follow first-order Markov chains with the transition probability of moving from state $s$ to state $s'$ denoted by $\pi_{ss'}$. The aggregate shocks and individual shocks are correlated: The probability of being unemployed is higher in bad times, when aggregate productivity is low, than in good times, when aggregate productivity is high. Idiosyncratic and Aggregate TogetherThe individual shocks satisfy the law of large numbers, and the model is constructed so that the number of agents who are unemployed in the good state always equals $u_g$, and is always $u_b$ in the bad state. Given the aggregate state, individual shocks are independent from each other.For the individual, the probability of moving between a good state and employment to a bad state and unemployment is denoted $\pi_{gb10}$ with similar notation for the other transition probabilities.(Krusell and Smith allow for serially correlated unemployment at the idiosyncratic level. Here we will simplify this and have unemployment be serially uncorrelated.) Finally, $\Gamma$ denotes the current distribution of consumers over capital and employment status, and $H$ denotes the law of motion of this distribution. 
The Idiosyncratic Individual's Problem Given the Aggregate StateThe individual's problem is:\begin{eqnarray*}V(k, \epsilon; \Gamma, z) &=& \max_{k'}\{U(c) + \beta \mathbb{E}[V(k' ,\epsilon'; \Gamma', z')|z, \epsilon]\} \\c + k' &=& r(\bar{k}, \bar{\ell}, z)k + w(\bar{k}, \bar{\ell}, z)\ell\epsilon + (1-\delta)k \\\Gamma' &=& H(\Gamma, z, z') \\k' &\geq& 0 \\\end{eqnarray*} Krusell and Smith define an equilibrium as a law of motion $H$, a value function $V$, a rule for updating capital $f$ and pricing functions $r$ and $w$, such that $V$ and $f$ solve the consumers problem, $r$ and $w$ denote the marginal products of capital and labour, and $H$ is consistent with $f$ (i.e. if we add up all of the individual agents capital choices we get the correct distribution of capital). Discussion of the KS AlgorithmIn principle, $\Gamma$ is a high-dimensional object because it includes the whole distribution of individuals' wealth and employment states. Because the optimal amount to save is a nonlinear function of the level of idiosyncratic $k$, next period's aggregate capital stock $\bar{k}'$ depends on the distribution of the holdings of idiosyncratic $k$ across the population of consumers. Therefore the law of motion $H$ is not a trivial function of the $\Gamma$. KS simplified this problem by noting the following. 1. The agent cares about the future aggregate aggregate state only insofar as that state affects their own personal value of $c$1. Future values of $c$ depend on the aggregate state only through the budget constraint1. The channels by which the budget constraint depends on the aggregate state are: * The probability distributions of $\epsilon$ and $z$ are affected by the aggregate state * Interest rates and wages depend on the future values of $\bar{k}$ and $\bar{\ell}$1. The probability distributions for the future values of $\{\epsilon, z\}$ are known * They are fully determined by the Markov transition matrices1. But the values of $r$ and $w$ are both determined by the future value of $\bar{k}$ (in combination with the exogenous value of $\bar{\ell}$) * So the only _endogenous_ object that the agent needs to form expectations about, in order to have a complete rational expectation about everything affecting them, is $\bar{k}'$The key result in Krusell and Smith is the discovery that a very simple linear rule does an extraordinarily good job (though not quite perfect) in forecasting $\bar{k'}$They then argue that, since rationality is surely bounded to some degree, the solution that an agent obtains using a good forecasting rule for $\bar{k}'$ is "good enough" to compute an "approximate" solution to the consumer's optimization problem.They define a generic algorithm to find a forecasting rule for $\bar{k}$ as follows1. Choose the number of moments $n$ of the distribution of $k$ to be included in the set of variables to forecast $\bar{k}'$. In the simplest case, $n=1$, the only forecasting variable for next period's $\bar{k}'$ is the mean (the first moment, $n=1$)) of current capital, $\bar{k}$.2. Each individual adopts the same belief about the law motion of these moments, $H_I$ and finds the optimal decision policy, $f_I$, contingent on that guess.3. Use the optimal policy to simulate a history of aggregate capital with a large number of agents. 4. Characterize the realized law of motion using the same number of moments $n$ 5. Compare it with the $H_I$, what is taken as given by individuals. 6. Iterate until the two converge. 
In the end, the solution to the original problem is well approximated by the following simplified problem:\begin{eqnarray*}V(k, \epsilon; \bar k, z) &=& max_{c, k'}\{U(c) + \beta E[V(k' ,\epsilon'; \bar k', z')|z, \epsilon]\} \\c + k' &=& r(\bar{k}, \bar{\ell}, z)k + w(\bar{k}, \bar{\ell}, z)l\epsilon + (1-\delta)k \\\text{When }~ z=z_g, \quad \mathbb{E}[\log\bar{k}'] & = & a_0 + a_1 \log\bar k \\\text{When }~ z=z_b, \quad \mathbb{E}[\log\bar{k}'] & = & b_0 + b_1 \log\bar k \\k' &\geq& 0 \\\end{eqnarray*} Implementation Using the HARK Toolkit The Consumer
###Code
# Import generic setup tools
# This is a jupytext paired notebook that autogenerates KrusellSmith.py
# which can be executed from a terminal command line via "ipython KrusellSmith.py"
# But a terminal does not permit inline figures, so we need to test jupyter vs terminal
# Google "how can I check if code is executed in the ipython notebook"
# Determine whether to make the figures inline (for spyder or jupyter)
# vs whatever is the automatic setting that will apply if run from the terminal
# import remark # 20191113 CDC to Seb: Where do you propose that this module should go (permanently?)
# in the /binder folder, where it could be installed by postBuild (unix) or postBuild.bat?
%matplotlib inline
# Import the plot-figure library matplotlib
import numpy as np
import matplotlib.pyplot as plt
from HARK.utilities import plotFuncs, plotFuncsDer, make_figs
from copy import deepcopy
# Import components of HARK needed for solving the KS model
# from HARK.ConsumptionSaving.ConsAggShockModel import *
import HARK.ConsumptionSaving.ConsumerParameters as Params
# Markov consumer type that allows aggregate shocks (redundant but instructive)
from HARK.ConsumptionSaving.ConsAggShockModel import AggShockMarkovConsumerType
# Define a dictionary to make an 'instance' of our Krusell-Smith consumer.
# The folded dictionary below contains many parameters to the
# AggShockMarkovConsumerType agent that are not needed for the KS model
KSAgentDictionary = {
"CRRA": 1.0, # Coefficient of relative risk aversion
"DiscFac": 0.99, # Intertemporal discount factor
"LivPrb" : [1.0], # Survival probability
"AgentCount" : 10000, # Number of agents of this type (only matters for simulation)
"aNrmInitMean" : 0.0, # Mean of log initial assets (only matters for simulation)
"aNrmInitStd" : 0.0, # Standard deviation of log initial assets (only for simulation)
"pLvlInitMean" : 0.0, # Mean of log initial permanent income (only matters for simulation)
"pLvlInitStd" : 0.0, # Standard deviation of log initial permanent income (only matters for simulation)
"PermGroFacAgg" : 1.0, # Aggregate permanent income growth factor (only matters for simulation)
"T_age" : None, # Age after which simulated agents are automatically killed
"T_cycle" : 1, # Number of periods in the cycle for this agent type
# Parameters for constructing the "assets above minimum" grid
"aXtraMin" : 0.001, # Minimum end-of-period "assets above minimum" value
"aXtraMax" : 20, # Maximum end-of-period "assets above minimum" value
"aXtraExtra" : [None], # Some other value of "assets above minimum" to add to the grid
"aXtraNestFac" : 3, # Exponential nesting factor when constructing "assets above minimum" grid
"aXtraCount" : 24, # Number of points in the grid of "assets above minimum"
# Parameters describing the income process
"PermShkCount" : 1, # Number of points in discrete approximation to permanent income shocks - no shocks of this kind!
"TranShkCount" : 1, # Number of points in discrete approximation to transitory income shocks - no shocks of this kind!
"PermShkStd" : [0.], # Standard deviation of log permanent income shocks - no shocks of this kind!
"TranShkStd" : [0.], # Standard deviation of log transitory income shocks - no shocks of this kind!
"UnempPrb" : 0.0, # Probability of unemployment while working - no shocks of this kind!
"UnempPrbRet" : 0.00, # Probability of "unemployment" while retired - no shocks of this kind!
"IncUnemp" : 0.0, # Unemployment benefits replacement rate
"IncUnempRet" : 0.0, # "Unemployment" benefits when retired
"tax_rate" : 0.0, # Flat income tax rate
"T_retire" : 0, # Period of retirement (0 --> no retirement)
"BoroCnstArt" : 0.0, # Artificial borrowing constraint; imposed minimum level of end-of period assets
"cycles": 0, # Consumer is infinitely lived
"PermGroFac" : [1.0], # Permanent income growth factor
# New Parameters that we need now
'MgridBase': np.array([0.1,0.3,0.6,
0.8,0.9,0.98,
1.0,1.02,1.1,
1.2,1.6,2.0,
3.0]), # Grid of capital-to-labor-ratios (factors)
'MrkvArray': np.array([[0.875,0.125],
[0.125,0.875]]), # Transition probabilities for macroecon. [i,j] is probability of being in state j next
# period conditional on being in state i this period.
'PermShkAggStd' : [0.0,0.0], # Standard deviation of log aggregate permanent shocks by state. No continous shocks in a state.
'TranShkAggStd' : [0.0,0.0], # Standard deviation of log aggregate transitory shocks by state. No continuous shocks in a state.
'PermGroFacAgg' : 1.0
}
# Here we restate just the "interesting" parts of the consumer's specification
KSAgentDictionary['CRRA'] = 1.0 # Relative risk aversion
KSAgentDictionary['DiscFac'] = 0.99 # Intertemporal discount factor
KSAgentDictionary['cycles'] = 0 # cycles=0 means consumer is infinitely lived
# KS assume that 'good' and 'bad' times are of equal expected duration
# The probability of a change in the aggregate state is p_change=0.125
p_change=0.125
p_remain=1-p_change
# Now we define macro transition probabilities for AggShockMarkovConsumerType
# [i,j] is probability of being in state j next period conditional on being in state i this period.
# In both states, there is 0.875 chance of staying, 0.125 chance of switching
AggMrkvArray = \
np.array([[p_remain,p_change], # Probabilities of states 0 and 1 next period if in state 0
[p_change,p_remain]]) # Probabilities of states 0 and 1 next period if in state 1
KSAgentDictionary['MrkvArray'] = AggMrkvArray
# Create the Krusell-Smith agent as an instance of AggShockMarkovConsumerType
KSAgent = AggShockMarkovConsumerType(**KSAgentDictionary)
###Output
_____no_output_____
###Markdown
Now we need to specify the income distribution. The HARK toolkit allows for two components of labor income: Persistent (or permanent), and transitory. Using the KS notation above, a HARK consumer's income is\begin{eqnarray}y & = & w p \ell \epsilon \end{eqnarray}where $p$ is the persistent component of income. Krusell and Smith did not incorporate a persistent component of income, however, so we will simply calibrate $p=1$ for all states.For each of the two aggregate states we need to specify: * The _proportion_ of consumers in the $e$ and the $u$ states * The level of persistent/permanent productivity $p$ (always 1) * The ratio of actual to permanent productivity in each state $\{e,u\}$ * In the KS notation, this is $\epsilon\ell$
###Code
# Construct the income distribution for the Krusell-Smith agent
prb_eg = 0.96 # Probability of employment in the good state
prb_ug = 1-prb_eg # Probability of unemployment in the good state
prb_eb = 0.90 # Probability of employment in the bad state
prb_ub = 1-prb_eb # Probability of unemployment in the bad state
p_ind = 1 # Persistent component of income is always 1
ell_ug = ell_ub = 0 # Labor supply is zero for unemployed consumers in either agg state
ell_eg = 1.0/prb_eg # Labor supply for employed consumer in good state
ell_eb = 1.0/prb_eb       # Normalization: 1 = prb_eg*ell_eg + prb_ug*ell_ug = prb_eb*ell_eb + prb_ub*ell_ub
# IncomeDstn is a list of lists, one for each aggregate Markov state
# Each contains three arrays of floats, representing a discrete approximation to the income process.
# Order:
# state probabilities
# idiosyncratic persistent income level by state (KS have no persistent shocks p_ind is always 1.0)
# idiosyncratic transitory income level by state
KSAgent.IncomeDstn[0] = \
[[np.array([prb_eg,prb_ug]),np.array([p_ind,p_ind]),np.array([ell_eg,ell_ug])], # Agg state good
[np.array([prb_eb,prb_ub]),np.array([p_ind,p_ind]),np.array([ell_eb,ell_ub])] # Agg state bad
]
###Output
_____no_output_____
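###Markdown
A quick sanity check on the numbers just entered (an added cell): because $\ell$ is set to 1/probability-of-employment, the expected effective labor supply equals 1 in each aggregate state, which is the normalization the comment in the cell above refers to.
###Code
# Expected effective labor supply should equal 1 in each aggregate state
print(prb_eg*ell_eg + prb_ug*ell_ug) # good state: 0.96*(1/0.96) + 0.04*0 = 1.0
print(prb_eb*ell_eb + prb_ub*ell_ub) # bad state:  0.90*(1/0.90) + 0.10*0 = 1.0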
###Markdown
Up to this point, individual agents do not have enough information to solve their decision problem yet. What is missing are beliefs about the endogenous macro variables $r$ and $w$, both of which are functions of $\bar{k}$. The Aggregate Economy
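###Markdown
Before constructing the economy object, it may help to see what those beliefs are about. Because factors are paid their marginal products, $r$ and $w$ follow directly from the Cobb-Douglas technology introduced earlier. The cell below is an added sketch: the capital share and depreciation rate match the dictionary that follows, while the values of $z$ and the aggregate capital stock are purely illustrative.
###Code
# Marginal products implied by y = z * k**alpha * l**(1-alpha)
alpha = 0.36 # CapShare in the economy dictionary below
delta = 0.025 # DeprFac in the economy dictionary below
def factor_prices(k_bar, l_bar=1.0, z=1.0):
    r = alpha*z*k_bar**(alpha - 1)*l_bar**(1 - alpha) # rental rate of capital
    w = (1 - alpha)*z*k_bar**alpha*l_bar**(-alpha)    # wage per unit of effective labor
    return r, w
r, w = factor_prices(k_bar=40.0, z=1.01) # illustrative aggregate capital with z in the good state
print(r, w, r - delta) # r - delta is the return net of depreciation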
###Code
from HARK.ConsumptionSaving.ConsAggShockModel import CobbDouglasMarkovEconomy
KSEconomyDictionary = {
'PermShkAggCount': 1,
'TranShkAggCount': 1,
'PermShkAggStd': [0.0,0.0],
'TranShkAggStd': [0.0,0.0],
'DeprFac': 0.025, # Depreciation factor
'CapShare': 0.36, # Share of capital income in cobb-douglas production function
'DiscFac': 0.99,
'CRRA': 1.0,
'PermGroFacAgg': [1.0,1.0],
'AggregateL':1.0, # Fix aggregate labor supply at 1.0 - makes interpretation of z easier
'act_T':1200, # Number of periods for economy to run in simulation
'intercept_prev': [0.0,0.0], # Make some initial guesses at linear savings rule intercepts for each state
'slope_prev': [1.0,1.0], # Make some initial guesses at linear savings rule slopes for each state
'MrkvArray': np.array([[0.875,0.125],
[0.125,0.875]]), # Transition probabilities
'MrkvNow_init': 0 # Pick a state to start in (we pick the first state)
}
# The 'interesting' parts of the CobbDouglasMarkovEconomy
KSEconomyDictionary['CapShare'] = 0.36
KSEconomyDictionary['MrkvArray'] = AggMrkvArray
KSEconomy = CobbDouglasMarkovEconomy(agents = [KSAgent], **KSEconomyDictionary) # Combine production and consumption sides into an "Economy"
###Output
_____no_output_____
###Markdown
We have now populated the $\texttt{KSEconomy}$ with $\texttt{KSAgents}$ defined before. That is basically telling the agents to take the macro state from the $\texttt{KSEconomy}$. Now we construct the $\texttt{AggShkDstn}$ that specifies the evolution of the dynamics of the $\texttt{KSEconomy}$.The structure of the inputs for $\texttt{AggShkDstn}$ follows the same logic as for $\texttt{IncomeDstn}$. Now there is only one possible outcome for each aggregate state (the KS aggregate states are very simple), therefore, each aggregate state has only one possible condition which happens with probability 1.
###Code
# Calibrate the magnitude of the aggregate shocks
Tran_g = 1.01 # Productivity z in the good aggregate state
Tran_b = 0.99 # and the bad state
# The HARK framework allows permanent shocks
Perm_g = Perm_b = 1.0 # KS assume there are no aggregate permanent shocks
# Aggregate productivity shock distribution by state.
# First element is probabilities of different outcomes, given the state you are in.
# Second element is agg permanent shocks (here we don't have any, so just they are just 1.).
# Third element is agg transitory shocks, which are calibrated the same as in Krusell Smith.
KSAggShkDstn = [
[np.array([1.0]),np.array([Perm_g]),np.array([Tran_g])], # Aggregate good
[np.array([1.0]),np.array([Perm_b]),np.array([Tran_b])] # Aggregate bad
]
KSEconomy.AggShkDstn = KSAggShkDstn
###Output
_____no_output_____
###Markdown
Summing UpThe combined idiosyncratic and aggregate assumptions can be summarized mathematically as follows.$\forall \{s,s'\}=\{g,b\}\times\{g,b\}$, the following two conditions hold:$$\underbrace{\pi_{ss'01}}_{p(s \rightarrow s',u \rightarrow e)}+\underbrace{\pi_{ss'00}}_{p(s \rightarrow s', u \rightarrow u)} = \underbrace{\pi_{ss'11}}_{p(s\rightarrow s', e \rightarrow e) } + \underbrace{\pi_{ss'10}}_{p(s \rightarrow s', e \rightarrow u)} = \underbrace{\pi_{ss'}}_{p(s\rightarrow s')}$$$$u_s \frac{\pi_{ss'00}}{\pi_{ss'}}+ (1-u_s) \frac{\pi_{ss'10}}{\pi_{ss'}} = u_{s'}$$ Solving the ModelNow, we have fully defined all of the elements of the macroeconomy, and we are in position to construct an object that represents the economy and to construct a rational expectations equilibrium.
###Code
# Construct the economy, make an initial history, then solve
KSAgent.getEconomyData(KSEconomy) # Makes attributes of the economy, attributes of the agent
KSEconomy.makeAggShkHist() # Make a simulated history of the economy
# Set tolerance level.
KSEconomy.tolerance = 0.01
# Solve macro problem by finding a fixed point for beliefs
KSEconomy.solve() # Solve the economy using the market method.
# i.e. guess the saving function, and iterate until a fixed point
###Output
intercept=[-0.3633934627371411, -0.3608578483955383], slope=[1.0830022961892363, 1.0827184480926204], r-sq=[0.9999007630097778, 0.9998101850187024]
intercept=[-0.17089644375033364, -0.16963019904696233], slope=[1.0241207368081324, 1.023951127937166], r-sq=[0.999999971178563, 0.9999999591213105]
intercept=[-0.12888285400465332, -0.12832143070835694], slope=[1.0177130805423737, 1.017644170947194], r-sq=[0.9999999991078745, 0.9999999989510322]
intercept=[-0.1750382403992826, -0.17381712042503975], slope=[1.0274485844743586, 1.0271360611194558], r-sq=[0.9999070289989878, 0.9998685927866565]
intercept=[-0.14701428612699458, -0.1460816057160878], slope=[1.0217199230918563, 1.0214857615950614], r-sq=[0.999999996556908, 0.9999999957854477]
intercept=[-0.16621233386964065, -0.16517078873160526], slope=[1.0250443688171609, 1.0247626299746349], r-sq=[0.9999975261945797, 0.9999964924559726]
intercept=[-0.14636239785900296, -0.14560955065089015], slope=[1.0215114019521827, 1.0213159398080411], r-sq=[0.9999999960541959, 0.9999999952356258]
intercept=[-0.16558197136205566, -0.16462181351719998], slope=[1.0248890561916615, 1.0246251908919906], r-sq=[0.9999977518642956, 0.9999967948317857]
intercept=[-0.1464936629756448, -0.1457733291389326], slope=[1.0215311912742095, 1.0213426544894717], r-sq=[0.9999999959276893, 0.9999999950824621]
intercept=[-0.1653892769329942, -0.1644483859132379], slope=[1.0248418313742473, 1.024582534515455], r-sq=[0.999997889822512, 0.9999969837600351]
intercept=[-0.14654438687031163, -0.14583122605542506], slope=[1.0215393482923663, 1.0213524594988954], r-sq=[0.9999999958822734, 0.9999999950272114]
intercept=[-0.16532460124496665, -0.16438881586734444], slope=[1.0248259575254974, 1.024567974252665], r-sq=[0.9999979362364356, 0.9999970473655019]
intercept=[-0.14656252828747313, -0.14585115959812595], slope=[1.0215422978847915, 1.021355861751399], r-sq=[0.9999999958660795, 0.9999999950074796]
intercept=[-0.16530245776203054, -0.16436817115895294], slope=[1.0248205153817054, 1.024562944188768], r-sq=[0.9999979522232706, 0.999997069283659]
intercept=[-0.14656893160341555, -0.1458580647440007], slope=[1.0215433427526472, 1.0213570446046103], r-sq=[0.9999999958603605, 0.9999999950005081]
intercept=[-0.16529479167154393, -0.16436098127625334], slope=[1.0248186295101278, 1.0245611953067848], r-sq=[0.9999979577832047, 0.9999970769068269]
intercept=[-0.14657117535204678, -0.14586046335227082], slope=[1.0215437090940278, 1.021357456219035], r-sq=[0.9999999958583508, 0.9999999949980571]
intercept=[-0.16529212738142712, -0.16435847558433578], slope=[1.0248179736935827, 1.0245605863737666], r-sq=[0.9999979597214894, 0.999997079563888]
intercept=[-0.14657195924175268, -0.14586129803982012], slope=[1.0215438370577066, 1.0213575996065887], r-sq=[0.9999999958576475, 0.9999999949971996]
intercept=[-0.1652911174279229, -0.16435751632321458], slope=[1.0248177243651695, 1.0245603522902766], r-sq=[0.9999979603530815, 0.9999970804271507]
###Markdown
The last line above is the converged aggregate saving rule for good and bad times, respectively.
###Code
# Plot some key results
print('Aggregate savings as a function of aggregate market resources:')
fig = plt.figure()
bottom = 0.1
top = 2*KSEconomy.kSS
x = np.linspace(bottom,top,1000,endpoint=True)
print(KSEconomy.AFunc)
y0 = KSEconomy.AFunc[0](x)
y1 = KSEconomy.AFunc[1](x)
plt.plot(x,y0)
plt.plot(x,y1)
plt.xlim([bottom, top])
make_figs('aggregate_savings', True, False)
# remark.show('aggregate_savings')
print('Consumption function at each aggregate market resources gridpoint (in general equilibrium):')
KSAgent.unpackcFunc()
m_grid = np.linspace(0,10,200)
KSAgent.unpackcFunc()
for M in KSAgent.Mgrid:
    c_at_this_M = KSAgent.solution[0].cFunc[0](m_grid,M*np.ones_like(m_grid)) # cFunc is a list with one consumption function per aggregate Markov state; [0] is the first state
plt.plot(m_grid,c_at_this_M)
make_figs('consumption_function', True, False)
# remark.show('consumption_function')
print('Savings at each individual market resources gridpoint (in general equilibrium):')
fig = plt.figure()
KSAgent.unpackcFunc()
m_grid = np.linspace(0,10,200)
KSAgent.unpackcFunc()
for M in KSAgent.Mgrid:
s_at_this_M = m_grid-KSAgent.solution[0].cFunc[1](m_grid,M*np.ones_like(m_grid))
    c_at_this_M = KSAgent.solution[0].cFunc[1](m_grid,M*np.ones_like(m_grid)) # consumption in the second aggregate Markov state (computed for reference; only s_at_this_M is plotted)
plt.plot(m_grid,s_at_this_M)
make_figs('savings_function', True, False)
# remark.show('savings_function')
###Output
Aggregate savings as a function of aggregate market resources:
[<HARK.ConsumptionSaving.ConsAggShockModel.AggregateSavingRule object at 0x7fa5490dc6d8>, <HARK.ConsumptionSaving.ConsAggShockModel.AggregateSavingRule object at 0x7fa5490dc710>]
Saving figure aggregate_savings in Figures
Consumption function at each aggregate market resources gridpoint (in general equilibrium):
Saving figure consumption_function in Figures
Savings at each individual market resources gridpoint (in general equilibrium):
Saving figure savings_function in Figures
###Markdown
The Wealth Distribution in KS Benchmark Model
###Code
sim_wealth = KSEconomy.aLvlNow[0]
print("The mean of individual wealth is "+ str(sim_wealth.mean()) + ";\n the standard deviation is "
+ str(sim_wealth.std())+";\n the median is " + str(np.median(sim_wealth)) +".")
# Get some tools for plotting simulated vs actual wealth distributions
from HARK.utilities import getLorenzShares, getPercentiles
# The cstwMPC model conveniently has data on the wealth distribution
# from the U.S. Survey of Consumer Finances
from HARK.cstwMPC.SetupParamsCSTW import SCF_wealth, SCF_weights
# Construct the Lorenz curves and plot them
pctiles = np.linspace(0.001,0.999,15)
SCF_Lorenz_points = getLorenzShares(SCF_wealth,weights=SCF_weights,percentiles=pctiles)
sim_Lorenz_points = getLorenzShares(sim_wealth,percentiles=pctiles)
# Plot
plt.figure(figsize=(5,5))
plt.title('Wealth Distribution')
plt.plot(pctiles,SCF_Lorenz_points,'--k',label='SCF')
plt.plot(pctiles,sim_Lorenz_points,'-b',label='Benchmark KS')
plt.plot(pctiles,pctiles,'g-.',label='45 Degree')
plt.xlabel('Percentile of net worth')
plt.ylabel('Cumulative share of wealth')
plt.legend(loc=2)
plt.ylim([0,1])
make_figs('wealth_distribution_1', True, False)
# remark.show('')
# Calculate a measure of the difference between the simulated and empirical distributions
lorenz_distance = np.sqrt(np.sum((SCF_Lorenz_points - sim_Lorenz_points)**2))
print("The Euclidean distance between simulated wealth distribution and the estimates from the SCF data is "+str(lorenz_distance) )
###Output
The Euclidean distance between simulated wealth distribution and the estimates from the SCF data is 1.461381224774914
###Markdown
Heterogeneous Time Preference RatesAs the figures show, the distribution of wealth that the baseline KS model produces is very far from matching the empirical degree of inequality in the US data.This could matter for macroeconomic purposes. For example, the SCF data indicate that many agents are concentrated at low values of wealth where the MPC is very large. We might expect, therefore, that a fiscal policy "stimulus" that gives a fixed amount of money to every agent would have a large effect on the consumption of the low-wealth households who have a high Marginal Propensity to Consume.KS attempt to address this problem by assuming that an individual agent's time preference rate can change over time.The rationale is that this represents a generational transition: The "agent" is really a "dynasty" and the time preference rate of the "child" dynast may differ from that of the "parent."Specifically, KS assume that $\beta$ can take on three values, 0.9858, 0.9894, and 0.9930, and that the transition probabilities are such that - The invariant distribution for $\beta$’s has 80 percent of the population at the middle $\beta$ and 10 percent at each of the other $\beta$’s.- Immediate transitions between the extreme values of $\beta$ occur with probability zero. - The average duration of the highest and lowest $\beta$’s is 50 years. The HARK toolkit is not natively set up to accommodate stochastic time preference factors (though an extension to accommodate this would be easy). Here, instead, we assume that different agents have different values of $\beta$ that are uniformly distributed over some range. We approximate the uniform distribution by three points. The agents are heterogeneous _ex ante_ (and permanently).
###Code
# Construct the distribution of types
from HARK.utilities import approxUniform
# Specify the distribution of the discount factor
num_types = 3 # number of types we want;
DiscFac_mean = 0.9858 # center of beta distribution
DiscFac_spread = 0.0085 # spread of beta distribution
DiscFac_dstn = approxUniform(num_types, DiscFac_mean-DiscFac_spread, DiscFac_mean+DiscFac_spread)[1]
BaselineType = deepcopy(KSAgent)
MyTypes = [] # initialize an empty list to hold our consumer types
for nn in range(len(DiscFac_dstn)):
# Now create the types, and append them to the list MyTypes
NewType = deepcopy(BaselineType)
NewType.DiscFac = DiscFac_dstn[nn]
NewType.seed = nn # give each consumer type a different RNG seed
MyTypes.append(NewType)
# Put all agents into the economy
KSEconomy_sim = CobbDouglasMarkovEconomy(agents = MyTypes, **KSEconomyDictionary)
KSEconomy_sim.AggShkDstn = KSAggShkDstn # Agg shocks are the same as defined earlier
for ThisType in MyTypes:
ThisType.getEconomyData(KSEconomy_sim) # Makes attributes of the economy, attributes of the agent
KSEconomy_sim.makeAggShkHist() # Make a simulated prehistory of the economy
KSEconomy_sim.solve() # Solve macro problem by getting a fixed point dynamic rule
# Get the level of end-of-period assets a for all types of consumers
aLvl_all = np.concatenate([KSEconomy_sim.aLvlNow[i] for i in range(len(MyTypes))])
print('Aggregate capital to income ratio is ' + str(np.mean(aLvl_all)))
# Plot the distribution of wealth across all agent types
sim_3beta_wealth = aLvl_all
pctiles = np.linspace(0.001,0.999,15)
sim_Lorenz_points = getLorenzShares(sim_wealth,percentiles=pctiles)
SCF_Lorenz_points = getLorenzShares(SCF_wealth,weights=SCF_weights,percentiles=pctiles)
sim_3beta_Lorenz_points = getLorenzShares(sim_3beta_wealth,percentiles=pctiles)
## Plot
plt.figure(figsize=(5,5))
plt.title('Wealth Distribution')
plt.plot(pctiles,SCF_Lorenz_points,'--k',label='SCF')
plt.plot(pctiles,sim_Lorenz_points,'-b',label='Benchmark KS')
plt.plot(pctiles,sim_3beta_Lorenz_points,'-*r',label='3 Types')
plt.plot(pctiles,pctiles,'g-.',label='45 Degree')
plt.xlabel('Percentile of net worth')
plt.ylabel('Cumulative share of wealth')
plt.legend(loc=2)
plt.ylim([0,1])
make_figs('wealth_distribution_2', True, False)
# remark.show('wealth_distribution_2')
# The mean levels of wealth for the three types of consumer are
[np.mean(KSEconomy_sim.aLvlNow[0]),np.mean(KSEconomy_sim.aLvlNow[1]),np.mean(KSEconomy_sim.aLvlNow[2])]
fig = plt.figure()
# Plot the distribution of wealth
for i in range(len(MyTypes)):
if i<=2:
plt.hist(np.log(KSEconomy_sim.aLvlNow[i])\
,label=r'$\beta$='+str(round(DiscFac_dstn[i],4))\
,bins=np.arange(-2.,np.log(max(aLvl_all)),0.05))
plt.yticks([])
plt.legend(loc=2)
plt.title('Log Wealth Distribution of 3 Types')
make_figs('log_wealth_3_types', True, False)
# remark.show('log_wealth_3_types')
fig = plt.figure()
# Distribution of wealth in original model with one type
plt.hist(np.log(sim_wealth),bins=np.arange(-2.,np.log(max(aLvl_all)),0.05))
plt.yticks([])
plt.title('Log Wealth Distribution of Original Model with One Type')
make_figs('log_wealth_1', True, False)
# remark.show('log_wealth_1')
###Output
Saving figure log_wealth_1 in Figures
###Markdown
Target Wealth is Nonlinear in Time Preference RateNote the nonlinear relationship between wealth and time preference in the economy with three types. Although the three groups are uniformly spaced in $\beta$ values, there is a lot of overlap in the distribution of wealth of the two impatient types, who are both separated from the most patient type by a large gap. A model of buffer stock saving that has simplified enough to be [tractable](http://econ.jhu.edu/people/ccarroll/public/lecturenotes/Consumption/TractableBufferStock) yields some insight. If $\sigma$ is a measure of income risk, $r$ is the interest rate, and $\theta$ is the time preference rate, then for an 'impatient' consumer (for whom $\theta > r$), in the logarithmic utility case an approximate formula for the target level of wealth is:\begin{eqnarray} a & \approx & \left(\frac{1}{ \theta(1+(\theta-r)/\sigma)-r}\right)\end{eqnarray}Conceptually, this reflects the fact that the only reason any of these agents holds positive wealth is the precautionary motive. (If there is no uncertainty, $\sigma=0$ and thus $a=0$). For positive uncertainty $\sigma>0$, as the degree of impatience (given by $\theta-r$) approaches zero, the target level of wealth approaches infinity. A plot of $a$ as a function of $\theta$ for a particular parameterization is shown below.
###Code
# Plot target wealth as a function of time preference rate for calibrated tractable model
fig = plt.figure()
ax = plt.axes()
sigma = 0.01
r = 0.02
theta = np.linspace(0.023,0.10,100)
plt.plot(theta,1/(theta*(1+(theta-r)/sigma)-r))
plt.xlabel(r'$\theta$')
plt.ylabel('Target wealth')
make_figs('target_wealth', True, False)
# remark.show('target_wealth')
###Output
Saving figure target_wealth in Figures
|
04.14-Automatic-Imputation-Method-Detection-Sklearn.ipynb
|
###Markdown
Automatic selection of best imputation technique with SklearnIn this notebook we will do a grid search over the imputation methods available in Scikit-learn to determine which imputation technique works best for this dataset and the machine learning model of choice.We will also train a very simple machine learning model as part of a small pipeline.We will use the House Price dataset.- To download the dataset please visit the lecture **Datasets** in **Section 1** of the course.
###Code
import pandas as pd
import numpy as np
# import classes for imputation
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
# import extra classes for modelling
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split, GridSearchCV
np.random.seed(0)
# load dataset with all the variables
data = pd.read_csv('../houseprice.csv',)
data.head()
# find categorical variables
# those of type 'Object' in the dataset
features_categorical = [c for c in data.columns if data[c].dtypes=='O']
# find numerical variables
# those different from object and also excluding the target SalePrice
features_numerical = [c for c in data.columns if data[c].dtypes!='O' and c !='SalePrice']
# inspect the categorical variables
data[features_categorical].head()
# inspect the numerical variables
data[features_numerical].head()
# separate into train and test set
X_train, X_test, y_train, y_test = train_test_split(
data.drop('SalePrice', axis=1), # just the features
data['SalePrice'], # the target
test_size=0.3, # the percentage of obs in the test set
random_state=0) # for reproducibility
X_train.shape, X_test.shape
# We create the preprocessing pipelines for both
# numerical and categorical data
# adapted from Scikit-learn code available here under BSD3 license:
# https://scikit-learn.org/stable/auto_examples/compose/plot_column_transformer_mixed_types.html
numeric_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())])
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', OneHotEncoder(handle_unknown='ignore'))])
preprocessor = ColumnTransformer(
transformers=[
('numerical', numeric_transformer, features_numerical),
('categorical', categorical_transformer, features_categorical)])
# Note that to initialise the pipeline I pass any argument to the transformers.
# Those will be changed during the gridsearch below.
# Append classifier to preprocessing pipeline.
# Now we have a full prediction pipeline.
clf = Pipeline(steps=[('preprocessor', preprocessor),
('regressor', Lasso(max_iter=2000))])
# now we create the grid with all the parameters that we would like to test
param_grid = {
'preprocessor__numerical__imputer__strategy': ['mean', 'median'],
'preprocessor__categorical__imputer__strategy': ['most_frequent', 'constant'],
'regressor__alpha': [10, 100, 200],
}
grid_search = GridSearchCV(clf, param_grid, cv=5, iid=False, n_jobs=-1, scoring='r2')
# cv=5 sets 5-fold cross-validation
# n_jobs=-1 indicates to use all available cpus
# scoring='r2' indicates to evaluate using the r squared
# for more details in the grid parameters visit:
#https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
###Output
_____no_output_____
###Markdown
When setting the grid parameters, this is how we indicate the parameters:preprocessor__numerical__imputer__strategy': ['mean', 'median'],the above line of code indicates that I would like to test the mean and the median in the imputer step of the numerical processor.preprocessor__categorical__imputer__strategy': ['most_frequent', 'constant']the above line of code indicates that I would like to test the most frequent or a constant value in the imputer step of the categorical processorregressor__alpha': [10, 100, 200]the above line of code indicates that I want to test those 3 values for the alpha parameter of Lasso. Note that Lasso is the 'regressor' step of our last pipeline
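###Markdown
If you are unsure which names the grid will accept, you can ask the pipeline itself: every scikit-learn estimator exposes get_params(), and nested names are built with double underscores as step__substep__parameter. The cell below is an added illustration using the clf pipeline defined above.
###Code
# List the tunable parameter names exposed by the pipeline (step__substep__parameter)
for name in sorted(clf.get_params().keys()):
    if 'imputer__strategy' in name or name.startswith('regressor__alpha'):
        print(name)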
###Code
# and now we train over all the possible combinations of the parameters above
grid_search.fit(X_train, y_train)
# and we print the best score over the train set
print(("best linear regression from grid search: %.3f"
% grid_search.score(X_train, y_train)))
# we can print the best estimator parameters like this
grid_search.best_estimator_
# and find the best fit parameters like this
grid_search.best_params_
# here we can see all the combinations evaluated during the gridsearch
grid_search.cv_results_['params']
# and here the scores for each of one of the above combinations
grid_search.cv_results_['mean_test_score']
# and finally let's check the performance over the test set
print(("best linear regression from grid search: %.3f"
% grid_search.score(X_test, y_test)))
###Output
best linear regression from grid search: 0.738
###Markdown
Automatic selection of best imputation technique with SklearnIn this notebook we will do a grid search over the imputation methods available in Scikit-learn to determine which imputation technique works best for this dataset and the machine learning model of choice.We will also train a very simple machine learning model as part of a small pipeline.We will use the House Price dataset.- To download the dataset please visit the lecture **Datasets** in **Section 1** of the course.
###Code
import pandas as pd
import numpy as np
# import classes for imputation
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
# import extra classes for modelling
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split, GridSearchCV
np.random.seed(0)
# load dataset with all the variables
data = pd.read_csv('../houseprice.csv',)
data.head()
# find categorical variables
# those of type 'Object' in the dataset
features_categorical = [c for c in data.columns if data[c].dtypes=='O']
# find numerical variables
# those different from object and also excluding the target SalePrice
features_numerical = [c for c in data.columns if data[c].dtypes!='O' and c !='SalePrice']
# inspect the categorical variables
data[features_categorical].head()
# inspect the numerical variables
data[features_numerical].head()
# separate into train and test set
X_train, X_test, y_train, y_test = train_test_split(
data.drop('SalePrice', axis=1), # just the features
data['SalePrice'], # the target
test_size=0.3, # the percentage of obs in the test set
random_state=0) # for reproducibility
X_train.shape, X_test.shape
# We create the preprocessing pipelines for both
# numerical and categorical data
# adapted from Scikit-learn code available here under BSD3 license:
# https://scikit-learn.org/stable/auto_examples/compose/plot_column_transformer_mixed_types.html
numeric_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())])
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', OneHotEncoder(handle_unknown='ignore'))])
preprocessor = ColumnTransformer(
transformers=[
('numerical', numeric_transformer, features_numerical),
('categorical', categorical_transformer, features_categorical)])
# Note that to initialise the pipeline I pass any argument to the transformers.
# Those will be changed during the gridsearch below.
# Append classifier to preprocessing pipeline.
# Now we have a full prediction pipeline.
clf = Pipeline(steps=[('preprocessor', preprocessor),
('classifier', Lasso(max_iter=2000))])
# now we create the grid with all the parameters that we would like to test
param_grid = {
'preprocessor__numerical__imputer__strategy': ['mean', 'median'],
'preprocessor__categorical__imputer__strategy': ['most_frequent', 'constant'],
'classifier__alpha': [10, 100, 200],
}
grid_search = GridSearchCV(clf, param_grid, cv=5, iid=False, n_jobs=-1, scoring='r2')
# cv=5 sets 5-fold cross-validation
# n_jobs=-1 indicates to use all available cpus
# scoring='r2' indicates to evaluate using the r squared
# for more details in the grid parameters visit:
#https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
###Output
_____no_output_____
###Markdown
When setting the grid parameters, this is how we indicate the parameters:preprocessor__numerical__imputer__strategy': ['mean', 'median'],the above line of code indicates that I would like to test the mean and the median in the imputer step of the numerical processor.preprocessor__categorical__imputer__strategy': ['most_frequent', 'constant']the above line of code indicates that I would like to test the most frequent or a constant value in the imputer step of the categorical processorclassifier__alpha': [10, 100, 200]the above line of code indicates that I want to test those 3 values for the alpha parameter of Lasso. Note that Lasso is the 'classifier' step of our last pipeline
###Code
# and now we train over all the possible combinations of the parameters above
grid_search.fit(X_train, y_train)
# and we print the best score over the train set
print(("best linear regression from grid search: %.3f"
% grid_search.score(X_train, y_train)))
# we can print the best estimator parameters like this
grid_search.best_estimator_
# and find the best fit parameters like this
grid_search.best_params_
# here we can see all the combinations evaluated during the gridsearch
grid_search.cv_results_['params']
# and here the scores for each of one of the above combinations
grid_search.cv_results_['mean_test_score']
# and finally let's check the performance over the test set
print(("best linear regression from grid search: %.3f"
% grid_search.score(X_test, y_test)))
###Output
best linear regression from grid search: 0.738
|
section5-worldbank/Section_5_Homework_Fill_in_the_Blanks.ipynb
|
###Markdown
Section 5 Homework - Fill in the Blanks Import the packages needed to perform the analysis
###Code
import _ as pd
import _ as np
import _ as plt
import _ as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load the data provided for the exercise
###Code
# Import the csv dataset
data = _("Section5-Homework-Data.csv")
###Output
_____no_output_____
###Markdown
Explore the data
###Code
# Visualize the dataframe
data
# Rename the column names
data._ = ['CountryName', 'CountryCode', 'BirthRate', 'InternetUsers', 'IncomeGroup']
# Check top 6 rows
data._(6)
# Check bottom 7 rows
data._(7)
# Check the structure of the data frame
data._()
# Check the summary of the data
data._()
###Output
_____no_output_____
###Markdown
Request 1You are employed as a Data Scientist by the World Bank and you are working on a project to analyse the World’s demographic trends.You are required to produce a scatterplot illustrating Birth Rate and Internet Usage statistics by Country.The scatterplot needs to also be categorised by Countries’ Income Groups.
###Code
# Plot the BirthRate versus Internet Users categorized by Income Group
vis1 = sns._( data = data, x = '_', y = '_', fit_reg = False, hue = '_', size = 10 )
###Output
_____no_output_____
###Markdown
Request 2You have received an urgent update from your manager. You are required to produce a second scatterplot also illustrating Birth Rate and Internet Usage statistics by Country.However, this time the scatterplot needs to be categorised by Countries’ Regions.Additional data has been supplied in the form of lists.
###Code
# Copy here the data from the homework provided in lists, Country names, codes and regions dataset
# Create the dataframe
country_data = pd._({'CountryName': np.array(Countries_2012_Dataset),
'CountryCode': np.array(Codes_2012_Dataset),
'CountryRegion': np.array(Regions_2012_Dataset)})
# Explore the dataset
country_data._()
# Merge the country data to the original dataframe
merged_data = pd._(left=data, right=country_data, how='inner', on="CountryCode")
# Explore the dataset
merged_data._()
# Plot the BirthRate versus Internet Users categorized by Country Region
vis2 = sns._( data = merged_data, x = '_', y = '_', fit_reg = False, hue = '_', size = 10 )
###Output
_____no_output_____
###Markdown
ChallengeThe World Bank was very impressed with your deliverables on the previous assignment and they have a new project for you. You are required to produce a scatterplot depicting Life Expectancy (y-axis) and Fertility Rate (x-axis) statistics by Country. The scatterplot needs to be categorised by Countries' Regions.You have been supplied with data for 2 years: 1960 and 2013 and you are required to produce a visualisation for each of these years.Some data has been provided in a CSV file, some in Python lists. All data manipulations have to be performed in Python (not in Excel) because this project can be audited at a later stage.You also have been requested to provide insights into how the two periods compare.
###Code
# Copy here the data from the homework provided in lists, for Country code and life expectancy at birth in 1960 and 2013
# Create a data frame with the life expectancy
life_exp_data = pd._({'CountryCode': np.array(Country_Code),
'LifeExp1960': np.array(Life_Expectancy_At_Birth_1960),
'LifeExp2013': np.array(Life_Expectancy_At_Birth_2013)})
# Check row counts
_(_(life_exp_data)) #187 rows
# Check summaries
life_exp_data._()
###Output
_____no_output_____
###Markdown
Did you pick up that there is more than one year in the data? From the challenge we know that there are two: **1960** and **2013**
###Code
# Merge the data frame with the life expectancy
merged_data = pd._(left=merged_data, right=life_exp_data, how='inner', on='CountryCode')
# Explore the dataset
merged_data._()
# Check the new structures
merged_data._()
###Output
_____no_output_____
###Markdown
We can see duplicated columns (with _x and _y suffixes) because of the merge operation
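###Markdown
The _x and _y endings come from pandas itself: when the two frames in a merge share a column name that is not the join key, both copies are kept and pandas appends suffixes to tell them apart. The cell below is an added, self-contained illustration on toy frames (unrelated to the homework data).
###Code
import pandas as pd
left = pd.DataFrame({'CountryCode': ['A', 'B'], 'CountryName': ['Aland', 'Bland']})
right = pd.DataFrame({'CountryCode': ['A', 'B'], 'CountryName': ['Aland', 'B-land']})
# Both frames have a 'CountryName' column, so the result contains CountryName_x and CountryName_y
pd.merge(left, right, how='inner', on='CountryCode')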
###Code
# Rename one of the columns containing the country names and delete the other
merged_data._(columns = {'CountryName_x':'CountryName'}, inplace = True)
_ merged_data['CountryName_y']
# Check structures again
merged_data._()
# Plot the BirthRate versus LifeExpectancy categorized by Country Region in 1960
vis3 = sns._( data = merged_data, x = '_', y = '_', fit_reg = False, hue = '_', size = 10 )
# Plot the BirthRate versus LifeExpectancy categorized by Country Region in 2013
###Output
_____no_output_____
|
research/jensOpportunityDeepL/explanation.ipynb
|
###Markdown
How the data gets concatenated
###Code
import pandas as pd
df_01 = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]})
df_02 = pd.DataFrame({'a': [10, 20, 30], 'b': [40, 50, 60], 'c': [70, 80, 90]})
df_03 = pd.DataFrame({'a': [100, 200, 300], 'b': [400, 500, 600], 'c': [700, 800, 900]})
df_big = pd.DataFrame()
df_big = df_big.append(df_01, ignore_index=True)
df_big
df_big = df_big.append(df_02, ignore_index=True)
df_big
df_big = df_big.append(df_03, ignore_index=True)
df_big
df_big.reset_index(drop=True, inplace=True)
df_big
###Output
_____no_output_____
###Markdown
pd.to_numeric
###Code
df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, None, 'k'], 'c': [7, 8, None]})
df
df = df.apply(pd.to_numeric, errors = 'coerce')
df
###Output
_____no_output_____
###Markdown
isna()
###Code
df.isna()
df.isna().sum()
df.isna().sum().sum()
###Output
_____no_output_____
###Markdown
df.interpolate()
###Code
df_04 = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [4, None, 6, 6], 'c': [8, None, None, 7], 'd': [None, 9, None, 8]})
df_04
df_04 = df_04.interpolate()
df_04
###Output
_____no_output_____
###Markdown
np.expand_dims
###Code
import numpy as np
example_np_array = np.array([[1, 2, 3], [4, 5, 6]])
example_np_array
np.expand_dims(example_np_array, -1)
# same as
np.expand_dims(example_np_array, axis=2)
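# Both calls append a trailing axis of length 1: the shape goes from (2, 3) to (2, 3, 1)
np.expand_dims(example_np_array, -1).shape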
###Output
_____no_output_____
###Markdown
drop everything except
###Code
df_05 = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [4, 3, 6, 6], 'c': [8, 3, 3, 7], 'd': [3, 9, 3, 8]})
df_05
df_05.drop(df_05.columns.difference(['a','b']), 1, inplace=False) # only keep a and b
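# Same "keep only a and b" intent written with the keyword form of drop,
# which avoids the positional axis argument that newer pandas versions warn about:
df_05.drop(columns=df_05.columns.difference(['a', 'b']))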
"""
labels:
IMU-BACK-accX
IMU-BACK-accY
IMU-BACK-accZ
IMU-BACK-Quaternion1
IMU-BACK-Quaternion2
IMU-BACK-Quaternion3
IMU-BACK-Quaternion4
IMU-RLA-accX
IMU-RLA-accY
IMU-RLA-accZ
IMU-RLA-Quaternion1
IMU-RLA-Quaternion2
IMU-RLA-Quaternion3
IMU-RLA-Quaternion4
IMU-LLA-accX
IMU-LLA-accY
IMU-LLA-accZ
IMU-LLA-Quaternion1
IMU-LLA-Quaternion2
IMU-LLA-Quaternion3
IMU-LLA-Quaternion4
IMU-L-SHOE-EuX
IMU-L-SHOE-EuY
IMU-L-SHOE-EuZ
IMU-L-SHOE-Nav_Ax
IMU-L-SHOE-Nav_Ay
IMU-L-SHOE-Nav_Az
IMU-L-SHOE-Body_Ax
IMU-L-SHOE-Body_Ay
IMU-L-SHOE-Body_Az
IMU-L-SHOE-AngVelBodyFrameX
IMU-L-SHOE-AngVelBodyFrameY
IMU-L-SHOE-AngVelBodyFrameZ
IMU-L-SHOE-AngVelNavFrameX
IMU-L-SHOE-AngVelNavFrameY
IMU-L-SHOE-AngVelNavFrameZ
IMU-R-SHOE-EuX
IMU-R-SHOE-EuY
IMU-R-SHOE-EuZ
IMU-R-SHOE-Nav_Ax
IMU-R-SHOE-Nav_Ay
IMU-R-SHOE-Nav_Az
IMU-R-SHOE-Body_Ax
IMU-R-SHOE-Body_Ay
IMU-R-SHOE-Body_Az
IMU-R-SHOE-AngVelBodyFrameX
IMU-R-SHOE-AngVelBodyFrameY
IMU-R-SHOE-AngVelBodyFrameZ
IMU-R-SHOE-AngVelNavFrameX
IMU-R-SHOE-AngVelNavFrameY
IMU-R-SHOE-AngVelNavFrameZ
Locomotion
HL_Activity
file_index
"""
labels = ['IMU-BACK-accX', 'IMU-BACK-accY', 'IMU-BACK-accZ', 'IMU-BACK-Quaternion1', 'IMU-BACK-Quaternion2', 'IMU-BACK-Quaternion3', 'IMU-BACK-Quaternion4',
'IMU-RLA-accX', 'IMU-RLA-accY', 'IMU-RLA-accZ', 'IMU-RLA-Quaternion1', 'IMU-RLA-Quaternion2', 'IMU-RLA-Quaternion3', 'IMU-RLA-Quaternion4',
'IMU-LLA-accX', 'IMU-LLA-accY', 'IMU-LLA-accZ', 'IMU-LLA-Quaternion1', 'IMU-LLA-Quaternion2', 'IMU-LLA-Quaternion3', 'IMU-LLA-Quaternion4',
'IMU-L-SHOE-EuX', 'IMU-L-SHOE-EuY', 'IMU-L-SHOE-EuZ', 'IMU-L-SHOE-Nav_Ax', 'IMU-L-SHOE-Nav_Ay', 'IMU-L-SHOE-Nav_Az', 'IMU-L-SHOE-Body_Ax', 'IMU-L-SHOE-Body_Ay', 'IMU-L-SHOE-Body_Az', 'IMU-L-SHOE-AngVelBodyFrameX', 'IMU-L-SHOE-AngVelBodyFrameY', 'IMU-L-SHOE-AngVelBodyFrameZ', 'IMU-L-SHOE-AngVelNavFrameX', 'IMU-L-SHOE-AngVelNavFrameY', 'IMU-L-SHOE-AngVelNavFrameZ',
'IMU-R-SHOE-EuX', 'IMU-R-SHOE-EuY', 'IMU-R-SHOE-EuZ', 'IMU-R-SHOE-Nav_Ax', 'IMU-R-SHOE-Nav_Ay', 'IMU-R-SHOE-Nav_Az', 'IMU-R-SHOE-Body_Ax', 'IMU-R-SHOE-Body_Ay', 'IMU-R-SHOE-Body_Az', 'IMU-R-SHOE-AngVelBodyFrameX', 'IMU-R-SHOE-AngVelBodyFrameY', 'IMU-R-SHOE-AngVelBodyFrameZ', 'IMU-R-SHOE-AngVelNavFrameX', 'IMU-R-SHOE-AngVelNavFrameY', 'IMU-R-SHOE-AngVelNavFrameZ',
'Locomotion', 'HL_Activity', 'file_index']
###Output
_____no_output_____
|
random/vectorizing-tricks/ex1.ipynb
|
###Markdown
Vectorizing tricks Let's say we have to measure the gradient of a function, ...$$\frac{\partial J}{\partial\theta_{j}} = \frac{1}{m} \sum_{i=1}^{m}(h_{\theta}(x^{(i)}) - y^{(i)})x_{j}^{(i)}$$$$h_{\theta}(x) = \theta^{T}x$$where small $x$ or $\theta$ is a column vector, e.g. $$x_{3x1}$$ and capital $X$ is a matrix (array) of row vectors, e.g. $$X_{nxm}$$ n : number of rows. m : number of columns (number of features).
###Code
import numpy as np
import time
X = np.array([
[1, 2, 3],
[1, 5, 6],
[1, 8, 9],
[1, 4, 7],
])
y = np.array([
[13],
[30],
[45],
[29]
])
theta = np.array([
[1],
[2],
[3]
])
# ...
print(X.shape)
print(y.shape)
print(theta.shape)
###Output
(4, 3)
(4, 1)
(3, 1)
###Markdown
Hypothesis
###Code
def h(X):
return X.dot(theta)
def h_loop(X):
return [theta.T.dot(x) for x in X]
h(X)
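# Sanity check (sketch): the vectorized and the looped hypotheses agree
np.allclose(h(X), np.array(h_loop(X)).reshape(-1, 1))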
# This computes the same thing as the h() dot operation, row by row, but it's slower.
x0 = theta.T.dot(X[0, :])
x1 = theta.T.dot(X[1, :])
x2 = theta.T.dot(X[2, :])
x3 = theta.T.dot(X[3, :])
print(x0)
print(x1)
print(x2)
print(x3)
XX = np.random.rand(10000, 3)
s0 = time.time()
h(XX)
e0 = time.time()
s1 = time.time()
h_loop(XX)
e1 = time.time()
print('Time of vectorizing : ', (e0 - s0))
print('Time of looping : ', (e1 - s1))
print('is vectorizing time slower than loop time : ', (e0 - s0) > (e1 - s1))
###Output
Time of vectorizing : 0.00844573974609375
Time of looping : 0.0724332332611084
is vectorizing time slower than loop time : False
###Markdown
Gradient$$\frac{\partial J}{\partial\theta_{j}} = \frac{1}{m} \sum_{i=1}^{m}(h_{\theta}(x^{(i)}) - y^{(i)})x_{j}^{(i)}$$
###Code
def grad_1(xx, yy):
m = len(xx)
error = h(xx) - yy
return (1./m) * np.sum(error * xx, axis=0)
def grad_2(xx, yy):
m = len(xx)
J = [0, 0, 0]
for i in range(m):
x0 = theta.T.dot(xx[i, :])
e0 = x0 - yy[i]
for ii in range(len(J)):
J[ii] += (e0 * xx[i][ii])[0]
J = [(1. / m) * j for j in J]
return J
X.T
X
def grad_3(xx, yy):
m = len(xx)
error = h(xx) - yy
return (1./m) * xx.T.dot(error)
print(grad_1(X, y))
print(grad_2(X, y))
print(grad_3(X, y))
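# Sanity check (sketch): all three implementations return the same gradient values
np.allclose(grad_1(X, y), grad_3(X, y).ravel())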
XX = np.random.rand(10000, 3)
yy = np.random.rand(10000, 1)
s1 = time.time()
j1 = grad_1(XX, yy)
e1 = time.time()
s2 = time.time()
j2 = grad_2(XX, yy)
e2 = time.time()
s3 = time.time()
j3 = grad_3(XX, yy)
e3 = time.time()
print('Time of grad_1 : ', (e1 - s1))
print('Time of looping : ', (e2 - s2))
print('Time of vectorizing : ', (e3 - s3))
print('is vectorizing time slower than loop time : ', (e3 - s3) > (e2 - s2))
print('\n\n',j1)
print('\n\n',j2)
print('\n\n',j3)
###Output
Time of grad_1 : 0.0025784969329833984
Time of looping : 0.19057941436767578
Time of vectorizing : 0.00023317337036132812
is vectorizing time slower than loop time : False
[1.34718835 1.42779259 1.50030532]
[1.3471883509081484, 1.4277925899084303, 1.5003053187969504]
[[1.34718835]
[1.42779259]
[1.50030532]]
|
2021-iqc/ex5/ex5.ipynb
|
###Markdown
Exercise 5 - Variational quantum eigensolver Historical backgroundDuring the last decade, quantum computers matured quickly and began to realize Feynman's initial dream of a computing system that could simulate the laws of nature in a quantum way. A 2014 paper first authored by Alberto Peruzzo introduced the **Variational Quantum Eigensolver (VQE)**, an algorithm meant for finding the ground state energy (lowest energy) of a molecule, with much shallower circuits than other approaches.[1] And, in 2017, the IBM Quantum team used the VQE algorithm to simulate the ground state energy of the lithium hydride molecule.[2]VQE's magic comes from outsourcing some of the problem's processing workload to a classical computer. The algorithm starts with a parameterized quantum circuit called an ansatz (a best guess) then finds the optimal parameters for this circuit using a classical optimizer. The VQE's advantage over classical algorithms comes from the fact that a quantum processing unit can represent and store the problem's exact wavefunction, an exponentially hard problem for a classical computer. This exercise 5 allows you to realize Feynman's dream yourself, setting up a variational quantum eigensolver to determine the ground state and the energy of a molecule. This is interesting because the ground state can be used to calculate various molecular properties, for instance the exact forces on nuclei that can serve to run molecular dynamics simulations to explore what happens in chemical systems with time.[3] References1. Peruzzo, Alberto, et al. "A variational eigenvalue solver on a photonic quantum processor." Nature communications 5.1 (2014): 1-7.2. Kandala, Abhinav, et al. "Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets." Nature 549.7671 (2017): 242-246.3. Sokolov, Igor O., et al. "Microcanonical and finite-temperature ab initio molecular dynamics simulations on quantum computers." Physical Review Research 3.1 (2021): 013125. IntroductionFor the implementation of VQE, you will be able to make choices on how you want to compose your simulation, in particular focusing on the ansatz quantum circuits.This is motivated by the fact that one of the important tasks when running VQE on noisy quantum computers is to reduce the loss of fidelity (which introduces errors) by finding the most compact quantum circuit capable of representing the ground state. Practically, this entails minimizing the number of two-qubit gates (e.g. CNOTs) while not losing accuracy.Goal Find the shortest ansatz circuits for accurately representing the ground state of given problems. Be creative! Plan First you will learn how to compose a VQE simulation for the smallest molecule and then apply what you have learned to a case of a larger one. **1. Tutorial - VQE for H$_2$:** familiarize yourself with VQE and select the best combination of ansatz/classical optimizer by running statevector simulations.**2. Final Challenge - VQE for LiH:** perform a similar investigation as in the first part but restricting to the statevector simulator only. Use the qubit number reduction schemes available in Qiskit and find the optimal circuit for this larger system. Optimize the circuit and use your imagination to find ways to select the best building blocks of parameterized circuits and compose them to construct the most compact ansatz circuit for the ground state, better than the ones already available in Qiskit. Below is an introduction to the theory behind VQE simulations. 
You don't have to understand the whole thing before moving on. Don't be scared! TheoryHere below is the general workflow representing how the molecular simulations using VQE are performed on quantum computers.The core idea hybrid quantum-classical approach is to outsource to **CPU (classical processing unit)** and **QPU (quantum processing unit)** the parts that they can do best. The CPU takes care of listing the terms that need to be measured to compute the energy and also optimizing the circuit parameters. The QPU implements a quantum circuit representing the quantum state of a system and measures the energy. Some more details are given below:**CPU** can compute efficiently the energies associated to electron hopping and interactions (one-/two-body integrals by means of a Hartree-Fock calculation) that serve to represent the total energy operator, Hamiltonian. The [Hartree–Fock (HF) method](https://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method:~:text=In%20computational%20physics%20and%20chemistry,system%20in%20a%20stationary%20state.) efficiently computes an approximate grounds state wavefunction by assuming that the latter can be represented by a single Slater determinant (e.g. for H$_2$ molecule in STO-3G basis with 4 spin-orbitals and qubits, $|\Psi_{HF} \rangle = |0101 \rangle$ where electrons occupy the lowest energy spin-orbitals). What QPU does later in VQE is finding a quantum state (corresponding circuit and its parameters) that can also represent other states associated missing electronic correlations (i.e. $\sum_i c_i |i\rangle$ states in $|\Psi \rangle = c_{HF}|\Psi_{HF} \rangle + \sum_i c_i |i\rangle $ where $i$ is a bitstring). After a HF calculation, operators in the Hamiltonian are mapped to measurements on a QPU using fermion-to-qubit transformations (see Hamiltonian section below). One can further analyze the properties of the system to reduce the number of qubits or shorten the ansatz circuit:- For Z2 symmetries and two-qubit reduction, see [Bravyi *et al*, 2017](https://arxiv.org/abs/1701.08213v1).- For entanglement forging, see [Eddins *et al.*, 2021](https://arxiv.org/abs/2104.10220v1).- For the adaptive ansatz see, [Grimsley *et al.*,2018](https://arxiv.org/abs/1812.11173v2), [Rattew *et al.*,2019](https://arxiv.org/abs/1910.09694), [Tang *et al.*,2019](https://arxiv.org/abs/1911.10205). You may use the ideas found in those works to find ways to shorten the quantum circuits.**QPU** implements quantum circuits (see Ansatzes section below), parameterized by angles $\vec\theta$, that would represent the ground state wavefunction by placing various single qubit rotations and entanglers (e.g. two-qubit gates). The quantum advantage lies in the fact that QPU can efficiently represent and store the exact wavefunction, which becomes intractable on a classical computer for systems that have more than a few atoms. Finally, QPU measures the operators of choice (e.g. ones representing a Hamiltonian).Below we go slightly more in mathematical details of each component of the VQE algorithm. It might be also helpful if you watch our [video episode about VQE](https://www.youtube.com/watch?v=Z-A6G0WVI9w). 
Hamiltonian Here we explain how we obtain the operators that we need to measure to obtain the energy of a given system.These terms are included in the molecular Hamiltonian defined as:$$\begin{aligned}\hat{H} &=\sum_{r s} h_{r s} \hat{a}_{r}^{\dagger} \hat{a}_{s} \\&+\frac{1}{2} \sum_{p q r s} g_{p q r s} \hat{a}_{p}^{\dagger} \hat{a}_{q}^{\dagger} \hat{a}_{r} \hat{a}_{s}+E_{N N}\end{aligned}$$with$$h_{p q}=\int \phi_{p}^{*}(r)\left(-\frac{1}{2} \nabla^{2}-\sum_{I} \frac{Z_{I}}{R_{I}-r}\right) \phi_{q}(r)$$$$g_{p q r s}=\int \frac{\phi_{p}^{*}\left(r_{1}\right) \phi_{q}^{*}\left(r_{2}\right) \phi_{r}\left(r_{2}\right) \phi_{s}\left(r_{1}\right)}{\left|r_{1}-r_{2}\right|} $$where the $h_{r s}$ and $g_{p q r s}$ are the one-/two-body integrals (using the Hartree-Fock method) and $E_{N N}$ the nuclear repulsion energy. The one-body integrals represent the kinetic energy of the electrons and their interaction with nuclei. The two-body integrals represent the electron-electron interaction.The $\hat{a}_{r}^{\dagger}, \hat{a}_{r}$ operators represent creation and annihilation of electron in spin-orbital $r$ and require mappings to operators, so that we can measure them on a quantum computer.Note that VQE minimizes the electronic energy so you have to retrieve and add the nuclear repulsion energy $E_{NN}$ to compute the total energy. So, for every non-zero matrix element in the $ h_{r s}$ and $g_{p q r s}$ tensors, we can construct corresponding Pauli string (tensor product of Pauli operators) with the following fermion-to-qubit transformation. For instance, in Jordan-Wigner mapping for an orbital $r = 3$, we obtain the following Pauli string:$$\hat a_{3}^{\dagger}= \hat \sigma_z \otimes \hat \sigma_z \otimes\left(\frac{ \hat \sigma_x-i \hat \sigma_y}{2}\right) \otimes 1 \otimes \cdots \otimes 1$$where $\hat \sigma_x, \hat \sigma_y, \hat \sigma_z$ are the well-known Pauli operators. The tensor products of $\hat \sigma_z$ operators are placed to enforce the fermionic anti-commutation relations.A representation of the Jordan-Wigner mapping between the 14 spin-orbitals of a water molecule and some 14 qubits is given below:Then, one simply replaces the one-/two-body excitations (e.g. $\hat{a}_{r}^{\dagger} \hat{a}_{s}$, $\hat{a}_{p}^{\dagger} \hat{a}_{q}^{\dagger} \hat{a}_{r} \hat{a}_{s}$) in the Hamiltonian by corresponding Pauli strings (i.e. $\hat{P}_i$, see picture above). The resulting operator set is ready to be measured on the QPU.For additional details see [Seeley *et al.*, 2012](https://arxiv.org/abs/1208.5986v1). AnsatzesThere are mainly 2 types of ansatzes you can use for chemical problems. - **q-UCC ansatzes** are physically inspired, and roughly map the electron excitations to quantum circuits. The q-UCCSD ansatz (`UCCSD`in Qiskit) possess all possible single and double electron excitations. The paired double q-pUCCD (`PUCCD`) and singlet q-UCCD0 (`SUCCD`) just consider a subset of such excitations (meaning significantly shorter circuits) and have proved to provide good results for dissociation profiles. For instance, q-pUCCD doesn't have single excitations and the double excitations are paired as in the image below.- **Heuristic ansatzes (`TwoLocal`)** were invented to shorten the circuit depth but still be able to represent the ground state. As in the figure below, the R gates represent the parametrized single qubit rotations and $U_{CNOT}$ the entanglers (two-qubit gates). 
The idea is that after repeating the same block (with independent parameters) a certain number of times $D$, one can reach the ground state. For additional details refer to [Sokolov *et al.* (q-UCC ansatzes)](https://arxiv.org/abs/1911.10864v2) and [Barkoutsos *et al.* (Heuristic ansatzes)](https://arxiv.org/pdf/1805.04340.pdf). VQEGiven a Hermitian operator $\hat H$ with an unknown minimum eigenvalue $E_{min}$, associated with the eigenstate $|\psi_{min}\rangle$, VQE provides an estimate $E_{\theta}$, bounded by $E_{min}$:\begin{align*} E_{min} \le E_{\theta} \equiv \langle \psi(\theta) |\hat H|\psi(\theta) \rangle\end{align*} where $|\psi(\theta)\rangle$ is the trial state associated with $E_{\theta}$. By applying a parameterized circuit, represented by $U(\theta)$, to some arbitrary starting state $|\psi\rangle$, the algorithm obtains an estimate $U(\theta)|\psi\rangle \equiv |\psi(\theta)\rangle$ on $|\psi_{min}\rangle$. The estimate is iteratively optimized by a classical optimizer by changing the parameter $\theta$ and minimizing the expectation value of $\langle \psi(\theta) |\hat H|\psi(\theta) \rangle$. As applications of VQE, there are possibilities in molecular dynamics simulations, see [Sokolov *et al.*, 2021](https://arxiv.org/abs/2008.08144v1), and excited states calculations, see [Ollitrault *et al.*, 2019](https://arxiv.org/abs/1910.12890), to name a few. References for additional details For the qiskit-nature tutorial that implements this algorithm see [here](https://qiskit.org/documentation/nature/tutorials/01_electronic_structure.html), but this won't be sufficient and you might want to look at the [first page of the github repository](https://github.com/Qiskit/qiskit-nature) and the [test folder](https://github.com/Qiskit/qiskit-nature/tree/main/test) containing tests that are written for each component; they provide the base code for the use of each functionality. Part 1: Tutorial - VQE for H$_2$ molecule In this part, you will simulate the H$_2$ molecule using the STO-3G basis with the PySCF driver and Jordan-Wigner mapping.We will guide you through the following parts so that you can then tackle harder problems. 1. DriverThe interfaces to the classical chemistry codes that are available in Qiskit are called drivers.We have, for example, `PSI4Driver`, `PyQuanteDriver` and `PySCFDriver` available. By running a driver (Hartree-Fock calculation for a given basis set and molecular geometry) in the cell below, we obtain all the necessary information about our molecule to then apply a quantum algorithm.
###Code
from qiskit_nature.drivers import PySCFDriver
molecule = "H .0 .0 .0; H .0 .0 0.739"
driver = PySCFDriver(atom=molecule)
qmolecule = driver.run()
###Output
_____no_output_____
###Markdown
Tutorial questions 1 Look into the attributes of `qmolecule` and answer the questions below. 1. We need to know the basic characteristics of our molecule. What is the total number of electrons in your system?2. What is the number of molecular orbitals?3. What is the number of spin-orbitals?4. How many qubits would you need to simulate this molecule with Jordan-Wigner mapping?5. What is the value of the nuclear repulsion energy?You can find the answers at the end of this notebook.
###Code
# WRITE YOUR CODE BETWEEN THESE LINES - START
# WRITE YOUR CODE BETWEEN THESE LINES - END
###Output
_____no_output_____
###Markdown
2. Electronic structure problemYou can then create an `ElectronicStructureProblem` that can produce the list of fermionic operators before mapping them to qubits (Pauli strings).
###Code
from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem
from qiskit_nature.transformers import FreezeCoreTransformer
problem = ElectronicStructureProblem(driver, q_molecule_transformers =
[FreezeCoreTransformer(freeze_core = True, remove_orbitals = [3,4])])
# Generate the second-quantized operators
second_q_ops = problem.second_q_ops()
# Hamiltonian
main_op = second_q_ops[0]
###Output
Traceback (most recent call last):
  File "<ipython-input-5-e542ad75ddb5>", line 7, in <module>
    second_q_ops = problem.second_q_ops()
  File "/opt/conda/lib/python3.8/site-packages/qiskit_nature/problems/second_quantization/electronic/electronic_structure_problem.py", line 65, in second_q_ops
    self._molecule_data_transformed = cast(QMolecule, self._transform(self._molecule_data))
  File "/opt/conda/lib/python3.8/site-packages/qiskit_nature/problems/second_quantization/base_problem.py", line 73, in _transform
    data = transformer.transform(data)
  File "/opt/conda/lib/python3.8/site-packages/qiskit_nature/transformers/freeze_core_transformer.py", line 74, in transform
    molecule_data_new = super().transform(molecule_data)
  File "/opt/conda/lib/python3.8/site-packages/qiskit_nature/transformers/active_space_transformer.py", line 139, in transform
    active_orbs_idxs, inactive_orbs_idxs = self._determine_active_space(molecule_data)
  File "/opt/conda/lib/python3.8/site-packages/qiskit_nature/transformers/freeze_core_transformer.py", line 105, in _determine_active_space
    nelec_inactive = int(sum([self._mo_occ_total[o] for o in inactive_orbs_idxs]))
  File "/opt/conda/lib/python3.8/site-packages/qiskit_nature/transformers/freeze_core_transformer.py", line 105, in <listcomp>
    nelec_inactive = int(sum([self._mo_occ_total[o] for o in inactive_orbs_idxs]))
IndexError: index 3 is out of bounds for axis 0 with size 2
Use %tb to get the full traceback.
###Markdown
3. QubitConverterThis allows you to define the mapping that you will use in the simulation. You can try different mappings but we will stick to `JordanWignerMapper` as it allows a simple correspondence: a qubit represents a spin-orbital in the molecule.
###Code
from qiskit_nature.mappers.second_quantization import ParityMapper, BravyiKitaevMapper, JordanWignerMapper
from qiskit_nature.converters.second_quantization.qubit_converter import QubitConverter
# Setup the mapper and qubit converter
mapper_type = 'ParityMapper'
if mapper_type == 'ParityMapper':
mapper = ParityMapper()
elif mapper_type == 'JordanWignerMapper':
mapper = JordanWignerMapper()
elif mapper_type == 'BravyiKitaevMapper':
mapper = BravyiKitaevMapper()
converter = QubitConverter(mapper=mapper, two_qubit_reduction=True, z2symmetry_reduction = [1,-1])
# The fermionic operators are mapped to qubit operators
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
qubit_op = converter.convert(main_op, num_particles=num_particles)
###Output
Traceback (most recent call last):
  File "<ipython-input-7-28609b96d039>", line 17, in <module>
    num_particles = (problem.molecule_data_transformed.num_alpha,
AttributeError: 'NoneType' object has no attribute 'num_alpha'
Use %tb to get the full traceback.
###Markdown
4. Initial stateAs we described in the Theory section, a good initial state in chemistry is the HF state (i.e. $|\Psi_{HF} \rangle = |0101 \rangle$). We can initialize it as follows:
###Code
from qiskit_nature.circuit.library import HartreeFock
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
init_state = HartreeFock(num_spin_orbitals, num_particles, converter)
print(init_state)
###Output
_____no_output_____
###Markdown
5. AnsatzOne of the most important choices is the quantum circuit that you choose to approximate your ground state.Here is an example from the Qiskit circuit library, which contains many possibilities for making your own circuit.
###Code
from qiskit.circuit.library import TwoLocal
from qiskit_nature.circuit.library import UCCSD, PUCCD, SUCCD
# Choose the ansatz
ansatz_type = "SUCCD"
# Parameters for q-UCC ansatze
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
# Put arguments for twolocal
if ansatz_type == "TwoLocal":
# Single qubit rotations that are placed on all qubits with independent parameters
rotation_blocks = ['ry', 'rz']
# Entangling gates
entanglement_blocks = 'cx'
# How the qubits are entangled
entanglement = 'full'
# Repetitions of rotation_blocks + entanglement_blocks with independent parameters
repetitions = 3
# Skip the final rotation_blocks layer
skip_final_rotation_layer = True
ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks, reps=repetitions,
entanglement=entanglement, skip_final_rotation_layer=skip_final_rotation_layer)
# Add the initial state
ansatz.compose(init_state, front=True, inplace=True)
elif ansatz_type == "UCCSD":
ansatz = UCCSD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "PUCCD":
ansatz = PUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "SUCCD":
ansatz = SUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "Custom":
# Example of how to write your own circuit
from qiskit.circuit import Parameter, QuantumCircuit, QuantumRegister
# Define the variational parameter
theta = Parameter('a')
n = qubit_op.num_qubits
# Make an empty quantum circuit
qc = QuantumCircuit(qubit_op.num_qubits)
qubit_label = 0
# Place a Hadamard gate
qc.h(qubit_label)
# Place a CNOT ladder
for i in range(n-1):
qc.cx(i, i+1)
# Visual separator
qc.barrier()
# rz rotations on all qubits
qc.rz(theta, range(n))
ansatz = qc
ansatz.compose(init_state, front=True, inplace=True)
print(ansatz)
###Output
_____no_output_____
###Markdown
6. BackendThis is where you specify the simulator or device where you want to run your algorithm.We will focus on the `statevector_simulator` in this challenge.
###Code
from qiskit import Aer
backend = Aer.get_backend('statevector_simulator')
###Output
_____no_output_____
###Markdown
7. OptimizerThe optimizer guides the evolution of the parameters of the ansatz so it is very important to investigate the energy convergence as it would define the number of measurements that have to be performed on the QPU.A clever choice might reduce drastically the number of needed energy evaluations.
###Code
from qiskit.algorithms.optimizers import COBYLA, L_BFGS_B, SPSA, SLSQP
optimizer_type = 'COBYLA'
# You may want to tune the parameters
# of each optimizer, here the defaults are used
if optimizer_type == 'COBYLA':
optimizer = COBYLA(maxiter=500)
elif optimizer_type == 'L_BFGS_B':
optimizer = L_BFGS_B(maxfun=500)
elif optimizer_type == 'SPSA':
optimizer = SPSA(maxiter=500)
elif optimizer_type == 'SLSQP':
optimizer = SLSQP(maxiter=500)
###Output
_____no_output_____
###Markdown
8. Exact eigensolverFor learning purposes, we can solve the problem exactly with the exact diagonalization of the Hamiltonian matrix so we know where to aim with VQE.Of course, the dimensions of this matrix scale exponentially in the number of molecular orbitals so you can try doing this for a large molecule of your choice and see how slow this becomes. For very large systems you would run out of memory trying to store their wavefunctions.
###Code
from qiskit_nature.algorithms.ground_state_solvers.minimum_eigensolver_factories import NumPyMinimumEigensolverFactory
from qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver
import numpy as np
def exact_diagonalizer(problem, converter):
solver = NumPyMinimumEigensolverFactory()
calc = GroundStateEigensolver(converter, solver)
result = calc.solve(problem)
return result
result_exact = exact_diagonalizer(problem, converter)
exact_energy = np.real(result_exact.eigenenergies[0])
print("Exact electronic energy", exact_energy)
print(result_exact)
# The targeted electronic energy for H2 is -1.85336 Ha
# Check with your VQE result.
###Output
_____no_output_____
###Markdown
9. VQE and initial parameters for the ansatzNow we can import the VQE class and run the algorithm.
###Code
from qiskit.algorithms import VQE
from IPython.display import display, clear_output
# Print and save the data in lists
def callback(eval_count, parameters, mean, std):
# Overwrites the same line when printing
display("Evaluation: {}, Energy: {}, Std: {}".format(eval_count, mean, std))
clear_output(wait=True)
counts.append(eval_count)
values.append(mean)
params.append(parameters)
deviation.append(std)
counts = []
values = []
params = []
deviation = []
# Set initial parameters of the ansatz
# We choose a fixed small displacement
# So all participants start from similar starting point
try:
initial_point = [0.01] * len(ansatz.ordered_parameters)
except:
initial_point = [0.01] * ansatz.num_parameters
algorithm = VQE(ansatz,
optimizer=optimizer,
quantum_instance=backend,
callback=callback,
initial_point=initial_point)
result = algorithm.compute_minimum_eigenvalue(qubit_op)
print(result)
###Output
_____no_output_____
###Markdown
10. Scoring function We need to judge how good your VQE simulations are, i.e. your choice of ansatz/optimizer.For this, we implemented the following simple scoring function:$$ score = N_{CNOT}$$where $N_{CNOT}$ is the number of CNOTs. But you have to reach the chemical accuracy, which is $\delta E_{chem} = 0.004$ Ha $= 4$ mHa, and that may be hard to reach depending on the problem. You have to reach the accuracy we set in a minimal number of CNOTs to win the challenge. The lower the score the better!
###Code
# Store results in a dictionary
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import Unroller
# Unroller transpile your circuit into CNOTs and U gates
pass_ = Unroller(['u', 'cx'])
pm = PassManager(pass_)
ansatz_tp = pm.run(ansatz)
cnots = ansatz_tp.count_ops()['cx']
score = cnots
accuracy_threshold = 4.0 # in mHa
energy = result.optimal_value
if ansatz_type == "TwoLocal":
result_dict = {
'optimizer': optimizer.__class__.__name__,
'mapping': converter.mapper.__class__.__name__,
'ansatz': ansatz.__class__.__name__,
'rotation blocks': rotation_blocks,
'entanglement_blocks': entanglement_blocks,
'entanglement': entanglement,
'repetitions': repetitions,
'skip_final_rotation_layer': skip_final_rotation_layer,
'energy (Ha)': energy,
'error (mHa)': (energy-exact_energy)*1000,
'pass': (energy-exact_energy)*1000 <= accuracy_threshold,
'# of parameters': len(result.optimal_point),
'final parameters': result.optimal_point,
'# of evaluations': result.optimizer_evals,
'optimizer time': result.optimizer_time,
'# of qubits': int(qubit_op.num_qubits),
'# of CNOTs': cnots,
'score': score}
else:
result_dict = {
'optimizer': optimizer.__class__.__name__,
'mapping': converter.mapper.__class__.__name__,
'ansatz': ansatz.__class__.__name__,
'rotation blocks': None,
'entanglement_blocks': None,
'entanglement': None,
'repetitions': None,
'skip_final_rotation_layer': None,
'energy (Ha)': energy,
'error (mHa)': (energy-exact_energy)*1000,
        'pass': (energy-exact_energy)*1000 <= accuracy_threshold,
'# of parameters': len(result.optimal_point),
'final parameters': result.optimal_point,
'# of evaluations': result.optimizer_evals,
'optimizer time': result.optimizer_time,
'# of qubits': int(qubit_op.num_qubits),
'# of CNOTs': cnots,
'score': score}
# Plot the results
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('Iterations')
ax.set_ylabel('Energy')
ax.grid()
fig.text(0.7, 0.75, f'Energy: {result.optimal_value:.3f}\nScore: {score:.0f}')
plt.title(f"{result_dict['optimizer']}-{result_dict['mapping']}\n{result_dict['ansatz']}")
ax.plot(counts, values)
ax.axhline(exact_energy, linestyle='--')
fig_title = f"\
{result_dict['optimizer']}-\
{result_dict['mapping']}-\
{result_dict['ansatz']}-\
Energy({result_dict['energy (Ha)']:.3f})-\
Score({result_dict['score']:.0f})\
.png"
fig.savefig(fig_title, dpi=300)
# Display and save the data
import pandas as pd
import os.path
filename = 'results_h2.csv'
if os.path.isfile(filename):
result_df = pd.read_csv(filename)
result_df = result_df.append([result_dict])
else:
result_df = pd.DataFrame.from_dict([result_dict])
result_df.to_csv(filename)
result_df[['optimizer','ansatz', '# of qubits', '# of parameters','rotation blocks', 'entanglement_blocks',
'entanglement', 'repetitions', 'error (mHa)', 'pass', 'score']]
###Output
_____no_output_____
###Markdown
Tutorial questions 2 Experiment with all the parameters and then:1. Can you find your best (best score) heuristic ansatz (by modifying parameters of `TwoLocal` ansatz) and optimizer?2. Can you find your best q-UCC ansatz (choose among `UCCSD, PUCCD or SUCCD` ansatzes) and optimizer?3. In the cell where we define the ansatz, can you modify the `Custom` ansatz by placing gates yourself to write a better circuit than your `TwoLocal` circuit? For each question, give `ansatz` objects.Remember, you have to reach the chemical accuracy $|E_{exact} - E_{VQE}| \leq 0.004 $ Ha $= 4$ mHa.
###Code
# WRITE YOUR CODE BETWEEN THESE LINES - START
# WRITE YOUR CODE BETWEEN THESE LINES - END
###Output
_____no_output_____
###Markdown
Part 2: Final Challenge - VQE for LiH molecule In this part, you will simulate LiH molecule using the STO-3G basis with the PySCF driver. Goal Experiment with all the parameters and then find your best ansatz. You can be as creative as you want!For each question, give `ansatz` objects as for Part 1. Your final score will be based only on Part 2. Be aware that the system is larger now. Work out how many qubits you would need for this system by retrieving the number of spin-orbitals. Reducing the problem sizeYou might want to reduce the number of qubits for your simulation:- you could freeze the core electrons that do not contribute significantly to chemistry and consider only the valence electrons. Qiskit already has this functionality implemented. So inspect the different transformers in `qiskit_nature.transformers` and find the one that performs the freeze core approximation.- you could use `ParityMapper` with `two_qubit_reduction=True` to eliminate 2 qubits.- you could reduce the number of qubits by inspecting the symmetries of your Hamiltonian. Find a way to use `Z2Symmetries` in Qiskit. Custom ansatz You might want to explore the ideas proposed in [Grimsley *et al.*,2018](https://arxiv.org/abs/1812.11173v2), [H. L. Tang *et al.*,2019](https://arxiv.org/abs/1911.10205), [Rattew *et al.*,2019](https://arxiv.org/abs/1910.09694), [Tang *et al.*,2019](https://arxiv.org/abs/1911.10205). You can even get try machine learning algorithms to generate best ansatz circuits. Setup the simulationLet's now run the Hartree-Fock calculation and the rest is up to you!Attention We give below the `driver`, the `initial_point`, the `initial_state` that should remain as given.You are free then to explore all other things available in Qiskit.So you have to start from this initial point (all parameters set to 0.01): `initial_point = [0.01] * len(ansatz.ordered_parameters)` or`initial_point = [0.01] * ansatz.num_parameters`and your initial state has to be the Hartree-Fock state: `init_state = HartreeFock(num_spin_orbitals, num_particles, converter)` For each question, give `ansatz` object.Remember you have to reach the chemical accuracy $|E_{exact} - E_{VQE}| \leq 0.004 $ Ha $= 4$ mHa.
###Code
from qiskit_nature.drivers import PySCFDriver
molecule = 'Li 0.0 0.0 0.0; H 0.0 0.0 1.5474'
driver = PySCFDriver(atom=molecule)
qmolecule = driver.run()
# WRITE YOUR CODE BETWEEN THESE LINES - START
# WRITE YOUR CODE BETWEEN THESE LINES - END
# Check your answer using following code
from qc_grader import grade_ex5
freeze_core = False # change to True if you froze core electrons
grade_ex5(ansatz,qubit_op,result,freeze_core)
# Submit your answer. You can re-submit at any time.
from qc_grader import submit_ex5
submit_ex5(ansatz,qubit_op,result,freeze_core)
###Output
_____no_output_____
|
notebooks/Math-appendix/discrete_differential_geometry/affine_combinations.ipynb
|
###Markdown
What is very cool is that the constraint (that our coefficients must sum to 1) leads us to a new type of span: the **affine span**.
###Code
fig, ax = plt.subplots(figsize=(8,7))
p = np.array([[-1], [4]])
a = np.array([[5], [4]])
b = np.array([[2], [-3]])
for gamma in np.arange(-10, 10, 0.1):
V = np.array([
p, a, b,
gamma*a + (1 - gamma)*b,
a-p,
b-p,
p+ gamma*(a-p) + (1 - gamma)*(b-p)
])
os = np.zeros(len(V))
origin = np.array([os, os])
plt.quiver(
*origin, V[:,0], V[:,1],
color=['green','red','red', 'darkred', 'blue', 'blue', 'darkblue'], alpha=0.5,
angles='xy', scale_units='xy', scale=1)
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.show()
###Output
_____no_output_____
|
On-purpose_overfitting.ipynb
|
###Markdown
The rows of the confusion matrix correspond to the actual facies labels. The columns correspond to the labels assigned by the classifier. For example, consider the first row. For the feature vectors in the test set that actually have label `SS`, 23 were correctly identified as `SS`, 21 were classified as `CSiS` and 2 were classified as `FSiS`. The entries along the diagonal are the facies that have been correctly classified. Below we define two functions that will give an overall value for how the algorithm is performing. The accuracy is defined as the number of correct classifications divided by the total number of classifications.
###Code
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
###Output
_____no_output_____
###Markdown
As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label `i`, `adjacent_facies[i]` is an array of the adjacent facies labels.
###Code
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
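# Quick sanity check of both helpers on a toy 2x2 confusion matrix
# (illustrative numbers only, unrelated to the facies data):
toy_conf = np.array([[8, 2], [1, 9]])
toy_adjacent = np.array([[1], [0]])  # treat each class as adjacent to the other
print(accuracy(toy_conf))                         # (8 + 9) / 20 = 0.85
print(accuracy_adjacent(toy_conf, toy_adjacent))  # every entry counts -> 1.0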
###Output
_____no_output_____
###Markdown
Applying the classification model to the blind dataWe held a well back from the training, and stored it in a dataframe called `blind`:
###Code
blind
###Output
_____no_output_____
###Markdown
The label vector is just the `Facies` column:
###Code
y_blind = blind['Facies'].values
###Output
_____no_output_____
###Markdown
We can form the feature matrix by dropping some of the columns and making a new dataframe:
###Code
well_features = blind.drop(['Facies', 'Formation', 'Well Name', 'Depth'], axis=1)
X_blind = well_features
X_blind.fillna(value=-9999,inplace=True)
X_blind = X_blind.values
training_data.fillna(value=-9999,inplace=True)
len(training_data)
X_train = training_data.drop(['Facies', 'Formation', 'Well Name', 'Depth'], axis=1).values
y_train = training_data['Facies'].values
X_train
###Output
_____no_output_____
###Markdown
Now it's a simple matter of making a prediction and storing it back in the dataframe:
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
# clf = RandomForestClassifier(n_estimators=1000,n_jobs=-1)
# clf.fit(X_train,y_train)
clf = SVC(kernel='linear')
clf.fit(X_train,y_train)
y_pred = clf.predict(X_blind)
blind['Prediction'] = y_pred
###Output
_____no_output_____
###Markdown
Let's see how we did with the confusion matrix:
###Code
from sklearn.metrics import confusion_matrix
from classification_utilities import display_cm, display_adj_cm
cv_conf = confusion_matrix(y_blind, y_pred)
print('Optimized facies classification accuracy = %.3f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.3f' % accuracy_adjacent(cv_conf, adjacent_facies))
###Output
Optimized facies classification accuracy = 0.481
|
src/notebooks/Explore Models_wq.ipynb
|
###Markdown
Explore Models Refer to the Explore Models script written by Baihua. Model: MW_BASE_RC8_UpperROCONNEL.rsproj Created by: Qian W Date created: 3/11/18 Try to explore the structure of the Source model with veneer-py: 0 What are the constituents in the model? 1 What functional units are in this URO catchment? 2 Does each functional unit have its own model and parameters? 3 What are the input data for each submodel/function? How to change the values?
###Code
import veneer
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# import geopandas as gpd
import re
from SALib.util import read_param_file
from SALib.plotting.morris import horizontal_bar_plot, covariance_plot, \
sample_histograms
import matplotlib.pyplot as plt
%matplotlib inline
## Open Source project file, then enable Veneer (Tools->Web Server Monitoring->Allow scripts)
v = veneer.Veneer(port=9876)
#### Run this to improve model performance, mainly through parallel computing. These can also be modified through Source UI
def configure_options(self,options):
lines = ["# Generated Script","from Dynamic_SedNet.PluginSetup import DSScenarioDetails"]
lines += ["DSScenarioDetails.%s = %s"%(k,v) for (k,v) in options.items()]
script = '\n'.join(lines)
#print(script)
res = self.model._safe_run(script)
configure_options(v,{'RunNetworksInParallel':True,'PreRunCatchments':True,'ParallelFlowPhase':True})
v.model.sourceScenarioOptions("PerformanceConfiguration","ProcessCatchmentsInParallel",True)
#### Run this to turn off dsednet reporting window
configure_options(v,{'ShowResultsAfterRun':False,'OverwriteResults':True})
###Output
_____no_output_____
###Markdown
Run the model with script codes
###Code
from veneer.manage import start, create_command_line, kill_all_now
import veneer
import pandas as pd
import gc
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
from subprocess import Popen, PIPE
import subprocess
import shutil
import os
import re
parent_dir = os.getcwd()
job_name = 'work'
pst_file = '126001A.pst'
catchment_project= parent_dir + '\\pest_source\\MW_BASE_RC10.rsproj'
pest_path= parent_dir + '\\pest_source'
print('pest path ',pest_path)
python_path = 'C:\\UserData\\Qian\\anaconda'
os.environ['PATH'] = os.environ['PATH']+';'+pest_path
os.environ['PATH'] = os.environ['PATH']+';'+python_path
print(os.environ['PATH'])
# Setup Veneer
# define paths to veneer command and the catchment project
veneer_path = 'pest_source\\vcmd45\\FlowMatters.Source.VeneerCmd.exe'
# Number of instances to open
num_copies=1 # Important - set this to be a number ~ the number of CPU cores in your system!
first_port=15000
#Now, go ahead and start source
processes,ports = start(catchment_project,
n_instances=num_copies,
ports=first_port,
debug=True,
veneer_exe=veneer_path,
remote=False,
overwrite_plugins=True)
###Output
pest path E:\cloudStor\Projects\predict_uq\src\notebooks\pest_source
C:\Users\qianw\anaconda3\envs\oed;C:\Users\qianw\anaconda3\envs\oed\Library\mingw-w64\bin;C:\Users\qianw\anaconda3\envs\oed\Library\usr\bin;C:\Users\qianw\anaconda3\envs\oed\Library\bin;C:\Users\qianw\anaconda3\envs\oed\Scripts;C:\Users\qianw\anaconda3\envs\oed\bin;C:\Users\qianw\anaconda3\condabin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Windows\System32\OpenSSH;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR;D:\Program\Microsoft VS Code\bin;D:\Program\Git\cmd;D:\StrawberrryPerl\c\bin;D:\StrawberrryPerl\perl\site\bin;D:\StrawberrryPerl\perl\bin;C:\Users\qianw\AppData\Local\Microsoft\WindowsApps;C:\Users\qianw\AppData\Local\gitkraken\bin;D:\Bandizip;D:\miktex\miktex\bin\x64;C:\Users\qianw\anaconda3\envs\oed\lib\site-packages\numpy\.libs;C:\Users\qianw\anaconda3\envs\oed\lib\site-packages\scipy\.libs;E:\cloudStor\Projects\predict_uq\src\notebooks\pest_source;C:\UserData\Qian\anaconda
E:\cloudStor\Projects\predict_uq\src\Plugins.xml
0.0.0
C:Users\qianw\AppData\Roaming\Source\0.0.0
Starting pest_source\vcmd45\FlowMatters.Source.VeneerCmd.exe -p 15000 -s "E:\cloudStor\Projects\predict_uq\src\notebooks\pest_source\MW_BASE_RC10.rsproj"
###Markdown
find sub-catchments upstream
###Code
v = veneer.Veneer(port=ports[0])
# filter gauges to use
gauge_names = ['gauge_126001A_SandyCkHomebush',
'Outlet Node24']
# find links and upstream subcatchments for the whole Sandy Creek Catchment
gauges_ID = [a[6:13] for a in gauge_names]
links_ID = [160, 101]
the_network = v.network()
the_links = the_network['features'].find_by_feature_type('link')
ct_gauges = {ga: None for ga in gauges_ID}
for i in range(len(links_ID)):
ct_temp = []
link_all = []
link_find = []
ini_link = the_links[links_ID[i]]
link_temp = the_network.upstream_links(ini_link)
link_temp
while len(link_temp)>0:
link_find = []
for lt in link_temp:
link_all.append(lt)
ele = the_network.upstream_links(lt)
if lt['properties']['name'] == 'downstream_MultiFarm_gauge1260092':
ct_temp.append('SC #112')
else:
sc_start = re.search(r'SC', lt['properties']['name']).start()
ct_temp.append(lt['properties']['name'][sc_start:])
if len(ele)>0:
for e in ele:
link_find.append(e)
link_temp = link_find
ct_gauges[gauges_ID[i]] = ct_temp
#find catchments
# # find the catchment area
# catchment_area = []
# for cat in ct_gauges[' Node24']:
# area_temp = v.model.catchment.get_areas(catchments=cat)
# catchment_area.append({cat: area_temp})
# # catchment_area
###Output
_____no_output_____
###Markdown
End of finding catchments upstream
###Code
## Identify list of constituents
const = v.model.get_constituents()
const_df = pd.DataFrame(const)
# const_df
#Identify the list of function units
fun_units = set(v.model.catchment.get_functional_unit_types())
fun_units_df = pd.DataFrame(list(fun_units))
# fun_units_df
for ct in ct_temp:
for i in range(len(fun_units_df)):
fu = fun_units_df.iloc[i].values[0]
area_fus = v.model.catchment.get_functional_unit_areas(fus=fu, catchments=ct)
fun_units_df.loc[i, ct] = np.sum(area_fus)
# reset index for the dataframe
fun_units_df.set_index([0], inplace=True)
fun_units_df.index.name = 'fun_units'
fun_units_df
cmt_names = ct_temp
ct_area = []
for ct in cmt_names:
ct_area.append(v.model.catchment.get_areas(catchments=ct)[0])
ct_area_total = np.sum(ct_area)
for fu in fun_units_df.index:
fun_units_df.loc[fu,'proportion'] = fun_units_df.loc[fu, :].sum() / ct_area_total
# fun_units_df.to_csv('E:/cloudStor/PhDocs/pce_fixing/func_units_area.csv')
# List of generation models
gen_models = set(v.model.catchment.generation.get_models(constituents = 'N_DIN'))
gen_models
# Parameters and value ranges of each generation model
model_params = {}
for ele in gen_models:
params = v.model.find_parameters(ele) #Get parameters of a certain model
param_values = {}
for param in params:
param_value = v.model.catchment.generation.get_param_values(param)
param_values[param] = [min(param_value), max(param_value), len(param_value), set(param_value)] #get min, max and lenth of a parameter
model_params[ele] = param_values
model_params
dwc_init = v.model.catchment.generation.get_param_values('DWC', fus=['Sugarcane'])
dwc_init
v.model.catchment.generation.set_param_values('DWC', [0.5], fus=['Sugarcane'], fromList=True)
v.model.catchment.generation.get_param_values('DWC', fus=['Sugarcane'])
v.model.catchment.generation.set_param_values('DWC', dwc_init, fus=['Sugarcane'], fromList=True)
v.model.catchment.generation.get_param_values('DWC', fus=['Sugarcane'])
for ct in ct_temp:
param_value = v.model.catchment.generation.get_param_values('DWC', fus=['Sugarcane'], catchments=ct)
print(f'{ct}: {param_value}')
param_value
a = np.array([1, 0, 0, 1])
a = np.where(a>0, 0.1, 0)
a
#find models of specific catchment and constituents
gen_models = set(v.model.catchment.generation.get_models(constituents = 'N_DIN'))
gen_models
pd.set_option('max_colwidth',200) #set length of dataframe outputs
gen_model_names = model_params.keys()
pd.DataFrame(list(gen_model_names))
###Output
_____no_output_____
###Markdown
Use the information and in Source UI-> SedNet Utilities -> Constituent Generation Model Matrix ViewerGeneration models related to fine sediment:RiverSystem.Catchments.Models.ContaminantGenerationModels.NilConstituent for WaterRiverSystem.Catchments.Models.ContaminantGenerationModels.EmcDwcCGModel for Conservation, Forestry, Horticulture, Urban, OtherDynamic_SedNet.Models.SedNet_Sediment_Generation for Grazing Forested, Grazing OpenGBR_DynSed_Extension.Models.GBR_CropSed_Wrap_Model for Sugarcane, Dryland Cropping, Irrigated Cropping
###Code
## To find the parameters:
param_emcdwc = v.model.find_parameters('RiverSystem.Catchments.Models.ContaminantGenerationModels.NilConstituent')
print(param_emcdwc)
for p in param_emcdwc:
param_val = v.model.catchment.generation.get_param_values(p)
print(p, ' values: ', set(param_val))
# transport models
transport_models = v.model.link.constituents.get_models(constituents = 'Sediment - Fine')
set(transport_models)
#find parameters for sediment transport model
transport_models = v.model.find_parameters('Dynamic_SedNet.Models.SedNet_InStream_Fine_Sediment_Model')
pd.DataFrame(transport_models)
###Output
_____no_output_____
###Markdown
Use the above information and the Source UI -> SedNet Model Setup -> Edit Routing and Instream Models. Transport models for fine sediment: 'Dynamic_SedNet.Models.SedNet_InStream_Fine_Sediment_Model'
###Code
#get node models
set(v.model.node.get_models())
#get parameters for node model
v.model.find_parameters('RiverSystem.Nodes.Confluence.ConfluenceNodeModel')
###Output
_____no_output_____
###Markdown
Find Parameters used for fine sediment
###Code
gen_models
#get all models for sediment generation and transport in this project
sed_gen_models = ['Dynamic_SedNet.Models.SedNet_Sediment_Generation','GBR_DynSed_Extension.Models.GBR_CropSed_Wrap_Model',
'RiverSystem.Catchments.Models.ContaminantGenerationModels.EmcDwcCGModel']
sed_trp_models = ['Dynamic_SedNet.Models.SedNet_InStream_Fine_Sediment_Model']
sed_gen_params = []
for model in sed_gen_models:
sed_gen_param = v.model.find_parameters(model)
sed_gen_params = sed_gen_params + sed_gen_param
sed_trp_params = v.model.find_parameters(sed_trp_models)
#sed_gen_params
print('These are %d parameters for sediment generation models\n' % len(sed_gen_params))
print(pd.DataFrame(sed_gen_params))
print('\nThese are %d parameters for sediment transport models\n' % len(sed_trp_params))
print(pd.DataFrame(sed_trp_params))
# Overview of parameters for fine sediment, such as the count of the parameter values, and unique values (e.g. are they constant/binary/vary, numeric/string)
for param in sed_gen_params:
param_value = v.model.catchment.generation.get_param_values(param)
param_value_len = len(param_value)
param_value_set = set(param_value) #get uni
print(param, param_value_len, param_value_set)
# Overview of parameters for fine sediment, such as the count of the parameter values, and unique values (e.g. are they constant/binary/vary, numeric/string)
for param_trp in sed_trp_params:
param_value = v.model.link.constituents.get_param_values(param_trp)
param_value_len = len(param_value)
param_value_set = set(param_value) #get uni
print(param_trp, param_value_len, param_value_set)
###Output
_____no_output_____
###Markdown
Change parameter values
###Code
#place all parameters (for both sediment generation and transport) together
myparam_sed_gen = ['DeliveryRatioSeepage','DeliveryRatioSurface','Gully_Management_Practice_Factor','Gully_SDR_Fine','HillslopeCoarseSDR','HillslopeFineSDR','USLE_HSDR_Fine','Max_Conc']
# either using selected testing parameters (myparam_sedidin_gen) or all parameters (params_seddin_gen)
for i in myparam_sed_gen:
param = v.model.catchment.generation.get_param_values(i)
paraml = len(param) ## Count of the parameter values
param_set = set(param) ## unique values
print(i, paraml, param_set)
myparam_sed_trp = ['bankErosionCoeff','propBankHeightForFineDep','fineSedSettVelocity','fineSedReMobVelocity','RiparianVegPercent','UptakeVelocity']
# either using testing parameters (myparam_seddin_trp) or all parameters (params_seddin_trp)
for i in myparam_sed_trp:
param = v.model.link.constituents.get_param_values(i)
paraml = len(param) ## Count of the parameter values
param_set = set(param) ## unique values
print(i, paraml, param_set)
myparameters = myparam_sed_gen + myparam_sed_trp
myparameters
sedigen_bounds = [[0, 2.5],
[0.5, 1],
[0, 2],
[0, 1.4],
[0, 1],
[0, 3],
[0, 2],
[0.1, 1]]
seditrp_bounds = [[0, 10],
[0.1, 2.5],
[0, 10],
[0, 3],
[0.1, 1.3],
[0.1, 10]]
mybounds = sedigen_bounds + seditrp_bounds
mybounds
# Define the model inputs
problem = {
'num_vars': len(myparameters),
'names': myparameters,
'bounds': mybounds,
'groups': None
}
problem
%%time
## Generate samples (Morris)
N = 10
morris_level = 50
morris_grid = 2
optim_trj = False ## False or a int, >2 and <N, but generallly <=6
Loc_opt = False ## True or False.
samples_morris = sample(problem, N, num_levels=morris_level, grid_jump=morris_grid, optimal_trajectories = optim_trj, local_optimization=Loc_opt)
samples_morris
samples_morris.shape
## Record initial parameter values. These values will be restored after each run.
initial_params = {}
for param_i, param_n in enumerate(problem['names']):
param_gen = v.model.catchment.generation.get_param_values(param_n)
param_trp = v.model.link.constituents.get_param_values(param_n)
param_v = param_gen + param_trp
initial_params[param_n] = param_v
print(initial_params)
%%time
## Run model iteratively
v.drop_all_runs()
for index,item in enumerate(samples_morris):
print(index)
## Update parameter values
for param_i, param_n in enumerate(problem['names']):
#print(param_i, param_n)
#print(samples_morris[n,param_i])
param_new = [x * samples_morris[index,param_i] for x in initial_params[param_n]]
#print(initial_params[param_n], param_new)
if param_n in myparam_sed_gen:
assert v.model.catchment.generation.set_param_values(param_n,param_new, fromList=True)
if param_n in myparam_sed_trp:
assert v.model.link.constituents.set_param_values(param_n,param_new,fromList=True)
## Run model
v.run_model(start='01/07/2000',end='30/06/2002')
## Return default parameter value
for param_i, param_n in enumerate(problem['names']):
if param_n in myparam_sed_gen:
v.model.catchment.generation.set_param_values(param_n,initial_params[param_n], fromList=True)
if param_n in myparam_sed_trp:
v.model.link.constituents.set_param_values(param_n,initial_params[param_n], fromList=True)
# print(temp,samples_morris[n,param_i])
help(v.retrieve_multiple_time_series)
## Retrieve results
allruns = v.retrieve_runs()
result_sed=[]
for index, item in enumerate(allruns):
run_name = allruns[index]['RunUrl']
run_index = v.retrieve_run(run_name)
finesediment = v.retrieve_multiple_time_series(run = run_name, run_data=run_index, criteria={'NetworkElement':'Outlet Node17','RecordingVariable':'Constituents@Sediment - Fine@Downstream Flow Mass'})
result_sed.append(finesediment.mean()[0]) ## use [0] to extract value data only
###Output
_____no_output_____
###Markdown
find constituents, models, parameters for sediment
###Code
#obtain data sources
data_source = v.data_sources()
data_source.as_dataframe()
set(v.model.catchment.get_functional_unit_types())
constituents = v.model.get_constituents()
set(constituents)
models = v.model.catchment.generation.get_models(constituents = 'N_DIN' )
models_set = set(models)
models_set
#get parameter values of sed
gen_params = []
for model in models_set:
param_sed = v.model.find_parameters(model)
gen_params += [{model: param_sed}]
gen_params
for model in models_set:
print(model,v.model.find_inputs(model))
v.model.catchment.generation.get_param_values('dissConst_DWC ', fus='Horticulture', catchments=ct_gauges[' Node24'])
v.model.catchment.generation.get_param_values('dissConst_EMC', fus='Grazing Forested', catchments=['SC #103'])
variables = v.variables()
variables.as_dataframe()
v.model.find_model_type('Times')
v.model.find_parameters('Dynamic_SedNet.Models.SedNet_TimeSeries_Load_Model')
v.model.catchment.generation.get_param_values('Load_Conversion_Factor')
%matplotlib notebook
# Input variables
f_dir = 'rainfall_0101/'
f_name = 'rainfall_ave.csv'
rain = pd.read_csv('{}{}'.format(f_dir, f_name)).set_index('Unnamed: 0')
rain.index.name = 'Year'
rain.plot(figsize=(10, 8))
###Output
_____no_output_____
###Markdown
obtain inputs from APSIM
###Code
data_sources=v.data_sources()
data_sources.as_dataframe()
cropping = data_sources[15]['Items'][0]
# Obtain the name of DIN data for catchments in Sandy Creek area
cropping_input = [ii['Name'] for ii in cropping['Details'] if (('N_DIN' in ii['Name']) & (ii['Name'].split('$')[2] in ct_temp))]
forcing = v.data_source('Cropping Data')
forcing
###Output
_____no_output_____
|
6506_02_code_ACC_SB/Video 2.2- Bar plots and histograms.ipynb
|
###Markdown
bar()
###Code
# Load numpy for math/array operations
# and matplotlib for plotting
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Set up figure size and DPI for screen demo
plt.rcParams['figure.figsize'] = (4,3)
plt.rcParams['figure.dpi'] = 150
# basic bar plot
nums = np.random.uniform(size=10)
plt.bar(np.arange(10), nums)
# horizontal bar plot
plt.barh(np.arange(10), nums)
# bar color (example value)
plt.bar(np.arange(10), nums, color='green')
# bar alignment
plt.bar(np.arange(10), nums, align='edge')
# hatch & fill (example values)
plt.bar(np.arange(10), nums, hatch='//', fill=False)
# bar width (example value)
plt.bar(np.arange(10), nums, width=0.5)
# edge color (example value)
plt.bar(np.arange(10), nums, edgecolor='black')
# bottom alignment (example value)
plt.bar(np.arange(10), nums, bottom=1)
###Output
_____no_output_____
###Markdown
hist()
###Code
# basic histogram
rands = np.random.normal(size=int(1e6))
plt.hist(rands)
# return type
counts, bin_edges, patches = plt.hist(rands)
# range (example value)
plt.hist(rands, range=(-3, 3))
# bin choice (example value)
plt.hist(rands, bins='auto')
# histtype (example value)
plt.hist(rands, bins='auto', histtype='step')
# normalization
plt.hist(rands, bins='auto', density=True)
# cumulative histogram
plt.hist(rands, bins='auto', cumulative=True)
# multiple histograms (alpha value assumed)
plt.hist((rands, 0.5*rands), bins='auto', histtype='stepfilled', alpha=0.5)
###Output
_____no_output_____
|
Aluminum/.ipynb_checkpoints/Aluminum_Tube_Beam_Column_Calcs-checkpoint.ipynb
|
###Markdown
Calculations of Design Strength of Aluminum Scaffold Tubes subject to eccentric compressive loads per CSA S157-05 R2015
E. Durham - 9-Jul-2019
Case: Determine axial compressive strength and stiffness
Given:
- F_br = 8.34 kN
- beta_L = 2608.24 kN/m
- L_b of brace is 1500 mm
- Consider fixity as translation free-fixed (k=1.2), although it could rightly be classed as pinned-pinned (k=1.0)
Setup calculation environment
###Code
import math as math
import pint
unit = pint.UnitRegistry(system='mks')
# define synonyms for common units
inch = unit.inch; mm = unit.mm; m = unit.m; MPa = unit.MPa; psi = unit.psi;
kN = unit.kN; ksi = unit.ksi; dimensionless = unit.dimensionless;
s = unit.second; kg = unit.kg
###Output
_____no_output_____
###Markdown
Define case parameters
###Code
F_br = 8.34 * kN # factored, given
beta_L_required = 2608.24 * kN/m
C_f = F_br
e = 50 * mm # given eccentricity
L = 1500 * mm # given span
K = 1.2 # given fixity or effective length factor
KL = K*L
# thus
M_f = 1.2 * e * C_f
print("M_f = " + str(round(M_f.to(kN*m), 3)))
###Output
M_f = 0.5 kilonewton * meter
###Markdown
Geometric and Material Properties for tubing:
###Code
# Safway Aluminum Scaffold Tubing (6061-T6) taken from Safway SafLock
# Technical Manual Rev G 11/14 page 38 on 4-May-2018
# Geometric Properties:
b = 1.90 * inch; b.ito(mm) # OD outside diameter, given
t = 0.145 * inch; t.ito(mm) # Wall thickness, given
A = 0.799 * inch**2; A.ito(mm**2) # Area, given
S = 0.326 * inch**3; S.ito(mm**3) # Elastic section modulus, given
Z = 0.448 * inch**3; Z.ito(m**3) # Plastic section modulus, DERIVED
I = 0.3099 * inch**4; I.ito(mm**4) # Second moment of area, given
r = 0.623 * inch; r.ito(mm) # radius of gyration, given
c = b / 2.0
b1 = b - (2 * t)
# Material Properties (Aluminum 6061-T6):
E = 70000 * MPa; E.ito(kN / m**2) # Elastic modulus per 4.3(b)
Fy = 35000 * psi; Fy.ito(kN / m**2) # Yield Strength given
Ft = 38000 * psi; Ft.ito(kN / m**2) # Tensile Strength given
lambda_aluminum = 2700 * (kg / m**3) # density of aluminum
dead_load = A * lambda_aluminum
dead_load.ito(kg / m)
# Print out Geometric Properties in metric units
print('1.900" x 0.145" 6061-T6 Extruded Tube Geometric Properties:')
print('OD, b =', round(b.to(inch), 4) , '=',round(b,2))
print('Wall, t =', round(t.to(inch), 4) , '=' , round(t,2))
print('Area, A =', round(A.to(inch**2), 4) , '=' , round(A,2))
print('Radius of Gyration, r =',round(r.to(inch), 4), '=' , round(r,2))
print('Elastic Section Modulus, S =', round(S.to(inch**3), 4),'=',
round(S,2))
print('Plastic Section Modulus, Z =',round(Z.to(inch**3),4) , '=' ,
round(Z.to(mm**3),2))
print('Dead Load =', round(dead_load, 4))
print('')
# Print out Material Properties (Aluminum 6061-T6 Extruded)
print('Material Properties for Aluminum 6061-T6 Extruded:')
print('Yield Strength, Fy =',round(Fy.to(ksi),1),'=',round(Fy.to(MPa),1))
print('Tensile Strength, Ft =',round(Ft.to(ksi),1),'=',
round(Ft.to(MPa),1))
print('Elastic Modulus, E =',round(E.to(ksi),1),'=', round(E.to(MPa),1))
lambda_flexural_buckling = KL.to(mm) / r # slenderness ratio per 9.4.2.1)
print('Slenderness Ratio, lambda =', round(lambda_flexural_buckling,1))
Fo = Fy # limiting stress per 9.3.2(a)
print('Limiting Stress, Fo =', round(Fo.to(MPa),1))
# 9.3.1
Fe = ( ( (math.pi)**2) * E.to(MPa) ) / ( lambda_flexural_buckling )
print('Elastic Buckling Stress, Fe =', round(Fe.to(MPa),1))
lambda_bar = ( (Fo.to(MPa)/Fe.to(MPa))**(1/2) )
print('Normalized Slenderness, lambda_bar =', round(lambda_bar,3))
###Output
Elastic Buckling Stress, Fe = 6073.6 megapascal
Normalized Slenderness, lambda_bar = 0.199 dimensionless
###Markdown
lambda_bar < 0.3 Therefore member section is Class 1 per 9.5.1(a)
###Code
C_e = (A.to(m**2)*(math.pi**2)*E.to(kN/m**2)) / (lambda_flexural_buckling**2)
C_e.ito(kN)
print('C_e = ', round(C_e, 1))
S_c = S # section modulus for extreme fibre in compression
S_t = S # section modulus for extreme fibre in tension
# 5.5 Resistance factors
# For general structures, the following resistance factors, phi, shall be used:
phi_y = 0.9 #(a) tension, compression, and shear in beams: on yield
phi_c = 0.9 #(b) compression in columns: on collapse due to buckling
phi_u = 0.75 #(c) tension and shear in beams: on ultimate
phi_u = 0.75 #(d) tension on a net section, bearing stress, tear-out on ultimate
phi_u = 0.75 #(e) tension and compression on butt welds: on ultimate
phi_f = 0.67 #(f) shear stress on fillet welds: on ultimate
phi_f = 0.67 #(g) tension and shear stress on fasteners: on ultimate
###Output
_____no_output_____
###Markdown
(a) where compressive stress governs,
###Code
round(phi_y * Fo.to(MPa),1)
round(((M_f / (S_c.to(m**3) * (1 - (C_f/C_e)))) + (C_f / A)).to(MPa),1)
if round(((M_f/(S_c.to(m**3)*(1-(C_f/C_e))))+(C_f / A)).to(MPa),1) <= round(phi_y*Fo.to(MPa),1):
print('Design stress is less than equal to design strength. \nTherefore',
' beam-column is OK for compressive stress.')
else:
print('Design stress is greater than design strength. \nTherefore',
', beam-column is NFG for compressive stress.')
###Output
Design stress is less than equal to design strength.
Therefore beam-column is OK for compressive stress.
###Markdown
(b) where tensile stress governs,
###Code
round(phi_y * Fy.to(MPa),1)
round(((M_f / (S_t.to(m**3) * (1 - (C_f/C_e)))) - (C_f / A)).to(MPa),1)
if round(((M_f / (S_t.to(m**3) * (1 - (C_f/C_e)))) - (C_f / A)).to(MPa),1) <= round(phi_y * Fy.to(MPa),1):
print('Design stress is less than equal to design strength. \nTherefore',
', beam-column is OK for tensile stress.')
else:
print('Design stress is greater than design strength. \nTherefore',
', beam-column is NFG for tensile stress.')
###Output
Design stress is less than equal to design strength.
Therefore , beam-column is OK for tensile stress.
###Markdown
Member checked for eccentric compressive load has strength greater than required. Therefore, OK for Axial Compression. Check Stiffness:
###Code
beta_L = (A*E)/L
beta_L.ito(kN/m)
print('Required Brace Stiffness, beta_L_required = {0:n}'.format(round(beta_L_required,-1)))
print('Actual Brace Stiffness, beta_L = {0:n}'.format(round(beta_L,-1)))
###Output
Required Brace Stiffness, beta_L_required = 2610 kilonewton / meter
Actual Brace Stiffness, beta_L = 24060 kilonewton / meter
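###Markdown
A small, hedged addition (not part of the original calculation sheet): the stiffness comparison implied above can be made explicit using the two quantities already computed.
###Code
# Hedged addition: explicit check that the actual brace stiffness meets the required stiffness
print('Brace stiffness adequate (beta_L >= beta_L_required):', bool(beta_L >= beta_L_required))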
|
Data Warehouse/Amazon United Kingdom/Amazon_UK - Hair Products - Main.ipynb
|
###Markdown
List of Products
###Code
amazon_usa = {'health_and_beauty':{'hair_products':{'shampoo':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11057241%2Cn%3A17911764011%2Cn%3A11057651&dc&',
'conditioner':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11057241%2Cn%3A17911764011%2Cn%3A11057251&dc&',
'hair_scalp_treatment':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11057241%2Cn%3A11057431&dc&',
'treatment_oil':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11057241%2Cn%3A10666439011&dc&',
'hair_loss':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11057241%2Cn%3A10898755011&dc&'},
'skin_care':{'body':{'cleansers':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060521%2Cn%3A11056281&dc&',
'moisturizers':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060521%2Cn%3A11060661&dc&',
'treatments':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060521%2Cn%3A11056421&dc&'},
'eyes':{'creams':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11061941%2Cn%3A7730090011&dc&',
'gels':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11061941%2Cn%3A7730092011&dc&',
'serums':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11061941%2Cn%3A7730098011&dc&'},
'face':{'f_cleansers':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060711%2Cn%3A11060901&dc&',
'f_moisturizers':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060711%2Cn%3A11060901&dc&',
'scrubs':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060711%2Cn%3A11061091&dc&',
'toners':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060711%2Cn%3A11061931&dc&',
'f_treatments':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A11060711%2Cn%3A11061931&dc&'},
'lipcare':'https://www.amazon.com/s?i=beauty-intl-ship&bbn=16225006011&rh=n%3A%2116225006011%2Cn%3A11060451%2Cn%3A3761351&dc&'}},
'food':{'tea':{'herbal':'https://www.amazon.com/s?k=tea&i=grocery&rh=n%3A16310101%2Cn%3A16310231%2Cn%3A16521305011%2Cn%3A16318401%2Cn%3A16318511&dc&',
'green':'https://www.amazon.com/s?k=tea&i=grocery&rh=n%3A16310101%2Cn%3A16310231%2Cn%3A16521305011%2Cn%3A16318401%2Cn%3A16318471&dc&',
'black':'https://www.amazon.com/s?k=tea&i=grocery&rh=n%3A16310101%2Cn%3A16310231%2Cn%3A16521305011%2Cn%3A16318401%2Cn%3A16318411&dc&',
'chai':'https://www.amazon.com/s?k=tea&i=grocery&rh=n%3A16310101%2Cn%3A16310231%2Cn%3A16521305011%2Cn%3A16318401%2Cn%3A348022011&dc&'},
'coffee':'https://www.amazon.com/s?k=tea&i=grocery&rh=n%3A16310101%2Cn%3A16310231%2Cn%3A16521305011%2Cn%3A16318031%2Cn%3A2251593011&dc&',
'dried_fruits':{'mixed':'https://www.amazon.com/s?k=dried+fruits&i=grocery&rh=n%3A16310101%2Cn%3A6506977011%2Cn%3A9865332011%2Cn%3A9865334011%2Cn%3A9865348011&dc&',
'mangoes':'https://www.amazon.com/s?k=dried+fruits&rh=n%3A16310101%2Cn%3A9865346011&dc&'},
'nuts':{'mixed':'https://www.amazon.com/s?k=nuts&rh=n%3A16310101%2Cn%3A16322931&dc&',
'peanuts':'https://www.amazon.com/s?k=nuts&i=grocery&rh=n%3A16310101%2Cn%3A18787303011%2Cn%3A16310221%2Cn%3A16322881%2Cn%3A16322941&dc&',
'cashews':'https://www.amazon.com/s?k=nuts&i=grocery&rh=n%3A16310101%2Cn%3A18787303011%2Cn%3A16310221%2Cn%3A16322881%2Cn%3A16322901&dc&'}},
'supplements':{'sports':{'pre_workout':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A6973663011%2Cn%3A6973697011&dc&',
'protein':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A6973663011%2Cn%3A6973704011&dc&',
'fat_burner':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A6973663011%2Cn%3A6973679011&dc&',
'weight_gainer':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A6973663011%2Cn%3A6973725011&dc&'},
'vitamins_dietary':{'supplements':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A3764441%2Cn%3A6939426011&dc&',
'multivitamins':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A3774861&dc&'}},
'wellness':{'ayurveda':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A10079996011%2Cn%3A13052911%2Cn%3A13052941&dc&',
'essential_oil_set':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A10079996011%2Cn%3A13052911%2Cn%3A18502613011&dc&',
'massage_oil':'https://www.amazon.com/s?k=supplements&i=hpc&rh=n%3A3760901%2Cn%3A10079996011%2Cn%3A14442631&dc&'},
'personal_accessories':{'bags':{'women':{'clutches':'https://www.amazon.com/s?k=bags&i=fashion-womens-handbags&bbn=15743631&rh=n%3A7141123011%2Cn%3A%217141124011%2Cn%3A7147440011%2Cn%3A15743631%2Cn%3A17037745011&dc&',
'crossbody':'https://www.amazon.com/s?k=bags&i=fashion-womens-handbags&bbn=15743631&rh=n%3A7141123011%2Cn%3A%217141124011%2Cn%3A7147440011%2Cn%3A15743631%2Cn%3A2475899011&dc&',
'fashion':'https://www.amazon.com/s?k=bags&i=fashion-womens-handbags&bbn=15743631&rh=n%3A7141123011%2Cn%3A%217141124011%2Cn%3A7147440011%2Cn%3A15743631%2Cn%3A16977745011&dc&',
'hobo':'https://www.amazon.com/s?k=bags&i=fashion-womens-handbags&bbn=15743631&rh=n%3A7141123011%2Cn%3A%217141124011%2Cn%3A7147440011%2Cn%3A15743631%2Cn%3A16977747011&dc&'}},
'jewelry':{'anklets':'https://www.amazon.com/s?i=fashion-womens-intl-ship&bbn=16225018011&rh=n%3A16225018011%2Cn%3A7192394011%2Cn%3A7454897011&dc&',
'bracelets':'https://www.amazon.com/s?i=fashion-womens-intl-ship&bbn=16225018011&rh=n%3A16225018011%2Cn%3A7192394011%2Cn%3A7454898011&dc&',
'earrings':'https://www.amazon.com/s?i=fashion-womens-intl-ship&bbn=16225018011&rh=n%3A16225018011%2Cn%3A7192394011%2Cn%3A7454917011&dc&',
'necklaces':'https://www.amazon.com/s?i=fashion-womens-intl-ship&bbn=16225018011&rh=n%3A16225018011%2Cn%3A7192394011%2Cn%3A7454917011&dc&',
'rings':'https://www.amazon.com/s?i=fashion-womens-intl-ship&bbn=16225018011&rh=n%3A16225018011%2Cn%3A7192394011%2Cn%3A7454939011&dc&'},
'artisan_fabrics':'https://www.amazon.com/s?k=fabrics&rh=n%3A2617941011%2Cn%3A12899121&dc&'}}
amazon_uk = {'health_and_beauty':{'hair_products':{'shampoo':'https://www.amazon.co.uk/b/ref=amb_link_5?ie=UTF8&node=74094031&pf_rd_m=A3P5ROKL5A1OLE&pf_rd_s=merchandised-search-leftnav&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_t=101&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_i=66469031',
'conditioner':'https://www.amazon.co.uk/b/ref=amb_link_6?ie=UTF8&node=2867976031&pf_rd_m=A3P5ROKL5A1OLE&pf_rd_s=merchandised-search-leftnav&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_t=101&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_i=66469031',
'hair_loss':'https://www.amazon.co.uk/b/ref=amb_link_11?ie=UTF8&node=2867979031&pf_rd_m=A3P5ROKL5A1OLE&pf_rd_s=merchandised-search-leftnav&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_t=101&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_i=66469031',
'hair_scalp_treatment':'https://www.amazon.co.uk/b/ref=amb_link_7?ie=UTF8&node=2867977031&pf_rd_m=A3P5ROKL5A1OLE&pf_rd_s=merchandised-search-leftnav&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_t=101&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_i=66469031',
'treatment_oil':'https://www.amazon.co.uk/hair-oil-argan/b/ref=amb_link_8?ie=UTF8&node=2867981031&pf_rd_m=A3P5ROKL5A1OLE&pf_rd_s=merchandised-search-leftnav&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_r=KF9SM53J2HXHP4EJD3AH&pf_rd_t=101&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_p=aaaa7182-fdd6-4b35-8f0b-993e78880b69&pf_rd_i=66469031'},
'skin_care':{'body':{'cleanser':'https://www.amazon.co.uk/s/ref=lp_344269031_nr_n_3?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A344269031%2Cn%3A344282031&bbn=344269031&ie=UTF8&qid=1581612722&rnid=344269031',
'moisturizers':'https://www.amazon.co.uk/s/ref=lp_344269031_nr_n_1?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A344269031%2Cn%3A2805272031&bbn=344269031&ie=UTF8&qid=1581612722&rnid=344269031'},
'eyes':{'creams':'https://www.amazon.co.uk/s/ref=lp_118465031_nr_n_0?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118465031%2Cn%3A344259031&bbn=118465031&ie=UTF8&qid=1581612984&rnid=118465031',
'gels':'https://www.amazon.co.uk/s/ref=lp_118465031_nr_n_1?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118465031%2Cn%3A344258031&bbn=118465031&ie=UTF8&qid=1581613044&rnid=118465031',
'serums':'https://www.amazon.co.uk/s/ref=lp_118465031_nr_n_3?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118465031%2Cn%3A344257031&bbn=118465031&ie=UTF8&qid=1581613044&rnid=118465031'},
'face':{'cleansers':'https://www.amazon.co.uk/s/ref=lp_118466031_nr_n_1?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118466031%2Cn%3A344265031&bbn=118466031&ie=UTF8&qid=1581613120&rnid=118466031',
'moisturizers':'https://www.amazon.co.uk/s/ref=lp_118466031_nr_n_3?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118466031%2Cn%3A2805291031&bbn=118466031&ie=UTF8&qid=1581613120&rnid=118466031',
'toners':'https://www.amazon.co.uk/s/ref=lp_118466031_nr_n_0?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118466031%2Cn%3A344267031&bbn=118466031&ie=UTF8&qid=1581613120&rnid=118466031',
'treatments':'https://www.amazon.co.uk/s?bbn=118466031&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118466031%2Cn%3A18918424031&dc&fst=as%3Aoff&qid=1581613120&rnid=118466031&ref=lp_118466031_nr_n_7'},
'lipcare':'https://www.amazon.co.uk/s/ref=lp_118464031_nr_n_4?fst=as%3Aoff&rh=n%3A117332031%2Cn%3A%21117333031%2Cn%3A118464031%2Cn%3A118467031&bbn=118464031&ie=UTF8&qid=1581613357&rnid=118464031'}},
'food':{'tea':{'herbal':'https://www.amazon.co.uk/s?k=tea&i=grocery&rh=n%3A340834031%2Cn%3A358584031%2Cn%3A11711401%2Cn%3A406567031&dc&qid=1581613483&rnid=344155031&ref=sr_nr_n_1',
'green':'https://www.amazon.co.uk/s?k=tea&i=grocery&rh=n%3A340834031%2Cn%3A358584031%2Cn%3A11711401%2Cn%3A406566031&dc&qid=1581613483&rnid=344155031&ref=sr_nr_n_3',
'black':'https://www.amazon.co.uk/s?k=tea&i=grocery&rh=n%3A340834031%2Cn%3A358584031%2Cn%3A11711401%2Cn%3A406564031&dc&qid=1581613483&rnid=344155031&ref=sr_nr_n_2'},
'coffee':'https://www.amazon.co.uk/s?k=coffee&rh=n%3A340834031%2Cn%3A11711391&dc&qid=1581613715&rnid=1642204031&ref=sr_nr_n_2',
'dried_fruits':{'mixed':'https://www.amazon.co.uk/s?k=dried+fruits&rh=n%3A340834031%2Cn%3A9733163031&dc&qid=1581613770&rnid=1642204031&ref=sr_nr_n_2'},
'nuts':{'mixed':'https://www.amazon.co.uk/s?k=mixed&rh=n%3A359964031&ref=nb_sb_noss',
'peanuts':'https://www.amazon.co.uk/s?k=peanuts&rh=n%3A359964031&ref=nb_sb_noss',
'cashews':'https://www.amazon.co.uk/s?k=cashew&rh=n%3A359964031&ref=nb_sb_noss'}},
'supplements':{'sports':{'pre_workout':'https://www.amazon.co.uk/b/?node=5977685031&ref_=Oct_s9_apbd_odnav_hd_bw_b35Hc3L_1&pf_rd_r=C5MZHH5TH5F868B6FQWD&pf_rd_p=8086b6c9-ae16-5c3c-a879-030afa4ee08f&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=2826478031',
'protein':'https://www.amazon.co.uk/b/?node=2826510031&ref_=Oct_s9_apbd_odnav_hd_bw_b35Hc3L_0&pf_rd_r=C5MZHH5TH5F868B6FQWD&pf_rd_p=8086b6c9-ae16-5c3c-a879-030afa4ee08f&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=2826478031',
'fat_burner':'https://www.amazon.co.uk/b/?node=5977737031&ref_=Oct_s9_apbd_odnav_hd_bw_b35Hc3L_2&pf_rd_r=C5MZHH5TH5F868B6FQWD&pf_rd_p=8086b6c9-ae16-5c3c-a879-030afa4ee08f&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=2826478031'},
'vitamins_dietary':{'supplements':'https://www.amazon.co.uk/b/?_encoding=UTF8&node=2826534031&bbn=65801031&ref_=Oct_s9_apbd_odnav_hd_bw_b35Hdc7_2&pf_rd_r=AY01DQVCB4SE7VVE7MTK&pf_rd_p=1ecdbf02-af23-502a-b7ab-9916ddd6690c&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=2826484031',
'multivitamins':'https://www.amazon.co.uk/b/?_encoding=UTF8&node=2826506031&bbn=65801031&ref_=Oct_s9_apbd_odnav_hd_bw_b35Hdc7_1&pf_rd_r=AY01DQVCB4SE7VVE7MTK&pf_rd_p=1ecdbf02-af23-502a-b7ab-9916ddd6690c&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=2826484031'}},
'wellness':{'massage_oil':'https://www.amazon.co.uk/b/?node=3360479031&ref_=Oct_s9_apbd_odnav_hd_bw_b50nmJ_4&pf_rd_r=GYVYF52HT2004EDTY67W&pf_rd_p=3f8e4361-c00b-588b-a07d-ff259bf98bbc&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=74073031',
'ayurveda':'https://www.amazon.co.uk/s?k=ayurveda&rh=n%3A65801031%2Cn%3A2826449031&dc&qid=1581686978&rnid=1642204031&ref=sr_nr_n_22'},
'personal_accessories':{'bags':{'women':{'clutches':'https://www.amazon.co.uk/b/?node=1769563031&ref_=Oct_s9_apbd_odnav_hd_bw_b1vkt8h_3&pf_rd_r=VC8RX89R4V4JJ5TEBANF&pf_rd_p=cefca17f-8dac-5c80-848f-812aff1bfdd7&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=1769559031',
'crossbody':'https://www.amazon.co.uk/b/?node=1769564031&ref_=Oct_s9_apbd_odnav_hd_bw_b1vkt8h_1&pf_rd_r=VC8RX89R4V4JJ5TEBANF&pf_rd_p=cefca17f-8dac-5c80-848f-812aff1bfdd7&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=1769559031',
'fashion':'https://www.amazon.co.uk/b/?node=1769560031&ref_=Oct_s9_apbd_odnav_hd_bw_b1vkt8h_5&pf_rd_r=VC8RX89R4V4JJ5TEBANF&pf_rd_p=cefca17f-8dac-5c80-848f-812aff1bfdd7&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=1769559031',
'hobo':'https://www.amazon.co.uk/b/?node=1769565031&ref_=Oct_s9_apbd_odnav_hd_bw_b1vkt8h_4&pf_rd_r=VC8RX89R4V4JJ5TEBANF&pf_rd_p=cefca17f-8dac-5c80-848f-812aff1bfdd7&pf_rd_s=merchandised-search-11&pf_rd_t=BROWSE&pf_rd_i=1769559031'}},
'jewelry':{'anklets':'https://www.amazon.co.uk/s/ref=lp_10382835031_nr_n_0?fst=as%3Aoff&rh=n%3A193716031%2Cn%3A%21193717031%2Cn%3A10382835031%2Cn%3A10382860031&bbn=10382835031&ie=UTF8&qid=1581687575&rnid=10382835031',
'bracelets':'https://www.amazon.co.uk/s/ref=lp_10382835031_nr_n_1?fst=as%3Aoff&rh=n%3A193716031%2Cn%3A%21193717031%2Cn%3A10382835031%2Cn%3A10382861031&bbn=10382835031&ie=UTF8&qid=1581687575&rnid=10382835031',
'earrings':'https://www.amazon.co.uk/s/ref=lp_10382835031_nr_n_4?fst=as%3Aoff&rh=n%3A193716031%2Cn%3A%21193717031%2Cn%3A10382835031%2Cn%3A10382865031&bbn=10382835031&ie=UTF8&qid=1581687575&rnid=10382835031',
'necklaces':'https://www.amazon.co.uk/s/ref=lp_10382835031_nr_n_7?fst=as%3Aoff&rh=n%3A193716031%2Cn%3A%21193717031%2Cn%3A10382835031%2Cn%3A10382868031&bbn=10382835031&ie=UTF8&qid=1581687575&rnid=10382835031',
'rings':'https://www.amazon.co.uk/s/ref=lp_10382835031_nr_n_10?fst=as%3Aoff&rh=n%3A193716031%2Cn%3A%21193717031%2Cn%3A10382835031%2Cn%3A10382871031&bbn=10382835031&ie=UTF8&qid=1581687575&rnid=10382835031'},
'artisan_fabrics':'https://www.amazon.co.uk/s?k=fabric&rh=n%3A11052681%2Cn%3A3063518031&dc&qid=1581687726&rnid=1642204031&ref=a9_sc_1'}}
amazon_india = {'health_and_beauty':{'hair_products':{'shampoo':'https://www.amazon.in/b/ref=s9_acss_bw_cg_btyH1_2a1_w?ie=UTF8&node=1374334031&pf_rd_m=A1K21FY43GMZF8&pf_rd_s=merchandised-search-5&pf_rd_r=JHDJ4QHM0APVS05NGF4G&pf_rd_t=101&pf_rd_p=41b9c06b-1514-47de-a1c6-f4f13fb55ffe&pf_rd_i=1374305031',
'conditioner':'https://www.amazon.in/b/ref=s9_acss_bw_cg_btyH1_2b1_w?ie=UTF8&node=1374306031&pf_rd_m=A1K21FY43GMZF8&pf_rd_s=merchandised-search-5&pf_rd_r=CBABMCW6C69JRBGZNWWP&pf_rd_t=101&pf_rd_p=41b9c06b-1514-47de-a1c6-f4f13fb55ffe&pf_rd_i=1374305031',
'treatment_oil':''},
'skin_care':[],
'wellness_product':[]},
'food':{'tea':[],
'coffee':[],
'dried_fruits':[],
'nuts':[],
'supplements':[]},
'personal_accessories':{'bags':[],
'jewelry':[],
'artisan_fabrics':[]}}
amazon_aus = {'health_and_beauty':{'hair_products':{'shampoo':'https://www.amazon.com.au/b/?_encoding=UTF8&node=5150253051&bbn=4851917051&ref_=Oct_s9_apbd_odnav_hd_bw_b5cXATz&pf_rd_r=6SEM7GFDN7CQ2W4KXM9M&pf_rd_p=9dd4b462-1094-5e36-890d-bb1b694c8b53&pf_rd_s=merchandised-search-12&pf_rd_t=BROWSE&pf_rd_i=5150070051',
'conditioner':'https://www.amazon.com.au/b/?_encoding=UTF8&node=5150226051&bbn=4851917051&ref_=Oct_s9_apbd_odnav_hd_bw_b5cXATz&pf_rd_r=6SEM7GFDN7CQ2W4KXM9M&pf_rd_p=9dd4b462-1094-5e36-890d-bb1b694c8b53&pf_rd_s=merchandised-search-12&pf_rd_t=BROWSE&pf_rd_i=5150070051'},
'skin_care':[],
'wellness_product':[]},
'food':{'tea':[],
'coffee':[],
'dried_fruits':[],
'nuts':[],
'supplements':[]},
'personal_accessories':{'bags':[],
'jewelry':[],
'artisan_fabrics':[]}}
amazon = {'USA':amazon_usa,
'UK':amazon_uk,
'India':amazon_india,
'Australia':amazon_aus}
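# Hedged addition (assumptions, not from the original notebook): the helper functions below
# rely on imports and a base-URL lookup defined in cells not shown in this excerpt.
# The following lines are a minimal sketch of what such a setup cell could contain.
import re
import time
import sqlite3
import pandas as pd
import boto3
import botocore.exceptions
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.action_chains import ActionChains
# Assumed mapping used by browser_link() to build absolute product URLs
countries_link = {'USA': 'https://www.amazon.com', 'UK': 'https://www.amazon.co.uk', 'India': 'https://www.amazon.in', 'Australia': 'https://www.amazon.com.au'}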
def hover(browser, xpath):
'''
This function performs an automated mouse hover over an element in the
selenium webdriver, located by its xpath.
PARAMETER
---------
browser: Selenium based webbrowser
xpath: str
xpath of the element in the webpage where hover operation has to be
performed.
'''
element_to_hover_over = browser.find_element_by_xpath(xpath)
hover = ActionChains(browser).move_to_element(element_to_hover_over)
hover.perform()
element_to_hover_over.click()
def browser(link):
'''This function opens a Selenium-based Chrome browser specifically tuned
to work with Amazon product (single item) webpages. Its functionality
includes translating the webpage, clicking the initial popups, and hovering
over product images so that the images can be scraped.
PARAMETER
---------
link: str
Amazon Product item link
RETURN
------
driver: Selenium web browser with operated functions
'''
options = Options()
prefs = {
"translate_whitelists": {"ja":"en","de":'en'},
"translate":{"enabled":"true"}
}
# helium = r'C:\Users\Dell-pc\AppData\Local\Google\Chrome\User Data\Default\Extensions\njmehopjdpcckochcggncklnlmikcbnb\4.2.12_0'
# options.add_argument(helium)
options.add_experimental_option("prefs", prefs)
options.headless = True
driver = webdriver.Chrome(chrome_options=options)
driver.get(link)
try:
driver.find_element_by_xpath('//*[@id="nav-main"]/div[1]/div[2]/div/div[3]/span[1]/span/input').click()
except:
pass
try:
hover(driver,'//*[@id="altImages"]/ul/li[3]')
except:
pass
try:
driver.find_element_by_xpath('//*[@id="a-popover-6"]/div/header/button/i').click()
except:
pass
try:
hover(driver,'//*[@id="altImages"]/ul/li[4]')
except:
pass
try:
driver.find_element_by_xpath('//*[@id="a-popover-6"]/div/header/button/i').click()
except:
pass
try:
hover(driver,'//*[@id="altImages"]/ul/li[5]')
except:
pass
try:
driver.find_element_by_xpath('//*[@id="a-popover-6"]/div/header/button/i').click()
except:
pass
try:
hover(driver,'//*[@id="altImages"]/ul/li[6]')
except:
pass
try:
driver.find_element_by_xpath('//*[@id="a-popover-6"]/div/header/button/i').click()
except:
pass
try:
hover(driver,'//*[@id="altImages"]/ul/li[7]')
except:
pass
try:
driver.find_element_by_xpath('//*[@id="a-popover-6"]/div/header/button/i').click()
except:
pass
try:
hover(driver,'//*[@id="altImages"]/ul/li[8]')
except:
pass
try:
driver.find_element_by_xpath('//*[@id="a-popover-6"]/div/header/button/i').click()
except:
pass
try:
hover(driver,'//*[@id="altImages"]/ul/li[9]')
except:
pass
try:
driver.find_element_by_xpath('//*[@id="a-popover-6"]/div/header/button/i').click()
except:
pass
return driver
def scroll_temp(driver):
'''
Automated Scroller in Selenium Webbrowser
PARAMETER
---------
driver: Selenium Webbrowser
'''
pre_scroll_height = driver.execute_script('return document.body.scrollHeight;')
run_time, max_run_time = 0, 2
while True:
iteration_start = time.time()
# Scroll webpage, the 100 allows for a more 'aggressive' scroll
driver.execute_script('window.scrollTo(0,0.6*document.body.scrollHeight);')
post_scroll_height = driver.execute_script('return document.body.scrollHeight;')
scrolled = post_scroll_height != pre_scroll_height
timed_out = run_time >= max_run_time
if scrolled:
run_time = 0
pre_scroll_height = post_scroll_height
elif not scrolled and not timed_out:
run_time += time.time() - iteration_start
elif not scrolled and timed_out:
break
def scroll(driver):
scroll_temp(driver)
from selenium.common.exceptions import NoSuchElementException
try:
element = driver.find_element_by_xpath('//*[@id="reviewsMedley"]/div/div[1]')
except NoSuchElementException:
try:
element = driver.find_element_by_xpath('//*[@id="reviewsMedley"]')
except NoSuchElementException:
element = driver.find_element_by_xpath('//*[@id="detail-bullets_feature_div"]')
actions = ActionChains(driver)
actions.move_to_element(element).perform()
# def scroll(driver):
# scroll_temp(driver)
# from selenium.common.exceptions import NoSuchElementException
# try:
# try:
# element = driver.find_element_by_xpath('//*[@id="reviewsMedley"]/div/div[1]')
# except NoSuchElementException:
# try:
# element = driver.find_element_by_xpath('//*[@id="reviewsMedley"]')
# except NoSuchElementException:
# element = driver.find_element_by_xpath('//*[@id="detail-bullets_feature_div"]')
# actions = ActionChains(driver)
# actions.move_to_element(element).perform()
# except NoSuchElementException:
# pass
def browser_link(product_link,country):
'''Returns all the web links of the products based on the first
page of the product category. It captures product links from all the pages for
that specific product.
PARAMETER
---------
product_link: str
The initial web link of the product page. This is generally the
first page of all the items for that specific product
RETURN
------
links: list
It is a list of strings which contains all the links of the items
for the specific product
'''
driver = browser(product_link)
soup = BeautifulSoup(driver.page_source, 'lxml')
try:
pages_soup = soup.findAll("ul",{"class":"a-pagination"})
pages = int(pages_soup[0].findAll("li",{'class':'a-disabled'})[1].text)
except:
pass
try:
pages_soup = soup.findAll("div",{"id":"pagn"})
pages = int(pages_soup[0].findAll("span",{'class':'pagnDisabled'})[0].text)
except:
try:
pages_soup = soup.findAll("div",{"id":"pagn"})
pages = int(pages_soup[0].findAll("span",{'class':'pagnDisabled'})[1].text)
except:
pass
print(pages)
links = []
for page in range(1,pages+1):
print(page)
link_page = product_link + '&page=' + str(page)
driver_temp = browser(link_page)
time.sleep(2)
soup_temp = BeautifulSoup(driver_temp.page_source, 'lxml')
try:
search = soup_temp.findAll("div",{"id":"mainResults"})
temp_search = search[1].findAll("a",{'class':'a-link-normal s-access-detail-page s-color-twister-title-link a-text-normal'})
for i in range(len(temp_search)):
if country == 'UK':
link = temp_search[i].get('href')
else:
link = countries_link[country] + temp_search[i].get('href')
links.append(link)
print(len(links))
except:
try:
search = soup_temp.findAll("div",{"class":"s-result-list s-search-results sg-row"})
temp_search = search[1].findAll("h2")
if len(temp_search) < 2:
for i in range(len(search[0].findAll("h2"))):
temp = search[0].findAll("h2")[i]
for j in range(len(temp.findAll('a'))):
link = countries_link[country]+temp.findAll('a')[j].get('href')
links.append(link)
print(len(links))
else:
for i in range(len(search[1].findAll("h2"))):
temp = search[1].findAll("h2")[i]
for j in range(len(temp.findAll('a'))):
link = countries_link[country]+temp.findAll('a')[j].get('href')
links.append(link)
print(len(links))
except:
pass
try:
search = soup_temp.findAll("div",{"id":"mainResults"})
temp_search = search[0].findAll("a",{'class':'a-link-normal s-access-detail-page s-color-twister-title-link a-text-normal'})
for i in range(len(temp_search)):
if country == 'UK':
link = temp_search[i].get('href')
else:
link = countries_link[country] + temp_search[i].get('href')
links.append(link)
print(len(links))
except:
try:
search = soup_temp.findAll("div",{"class":"s-result-list s-search-results sg-row"})
temp_search = search[1].findAll("h2")
if len(temp_search) < 2:
for i in range(len(search[0].findAll("h2"))):
temp = search[0].findAll("h2")[i]
for j in range(len(temp.findAll('a'))):
link = countries_link[country]+temp.findAll('a')[j].get('href')
links.append(link)
print(len(links))
else:
for i in range(len(search[1].findAll("h2"))):
temp = search[1].findAll("h2")[i]
for j in range(len(temp.findAll('a'))):
link = countries_link[country]+temp.findAll('a')[j].get('href')
links.append(link)
print(len(links))
except:
print('Not Scrapable')
links = []
return links
def indexes(amazon_links,link_list):
amazon_dict = amazon_links
if len(link_list) == 5:
return amazon_dict[link_list[0]][link_list[1]][link_list[2]][link_list[3]][link_list[4]]
elif len(link_list) == 4:
return amazon_dict[link_list[0]][link_list[1]][link_list[2]][link_list[3]]
elif len(link_list) == 3:
return amazon_dict[link_list[0]][link_list[1]][link_list[2]]
elif len(link_list) == 2:
return amazon_dict[link_list[0]][link_list[1]]
elif len(link_list) == 1:
return amazon_dict[link_list[0]]
else:
return print("Invalid Product")
def products_links(country, **kwargs):
amazon_links = amazon[country]
directory_temp = []
for key, value in kwargs.items():
directory_temp.append(value)
directory = '/'.join(directory_temp)
print(directory)
product_link = indexes(amazon_links,directory_temp)
main_links = browser_link(product_link,country=country)
return main_links,directory
###Output
_____no_output_____
###Markdown
Product Scraper Function
###Code
def delete_images(filename):
import os
file_path = '/home/jishu/Amazon_UK/'
os.remove(file_path + filename)
def upload_s3(filename,key):
key_id = 'AKIAWR6YW7N5ZKW35OJI'
access_key = 'h/xrcI9A2SRU0ds+zts4EClKAqbzU+/iXdiDcgzm'
bucket_name = 'amazon-data-ecfullfill'
s3 = boto3.client('s3',aws_access_key_id=key_id,
aws_secret_access_key=access_key)
try:
s3.upload_file(filename,bucket_name,key)
except FileNotFoundError:
pass
def product_info(link,directory,country):
'''Get all the product information of an Amazon Product'''
#Opening Selenium Webdrive with Amazon product
driver = browser(link)
time.sleep(4)
scroll(driver)
time.sleep(2)
#Initializing BeautifulSoup operation in selenium browser
selenium_soup = BeautifulSoup(driver.page_source, 'lxml')
time.sleep(2)
#Product Title
try:
product_title = driver.find_element_by_xpath('//*[@id="productTitle"]').text
except:
product_title = 'Not Scrapable'
print(product_title)
#Ratings - Star
try:
rating_star = float(selenium_soup.findAll('span',{'class':'a-icon-alt'})[0].text.split()[0])
except:
rating_star = 'Not Scrapable'
print(rating_star)
#Rating - Overall
try:
overall_rating = int(selenium_soup.findAll('span',{'id':'acrCustomerReviewText'})[0].text.split()[0].replace(',',''))
except:
overall_rating = 'Not Scrapable'
print(overall_rating)
#Company
try:
company = selenium_soup.findAll('a',{'id':'bylineInfo'})[0].text
except:
company = 'Not Scrapable'
print(country)
#Price
try:
if country=='UK':
denomination = selenium_soup.findAll('span',{'id':'priceblock_ourprice'})[0].text[:3]
price = float(selenium_soup.findAll('span',{'id':'priceblock_ourprice'})[0].text[3:])
else:
denomination = selenium_soup.findAll('span',{'id':'priceblock_ourprice'})[0].text[0]
price = float(selenium_soup.findAll('span',{'id':'priceblock_ourprice'})[0].text[1:])
except:
try:
if country=='UK':
try:
price = float(selenium_soup.findAll('span',{'id':'priceblock_ourprice'})[0].text[3:].replace(',',''))
except:
price = float(selenium_soup.findAll('span',{'id':'priceblock_dealprice'})[0].text[3:].replace(',',''))
else:
try:
price = float(selenium_soup.findAll('span',{'id':'priceblock_ourprice'})[0].text[3:].replace(',',''))
except:
price = float(selenium_soup.findAll('span',{'id':'priceblock_dealprice'})[0].text[3:].replace(',',''))
except:
denomination = 'Not Scrapable'
price = 'Not Scrapable'
print(denomination,price)
#Product Highlights
try:
temp_ph = selenium_soup.findAll('ul',{'class':'a-unordered-list a-vertical a-spacing-none'})[0].findAll('li')
counter_ph = len(temp_ph)
product_highlights = []
for i in range(counter_ph):
raw = temp_ph[i].text
clean = raw.strip()
product_highlights.append(clean)
product_highlights = '<CPT14>'.join(product_highlights)
except:
try:
temp_ph = selenium_soup.findAll('div',{'id':'rich-product-description'})[0].findAll('p')
counter_ph = len(temp_ph)
product_highlights = []
for i in range(counter_ph):
raw = temp_ph[i].text
clean = raw.strip()
product_highlights.append(clean)
product_highlights = '<CPT14>'.join(product_highlights)
except:
product_highlights = 'Not Available'
print(product_highlights)
#Product Details/Dimensions:
#USA
try:
temp_pd = selenium_soup.findAll('div',{'class':'content'})[0].findAll('ul')[0].findAll('li')
counter_pd = len(temp_pd)
for i in range(counter_pd):
try:
if re.findall('ASIN',temp_pd[i].text)[0]:
try:
asin = temp_pd[i].text.split(' ')[1]
except:
pass
except IndexError:
pass
try:
if re.findall('Product Dimensions|Product Dimension|Product dimensions',temp_pd[i].text)[0]:
pd_temp = temp_pd[i].text.strip().split('\n')[2].strip().split(';')
try:
product_length = float(pd_temp[0].split('x')[0])
except IndexError:
pass
try:
product_width = float(pd_temp[0].split('x')[1])
except IndexError:
pass
try:
product_height = float(pd_temp[0].split('x')[2].split(' ')[1])
except IndexError:
pass
try:
pd_unit = pd_temp[0].split('x')[2].split(' ')[2]
except IndexError:
pass
try:
product_weight = float(pd_temp[1].split(' ')[1])
except IndexError:
pass
try:
weight_unit = pd_temp[1].split(' ')[2]
except IndexError:
pass
except:
pass
try:
if re.findall('Shipping Weight|Shipping weight|shipping weight',temp_pd[i].text)[0]:
sweight_temp = temp_pd[i].text.split(':')[1].strip().split(' ')
shipping_weight = float(sweight_temp[0])
shipping_weight_unit = sweight_temp[1]
except IndexError:
pass
try:
if re.findall('Amazon Best Sellers Rank|Amazon Bestsellers Rank',temp_pd[i].text)[0]:
x = temp_pd[i].text.replace('\n','').split(' ')
indexes = []
for j,k in enumerate(x):
if re.findall('#',k):
indexes.append(j)
try:
best_seller_cat = int(temp_pd[i].text.strip().replace('\n','').split(' ')[3].replace(',',''))
best_seller_prod = int(x[indexes[0]].split('#')[1].split('in')[0])
except:
try:
best_seller_cat = x[indexes[0]].split('#')[1]
except:
pass
try:
best_seller_prod = x[indexes[1]].split('#')[1].split('in')[0]
except:
pass
except IndexError:
pass
print(asin)
except:
pass
try:
temp_pd = selenium_soup.findAll('div',{'class':'content'})[1].findAll('ul')[0].findAll('li')
counter_pd = len(temp_pd)
for i in range(counter_pd):
try:
if re.findall('ASIN',temp_pd[i].text)[0]:
try:
asin = temp_pd[i].text.split(' ')[1]
except:
pass
except IndexError:
pass
try:
if re.findall('Product Dimensions|Product Dimension|Product dimensions',temp_pd[i].text)[0]:
pd_temp = temp_pd[i].text.strip().split('\n')[2].strip().split(';')
try:
product_length = float(pd_temp[0].split('x')[0])
except IndexError:
pass
try:
product_width = float(pd_temp[0].split('x')[1])
except IndexError:
pass
try:
product_height = float(pd_temp[0].split('x')[2].split(' ')[1])
except IndexError:
pass
try:
pd_unit = pd_temp[0].split('x')[2].split(' ')[2]
except IndexError:
pass
try:
product_weight = float(pd_temp[1].split(' ')[1])
except IndexError:
pass
try:
weight_unit = pd_temp[1].split(' ')[2]
except IndexError:
pass
except:
pass
try:
if re.findall('Shipping Weight|Shipping weight|shipping weight',temp_pd[i].text)[0]:
sweight_temp = temp_pd[i].text.split(':')[1].strip().split(' ')
shipping_weight = float(sweight_temp[0])
shipping_weight_unit = sweight_temp[1]
except IndexError:
pass
try:
if re.findall('Amazon Best Sellers Rank|Amazon Bestsellers Rank',temp_pd[i].text)[0]:
x = temp_pd[i].text.replace('\n','').split(' ')
indexes = []
for j,k in enumerate(x):
if re.findall('#',k):
indexes.append(j)
try:
best_seller_cat = int(temp_pd[i].text.strip().replace('\n','').split(' ')[3].replace(',',''))
best_seller_prod = int(x[indexes[0]].split('#')[1].split('in')[0])
except:
try:
best_seller_cat = x[indexes[0]].split('#')[1]
except:
pass
try:
best_seller_prod = x[indexes[1]].split('#')[1].split('in')[0]
except:
pass
except IndexError:
pass
print(asin)
except:
pass
#India
try:
temp_pd = selenium_soup.findAll('div',{'class':'content'})[0].findAll('ul')[0].findAll('li')
counter_pd = len(temp_pd)
for i in range(counter_pd):
try:
if re.findall('ASIN',temp_pd[i].text)[0]:
asin = temp_pd[i].text.split(' ')[1]
except:
pass
try:
if re.findall('Product Dimensions|Product Dimension|Product dimensions',temp_pd[i].text)[0]:
pd_temp = temp_pd[i].text.strip().split('\n')[2].strip().split(' ')
try:
product_length = float(pd_temp[0])
except:
pass
try:
product_width = float(pd_temp[2])
except:
pass
try:
product_height = float(pd_temp[4])
except:
pass
try:
pd_unit = pd_temp[5]
except:
pass
try:
product_weight = float(pd_temp[1].split(' ')[1])
except:
pass
try:
weight_unit = pd_temp[1].split(' ')[2]
except:
pass
print(asin)
except IndexError:
pass
try:
if re.findall('Shipping Weight|Shipping weight|shipping weight',temp_pd[i].text)[0]:
sweight_temp = temp_pd[i].text.split(':')[1].strip().split(' ')
shipping_weight = float(sweight_temp[0])
shipping_weight_unit = sweight_temp[1]
except IndexError:
pass
try:
if re.findall('Item Weight|Product Weight|Item weight|Product weight|Boxed-product Weight',temp_pd[i].text)[0]:
pd_weight_temp = temp_pd[i].text.replace('\n','').strip().split(' ')[1].strip()
product_weight = float(pd_weight_temp.split(' ')[0])
weight_unit = pd_weight_temp.split(' ')[1]
except IndexError:
pass
try:
if re.findall('Amazon Best Sellers Rank|Amazon Bestsellers Rank',temp_pd[i].text)[0]:
x = temp_pd[i].text.strip().replace('\n','').split(' ')
indexes = []
for j,k in enumerate(x):
if re.findall('#',k):
indexes.append(j)
try:
best_seller_cat = int(temp_pd[i].text.strip().replace('\n','').split(' ')[3].replace(',',''))
best_seller_prod = int(x[indexes[0]].split('#')[1].split('in')[0])
except:
try:
best_seller_cat = x[indexes[0]].split('#')[1]
except:
pass
try:
best_seller_prod = x[indexes[1]].split('#')[1].split('in')[0]
except:
pass
except IndexError:
pass
print(asin)
except:
pass
try:
try:
asin = list(selenium_soup.findAll('div',{'class':'pdTab'})[1].findAll('tr')[0].findAll('td')[1])[0]
except:
pass
try:
dimensions = list(selenium_soup.findAll('div',{'class':'pdTab'})[0].findAll('tr')[0].findAll('td')[1])[0]
except:
pass
try:
weight_temp = list(selenium_soup.findAll('div',{'class':'pdTab'})[1].findAll('tr')[1].findAll('td')[1])[0]
except:
pass
try:
best_seller_cat = float(list(selenium_soup.findAll('div',{'class':'pdTab'})[1].findAll('tr')[5].findAll('td')[1])[0].split('\n')[-1].split(' ')[0].replace(',',''))
except:
pass
try:
best_seller_prod = int(list(list(list(list(selenium_soup.findAll('div',{'class':'pdTab'})[1].findAll('tr')[5].findAll('td')[1])[5])[1])[1])[0].replace('#',''))
except:
pass
try:
product_length = float(dimensions.split('x')[0])
except:
pass
try:
product_width = float(dimensions.split('x')[1])
except:
pass
try:
product_height = float(dimensions.split('x')[2].split(' ')[1])
except:
pass
try:
product_weight = weight_temp.split(' ')[0]
except:
pass
try:
weight_unit = weight_temp.split(' ')[1]
except:
pass
try:
pd_unit = dimensions.split(' ')[-1]
except:
pass
print(asin)
except:
try:
for j in [0,1]:
temp_pd = selenium_soup.findAll('table',{'class':'a-keyvalue prodDetTable'})[j].findAll('tr')
for i in range(len(temp_pd)):
if re.findall('ASIN',temp_pd[i].text):
asin = temp_pd[i].text.strip().split('\n')[3].strip()
if re.findall('Item Model Number|Item model number',temp_pd[i].text):
bait = temp_pd[i].text.strip().split('\n')[3].strip()
if re.findall('Best Sellers Rank|Amazon Best Sellers Rank|Amazon Bestsellers Rank',temp_pd[i].text):
x = temp_pd[i].text.strip().replace('\n','').split(' ')
indexes = []
for j,k in enumerate(x):
if re.findall('#',k):
indexes.append(j)
best_seller_cat = int(x[indexes[0]].split('#')[1])
best_seller_prod = int(x[indexes[1]].split('#')[1].split('in')[0])
if re.findall('Product Dimensions|Product dimension|Product Dimension',temp_pd[i].text):
dimensions = temp_pd[i].text.strip().split('\n')[3].strip().split('x')
product_length = float(dimensions[0].strip())
product_width = float(dimensions[1].strip())
product_height = float(dimensions[2].strip().split(' ')[0])
pd_unit = dimensions[2].strip().split(' ')[1]
if re.findall('Item Weight|Product Weight|Item weight|Boxed-product Weight',temp_pd[i].text):
weight_temp = temp_pd[i].text.strip().split('\n')[3].strip()
product_weight = float(weight_temp.split(' ')[0])
weight_unit = weight_temp.split(' ')[1]
if re.findall('Shipping Weight|Shipping weight|shipping weight',temp_pd[i].text):
sweight_temp = temp_pd[i].text.replace('\n','').strip().split(' ')[1].lstrip().split(' ')
shipping_weight = float(sweight_temp[0])
shipping_weight_unit = sweight_temp[1]
print(asin,bait)
except:
try:
temp_pd = selenium_soup.findAll('div',{'id':'prodDetails'})[0].findAll('tr')
for i in range(len(temp_pd)):
if re.findall('ASIN',temp_pd[i].text):
asin = temp_pd[i].text.strip().split('\n')[3].strip()
if re.findall('Best Sellers Rank|Amazon Best Sellers Rank|Amazon Bestsellers Rank',temp_pd[i].text):
x = temp_pd[i].text.strip().replace('\n','').split(' ')
indexes = []
for j,k in enumerate(x):
if re.findall('#',k):
indexes.append(j)
best_seller_cat = int(x[indexes[0]].split('#')[1])
best_seller_prod = int(x[indexes[1]].split('#')[1].split('in')[0])
if re.findall('Product Dimensions|Product dimension|Product Dimension',temp_pd[i].text):
dimensions = temp_pd[i].text.strip().split('\n')[3].strip().split('x')
product_length = float(dimensions[0].strip())
product_width = float(dimensions[1].strip())
product_height = float(dimensions[2].strip().split(' ')[0])
pd_unit = dimensions[2].strip().split(' ')[1]
if re.findall('Item Weight|Product Weight|Item weight|Boxed-product Weight',temp_pd[i].text):
weight_temp = temp_pd[i].text.strip().split('\n')[3].strip()
product_weight = float(weight_temp.split(' ')[0])
weight_unit = weight_temp.split(' ')[1]
if re.findall('Shipping Weight|Shipping weight|shipping weight',temp_pd[i].text):
sweight_temp = temp_pd[i].text.replace('\n','').strip().split(' ')[1].lstrip().split(' ')
shipping_weight = float(sweight_temp[0])
shipping_weight_unit = sweight_temp[1]
except:
try:
temp_pd = selenium_soup.findAll('div',{'id':'detail_bullets_id'})[0].findAll('tr')[0].findAll('li')
for i in range(len(temp_pd)):
if re.findall('ASIN',temp_pd[i].text):
asin = temp_pd[i].text.strip().split(':')[1].strip()
if re.findall('Best Sellers Rank|Amazon Best Sellers Rank|Amazon Bestsellers Rank',temp_pd[i].text):
x = temp_pd[i].text.strip().replace('\n','').split(' ')
indexes = []
for j,k in enumerate(x):
if re.findall('#',k):
indexes.append(j)
best_seller_cat = int(x[indexes[0]].split('#')[1])
best_seller_prod = int(x[indexes[1]].split('#')[1].split('in')[0])
if re.findall('Product Dimensions|Product dimension|Product Dimension',temp_pd[i].text):
dimensions = temp_pd[i].text.strip().split('\n')[2].strip().split('x')
product_length = float(dimensions[0].strip())
product_width = float(dimensions[1].strip())
product_height = float(dimensions[2].strip().split(' ')[0])
pd_unit = dimensions[2].strip().split(' ')[1]
if re.findall('Item Weight|Product Weight|Item weight|Boxed-product Weight',temp_pd[i].text):
weight_temp = temp_pd[i].text.strip().split('\n')[2].strip()
product_weight = float(weight_temp.split(' ')[0])
weight_unit = weight_temp.split(' ')[1]
if re.findall('Shipping Weight|Shipping weight|shipping weight',temp_pd[i].text):
sweight_temp = temp_pd[i].text.replace('\n','').strip().split(' ')[1].lstrip().split(' ')
shipping_weight = float(sweight_temp[0])
shipping_weight_unit = sweight_temp[1]
except:
pass
try:
print(asin)
except NameError:
asin = 'Not Scrapable'
try:
print(best_seller_cat)
except NameError:
best_seller_cat = 'Not Scrapable'
try:
print(best_seller_prod)
except NameError:
best_seller_prod = 'Not Scrapable'
try:
print(product_length)
except NameError:
product_length = 'Not Scrapable'
try:
print(product_width)
except NameError:
product_width = 'Not Scrapable'
try:
print(product_height)
except NameError:
product_height = 'Not Scrapable'
try:
print(product_weight)
except NameError:
product_weight = 'Not Scrapable'
try:
print(weight_unit)
except NameError:
weight_unit = 'Not Scrapable'
try:
print(pd_unit)
except NameError:
pd_unit = 'Not Scrapable'
try:
print(shipping_weight_unit)
except NameError:
shipping_weight_unit = 'Not Scrapable'
try:
print(shipping_weight)
except NameError:
shipping_weight = 'Not Scrapable'
print(product_length,product_width,product_height,product_weight,asin,pd_unit,
best_seller_cat,best_seller_prod,weight_unit,shipping_weight,shipping_weight_unit)
#Customer Review Ratings - Overall
time.sleep(0.5)
try:
temp_crr = selenium_soup.findAll('table',{'id':'histogramTable'})[1].findAll('a')
crr_main = {}
crr_temp = []
counter_crr = len(temp_crr)
for i in range(counter_crr):
crr_temp.append(temp_crr[i]['title'])
crr_temp = list(set(crr_temp))
for j in range(len(crr_temp)):
crr_temp[j] = crr_temp[j].split(' ')
stopwords = ['stars','represent','of','rating','reviews','have']
for word in list(crr_temp[j]):
if word in stopwords:
crr_temp[j].remove(word)
print(crr_temp[j])
try:
if re.findall(r'%',crr_temp[j][1])[0]:
crr_main.update({int(crr_temp[j][0]): int(crr_temp[j][1].replace('%',''))})
except:
crr_main.update({int(crr_temp[j][1]): int(crr_temp[j][0].replace('%',''))})
except:
try:
temp_crr = selenium_soup.findAll('table',{'id':'histogramTable'})[1].findAll('span',{'class':'a-offscreen'})
crr_main = {}
counter_crr = len(temp_crr)
star = counter_crr
for i in range(counter_crr):
crr_main.update({star:int(temp_crr[i].text.strip().split('/n')[0].split(' ')[0].replace('%',''))})
star -= 1
except:
pass
try:
crr_5 = crr_main[5]
except:
crr_5 = 0
try:
crr_4 = crr_main[4]
except:
crr_4 = 0
try:
crr_3 = crr_main[3]
except:
crr_3 = 0
try:
crr_2 = crr_main[2]
except:
crr_2 = 0
try:
crr_1 = crr_main[1]
except:
crr_1 = 0
#Customer Review Ratings - By Feature
time.sleep(1)
try:
driver.find_element_by_xpath('//*[@id="cr-summarization-attributes-list"]/div[4]/a/span').click()
temp_fr = driver.find_element_by_xpath('//*[@id="cr-summarization-attributes-list"]').text
temp_fr = temp_fr.split('\n')
crr_feature_title = []
crr_feature_rating = []
for i in [0,2,4]:
crr_feature_title.append(temp_fr[i])
for j in [1,3,5]:
crr_feature_rating.append(temp_fr[j])
crr_feature = dict(zip(crr_feature_title,crr_feature_rating))
except:
try:
temp_fr = driver.find_element_by_xpath('//*[@id="cr-summarization-attributes-list"]').text
temp_fr = temp_fr.split('\n')
crr_feature_title = []
crr_feature_rating = []
for i in [0,2,4]:
crr_feature_title.append(temp_fr[i])
for j in [1,3,5]:
crr_feature_rating.append(temp_fr[j])
crr_feature = dict(zip(crr_feature_title,crr_feature_rating))
except:
crr_feature = 'Not Defined'
try:
crr_feature_key = list(crr_feature.keys())
except:
pass
try:
crr_fr_1 = crr_feature[crr_feature_key[0]]
except:
crr_fr_1 = 0
try:
crr_fr_2 = crr_feature[crr_feature_key[1]]
except:
crr_fr_2 = 0
try:
crr_fr_3 = crr_feature[crr_feature_key[2]]
except:
crr_fr_3 = 0
#Tags:
time.sleep(1)
try:
temp_tags = selenium_soup.findAll('div',{'class':'cr-lighthouse-terms'})[0]
counter_tags = len(temp_tags)
print('Counter Tags:',counter_tags)
tags = []
for i in range(counter_tags):
tags.append(temp_tags.findAll('span')[i].text.strip())
print(tags[i])
except:
tags = ['None']
try:
for feature in crr_feature_key:
tags.append(feature)
except:
pass
tags = list(set(tags))
tags = '<CPT14>'.join(tags)
print(tags)
#Images
images = []
for i in [0,3,4,5,6,7,8,9]:
try:
images.append(selenium_soup.findAll('div',{'class':'imgTagWrapper'})[i].find('img')['src'])
except:
pass
import urllib.request
for i in range(len(images)):
if asin =='Not Scrapable':
product_image = "{}_{}.jpg".format(product_title,i)
product_image = product_image.replace('/','')
urllib.request.urlretrieve(images[i],product_image)
upload_s3("{}_{}.jpg".format(product_title,i),
directory+"/images/" + product_image)
delete_images(product_image)
else:
product_image = "{}_{}.jpg".format(asin,i)
product_image = product_image.replace('/','')
urllib.request.urlretrieve(images[i],product_image)
upload_s3("{}_{}.jpg".format(asin,i),
directory+"/images/" + product_image)
delete_images(product_image)
return [product_title,rating_star,overall_rating,company,price,
product_highlights,product_length,product_width,product_height,
product_weight,asin,pd_unit,best_seller_cat,best_seller_prod,
weight_unit,shipping_weight,shipping_weight_unit,crr_5,crr_4,
crr_3,crr_2,crr_1,crr_fr_1,crr_fr_2,crr_fr_3,tags,directory]
###Output
_____no_output_____
###Markdown
Data Wrangling
###Code
def database(product_data,**kwargs):
try:
try:
link = kwargs['link']
except KeyError:
print('Error in Link')
try:
country = kwargs['country']
except KeyError:
print("Enter Country Name")
try:
cat1 = kwargs['cat1']
except KeyError:
pass
try:
cat2 = kwargs['cat2']
except KeyError:
pass
try:
cat3 = kwargs['cat3']
except KeyError:
pass
try:
cat4 = kwargs['cat4']
except KeyError:
pass
try:
product = kwargs['product']
except KeyError:
print("Enter Product Name")
metadata = [link,country,cat1,cat2,cat3,cat4,product]
except NameError:
try:
cat4 = None
metadata = [link,country,cat1,cat2,cat3,cat4,product]
except NameError:
try:
cat4 = None
cat3 = None
metadata = [link,country,cat1,cat2,cat3,cat4,product]
except NameError:
cat4 = None
cat3 = None
cat2 = None
metadata = [link,country,cat1,cat2,cat3,cat4,product]
conn = sqlite3.connect('{}.db'.format(product))
headers = ['link','country','cat1','cat2','cat3','cat4','product','product_title',
'rating_star','overall_rating','company','price',
'product_highlights','product_length','product_width','product_height',
'product_weight','asin','pd_unit','best_seller_cat','best_seller_prod',
'weight_unit','shipping_weight','shipping_weight_unit','crr_5','crr_4',
'crr_3','crr_2','crr_1','crr_fr_1','crr_fr_2','crr_fr_3','tags','images_link']
product_data.append(metadata)
product_data = product_data[-1] + product_data[:len(product_data)-1]
temp = pd.DataFrame(data= [product_data],columns=headers)
temp.to_sql('Product',conn,if_exists='append')
upload_s3(product+'.db',directory+'/'+product+'.db')
conn.close()
def checkpoint(link_list,directory,product):
BUCKET_NAME = 'amazon-data-ecfullfill'
key_id = 'AKIAWR6YW7N5ZKW35OJI'
access_key = 'h/xrcI9A2SRU0ds+zts4EClKAqbzU+/iXdiDcgzm'
KEY = '{}/{}.db'.format(directory,product)
s3 = boto3.resource('s3',aws_access_key_id=key_id,
aws_secret_access_key=access_key)
try:
s3.Bucket(BUCKET_NAME).download_file(KEY, 'test.db')
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == "404":
print("The object does not exist.")
else:
raise
conn = sqlite3.connect('test.db')
try:
df = pd.read_sql('''SELECT * FROM Product''', conn)
product_link = df['link'].unique()
new_list = []
for i in link_list:
if i in product_link:
pass
else:
new_list.append(i)
except:
new_list = link_list
return new_list
###Output
_____no_output_____
###Markdown
Execution
###Code
#Initializing the product per Jupyter Notebook
country = 'UK'
cat1 = 'health_and_beauty'
cat2='hair_products'
# cat3='None'
# cat4 = 'None'
product='conditioner'
links,directory = products_links(country=country,category=cat1,cat2=cat2,product=product)
test_1 = {'links':links,'directory':directory}
import pickle
with open('uk_hair_prod_conditioner.pkl', 'wb') as f:
pickle.dump(test_1, f)
with open('uk_hair_prod_conditioner.pkl', 'rb') as f:
file = pickle.load(f)
links = file['links']
directory = 'Amazon_UK/health_and_beauty/hair_products/conditioner'
#replace links with new_links if interruption
for link in new_links:
data = product_info(link=link,directory=directory,country=country)
conn = sqlite3.connect('{}.db'.format(product))
database(product_data=data,link=link,country=country,
cat1=cat1,cat2=cat2,product=product)
# Run if there is an interruption
new_links = checkpoint(links,directory,product)
new_links[1:]
len(new_links)
len(links)
###Output
_____no_output_____
###Markdown
Testing the datasets in S3
###Code
BUCKET_NAME = 'amazon-data-ecfullfill' # replace with your bucket name
key_id = 'AKIAWR6YW7N5ZKW35OJI'
access_key = 'h/xrcI9A2SRU0ds+zts4EClKAqbzU+/iXdiDcgzm'
KEY = 'Amazon_USA/health_and_beauty/hair_products/shampoo/shampoo.db' # replace with your object key
s3 = boto3.resource('s3',aws_access_key_id=key_id,
aws_secret_access_key=access_key)
try:
s3.Bucket(BUCKET_NAME).download_file(KEY, 'test.db')
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == "404":
print("The object does not exist.")
else:
raise
conn = sqlite3.connect('shampoo.db')
df_USA = pd.read_sql("SELECT * FROM Product",conn)
df_USA.iloc[:,:15]
df_USA.iloc[:,15:]
len(link_db)
# def upload_s3(filename,key):
# key_id = 'AKIAWR6YW7N5ZKW35OJI'
# access_key = 'h/xrcI9A2SRU0ds+zts4EClKAqbzU+/iXdiDcgzm'
# bucket_name = 'amazon-data-ecfullfill'
# s3 = boto3.client('s3',aws_access_key_id=key_id,
# aws_secret_access_key=access_key)
# # s3.put_object(Bucket=bucket_name, Key='Amazon/health_and_beauty/hair_product/shampoo')
# s3.upload_file(filename,bucket_name,key)
###Output
_____no_output_____
|
CST2312_Class14.ipynb
|
###Markdown
**Reading** from the required textbook ([https://www.py4e.com/lessons/](https://www.py4e.com/lessons/)):
* [Regular Expressions](https://www.py4e.com/lessons/regex) (Chapter 12)
* [Data Science Cheat Sheet Python Regular Expressions](https://www.dataquest.io/wp-content/uploads/2019/03/python-regular-expressions-cheat-sheet.pdf)

So far we have been using methods like `split` and `find` to extract portions of strings or to answer the question of whether a particular item / string is part of a list-set-tuple-dictionary / longer string.

Regular Expressions
-------------------
Regular expressions (regexes or re's) constitute an extremely powerful, flexible and concise language for matching elements in text ranging from a few characters to complex patterns. While mastering the syntax of the regular expression language does require climbing a learning curve, this learning curve is not particularly steep, and a newcomer can find herself performing useful tasks with regular expressions almost immediately. Efforts spent learning regular expressions quickly pay off--tasks that are well suited for regular expressions abound. Indeed, regular expressions are one of the most useful computer skills, and an absolutely critical tool for data scientists.

This document will present basic regular expression syntax and cover common use cases for regular expressions: pattern matching, filtering, data extraction, and string replacement. We will present examples using Python's standard [re regular expression library](http://docs.python.org/library/re.html). We will discuss Python libraries in detail later.

Many examples come from this [Google tutorial](https://developers.google.com/edu/python/regular-expressions). Note that the tutorial itself uses a 2.x version of Python, so several statements (for example, print()) look different than in later versions of Python. In this notebook, the examples from the Google tutorial are converted to the current Python version.

Searching strings using regexes
The regular expression library `re` must be imported into your program before you can use it. The simplest use of the regular expression library is the `search()` function.
###Code
# first import the library
import re
inputStr = 'an example word:cat!!'
if re.search('cat', inputStr):
print(inputStr)
print ("Done with the example")
inputStr = 'an example word: cat!!'
if re.search('dog', inputStr):
print(inputStr)
print ("Done with the example")
###Output
_____no_output_____
###Markdown
We can store the result of `re.search(pat, str)` in a variable.In Python a regular expression search is typically written as:`match = re.search(pat, str)`The code `match = re.search(pat, str)` stores the search result in a variable named "match". The `re.search()` method takes a regular expression pattern and a string and searches for that pattern within the string. If the search is successful, `search()` returns a match object or `None` otherwise.Then the `if`-statement tests the match -- if `True` the search succeeded and `match.group()` is the matching text (e.g. 'word: cat'). Otherwise if the match is `False` (`None` to be more specific), then the search did not succeed, and there is no matching text.The 'r' at the start of the pattern string designates a python "raw" string which passes through backslashes without change which is very handy for regular expressions. It is recommended that you always write pattern strings with the 'r' just as a habit.
###Code
inputStr = 'an example word: cat!!'
match = re.search(r'word: \w\w\w', inputStr) ## both 'cat' and 'dog' are 3-letter words
match
###Output
_____no_output_____
###Markdown
The `re.match()` function returns a match object on success, `None` on failure. We use the `group(num)` function of the match object to get the matched expression.`group(num=0)`: This method returns the entire match (or the specific subgroup num)
###Code
inputStr = 'an example word: cat!!'
# If-statement after search() tests if it succeeded
if re.search(r'word: \w\w\w', inputStr):
print ('found', match.group()) ## 'found word:cat'
else:
print ('did not find')
print ("Done with the example")
inputStr = 'an example word: dog!!'
match = re.search(r'word: \w\w\w', inputStr)
# If-statement after search() tests if it succeeded
if match:
print ('found', match.group()) ## 'found word:cat'
else:
print ('did not find')
print ("Done with the example")
line = "Cats are smarter than dogs"
matchObj = re.match( r'(.*) are (.*?) .*', line) # regExs within a regEx
if matchObj:
print ("matchObj.group() : ", matchObj.group(0))
print ("matchObj.group(0) : ", matchObj.group(0))
print ("matchObj.group(1) : ", matchObj.group(1))
print ("matchObj.group(2) : ", matchObj.group(2))
else:
print ("No match!!")
###Output
_____no_output_____
###Markdown
Basic Patternssee [Python Documentation](https://docs.python.org/3/library/re.html)The power of regular expressions is that they can specify patterns, not just fixed characters. Here are the most basic patterns which match single chars:* a, X, 9, < -- ordinary characters just match themselves exactly. The meta-characters which do not match themselves because they have special meanings are: . ^ $ * + ? { [ ] \ | ( ) (details below)* . (a period) -- matches any single character except newline '\n'* \w -- (lowercase w) matches a "word" character: a letter or digit or underbar [a-zA-Z0-9_]. Note that although "word" is the mnemonic for this, it only matches a single word char, not a whole word. \W (upper case W) matches any non-word character.* \b -- boundary between word and non-word* \s -- (lowercase s) matches a single whitespace character -- space, newline, return, tab, form [ \n\r\t\f]. \S (upper case S) matches any non-whitespace character.* \t, \n, \r -- tab, newline, return* \d -- decimal digit [0-9] (some older regex utilities do not support but \d, but they all support \w and \s)* ^ = start, $ = end -- match the start or end of the string* \ -- inhibit the "specialness" of a character. So, for example, use \. to match a period or \\ to match a slash. If you are unsure if a character has special meaning, such as '@', you can put a slash in front of it, \@, to make sure it is treated just as a character. Basic ExamplesThe basic rules of regular expression search for a pattern within a string are:* The search proceeds through the string from start to end, stopping at the first match found* All of the pattern must be matched, but not all of the string* If `match = re.search(pat, str)` is successful, match is not `None` and in particular `match.group()` is the matching text
###Code
## Search for pattern 'iii' in string 'piiig'.
## All of the pattern must match, but it may appear anywhere.
## On success, match.group() is matched text.
match = re.search(r'iii', 'piiig')
if match:
print ('found, match.group() == "iii"')
print (match.group())
match = re.search(r'igs', 'piiig')
if not match:
print ('not found, match == None')
print (re.search(r'igs', 'piiig'))
## . = any char but \n
match = re.search(r'..g', 'piiig')
if match:
print ('found, match.group() == "iig"')
print (match.group())
## \d = digit char, \w = word char
match = re.search(r'\d\d\d', 'p123g')
if match:
print ('found, match.group() == "123"')
print (match.group())
match = re.search(r'\w\w\w', '@@abcd!!')
if match:
print ('found, match.group() == "abc"')
print (match.group())
###Output
_____no_output_____
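###Markdown
The examples above do not exercise the `^` and `$` anchors or `\s` from the pattern list; a small additional sketch (not from the original tutorial):
###Code
## ^ anchors the match to the start of the string, $ to the end
match = re.search(r'^p\w+', 'piiig says hi')
if match:
    print(match.group())  # 'piiig' -- only matches because the string starts with 'p'
match = re.search(r'\w+!$', 'hello world!')
if match:
    print(match.group())  # 'world!' -- the pattern has to end where the string ends
## \s matches a single whitespace character
match = re.search(r'\w+\s\w+', 'foo bar')
if match:
    print(match.group())  # 'foo bar'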
###Markdown
Email Example 1Extract the account name and the domain from the email address
###Code
email = '[email protected]'
# your code here
###Output
_____no_output_____
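###Markdown
One possible solution (a sketch -- the same grouping idea is developed later in this notebook; the example string above is a placeholder, so the printed groups are only illustrative):
###Code
match = re.search(r'([\w.-]+)@([\w.-]+)', email)
if match:
    print('account:', match.group(1))
    print('domain:', match.group(2))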
###Markdown
Email Example 2Suppose you want to find the email address inside the string 'xyz [email protected] data science'. Here's an attempt using the pattern r'\w+@\w+':
###Code
emailText = 'xyz [email protected] data science'
match = re.search(r'\w+@\w+', emailText)
if match:
print (match.group())
###Output
_____no_output_____
###Markdown
The search does not get the whole email address in this case because the `\w` does not match the `'-'` or `'.'` in the address. We'll fix this using the regular expression features below.**Square Brackets**Square brackets can be used to indicate a set of chars, so `[abc]` matches `'a'` or `'b'` or `'c'`. The codes `\w`, `\s` etc. work inside square brackets too with the one exception that dot (`.`) just means a literal dot. For the emails problem, the square brackets are an easy way to add `'.'` and `'-'` to the set of chars which can appear around the `@` with the pattern `r'[\w.-]+@[\w.-]+'` to get the whole email address:
###Code
emailText = 'xyz [email protected] data science'
match = re.search(r'[\w.-]+@[\w.-]+', emailText)
if match:
print (match.group())
###Output
_____no_output_____
###Markdown
We will use `group()` function to extract the account name and the domain from the email address
###Code
emailText = 'xyz [email protected] data science'
match = re.search(r'([\w.-]+)@([\w.-]+)', emailText)
if match:
print ("email address:\t", match.group())
print ("email account:\t", match.group(1))
print ("email domain:\t", match.group(2))
###Output
_____no_output_____
###Markdown
Iteration using regular expressionsIf we want to extract data from a string in Python we can use the `findall()` or `finditer()` methods to extract all of the substrings which match a regular expression. These two methods produce results of different types. Let’s use the example of wanting to extract anything that looks like an email address from any line regardless of format. For example, we want to pull the email addresses from each of the following lines:
###Code
emailText = '''xyz [email protected] data science or
[email protected], also we can mention [email protected]'''
match = re.search(r'([\w.-]+)@([\w.-]+)', emailText)
if match:
print ("email address:\t", match.group())
print ("email account:\t", match.group(1))
print ("email domain:\t", match.group(2))
matches = re.finditer(r'[\w.-]+@[\w.-]+', emailText)
for match in matches:
print(match.group())
matches = re.finditer(r'([\w.-]+)@([\w.-]+)', emailText)
for match in matches:
print(match.group(),"\t", match.group(1),"\t", match.group(2))
print (matches)
matches = re.findall(r'([\w\.-]+)@([\w\.-]+)', emailText)
print (matches)
for match in matches:
print (match[0]) ## username
print (match[1]) ## host
matches = re.findall(r'[\w\.-]+@[\w\.-]+', emailText)
print (matches)
for match in matches:
print (match)
###Output
_____no_output_____
###Markdown
Create a variable containing a compiled regular expression using the `re.compile()` method.
###Code
# We are looking for binary numbers
regex = re.compile(r'[10]+')
text = "asddf1101110100011abd1111panos0000"
matches = regex.finditer(text)
for match in matches:
print(match.group())
emailText = '''xyz [email protected] data science or
[email protected], also we can mention [email protected]'''
# We are looking for emails numbers
regex = re.compile(r'[\w\.-]+@[\w\.-]+')
matches = regex.finditer(emailText)
for match in matches:
print(match.group())
# We look for money figures, either integers, or with 1 or 2 decimal
# digits
regex = re.compile(r'\$\d+(\.\d\d?)?')
text = '$1200.23 is the price today. $1200 was the price yesterday'
matches = regex.finditer(text)
for match in matches:
print(match.group())
# This code is going to generate no matches
regex = re.compile(r'Ra*nd.*m R[egex]')
text = "CUNY, Citytech, Information and Data Management, [email protected]"
matches = regex.finditer(text)
for match in matches:
print(match.group())
print ("The end")
###Output
_____no_output_____
###Markdown
Regular expressions are typically case-sensitive.
###Code
# Regular expressions are compiled into pattern objects
# Regular expressions are case-sensitive
regex = re.compile(r'in.*on')
text = "CUNY, Citytech, Information and Data Management, [email protected]"
matches = regex.finditer(text)
for match in matches:
print(match.group())
###Output
_____no_output_____
###Markdown
But we can specify that they are case-insensitive, using the flag re.IGNORECASE
###Code
# Unless we specify that they are case-insensitive, using the flag re.IGNORECASE
regex = re.compile('in.*on',re.IGNORECASE)
text = "CUNY, Citytech, Information and Data Management, [email protected]"
matches = regex.finditer(text)
for match in matches:
print(match.group())
###Output
_____no_output_____
###Markdown
Greedy vs. Non-GreedySuppose you have text with tags in it: `<b>foo</b> and <i>so on</i>`. Suppose you are trying to match each tag with the pattern `(<.*>)` -- what does it match first?
###Code
text = "<b>foo</b> and <i>so on</i>"
regex = re.compile(r'(<.*>)')
match = regex.search(text)
print (match.group())
###Output
_____no_output_____
###Markdown
The result is a little surprising, but the greedy aspect of the `.*` causes it to match the whole `<b>foo</b> and <i>so on</i>` as one big match. The problem is that the `.*` goes as far as it can, instead of stopping at the first `>` (aka it is "greedy").There is an extension to regular expressions where you add a `?` at the end, such as `.*?` or `.+?`, changing them to be non-greedy. Now they stop as soon as they can. So the pattern `(<.*?>)` will get just `<b>` as the first match, and `</b>` as the second match, and so on getting each pair in turn. The style is typically that you use a `.*?`, and then immediately to its right look for some concrete marker (`>` in this case) that forces the end of the `.*?` run.
###Code
text = "<b>foo</b> and <i>so on</i>"
regex = re.compile(r'(<.*?>)')  # non-greedy: stops at the first '>'
match = regex.search(text)
print (match.group())
text = "<b>foo</b> and <i>so on</i>"
regex = re.compile(r'(<.*?>)')
matches = regex.finditer(text)
for match in matches:
print(match.group())
text = "<b>foo</b> and <i>so on</i>"
regex = re.compile(r'(<.+?>)')
matches = regex.finditer(text)
for match in matches:
print(match.group())
###Output
_____no_output_____
###Markdown
Regular Expression Functions Analyzing* `.match()`* `.search()`* `.finditer()`* `.findall()`* `.compile()`[https://docs.python.org/3/library/re.html](https://docs.python.org/3/library/re.html)Let me separate these functions into three groups: 1. `.compile()`2. `.finditer()`, `.findall()`3. `.match()`, `.search()` 1. `re.compile()` Regular expression compilationRegular expression compilation produces a Python object that can be used to do all sorts of regular expression operations. What is the benefit of that when we can use `re.match` and `re.search` directly? This technique is convenient in case we want to use a regular expression more than once. It makes our code efficient and more readable. `re.compile(pattern, flags=0)`Compile a regular expression pattern into a regular expression object, which can be used for matching using its `match()`, `search()` and other methods. This sequence
###Code
inStr = 'This is my 123 example string'
prog = re.compile(r'\d+')
result = prog.search(inStr)
result
###Output
_____no_output_____
###Markdown
is equivalent to
###Code
result = re.search(r'\d+', inStr)
result
###Output
_____no_output_____
###Markdown
By using `re.compile()` and saving the resulting regular expression object for reuse you can make your code more efficient when the expression will be used several times in a single program. 2. `re.finditer()`, `re.findall()` `.finditer()` VS `.findall()`* `.findall()` returns a **list** of all matches of a regex in a string.* `.finditer()` returns an iterator that yields regex matches.https://realpython.com/regex-python/https://realpython.com/regex-python-part-2/https://www.8bitavenue.com/difference-between-re-search-and-re-match-in-python/ `re.findall(<regex>, <string>)` returns a list of all non-overlapping matches of `<regex>` in `<string>`. It scans the search string from left to right and returns all matches in the order found:
###Code
re.findall(r'#(\w+)#', '#foo#.#bar#.#baz#')
###Output
_____no_output_____
###Markdown
In this case, the specified regex is `#(\w+)#`. The matching strings are `'foo'`, `'bar'`, and `'baz'`. But the hash (`#`) characters don’t appear in the return list because they’re outside the grouping parentheses.If `<regex>` contains more than one capturing group, then `re.findall()` returns a list of tuples containing the captured groups. The length of each tuple is equal to the number of groups specified:
###Code
re.findall(r'(\w+),(\w+)', 'foo,bar,baz,qux,quux,corge')
re.findall(r'(\w+),(\w+),(\w+)', 'foo,bar,baz,qux,quux,corge')
###Output
_____no_output_____
###Markdown
`re.finditer(<regex>, <string>)` scans `<string>` for non-overlapping matches of `<regex>` and returns an iterator that yields the match objects from any it finds. It scans the search string from left to right and returns matches in the order it finds them:
###Code
it = re.finditer(r'\w+', '...foo,,,,bar:%$baz//|')
it
for i in re.finditer(r'\w+', '...foo,,,,bar:%$baz//|'):
print(i.group())
###Output
_____no_output_____
###Markdown
`re.findall()` and `re.finditer()` are very similar, but they differ in two respects:1. `re.findall()` returns a **list**, whereas `re.finditer()` returns an **iterator**.2. The items in the list that `re.findall()` returns are the actual matching strings, whereas the items yielded by the iterator that `re.finditer()` returns are match objects.Any task that you could accomplish with one, you could probably also manage with the other. Which one you choose will depend on the circumstances. However, a lot of useful information can be obtained from a match object. If you need that information, then `re.finditer()` will probably be the better choice. For example, you can use the `.group()` method of a match object to return the matched text. Match objectsIf a match is found when using `re.finditer()`, `re.match` or `re.search`, we can use some useful methods provided by the match object. Here is a short list of such methods, you may check the reference section for more details…* `group()` returns the part of the string matched by the entire regular expression* `group(1)` returns the text matched by the first capturing group* `start()` and `end()` return the indices of the start and end of the substring matched by the capturing group. 3. `re.match()`, `re.search()` .match() VS .search()This is the trickiest part. I believe that the best explanation is provided in this tutorial: https://www.8bitavenue.com/difference-between-re-search-and-re-match-in-python/ which I will try to replicate.Additional information can also be found here:https://realpython.com/regex-python/https://realpython.com/regex-python-part-2/ When searching or matching, regular expression operations can take optional flags or modifiers. The following two modifiers are the most used ones…* `re.M multiline`* `re.I ignore case` .match()`re.match` matches an expression at the beginning of a string. If a match is found, a match object is returned, otherwise `None` is returned. If the input is a multiline string (i.e. starts and ends with three double quotes) that does not change the behavior of the match operation. `re.match` always tries to match the beginning of the string. In regular expressions syntax, the control character (^) is used to match the beginning of a string. If this character is used with re.match, it has no effect. The syntax for `re.match` operation is as follows… `mobj = re.match(pat, str, flag)`* pat: regular expression pattern to match* str: string in which to search for the pattern* flags: one or more modifiers, for example re.M|re.I `.match(pattern, string, flags=0)`Looks for a regex match at the beginning of a string.If zero or more characters at the beginning of *string* match the regular expression *pattern*, return a corresponding match object. Return `None` if the string does not match the pattern; note that this is different from a zero-length match.This is identical to `re.search()`, except that `re.search()` returns a match if the regular expression *pattern* matches anywhere in *string*, whereas `re.match()` returns a match only if the regular expression *pattern* matches at the beginning of *string*. .search()`re.search` attempts to find the first occurrence of the pattern anywhere in the input string as opposed to the beginning. If the search is successful, `re.search` returns a match object, otherwise it returns `None`.
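A small sketch of the difference just described (not part of the original tutorial):
###Code
s = 'an example word: cat!!'
print(re.match(r'cat', s))   # None -- 'cat' is not at the beginning of the string
print(re.search(r'cat', s))  # a match object -- search() scans the whole string
print(re.match(r'an', s))    # a match object -- the string does start with 'an'
m = re.search(r'cat', s)
print(m.start(), m.end())    # start and end indices of the matched substring
###Markdown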
The syntax for re.search operation is as follows… `mobj = re.search(pat, str, flag)`* pat: regular expression pattern to search for* str: string in which to search for the pattern* flags: one or more modifiers, for example re.M|re.I `.search(pattern, string, flags=0)`Scans a string for a regex match.Scan through *string* looking for the **first** location where the regular expression *pattern* produces a match, and return a corresponding match object. Return `None` if no position in the string matches the pattern.You can use this function in a Boolean context like a conditional statement that checks if a pattern is found in the input string (`True`) or not (`False`). Substitution The `re.sub(pat, replacement, str)` function searches for all the instances of pattern in the given string, and replaces them. The replacement string can include `'\1'`, `'\2'` which refer to the text from `group(1)`, `group(2)`, and so on from the original matching text.Here's an example which searches for all the email addresses, and changes them to keep the user `(\1)` but have `nyc.gov` as the host.
###Code
text = '''xyz [email protected] data science or
[email protected], also we can mention [email protected]'''
## re.sub(pat, replacement, str) -- returns new string with all replacements,
## \1 is group(1), \2 group(2) in the replacement
print (re.sub(r'([\w\.-]+)@([\w\.-]+)', r'\[email protected]', text))
###Output
_____no_output_____
###Markdown
Exercise 1Imagine we want to conceal all phone numbers in a document.Read the below multiline string and substitute all phone numbers with 'XXX-XXX-XXXX'
###Code
raw_text = """512-234-5234
foo
bar
124-512-5555
biz
125-555-5785
679-397-5255
2126660921
212-998-0902
888-888-2222
801-555-1211
802 555 1212
803.555.1213
(804) 555-1214
1-805-555-1215
1(806)555-1216
807-555-1217-1234
808-555-1218x1234
809-555-1219 ext. 1234
work 1-(810) 555.1220 #1234
"""
###Output
_____no_output_____
###Markdown
Solution
###Code
## your solution here
###Output
_____no_output_____
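###Markdown
One possible approach (a sketch -- it handles the plain `XXX-XXX-XXXX`-style formats above, but deliberately does not try to normalize extensions or a leading country code):
###Code
phone_pattern = re.compile(r'\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}')
print(phone_pattern.sub('XXX-XXX-XXXX', raw_text))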
|
GS Example.ipynb
|
###Markdown
Example: Growing Spheres for 1 prediction 2D Illustrative
###Code
import numpy as np
import pandas as pd
from sklearn import datasets, ensemble, tree
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from matplotlib import pyplot as plt
from exploration import counterfactuals as cf  # needed below for cf.CounterfactualExplanation
X,y = datasets.make_moons(n_samples = 200, shuffle=True, noise=0.05, random_state=0)
X = (X.copy() - X.mean(axis=0))/X.std(axis=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
#clf = ensemble.RandomForestClassifier(n_estimators=200, max_depth=3)
clf = SVC(gamma=1, probability=True)
#clf = tree.DecisionTreeClassifier(max_depth=6)
clf = clf.fit(X_train, y_train)
print(' ### Accuracy:', sum(clf.predict(X_test) == y_test)/y_test.shape[0])
def plot_classification_contour(X, clf, ax=[0,1]):
## Inspired by scikit-learn documentation
h = .02 # step size in the mesh
cm = plt.cm.RdBu
x_min, x_max = X[:, ax[0]].min() - .5, X[:, ax[0]].max() + .5
y_min, y_max = X[:, ax[1]].min() - .5, X[:, ax[1]].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z = Z.reshape(xx.shape)
#plt.sca(ax)
plt.contourf(xx, yy, Z, alpha=.5, cmap=cm)
cf_list = []
cnt = 0
X_test_class0 = X_test[np.where(y_test == 0)]
for obs in X_test_class0:
print('====================================================', cnt)
CF = cf.CounterfactualExplanation(obs, clf.predict, method='GS')
CF.fit(n_in_layer=2000, first_radius=0.1, dicrease_radius=10, sparse=True, verbose=True)
cf_list.append(CF.enemy)
cnt += 1
cf_list = np.array(cf_list)
plot_classification_contour(X_test, clf)
plt.scatter(X_test_class0[:, 0], X_test_class0[:, 1], marker='o', edgecolors='k', alpha=0.9, color='red')
plt.scatter(cf_list[:, 0], cf_list[:, 1], marker='o', edgecolors='k', alpha=0.9, color='green')
plt.title('Test instances (red) and their generated counterfactuals (green)')
plt.tight_layout()
###Output
==================================================== 0
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 35
Final radius: (0.6220000000000002, 0.6400000000000002)
Final number of ennemies: 39
Feature selection...
Reduced 0 coordinates
==================================================== 1
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 39
Final radius: (0.6940000000000003, 0.7120000000000003)
Final number of ennemies: 41
Feature selection...
Reduced 0 coordinates
==================================================== 2
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 45
Final radius: (0.8020000000000004, 0.8200000000000004)
Final number of ennemies: 41
Feature selection...
Reduced 0 coordinates
==================================================== 3
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 32
Final radius: (0.5680000000000002, 0.5860000000000002)
Final number of ennemies: 59
Feature selection...
Reduced 0 coordinates
==================================================== 4
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 27
Final radius: (0.47800000000000015, 0.49600000000000016)
Final number of ennemies: 2
Feature selection...
Reduced 0 coordinates
==================================================== 5
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 28
Final radius: (0.4960000000000001, 0.5140000000000001)
Final number of ennemies: 39
Feature selection...
Reduced 0 coordinates
==================================================== 6
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 31
Final radius: (0.5500000000000002, 0.5680000000000002)
Final number of ennemies: 28
Feature selection...
Reduced 1 coordinates
==================================================== 7
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 33
Final radius: (0.5860000000000002, 0.6040000000000002)
Final number of ennemies: 19
Feature selection...
Reduced 0 coordinates
==================================================== 8
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 30
Final radius: (0.5320000000000001, 0.5500000000000002)
Final number of ennemies: 4
Feature selection...
Reduced 0 coordinates
==================================================== 9
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 25
Final radius: (0.4420000000000001, 0.46000000000000013)
Final number of ennemies: 42
Feature selection...
Reduced 0 coordinates
==================================================== 10
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 22
Final radius: (0.38800000000000007, 0.4060000000000001)
Final number of ennemies: 247
Feature selection...
Reduced 0 coordinates
==================================================== 11
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 41
Final radius: (0.7300000000000003, 0.7480000000000003)
Final number of ennemies: 1
Feature selection...
Reduced 0 coordinates
==================================================== 12
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 35
Final radius: (0.6220000000000002, 0.6400000000000002)
Final number of ennemies: 41
Feature selection...
Reduced 0 coordinates
==================================================== 13
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 42
Final radius: (0.7480000000000003, 0.7660000000000003)
Final number of ennemies: 20
Feature selection...
Reduced 1 coordinates
==================================================== 14
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 29
Final radius: (0.5140000000000001, 0.5320000000000001)
Final number of ennemies: 38
Feature selection...
Reduced 0 coordinates
==================================================== 15
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 44
Final radius: (0.7840000000000004, 0.8020000000000004)
Final number of ennemies: 12
Feature selection...
Reduced 0 coordinates
==================================================== 16
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 40
Final radius: (0.7120000000000003, 0.7300000000000003)
Final number of ennemies: 2
Feature selection...
Reduced 0 coordinates
==================================================== 17
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 32
Final radius: (0.5680000000000002, 0.5860000000000002)
Final number of ennemies: 57
Feature selection...
Reduced 0 coordinates
==================================================== 18
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 45
Final radius: (0.8020000000000004, 0.8200000000000004)
Final number of ennemies: 12
Feature selection...
Reduced 0 coordinates
==================================================== 19
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 44
Final radius: (0.7840000000000004, 0.8020000000000004)
Final number of ennemies: 61
Feature selection...
Reduced 0 coordinates
==================================================== 20
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 34
Final radius: (0.6040000000000002, 0.6220000000000002)
Final number of ennemies: 60
Feature selection...
Reduced 0 coordinates
==================================================== 21
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 29
Final radius: (0.5140000000000001, 0.5320000000000001)
Final number of ennemies: 9
Feature selection...
Reduced 0 coordinates
==================================================== 22
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 45
Final radius: (0.8020000000000004, 0.8200000000000004)
Final number of ennemies: 41
Feature selection...
Reduced 0 coordinates
==================================================== 23
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 23
Final radius: (0.4060000000000001, 0.4240000000000001)
Final number of ennemies: 13
Feature selection...
Reduced 0 coordinates
==================================================== 24
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 32
Final radius: (0.5680000000000002, 0.5860000000000002)
Final number of ennemies: 11
Feature selection...
Reduced 0 coordinates
==================================================== 25
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 29
Final radius: (0.5140000000000001, 0.5320000000000001)
Final number of ennemies: 49
Feature selection...
Reduced 0 coordinates
==================================================== 26
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 29
Final radius: (0.5140000000000001, 0.5320000000000001)
Final number of ennemies: 73
Feature selection...
Reduced 0 coordinates
==================================================== 27
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 44
Final radius: (0.7840000000000004, 0.8020000000000004)
Final number of ennemies: 12
Feature selection...
Reduced 0 coordinates
==================================================== 28
0 ennemies found in initial sphere. Zooming in...
Exploring...
Final number of iterations: 32
Final radius: (0.5680000000000002, 0.5860000000000002)
Final number of ennemies: 14
Feature selection...
Reduced 0 coordinates
==================================================== 29
0 ennemies found in initial sphere. Zooming in...
Exploring...
###Markdown
Distance and sparsity over datasets
###Code
import numpy as np
import pandas as pd
from sklearn import datasets, ensemble, tree
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from matplotlib import pyplot as plt
import xgboost as xgb
#X,y = datasets.make_moons(n_samples = 200, shuffle=True, noise=0.05, random_state=0)
'''ONLINE NEWS POPULARITY'''
# df = pd.read_csv('datasets/newspopularity.csv', header=0, nrows=10000)
df = datasets.fetch_openml(data_id=4545)
data = df.data[:10000, :]
y = df.target[:10000]
y = np.array([int(x>=1400) for x in y])
print(df.feature_names[2:-1])
X = np.array(data[:, 2:-1])
X = (X.copy() - X.mean(axis=0))/X.std(axis=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)
print('X_test shape:', X_test.shape)
clf = ensemble.RandomForestClassifier(n_estimators=200, max_depth=None, n_jobs=-1)
clf = clf.fit(X_train, y_train)
print(' ### Accuracy:', sum(clf.predict(X_test) == y_test)/y_test.shape[0])
def plot_classification_contour(X, clf, ax=[0,1]):
## Inspired by scikit-learn documentation
h = .02 # step size in the mesh
cm = plt.cm.RdBu
x_min, x_max = X[:, ax[0]].min() - .5, X[:, ax[0]].max() + .5
y_min, y_max = X[:, ax[1]].min() - .5, X[:, ax[1]].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z = Z.reshape(xx.shape)
#plt.sca(ax)
plt.contourf(xx, yy, Z, alpha=.5, cmap=cm)
def get_CF_distances(obs, n_in_layer=10000, first_radius=0.1, dicrease_radius=10, sparse=True):
CF = cf.CounterfactualExplanation(obs, clf.predict, method='GS')
CF.fit(n_in_layer=n_in_layer, first_radius=first_radius, dicrease_radius=dicrease_radius, sparse=sparse,
verbose=False)
out = CF.distances()
l2, l0 = out['euclidean'], out['sparsity']
return l2, l0
def get_CF(obs, n_in_layer=2000, first_radius=0.1, dicrease_radius=10, sparse=True):
CF = cf.CounterfactualExplanation(obs, clf.predict, method='GS')
CF.fit(n_in_layer=n_in_layer, first_radius=first_radius, dicrease_radius=dicrease_radius, sparse=sparse,
verbose=False)
e_tilde = CF.e_star
e_f = CF.enemy
return obs, e_tilde, e_f
def iterate_gs_dataset(n_in_layer=2000, first_radius=0.1, dicrease_radius=10, sparse=True):
l2_list, l0_list = [], []
cnt = 0
for obs in X_test[:10, :]:
print('====================================================', cnt)
# get_CF_distances returns the (l2, l0) pair expected here; get_CF returns three values
l2, l0 = get_CF_distances(obs, n_in_layer=n_in_layer,
first_radius=first_radius,
dicrease_radius=dicrease_radius,
sparse=sparse)
l2_list.append(l2)
l0_list.append(l0)
cnt += 1
return l2_list, l0_list
%%time
idx = 100
obs_to_interprete = X[idx]
print(clf.predict(X[idx].reshape(1, -1)))
def get_closest_enemy(obs):
enemies = X_test[np.where((y_test != clf.predict(obs.reshape(1,-1))) & (y_test == clf.predict(X_test)))]
idx, dist = metrics.pairwise_distances_argmin_min(obs.reshape(1,-1), enemies)
return dist
from sklearn import metrics
x, e_tilde, e_f =get_CF(obs_to_interprete, n_in_layer=10000, first_radius=0.1, dicrease_radius=100)
print((e_f - x != 0).sum())
#vars_ = ['crim', 'zn', 'indus','chas', 'nox', 'rm', 'age', 'dis', 'rad', 'tax', 'ptratio', 'black', 'lstat']
vars_ = [' n_tokens_title', ' n_tokens_content', ' n_unique_tokens',
' n_non_stop_words', ' n_non_stop_unique_tokens', ' num_hrefs',
' num_self_hrefs', ' num_imgs', ' num_videos', ' average_token_length',
' num_keywords', ' data_channel_is_lifestyle',
' data_channel_is_entertainment', ' data_channel_is_bus',
' data_channel_is_socmed', ' data_channel_is_tech',
' data_channel_is_world', ' kw_min_min', ' kw_max_min', ' kw_avg_min',
' kw_min_max', ' kw_max_max', ' kw_avg_max', ' kw_min_avg',
' kw_max_avg', ' kw_avg_avg', ' self_reference_min_shares',
' self_reference_max_shares', ' self_reference_avg_sharess',
' weekday_is_monday', ' weekday_is_tuesday', ' weekday_is_wednesday',
' weekday_is_thursday', ' weekday_is_friday', ' weekday_is_saturday',
' weekday_is_sunday', ' is_weekend', ' LDA_00', ' LDA_01', ' LDA_02',
' LDA_03', ' LDA_04', ' global_subjectivity',
' global_sentiment_polarity', ' global_rate_positive_words',
' global_rate_negative_words', ' rate_positive_words',
' rate_negative_words', ' avg_positive_polarity',
' min_positive_polarity', ' max_positive_polarity',
' avg_negative_polarity', ' min_negative_polarity',
' max_negative_polarity', ' title_subjectivity',
' title_sentiment_polarity', ' abs_title_subjectivity',
' abs_title_sentiment_polarity']
for dvar, coord in list(zip(enumerate(vars_), e_f - x)):
if coord!=0:
print('variable: ', dvar[1])
print('initial value:', X[idx, dvar[0]])
print('move:')
print(coord * X[dvar[0]].std())
print('===================')
print(sorted(list(zip(vars_, e_f - x)), key=lambda x: -abs(x[1])))
###Output
variable: kw_avg_avg
initial value: 0.6404847169874107
move:
0.0943305976495755
variable: self_reference_min_shares
initial value: -0.034209598523858666
move:
-0.05398286731284633
variable: self_reference_max_shares
initial value: -0.13921057362681094
move:
0.059595624879460886
variable: self_reference_avg_sharess
initial value: -0.10316992834653013
move:
0.03999341525525414
variable: LDA_00
initial value: -0.6139198409926339
move:
-0.05169736390320836
variable: LDA_01
initial value: -0.5025852810983442
move:
0.03392571313722337
variable: LDA_02
initial value: -0.5514977945330819
move:
-0.023401785374593656
variable: LDA_03
initial value: 2.294903902435056
move:
0.011497417382422465
variable: LDA_04
initial value: -0.7530089672204197
move:
-0.02420401681702647
variable: abs_title_subjectivity
initial value: -1.6336156583986412
move:
0.00967913729039352
===================
[(' kw_avg_avg', 0.09439413804521568), (' self_reference_min_shares', -0.062334947334881395), (' LDA_00', -0.05532123856645299), (' self_reference_max_shares', 0.05055634244329413), (' self_reference_avg_sharess', 0.04355919649289395), (' LDA_01', 0.040724864688457096), (' LDA_04', -0.023157765623279092), (' LDA_02', -0.021165896195315836), (' abs_title_subjectivity', 0.011812505550350627), (' LDA_03', 0.010557375357047594), (' n_tokens_title', 0.0), (' n_tokens_content', 0.0), (' n_unique_tokens', 0.0), (' n_non_stop_words', 0.0), (' n_non_stop_unique_tokens', 0.0), (' num_hrefs', 0.0), (' num_self_hrefs', 0.0), (' num_imgs', 0.0), (' num_videos', 0.0), (' average_token_length', 0.0), (' num_keywords', 0.0), (' data_channel_is_lifestyle', 0.0), (' data_channel_is_entertainment', 0.0), (' data_channel_is_bus', 0.0), (' data_channel_is_socmed', 0.0), (' data_channel_is_tech', 0.0), (' data_channel_is_world', 0.0), (' kw_min_min', 0.0), (' kw_max_min', 0.0), (' kw_avg_min', 0.0), (' kw_min_max', 0.0), (' kw_max_max', 0.0), (' kw_avg_max', 0.0), (' kw_min_avg', 0.0), (' kw_max_avg', 0.0), (' weekday_is_monday', 0.0), (' weekday_is_tuesday', 0.0), (' weekday_is_wednesday', 0.0), (' weekday_is_thursday', 0.0), (' weekday_is_friday', 0.0), (' weekday_is_saturday', 0.0), (' weekday_is_sunday', 0.0), (' is_weekend', 0.0), (' global_subjectivity', 0.0), (' global_sentiment_polarity', 0.0), (' global_rate_positive_words', 0.0), (' global_rate_negative_words', 0.0), (' rate_positive_words', 0.0), (' rate_negative_words', 0.0), (' avg_positive_polarity', 0.0), (' min_positive_polarity', 0.0), (' max_positive_polarity', 0.0), (' avg_negative_polarity', 0.0), (' min_negative_polarity', 0.0), (' max_negative_polarity', 0.0), (' title_subjectivity', 0.0), (' title_sentiment_polarity', 0.0)]
###Markdown
Out of distribution CF
###Code
import numpy as np
import pandas as pd
from sklearn import datasets, ensemble
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
from exploration import counterfactuals as cf
from sklearn.svm import SVC
PATH = ''
def make_meshgrid(x, y, h=.02):
"""Create a mesh of points to plot in
Parameters
----------
x: data to base x-axis meshgrid on
y: data to base y-axis meshgrid on
h: stepsize for meshgrid, optional
Returns
-------
xx, yy : ndarray
"""
x_min, x_max = x.min() - 1, x.max() + 1
y_min, y_max = y.min() - 1, y.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return xx, yy
def plot_contours(ax, clf, xx, yy, **params):
"""Plot the decision boundaries for a classifier.
Parameters
----------
ax: matplotlib axes object
clf: a classifier
xx: meshgrid ndarray
yy: meshgrid ndarray
params: dictionary of params to pass to contourf, optional
"""
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
out = ax.contourf(xx, yy, Z, **params)
return out
def get_CF(obs, n_in_layer=2000, first_radius=0.1, dicrease_radius=10, sparse=True, target_class=None):
CF = cf.CounterfactualExplanation(obs, clf.predict, method='GS', target_class=target_class)
CF.fit(n_in_layer=n_in_layer, first_radius=first_radius, dicrease_radius=dicrease_radius, sparse=sparse,
verbose=False)
print('target class', CF.target_class)
e_tilde = CF.e_star
e_f = CF.enemy
return obs, e_tilde, e_f
# import some data to play with
iris = datasets.load_iris()
# Take the first two features. We could avoid this by using a two-dim dataset
X = iris.data[:, :2]
y = iris.target
X = (X.copy() - X.mean(axis=0))/X.std(axis=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=0)
print(X_train.shape, X_test.shape)
clf = SVC(C=1.0, gamma=1.0, probability=True).fit(X_train, y_train)
fig, ax = plt.subplots()
X0, X1 = X_train[:, 0], X_train[:, 1]
xx, yy = make_meshgrid(X0, X1)
plot_contours(ax, clf, xx, yy,
cmap=plt.cm.coolwarm, alpha=0.6)
ax.scatter(X0, X1, c=y_train, cmap=plt.cm.coolwarm, s=20, edgecolors='k')
plt.show()
obs_to_interprete = np.array([-1.1, 2.1])# X_test[2]
_, _, CF = get_CF(obs_to_interprete, n_in_layer=1000, first_radius=0.1, dicrease_radius=100.0, target_class=2)
AX1, AX2 = 0, 1
fig, ax = plt.subplots()
X0, X1 = X_train[:, 0], X_train[:, 1]
xx, yy = make_meshgrid(X0, X1)
plot_contours(ax, clf, xx, yy,
cmap=plt.cm.coolwarm, alpha=0.6)
#plt.scatter(X_train[:, AX1], X_train[:, AX2], color=[['red', 'blue', 'green'][x] for x in y_train], alpha=0.3, marker='+')
ax.scatter(X0, X1, c=y_train, cmap=plt.cm.coolwarm, s=20, edgecolors='k')
ax.scatter(obs_to_interprete[AX1], obs_to_interprete[AX2], color='yellow', s=100, edgecolors='k')
ax.scatter(CF[AX1], CF[AX2], color='green', s=70, edgecolors='k')
fig.tight_layout()
plt.savefig(PATH + 'discussion_iris.pdf', bbox_inches='tight')
plt.show()
###Output
_____no_output_____
|
docs/source/narratives/data-processing.ipynb
|
###Markdown
Data Processing GraphData and GraphBatch We can create random data for testing purposes using `GraphData.random(node_feat_size, edge_feat_size, glob_feat_size)`.
###Code
from caldera.data import GraphData, GraphBatch
datalist = [GraphData.random(5, 4, 3) for _ in range(100)]
for data in datalist[:3]:
print(data)
print(datalist[0].edges)
###Output
_____no_output_____
###Markdown
Use `from_data_list` to collate a list of different data into one single GraphBatch object.
###Code
batch = GraphBatch.from_data_list(datalist)
print(batch)
###Output
_____no_output_____
###Markdown
Data Loaders
###Code
from caldera.data import GraphDataLoader
loader = GraphDataLoader(datalist, batch_size=32)
for data in loader:
print(data)
import torch
input_data = [d.copy() for d in datalist]
target_data = []
for d in datalist:
d_copied = d.copy()
d_copied.x = torch.zeros_like(d.x)
target_data.append(d_copied)
loader = GraphDataLoader(input_data, target_data, batch_size=32)
for input_, target_ in loader:
print((input_, target_))
datalist[0].copy()
###Output
_____no_output_____
|
Ideal - LSTM.ipynb
|
###Markdown
Logistic Regression
###Code
pipe = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('model', LogisticRegression())])
model = pipe.fit(train_copy['text'],train_copy['target'])
pred = model.predict(test_copy['text'])
print("accuracy: {}%".format(round(accuracy_score(test_copy['target'],
pred)*100,2)))
print("CONFUSION MATRIX")
print(confusion_matrix(test_copy['target'],
pred))
###Output
accuracy: 95.55%
CONFUSION MATRIX
[[3276 205]
[ 13 1403]]
###Markdown
Support Vector Classifier
###Code
pipe = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('model', LinearSVC())])
model = pipe.fit(train_copy['text'],train_copy['target'])
pred = model.predict(test_copy['text'])
print("accuracy: {}%".format(round(accuracy_score(test_copy['target'],
pred)*100,2)))
print("CONFUSION MATRIX")
print(confusion_matrix(test_copy['target'],
pred))
###Output
accuracy: 96.98%
CONFUSION MATRIX
[[3345 136]
[ 12 1404]]
###Markdown
Multinomial Naive Bayes Classifier
###Code
pipe = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('model', LinearSVC())])  # NOTE: this pipeline still uses LinearSVC; MultinomialNB() would match the section title
model = pipe.fit(train_copy['text'],train_copy['target'])
pred = model.predict(test_copy['text'])
print("accuracy: {}%".format(round(accuracy_score(test_copy['target'],
pred)*100,2)))
print("CONFUSION MATRIX")
print(confusion_matrix(test_copy['target'],
pred))
###Output
accuracy: 96.98%
CONFUSION MATRIX
[[3345 136]
[ 12 1404]]
###Markdown
Gradient Boost Classifier
###Code
pipe = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('model', GradientBoostingClassifier(loss = 'deviance',
learning_rate = 0.01,
n_estimators = 10,
max_depth = 5,
random_state=42))])
model = pipe.fit(train_copy['text'],train_copy['target'])
pred = model.predict(test_copy['text'])
print("accuracy: {}%".format(round(accuracy_score(test_copy['target'],
pred)*100,2)))
print("CONFUSION MATRIX")
print(confusion_matrix(test_copy['target'],
pred))
###Output
accuracy: 80.09%
CONFUSION MATRIX
[[2637 844]
[ 131 1285]]
###Markdown
XGBoost Classifier*
###Code
pipe = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('model', XGBClassifier(loss = 'deviance',
learning_rate = 0.01,
n_estimators = 10,
max_depth = 5,
random_state=2020))])
model = pipe.fit(train_copy['text'],train_copy['target'])
pred = model.predict(test_copy['text'])
print("accuracy: {}%".format(round(accuracy_score(test_copy['target'],
pred)*100,2)))
print("CONFUSION MATRIX")
print(confusion_matrix(test_copy['target'],
pred))
###Output
accuracy: 80.15%
CONFUSION MATRIX
[[2641 840]
[ 132 1284]]
###Markdown
Random Forest Classifier
###Code
pipe = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('model', RandomForestClassifier())])
model = pipe.fit(train_copy['text'],train_copy['target'])
pred = model.predict(test_copy['text'])
print("accuracy: {}%".format(round(accuracy_score(test_copy['target'],
pred)*100,2)))
print("CONFUSION MATRIX")
print(confusion_matrix(test_copy['target'],
pred))
###Output
accuracy: 92.14%
CONFUSION MATRIX
[[3115 366]
[ 19 1397]]
###Markdown
LSTM - RNN Baseline
###Code
X = train_copy['text']
Y = train_copy['target']
le = LabelEncoder()
Y = le.fit_transform(Y)
Y = Y.reshape(-1,1)
max_words = 500
max_len = 1000
tok = Tokenizer(num_words = max_words)
tok.fit_on_texts(X)
sequences = tok.texts_to_sequences(X)
sequences_matrix = sequence.pad_sequences(sequences, maxlen = max_len)
def RNN():
inputs = Input(name='inputs',shape=[max_len])
layer = Embedding(max_words,50,input_length=max_len)(inputs)
layer = LSTM(64)(layer)
layer = Dense(256,name='FC1')(layer)
layer = Activation('relu')(layer)
layer = Dropout(0.5)(layer)
layer = Dense(1,name='out_layer')(layer)
layer = Activation('sigmoid')(layer)
model = Model(inputs=inputs,outputs=layer)
return model
model = RNN()
model.compile(loss = 'binary_crossentropy', optimizer = RMSprop(),
metrics = ['accuracy'])
model.summary()
#TAKES TIME. Fitting on GTX 1050ti.
t1 = time.time()
model.fit(sequences_matrix, Y, batch_size = 256, epochs = 20,
validation_split = 0.2)
t2 = time.time()
print("Time Taken = ", t2-t1)
###Output
C:\Users\Siraz\Anaconda3\envs\gpu\lib\site-packages\tensorflow_core\python\framework\indexed_slices.py:433: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
###Markdown
LSTM on test data
###Code
test_sequences = tok.texts_to_sequences(test_copy['text'])
test_sequences_matrix = sequence.pad_sequences(test_sequences,
maxlen = max_len)
accr = model.evaluate(test_sequences_matrix, test_copy['target'])
print('Accuracy :{:0.2f}'.format(accr[1]))
#SAVING MODEL
# serialize model to JSON
model_json = model.to_json()
with open("ideal_LSTM_arc.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("ideal_LSTM_weights.h5")
###Output
_____no_output_____
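###Markdown
The saved architecture and weights can be reloaded later without retraining; a minimal sketch (assuming the notebook's standalone Keras imports -- swap in `tensorflow.keras` if the model was built with tf.keras):
###Code
from keras.models import model_from_json
# rebuild the architecture from the saved JSON, then restore the trained weights
with open("ideal_LSTM_arc.json", "r") as json_file:
    loaded_model = model_from_json(json_file.read())
loaded_model.load_weights("ideal_LSTM_weights.h5")
# re-compile before evaluating or predicting
loaded_model.compile(loss='binary_crossentropy', optimizer=RMSprop(), metrics=['accuracy'])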
|
HeartDiseaseLogisticRegression_model.ipynb
|
###Markdown
###Code
#We have data which classifies whether patients have heart disease or not according to the features in it.
#We will try to use this data to create a model which tries to predict whether a patient has this disease or not.
#We will compare Logistic Regression, KNN and Random Forest classifiers and tune the best-performing one.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
#model evaluation
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import precision_score, recall_score, f1_score
from sklearn.metrics import plot_roc_curve
df=pd.read_csv("/content/heart.csv")
df.head()
df.target.value_counts()
df.isnull().sum()
df.describe()
df.info()
sns.countplot(x="target", data=df, palette="bwr")
plt.show()
countNoDisease = len(df[df.target == 0])
countHaveDisease = len(df[df.target == 1])
print("Percentage of Patients Haven't Heart Disease: {:.2f}%".format((countNoDisease / (len(df.target))*100)))
print("Percentage of Patients Have Heart Disease: {:.2f}%".format((countHaveDisease / (len(df.target))*100)))
sns.countplot(x='sex', data=df, palette="mako_r")
plt.xlabel("Sex (0 = female, 1= male)")
plt.show()
countFemale = len(df[df.sex == 0])
countMale = len(df[df.sex == 1])
print("Percentage of Female Patients: {:.2f}%".format((countFemale / (len(df.sex))*100)))
print("Percentage of Male Patients: {:.2f}%".format((countMale / (len(df.sex))*100)))
df.groupby('target').mean()
pd.crosstab(df.age,df.target).plot(kind="bar",figsize=(20,6))
plt.title('Heart Disease Frequency for Ages')
plt.xlabel('Age')
plt.ylabel('Frequency')
plt.savefig('heartDiseaseAndAges.png')
plt.show()
pd.crosstab(df.sex,df.target).plot(kind="bar",figsize=(15,6),color=['#1CA53B','#AA1111' ])
plt.title('Heart Disease Frequency for Sex')
plt.xlabel('Sex (0 = Female, 1 = Male)')
plt.xticks(rotation=0)
plt.legend(["Haven't Disease", "Have Disease"])
plt.ylabel('Frequency')
plt.show()
#Let's compare age, thalach (max heart rate) and target
plt.figure(figsize=(10,6))
#scatter with positive examples
plt.scatter(df.age[df.target==1],
df.thalach[df.target==1],
c="salmon");
#scatter with negative example
plt.scatter(df.age[df.target==0],
df.thalach[df.target==0],
c="black");
plt.title("Heart Disease in function of age and thalach")
plt.xlabel("Age")
plt.ylabel("Thalache (max heart rate)")
plt.legend(["Disease","No Disease"]);
# check the dstribution of the age column with a histogram
df.age.plot.hist(figsize=(10,6));
pd.crosstab( df.cp,df.target)
#make thr cross tab visual
pd.crosstab(df.cp, df.target).plot(kind="bar", figsize=(10,6),
color=["darkblue","lightblue"])
plt.title("Heart Disease frequency per chest pain type")
plt.xlabel("Chest Pain type")
plt.ylabel("Amount")
plt.legend(["No Disease","Disease"])
plt.xticks(rotation=0);
df.corr()
# Lets make our corelation matrix a bit prettier
corr_matrix= df.corr()
fig, ax= plt.subplots(figsize=(10,6))
ax= sns.heatmap(corr_matrix,
annot= True,
linewidths=0.5,
fmt=".2f",
cmap="twilight_r");
X=df.drop("target", axis=1)
y=df["target"]
#split data into training and test split
np.random.seed(42)
#Split into trin and test set
X_train, X_test, y_train,y_test= train_test_split(X,y,test_size=0.2)
len(X_train), len(y_train)
# Put models in a dictionary
models={"Logistic Regression": LogisticRegression(),
"KNN":KNeighborsClassifier(),
"Random Forest": RandomForestClassifier()}
#create a function to fit and score models
def fit_and_score(models,X_train, X_test, y_train, y_test):
"""
Fits and evaluates given ML models.
models: a dict of diff SciKit learn ML models
X_test : test set (no label)
X_train: training set (no label)
y_train : training labels
y_test : test labels
"""
#set random seed
np.random.seed(42)
#Make dictionary to keep model scores
model_scores={}
#Loop through models
for name, model in models.items():
#fit model to data
model.fit(X_train, y_train)
#evaluate the model and store in score dict
model_scores[name]= model.score(X_test, y_test)
return model_scores
model_scores= fit_and_score(models=models,
X_train=X_train,
X_test=X_test,
y_train=y_train,
y_test=y_test)
model_scores
model_compare =pd.DataFrame(model_scores, index=["accuracy"])
model_compare.T.plot.bar();
#lets tune KNN
train_scores = []
test_scores = []
#Create a list of diff values of n_neighbours
neighbors = range(1,21)
#Setup KNN Instance
knn = KNeighborsClassifier()
#Loop thriugh diff n_neighbours
for i in neighbors:
knn.set_params(n_neighbors = i)
#Fit the algo
knn.fit(X_train, y_train)
#Update the training scores list
train_scores.append(knn.score(X_train, y_train))
#Update the test scores
test_scores.append(knn.score(X_test, y_test))
train_scores
test_scores
#Visualize
plt.plot(neighbors, train_scores, label="Train score")
plt.plot(neighbors, test_scores, label="Test score")
plt.xticks(np.arange(1,21,1))
plt.xlabel("Number of neighbors")
plt.ylabel("Model score")
plt.legend()
print(f"Maximum KNN score on the test data :{max(test_scores)*100:.2f}%")
# Create a hyperparameter grid for LogisticRegression
log_reg_grid = {"C" : np.logspace(-4, 4, 20),
"solver": ["liblinear"]}
#Create a hyperparam grid for RandomForestClassifier
rf_grid = {"n_estimators" : np.arange(10, 1000, 50),
"max_depth" : [None,3,5,10],
"min_samples_split" : np.arange(2, 20, 2),
"min_samples_leaf": np.arange(1, 20, 2)}
# Tune LogisticRegression
np.random.seed(42)
#Setup random hyperparams search for LogisticRegression
rs_log_reg= RandomizedSearchCV(LogisticRegression(),
param_distributions = log_reg_grid,
cv=5,
n_iter=20,
verbose=True)
#Fit random hyperparam search model for LogisticRegression
rs_log_reg.fit(X_train, y_train)
rs_log_reg.best_params_
rs_log_reg.score(X_test, y_test)
np.random.seed(42)
#Setup random hyperparam search for RandomFOrestClassifier
rs_ref = RandomizedSearchCV(RandomForestClassifier(),
param_distributions = rf_grid,
cv=5,
n_iter= 20,
verbose=True)
# Fit random hyperparam search model for RandomForestCLassifier
rs_ref.fit(X_train, y_train)
#Finding best params
rs_ref.best_params_
#Evaluate RandomSearchCV search on RandomForestClassifier model
rs_ref.score(X_test, y_test)
# Different hyperparameters for LR Model
log_reg_grid = {"C": np.logspace(-4,4,30),
"solver": ["liblinear"]}
#Setup grid hyperparameter search for LogisticRegression
gs_log_reg = GridSearchCV(LogisticRegression(),
param_grid= log_reg_grid,
cv=5,
verbose=True)
#Fit grid hyperparam search model
gs_log_reg.fit(X_train, y_train);
gs_log_reg.best_params_
# Evaluate GridSearchCV for LR model
gs_log_reg.score(X_test, y_test)
model_scores
# make predictions
y_preds=gs_log_reg.predict(X_test)
y_preds
# Import ROC curve function, but we have done this previously.
# roc curve and calculate AUC metric
plot_roc_curve(gs_log_reg, X_test, y_test)
sns.set(font_scale=1.5)
def plot_conf_mat(y_test, y_preds):
"""
Plots a confusion matrix using Seaborn's heatmap().
"""
fig, ax = plt.subplots(figsize=(3, 3))
ax = sns.heatmap(confusion_matrix(y_test, y_preds),
annot=True, # Annotate the boxes
cbar=False)
plt.xlabel("Predicted label") # predictions go on the x-axis
plt.ylabel("True label") # true labels go on the y-axis
plot_conf_mat(y_test, y_preds)
print(classification_report(y_test, y_preds))
# check our best hyperparams
gs_log_reg.best_params_
# create a new classifier with best params
clf= LogisticRegression(C=0.20433597178569418,
solver="liblinear")
# Cross validated accuracy
cv_acc= cross_val_score(clf,
X,
y,
cv=5,
scoring="accuracy")
cv_acc
cv_acc=np.mean(cv_acc)
cv_acc
# Cross validated precision
cv_precision= cross_val_score(clf,
X,
y,
cv=5,
scoring="precision")
cv_precision=np.mean(cv_precision)
cv_precision
# Cross validated recall
cv_recall= cross_val_score(clf,
X,
y,
cv=5,
scoring="recall")
cv_recall=np.mean(cv_recall)
cv_recall
# Cross validated f1
cv_f1= cross_val_score(clf,
X,
y,
cv=5,
scoring="f1")
cv_f1=np.mean(cv_f1)
cv_f1
# putting it in a graph visualize
cv_metrics= pd.DataFrame({"Accuracy": cv_acc,
"Precision": cv_precision,
"Recall": cv_recall,
"f1": cv_f1},
index=[0])
cv_metrics.T.plot.bar(title="Cross validated classification metrics", legend=False)
###Output
_____no_output_____
|
notebooks/code/1_preprocess_des_fichiers.ipynb
|
###Markdown
Creating the files Setup Importing the files
###Code
#Time and files
import os
import warnings
import time
from datetime import timedelta
#Data manipulation
import pandas as pd
import numpy as np
from pandas_profiling import ProfileReport
from functools import partial
#Modeling
from sklearn.datasets import fetch_openml
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import PoissonRegressor, GammaRegressor
from sklearn.linear_model import TweedieRegressor
from sklearn.metrics import mean_tweedie_deviance
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, OneHotEncoder
from sklearn.preprocessing import StandardScaler, KBinsDiscretizer
from sklearn.metrics import mean_absolute_error, mean_squared_error, auc
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.svm import LinearSVC
from sklearn.model_selection import RandomizedSearchCV# the keys can be accessed with final_pipeline.get_params().keys()
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier
#Text
import re
#Evaluation
from sklearn.metrics import f1_score, confusion_matrix
#Visualisation
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import plotly.express as px
#Experiment tracking
import mlflow
import mlflow.sklearn
###Output
_____no_output_____
###Markdown
Using the packaged project code
###Code
#This cell calls the packaged version of the project and makes sure it is reloaded before its functions are called
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Configuring the MLFlow experiment
###Code
mlflow.tracking.get_tracking_uri()
###Output
_____no_output_____
###Markdown
Loading the raw data
###Code
data_folder = os.path.join('/mnt', 'data', 'raw')
all_raw_files = [os.path.join(data_folder, fname)
for fname in os.listdir(data_folder)]
all_raw_files
random_state=42
expo_train = pd.read_csv('/mnt/data/raw/expo_train.csv', encoding='utf8', sep=',' )
expo_train.head()
expo_test = pd.read_csv('/mnt/data/raw/expo_test.csv', encoding='utf8', sep=',' )
expo_test.head()
sin_train = pd.read_csv('/mnt/data/raw/sin_train.csv', encoding='utf8', sep=';', decimal=',' )
sin_train.head()
primes2019 = pd.read_csv('/mnt/data/raw/primes2019.csv', encoding='utf8', sep=';', decimal=',' )
primes2019.head()
###Output
_____no_output_____
###Markdown
Unresolved problem Reading the file into a DataFrame returns an `object` dtype for the text fields. This caused us problems in the pipelines later on. We were not able to convert them back into true text fields.
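As a hedged side note, one conversion that could be tried (assuming pandas >= 1.0, where the nullable `string` dtype exists) is sketched below; the next cell shows what was actually attempted in this notebook.
```
# Hypothetical sketch: convert the object columns to pandas' dedicated string dtype
text_cols = expo_train.select_dtypes('object').columns
expo_train[text_cols] = expo_train[text_cols].astype('string')
expo_train.info()  # the text columns should now report dtype 'string'
```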
###Code
expo_train.info()
df=expo_train
print(df.info())
for col in list(df.select_dtypes('object').columns):
df[col]=str(df[col])
#df[list(df.select_dtypes('object').columns)]=pd.Series()
print(df[list(df.select_dtypes('object').columns)].info())
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 155651 entries, 0 to 155650
Data columns (total 20 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 EXPO 155651 non-null float64
1 FORMULE 155651 non-null object
2 TYPE_RESIDENCE 155651 non-null object
3 TYPE_HABITATION 155651 non-null object
4 NB_PIECES 146301 non-null float64
5 SITUATION_JURIDIQUE 155651 non-null object
6 NIVEAU_JURIDIQUE 155651 non-null object
7 VALEUR_DES_BIENS 155651 non-null float64
8 OBJETS_DE_VALEUR 155651 non-null object
9 ZONIER 155651 non-null object
10 NBSIN_TYPE1_AN1 155651 non-null int64
11 NBSIN_TYPE1_AN3 155651 non-null int64
12 NBSIN_TYPE2_AN1 155651 non-null int64
13 NBSIN_TYPE2_AN2 138509 non-null float64
14 NBSIN_TYPE2_AN3 155651 non-null int64
15 id 155651 non-null int64
16 ANNEE 155651 non-null int64
17 ZONIER_2 155651 non-null object
18 NBSIN_TYPE1_AN1_RECODE 155651 non-null int64
19 NBSIN_TYPE1_AN3_RECODE 155651 non-null int64
dtypes: float64(4), int64(8), object(8)
memory usage: 23.8+ MB
None
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 155651 entries, 0 to 155650
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 FORMULE 155651 non-null object
1 TYPE_RESIDENCE 155651 non-null object
2 TYPE_HABITATION 155651 non-null object
3 SITUATION_JURIDIQUE 155651 non-null object
4 NIVEAU_JURIDIQUE 155651 non-null object
5 OBJETS_DE_VALEUR 155651 non-null object
6 ZONIER 155651 non-null object
7 ZONIER_2 155651 non-null object
dtypes: object(8)
memory usage: 9.5+ MB
None
###Markdown
First quick EDA We start by creating a profile report to get a first idea of the data
###Code
pr = ProfileReport(expo_train).to_file(output_file='expo_train.html')
pr = ProfileReport(expo_test).to_file(output_file='expo_test.html')
pr = ProfileReport(sin_train).to_file(output_file='sin_train.html')
###Output
_____no_output_____
###Markdown
Insight We see several problems: - a first **unnamed** column in expo_train and expo_test - expo_train reports **type 1 and type 2 claims**, whereas this information is not in the claims file **Question 1:** We check whether an id really corresponds to one policy x one observation year
###Code
expo_train[['id', 'ANNEE']].groupby('id').agg('count').describe()
###Output
_____no_output_____
###Markdown
**=>** that is indeed the case **TO DO:** we can therefore compute the number of prior claims (see the sketch below) **Question 2:** do we find the claims back in expo_train? Creating the processed data Creating functions ensures that exactly the same processing is applied everywhere
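A hypothetical sketch for the TO DO above; the column name `NB_SIN_PRIOR` is illustrative, and it relies on the `df_merged` table (with `id`, `ANNEE` and `NB`) built a few cells below.
```
# Hypothetical sketch: number of claims observed in earlier years for the same policy
df_sorted = df_merged.sort_values(['id', 'ANNEE'])
df_sorted['NB_SIN_PRIOR'] = df_sorted.groupby('id')['NB'].cumsum() - df_sorted['NB']
```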
###Code
def preprocess(df):
    #Drop the 'Unnamed: 0' column: per André, it is just the old row number
    #Drop `NBSIN_TYPE1_AN2`: discussed with André, there is a problem with this variable (it is always >= 1)
df=df.drop(['Unnamed: 0','NBSIN_TYPE1_AN2'], axis=1)
df['ZONIER_2']=df['ZONIER'].astype(str).str[0]
df['NBSIN_TYPE1_AN1_RECODE']= df['NBSIN_TYPE1_AN1'].apply(lambda x : 1 if x>1 else 0)
df['NBSIN_TYPE1_AN3_RECODE']= df['NBSIN_TYPE1_AN3'].apply(lambda x : 1 if x>1 else 0)
return df
expo_train=preprocess(expo_train)
expo_test=preprocess(expo_test)
def cree_df_merged(df_exp, df_sin):
df_merged=pd.merge(df_exp, df_sin, on=['id', 'ANNEE'], how='left' )
df_merged[['NB', 'COUT']]=df_merged[['NB', 'COUT']].fillna(0)
df_merged['Isin']=df_merged['NB'].apply(lambda x : min(1, x))
    #we extract the first letter
return df_merged
df_merged=cree_df_merged(expo_train, sin_train)
df_merged
random_state=42
###Output
_____no_output_____
###Markdown
We do the train/validation split once and for all here, using stratified sampling on the claim indicator because of its low prevalence
###Code
df_train, df_val = train_test_split(df_merged, test_size=0.2, random_state=random_state, stratify=df_merged['Isin'])
y_train = df_train[['id','EXPO','NB','COUT', 'Isin']]
y_val = df_val[['id','EXPO','NB','COUT', 'Isin']]
X_train = df_train.drop(['NB','COUT', 'Isin'], axis=1)
X_val = df_val.drop(['NB','COUT', 'Isin'], axis=1)
X_test=expo_test
###Output
_____no_output_____
###Markdown
The data is stored in the `interim` directory so the other notebooks can access it easily and there is a single source of truth
###Code
# Export the data
#df
df_merged.to_parquet('/mnt/data/interim/df_merged.gzip',compression='gzip')
df_train.to_parquet('/mnt/data/interim/df_train.gzip',compression='gzip')
df_val.to_parquet('/mnt/data/interim/df_val.gzip',compression='gzip')
#X
X_train.to_parquet('/mnt/data/interim/X_train.gzip',compression='gzip')
X_val.to_parquet('/mnt/data/interim/X_val.gzip',compression='gzip')
X_test.to_parquet('/mnt/data/interim/X_test.gzip',compression='gzip')
#y
y_train.to_parquet('/mnt/data/interim/y_train.gzip',compression='gzip')
y_val.to_parquet('/mnt/data/interim/y_val.gzip',compression='gzip')
###Output
_____no_output_____
###Markdown
For the record: code implemented in [sklearn](https://scikit-learn.org/stable/auto_examples/linear_model/plot_tweedie_regression_insurance_claims.htmlsphx-glr-auto-examples-linear-model-plot-tweedie-regression-insurance-claims-py) that we ultimately did not reuse
###Code
# Insurances companies are interested in modeling the Pure Premium, that is
# the expected total claim amount per unit of exposure for each policyholder
# in their portfolio:
#df_merged["PurePremium"] = df_merged["COUT"] / df_merged["EXPO"]
# This can be indirectly approximated by a 2-step modeling: the product of the
# Frequency times the average claim amount per claim:
#df_merged["Frequency"] = df_merged["NB"] / df_merged["EXPO"]
#df_merged["AvgClaimAmount"] = df_merged["COUT"] / np.fmax(df_merged["NB"], 1)
###Output
_____no_output_____
|
Yandex data science/2/Week 4/grad_boosting.ipynb
|
###Markdown
Gradient boosting by hand**Note:** the assignment text has changed: the number of trees is different (now 50), the step-size rule in task 3 has changed, and a `random_state` parameter was added to the decision tree. The correct answers have not changed, but they are now easier to obtain. A typo in the `gbm_predict` function has also been fixed. This assignment uses the `boston` dataset from `sklearn.datasets`. Hold out the last 25% of the objects for quality control by splitting `X` and `y` into `X_train`, `y_train` and `X_test`, `y_test`. The goal of the assignment is to implement a simple version of gradient boosting over regression trees for the case of a squared loss function.
###Code
from sklearn import model_selection, datasets, ensemble, tree, metrics
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import xgboost as xgb
X, y = datasets.load_boston(return_X_y = True)
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size = 0.25, random_state = None, shuffle = False)
###Output
_____no_output_____
###Markdown
Task 1. As you already know from the lectures, **boosting** is a method of building compositions of base algorithms by sequentially adding a new algorithm to the current composition with some coefficient. Gradient boosting trains each new algorithm to approximate the anti-gradient of the error with respect to the composition's predictions on the training set. Just as in minimizing functions with gradient descent, in gradient boosting we adjust the composition by changing the algorithm in the direction of the error's anti-gradient. Use the formula from the lectures that defines the training-set targets on which the new algorithm should be trained (it is really just the gradient of the error written out in a bit more detail), and derive its particular form when the loss function `L` is the squared deviation of the composition's prediction `a(x)` from the correct answer `y` at a given `x`. If you have not computed a derivative by hand in a while, a table of derivatives of elementary functions (easy to find online) and the chain rule will help. After differentiating the square you will get a factor of 2; since we will be choosing the coefficient with which the new base algorithm is added anyway, ignore this factor in the rest of the algorithm.
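Concretely, for the squared loss $L(y, z) = (y - z)^2$, where $z = a(x)$ is the current composition's prediction, we get $\frac{\partial L}{\partial z} = -2(y - z)$, so the anti-gradient is $2(y - z)$; dropping the factor of 2 as suggested above, each new tree is simply fit to the residuals $y - z$, which is what `L_der` below returns.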
###Code
def L_der(y_train, z):
return (y_train - z)
###Output
_____no_output_____
###Markdown
Task 2. Create an array for `DecisionTreeRegressor` objects (we will use them as base algorithms) and one for real numbers (these will be the coefficients in front of the base algorithms). In a loop, sequentially train 50 decision trees with parameters `max_depth=5` and `random_state=42` (all other parameters at their defaults). Boosting often uses hundreds or thousands of trees, but we limit ourselves to 50 so that the algorithm runs faster and is easier to debug (the goal of the assignment is to understand how the method works). Every tree should be trained on the same set of objects, but the targets the tree learns to predict change according to the rule derived in task 1. To start with, always take the coefficient equal to 0.9. It is usually justified to pick a much smaller coefficient, around 0.05 or 0.1, but since our toy example on a standard dataset uses only 50 trees, we take a larger step for now. While implementing the training you will need a function that computes the prediction of the composition of trees built so far on a sample `X`:```def gbm_predict(X): return [sum([coeff * algo.predict([x])[0] for algo, coeff in zip(base_algorithms_list, coefficients_list)]) for x in X](here base_algorithms_list is the list of base algorithms and coefficients_list is the list of coefficients in front of the algorithms)```The same function will help you obtain predictions on the hold-out set and assess the quality of your algorithm with `mean_squared_error` from `sklearn.metrics`. Raise the result to the power 0.5 to get the `RMSE`. The resulting `RMSE` value is the **answer for part 2**.
###Code
def gbm_predict(X):
return [sum([coeff * algo.predict([x])[0] for algo, coeff in zip(dtr_a, coef_a)]) for x in X]
dtr_a = []
coef_a = []
z = np.zeros((y_train.shape[0]))
for i in range (0, 50):
coef_a.append(0.9)
dtr = tree.DecisionTreeRegressor(random_state = 42, max_depth = 5)
dtr.fit(X_train, L_der(y_train, z))
dtr_a.append(dtr)
z = gbm_predict(X_train)
alg_pred = gbm_predict(X_test)
alg_mse = np.sqrt(metrics.mean_squared_error(y_test, alg_pred))
print(alg_mse)
with open('answer2.txt', 'w') as fout:
fout.write(str(alg_mse))
###Output
_____no_output_____
###Markdown
Task 3. You may also worry that, moving with a constant step, the predictions on the training set change too sharply near the error minimum and overshoot it. Try decreasing the weight in front of each algorithm at every subsequent iteration using the formula `0.9 / (1.0 + i)`, where `i` is the iteration number (from 0 to 49). Use the resulting quality of the algorithm as the **answer for part 3**. In practice, the following step-selection strategy is often used: once an algorithm has been chosen, its coefficient is tuned with a numerical optimization method so that the deviation from the correct answers is minimal. We will not ask you to implement this for the assignment, but we recommend studying this strategy and implementing it for yourself when you get the chance.
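As an illustration of that line-search strategy (a hedged sketch only, not required for the answer; `loss_for_coef` and `best_c` are names introduced here), the coefficient of a freshly fitted tree could be chosen with `scipy.optimize.minimize_scalar`:
```
from scipy.optimize import minimize_scalar

def loss_for_coef(c, new_tree):
    current = np.array(gbm_predict(X_train))            # prediction of the composition built so far
    candidate = current + c * new_tree.predict(X_train) # add the new tree with weight c
    return ((y_train - candidate) ** 2).mean()          # squared loss on the training set

# inside the training loop, after fitting `dtr` on the residuals, instead of appending a fixed 0.9:
# best_c = minimize_scalar(loss_for_coef, bounds=(0, 1), args=(dtr,), method='bounded').x
# coef_a.append(best_c)
```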
###Code
dtr_a = []
coef_a = []
z = np.zeros((y_train.shape[0]))
for i in range (0, 50):
coef_a.append(0.9/(1 + i))
dtr = tree.DecisionTreeRegressor(random_state = 42, max_depth = 5)
dtr.fit(X_train, L_der(y_train, z))
dtr_a.append(dtr)
z = gbm_predict(X_train)
alg_pred = gbm_predict(X_test)
alg_mse = np.sqrt(metrics.mean_squared_error(y_test, alg_pred))
print(alg_mse)
with open('answer3.txt', 'w') as fout:
fout.write(str(alg_mse))
###Output
_____no_output_____
###Markdown
Task 4. The method you have implemented, gradient boosting over trees, is very popular in machine learning. It is available both in `sklearn` itself and in the third-party `XGBoost` library, which has its own Python interface. In practice `XGBoost` works noticeably better than `GradientBoostingRegressor` from `sklearn`, but for this assignment you may use either implementation. Investigate whether gradient boosting overfits as the number of iterations grows (and think about why), and also as the depth of the trees grows. Based on your observations, write down, separated by spaces and in increasing order, the numbers of the correct statements among those below (this will be the **answer for part 4**): 1. As the number of trees increases, from some point on the quality of gradient boosting does not change substantially. 2. As the number of trees increases, from some point on gradient boosting starts to overfit. 3. As the depth of the trees increases, from some point on the quality of gradient boosting on the test set starts to deteriorate. 4. As the depth of the trees increases, from some point on the quality of gradient boosting stops changing substantially.
###Code
n_trees = [1] + list(range(10, 105, 5))
xgb_scoring = []
for n_tree in n_trees:
estimator = xgb.XGBRegressor(learning_rate = 0.1, max_depth = 5, n_estimators = n_tree, min_child_weight = 3)
score = model_selection.cross_val_score(estimator, X, y, scoring = 'neg_mean_squared_error', cv = 4)
xgb_scoring.append(score)
xgb_scoring = np.asmatrix(xgb_scoring)
xgb_scoring = np.sqrt(-xgb_scoring)
plt.plot(n_trees, xgb_scoring.mean(axis = 1), marker='.', label='XGBoost')
plt.grid(True)
plt.xlabel('n_trees')
plt.ylabel('score')
plt.title('Accuracy score')
plt.legend(loc='lower right')
metrics.SCORERS.keys()
###Output
_____no_output_____
###Markdown
Task 5. Compare the quality obtained with gradient boosting to the quality of linear regression. To do this, train `LinearRegression` from `sklearn.linear_model` (with default parameters) on the training set and evaluate the `RMSE` of its predictions on the test set. The resulting quality is the answer for **part 5**. In this example the simple model's quality should turn out worse, but keep in mind that this is not always the case. In the assignments for this course you will also encounter an example of the opposite situation.
###Code
from sklearn.linear_model import LinearRegression
lin_est = LinearRegression()
lin_est.fit(X_train, y_train)
pred = lin_est.predict(X_test)
lin_mse = np.sqrt(metrics.mean_squared_error(y_test, pred))
lin_mse
with open('answer5.txt', 'w') as fout:
fout.write(str(lin_mse))
p = 0.75
idx = int(p * X.shape[0]) + 1
X_train, X_test = np.split(X, [idx])
y_train, y_test = np.split(y, [idx])
X_train
X
X_train
###Output
_____no_output_____
|
spotify_proj/.ipynb_checkpoints/band_members_over_time-checkpoint.ipynb
|
###Markdown
In progress: input (or scrape) band members' active dates (ideally down to the day of year, or at least the month) and record releases. Plot this like Wikipedia does, but interactive: select a time frame, note instrument changes (for example, the bassist leaves on day X and a new bassist joins on day Y). Not every band has such a graph, so this could be an alternative. The Spotify API seems like the best way to obtain album release dates that include month and day; combine it with scraping for band members?
###Code
import requests
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import matplotlib.dates as mdates
import seaborn as sns
import dotenv
# from dotenv import load_dotenv
import os
import numpy as np
### Load in Spotify API credentials here
os.chdir('C:\\Users\\dwagn\\Desktop')
dotenv.load_dotenv()
CLIENT_ID = os.getenv('spotify-client-id')
CLIENT_SECRET = os.getenv('spotify-client-secret')
os.chdir('C:\\Users\\dwagn\\git\\projects')
### Setup
AUTH_URL = 'https://accounts.spotify.com/api/token'
auth_response = requests.post(AUTH_URL, {
'grant_type': 'client_credentials',
'client_id': CLIENT_ID,
'client_secret': CLIENT_SECRET
})
auth_response_data = auth_response.json()
access_token = auth_response_data['access_token']
if auth_response.status_code == 200:
    print('Successfully accessed API')
else:
    raise Exception('API credentials rejected.')
### Uncomment below for manual entry
# artist_share_url = input('Paste artist spotify share link: ')
# artist_id = artist_share_url.split('/')[4].split('?')[0]
artist_id = '2hl0xAkS2AIRAu23TVMBG1'
headers = {'Authorization': 'Bearer {}'.format(access_token)}
url = 'https://api.spotify.com/v1/'
artist_name = requests.get(url + 'artists/' + artist_id, headers=headers).json()['name']
albums = requests.get(url + 'artists/' + artist_id + '/albums',
headers=headers,
params={'include_groups': 'album', 'limit': 50}).json()
album_names_dates = {}
for album in albums['items']:
album_names_dates[album['name']] = album['release_date']
albums_df = pd.DataFrame.from_dict(album_names_dates, orient = 'index').reset_index()
albums_df.columns = ['album', 'release_date']
albums_df.release_date = pd.to_datetime(albums_df.release_date)
albums_df
# stem([locs,] heads, linefmt=None, markerfmt=None, basefmt=None)
dates = albums_df.release_date
names = albums_df.album
levels = np.tile([-9, 9, -7, 7, -5, 5, -3, 3, -1, 1],
int(np.ceil(len(dates)/6)))[:len(dates)]
fig, ax = plt.subplots(figsize=(20, 8), constrained_layout=True)
ax.set(title='Stem plot test')
ax.vlines(dates, 0, levels, color='tab:red')
ax.plot(dates, np.zeros_like(dates), '-o',
color='k', markerfacecolor='w')
for d, l, r in zip(dates, levels, names):
ax.annotate(r, xy=(d, l),
xytext=(-3, np.sign(l)*3), textcoords="offset points",
horizontalalignment="right",
verticalalignment="bottom" if l > 0 else "top")
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=12)) # by year
ax.xaxis.set_major_formatter(mdates.DateFormatter("%b %Y"))
plt.setp(ax.get_xticklabels(), rotation=90, ha="right"); # semicolon stopping label output
ax.margins(y=0.1, x=0.1)
ax.get_yaxis().set_visible(False)
plt.show()
# Scrape Wikipedia Band Members Dates
import requests
import csv
from bs4 import BeautifulSoup
import datetime
headers = {'user-agent': 'Contact: [email protected] (Language=Python 3.8.2; Platform=Linux(MX-19.4 / 31))'}
url_band_name = 'Gwar'
url = 'https://en.wikipedia.org/wiki/{}'.format(url_band_name)
result = requests.get(url, headers = headers)
if result.status_code == 200:
    print('Successfully accessed {} webpage'.format(url_band_name))
else:
    raise Exception('Error accessing {} webpage'.format(url_band_name))
src = result.content
soup = BeautifulSoup(src, 'html.parser')
# Check if tables are on main band page
try:
current_members = soup.find(text=lambda s: "current members" in s.lower()) \
.findNext('ul').get_text().split('\n')
print('Found current members')
except:
print('No current members listed')
try:
former_members_main_page = soup.find(text = lambda s: 'former members' in s.lower()) \
.findNext('ul').get_text().split('\n')
print('Found former members')
except:
print('No former members listed on this page')
# If applicable, get link to full members page
if 'members' not in str(soup.find(id="toc").get_text().lower()):
print('Warning: Band Members section not found on this page')
if all(x in soup.get_text() for x in ['Main article: List of', 'band members']):
members_link = soup.find(text=lambda s: "main article:" in s.lower()) \
.findNextSibling('a', href=True).get('href') # get link for full members
print('Full members in link: {}'.format(members_link))
# mems = soup.find(id='Members')
# mems.next_sibling()
url = 'https://en.wikipedia.org{}'.format(members_link)
result = requests.get(url, headers = headers)
if result.status_code == 200:
    print('Successfully accessed {} webpage'.format(members_link))
else:
    raise Exception('Error accessing {} webpage'.format(members_link))
src = result.content
member_soup = BeautifulSoup(src, 'html.parser')
split_names = [mem.split(' – ') for mem in current_members]
split_names = [item for sublist in split_names for item in sublist] # back to single list
split_dates = [d.split('(') for d in split_names[1::2]]
split_dates = [item for sublist in split_dates for item in sublist]
instr = split_dates[::2]
dates = split_dates[1::2]
names = split_names[::2]
# pd.DataFrame({'names' : names,
# 'instruments' : instr,
# 'dates' : dates})
try:
tbl1 = member_soup.find('table', attrs={'class' : 'wikitable'}) # current members
tbl1_title = tbl1.previous_sibling.previous_sibling.get_text().strip('[edit]')
except:
print('No first table')
try:
tbl2 = tbl1.findNext('table', attrs={'class' : 'wikitable'}) # former members
tbl2_title = tbl2.previous_sibling.previous_sibling.get_text().strip('[edit]')
except:
print('No second table')
curr_df = pd.read_html(tbl1.prettify())[0]
curr_df['Activity'] = 'Current'
retire_df = pd.read_html(tbl2.prettify())[0]
retire_df['Activity'] = 'Former'
full_df = curr_df.append(retire_df).reset_index().drop(['Image', 'index'], axis=1)
full_df.head(10)
# Keep only universal columns
df = full_df[['Name', 'Years active', 'Instruments', 'Activity']] \
.applymap(lambda x: x.replace(' ', ', '))
df['dates_list'] = [x.split(', ') for x in df['Years active']]
# remove extra garbage
df['dates_list'] = df['dates_list'].apply(lambda x: list \
(map \
(lambda x: '' if str(x)[0:1] == "(" else x, x)))
df.head(5)
df.dates_list[22][-1]
df.dates_list[22][-1]
from datetime import datetime
# for yr in df['Years active'][0].split(' ')[::2]: #separate
# datetime.strptime(yr, '%Y-%Y')
# a = df['Years active'][0].split(' ')[::2][0]
# datetime.strptime(a, '%Y')
[x[-1] for x in df.dates_list]
import pylab
import numpy as np
import datetime
from matplotlib.dates import YearLocator, MonthLocator, DateFormatter
plt.rcdefaults()
fig, ax = plt.subplots()
ax.barh(df['Name'][0], df['Years active'][0], align='center')
years = YearLocator() # every year
ax = plt.gca()
plt.setp(ax.get_xticklabels(), rotation=45)
plt.show()
import pylab
import numpy as np
import datetime
from matplotlib.dates import YearLocator, MonthLocator, DateFormatter
plt.rcdefaults()
fig, ax = plt.subplots()
ax.barh(df['Name'], df['Years active'], align='center')
years = YearLocator() # every year
ax = plt.gca()
ax.xaxis.set_major_locator(years)
plt.setp(ax.get_xticklabels(), rotation=45)
plt.show()
# df_memspage = pd.DataFrame({'names' : names,
# 'dates' : dts,
# 'instruments' : inst})
# Retired Function (using prettify now)
names, dts, inst, act = [], [], [], [] # init lists before function
def memsTableScrape(table, activity):
for row in table.find_all('tr')[1:]:
name = row.find_all('td')[1].get_text()
dates = row.find_all('td')[2]
instruments = row.find_all('td')[3]
try:
dts2 = []
for i in dates.find('ul'):
dts2.append(i.get_text())
except:
pass
try:
inst2 = []
for i in instruments.find('ul'):
inst2.append(i.get_text())
except:
pass
names.append(name)
if len(dts2) > 0:
dts.append(dts2)
else:
dts.append(dates.get_text())
if len(inst2) > 0:
inst.append(inst2)
else:
inst.append(instruments)
act.append(activity)
# memsTableScrape(tbl1, 'current')
# # df_memspage = df_memspage.style.set_properties(subset=['instruments'], **{'width': '300px'})
# # Cast lists to strings
# df_memspage['dates'] = df_memspage.dates.apply(lambda row: ', '.join(row) if type(row) is list else row)
# df_memspage['instruments'] = df_memspage.instruments.apply(lambda row: ', '.join(row) if type(row) is list else row)
# df_memspage['names'] = df_memspage['names'].map(lambda x: x.rstrip('\n'))
# df_memspage['dates'] = df_memspage['dates'].map(lambda x: x.rstrip('\n'))
# df_memspage['instruments'] = df_memspage['instruments'].map(lambda x: x.strip('\n'))
# df_memspage['activity'] = act
### Retired function, after scraping into lists directly
def formatDates(row):
dl = df_memspage['dates'][row]
if '–' in dl:
dl = dl.split('–')
elif '-' in dl:
dl = dl.split('-')
print(dl)
date_list = []
date_list.append(dl[0]) # add first date
if len(dl) > 2:
for d in dl[1:-1]:
# splits the middle dates in half. Assumes each is even number of ints
h1 = d[:len(d)//2]
            h2 = d[len(d)//2:]
date_list.append(h1)
date_list.append(h2)
date_list.append(dl[-1]) # add last date
zipped_dl = list(zip(date_list[0::2], date_list[1::2])) # pairs
df_memspage['dates'][row] = zipped_dl
return zipped_dl
###Output
_____no_output_____
|
BrnoTeaching2019/notebooks/(Questions) Day 4 and 5 - Neural Networks.ipynb
|
###Markdown
ESTIMATED TOTAL MEMORY USAGE: 2700 MB (but peaks will hit ~20 GB)
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pylab as plt
import pandas as pd
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor
import copy
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
###Output
_____no_output_____
###Markdown
Goals of this notebook. We want to introduce the basics of neural networks and deep learning. Modern deep learning is a huge field and it's impossible to cover even all the significant developments in the last 5 years here. But the basics are straightforward. One big caveat: deep learning is a rapidly evolving field. There are new developments in neural network architectures, novel applications, better optimization techniques, theoretical results justifying why something works etc. daily. It's a great opportunity to get involved if you find research interesting, and there are great online communities (pytorch, fast.ai, paperswithcode, pysyft) that you should get involved with.**Note**: Unlike the previous notebooks, this notebook has very few questions. You should study the code, tweak the data, the parameters, and poke the models to understand what's going on. **Notes**: You can install extensions (google for nbextensions) with Jupyter notebooks. I tend to use nbresuse to display memory usage in the top right corner, which really helps. To run a cell, press: "Shift + Enter". To add a cell before your current cell, press: "Esc + a". To add a cell after your current cell, press: "Esc + b". To delete a cell, press: "Esc + x". To be able to edit a cell, press: "Enter". To see more documentation of a function, type ?function_name. To see source code, type ??function_name. To quickly see possible arguments for a function, type "Shift + Tab" after typing the function name. Esc and Enter take you into different modes. Press "Esc + h" to see all shortcuts. Synthetic/Artificial Datasets We covered the basics of neural networks in the lecture. We also saw applications to two synthetic datasets. The goal in this section is to replicate those results and get a feel for using pytorch. Classification
###Code
def generate_binary_data(N_examples=1000, seed=None):
if seed is not None:
np.random.seed(seed)
features = []
target = []
for i in range(N_examples):
#class = 0
r = np.random.uniform()
theta = np.random.uniform(0, 2*np.pi)
features.append([r*np.cos(theta), r*np.sin(theta)])
target.append(0)
#class = 1
r = 3 + np.random.uniform()
theta = np.random.uniform(0, 2*np.pi)
features.append([r*np.cos(theta), r*np.sin(theta)])
target.append(1)
features = np.array(features)
target = np.array(target)
return features, target
features, target = generate_binary_data(seed=100)
def plot_binary_data(features, target):
plt.figure(figsize=(10,10))
plt.plot(features[target==0][:,0], features[target==0][:,1], 'p', color='r', label='0')
plt.plot(features[target==1][:,0], features[target==1][:,1], 'p', color='g', label='1')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plot_binary_data(features, target)
###Output
_____no_output_____
###Markdown
We have two features here - x and y. There is a binary target variable that we need to predict. This is essentially the dataset from the logistic regression discussion. Logistic regression will not do well here given that the data is not linearly separable. Transforming the data so we have two features:$$r^2 = x^2 + y^2$$and$$\theta = \arctan(\frac{y}{x})$$would make it very easy to use logistic regression (or just a cut at $r = 2$) to separate the two classes, but while it is easy for us to visualize the data and guess at the transformation, in high dimensions we can't follow the same process. Let's implement a feed-forward neural network that takes the two features as input and predicts the probability of being in class 1 as output. Architecture Definition
###Code
class ClassifierNet(nn.Module): #inherit from nn.Module to define your own architecture
def __init__(self, N_inputs, N_outputs, N_hidden_layers, N_hidden_nodes, activation, output_activation):
super(ClassifierNet, self).__init__()
self.N_inputs = N_inputs #2 in our case
self.N_outputs = N_outputs #1 in our case but can be higher for multi-class classification
self.N_hidden_layers = N_hidden_layers #we'll start by using one hidden layer
self.N_hidden_nodes = N_hidden_nodes #number of nodes in each hidden layer - can extend to passing a list
#Define layers below - pytorch has a lot of layers pre-defined
#use nn.ModuleList or nn.DictList instead of [] or {} - more explanations below
self.layer_list = nn.ModuleList([]) #use just as a python list
for n in range(N_hidden_layers):
if n==0:
self.layer_list.append(nn.Linear(N_inputs, N_hidden_nodes))
else:
self.layer_list.append(nn.Linear(N_hidden_nodes, N_hidden_nodes))
self.output_layer = nn.Linear(N_hidden_nodes, N_outputs)
self.activation = activation #activations at inner nodes
self.output_activation = output_activation #activation at last layer (depends on your problem)
def forward(self, inp):
'''
every neural net in pytorch has its own forward function
this function defines how data flows through the architecture from input to output i.e. the forward propagation part
'''
out = inp
for layer in self.layer_list:
out = layer(out) #calls forward function for each layer (already implemented for us)
out = self.activation(out) #non-linear activation
#pass activations through last/output layer
out = self.output_layer(out)
if self.output_activation is not None:
pred = self.output_activation(out)
else:
pred = out
return pred
###Output
_____no_output_____
###Markdown
There are several ways of specifying a neural net architecture in pytorch. You can work at a high level of abstraction by just listing the layers that you want, or get into the fine details by constructing your own layers (as classes) that can be used in ClassifierNet above. How does pytorch work? When you define an architecture like the one above, pytorch constructs a graph (nodes and edges) where the nodes are operations on multi-indexed arrays (called tensors).
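For example (a minimal sketch, not used below), the same one-hidden-layer classifier could be written at the higher level of abstraction with `nn.Sequential`:
```
# Minimal sketch: the same 2 -> 2 -> 1 classifier expressed with nn.Sequential
sequential_net = nn.Sequential(
    nn.Linear(2, 2),   # input layer -> hidden layer
    nn.Sigmoid(),      # hidden activation
    nn.Linear(2, 1),   # hidden layer -> output node
    nn.Sigmoid(),      # output probability between 0 and 1
)
```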
###Code
N_inputs = 2
N_outputs = 1
N_hidden_layers = 1
N_hidden_nodes = 2
activation = nn.Sigmoid()
output_activation = nn.Sigmoid() #we want one probability between 0-1
net = ClassifierNet(N_inputs,
N_outputs,
N_hidden_layers,
N_hidden_nodes,
activation,
output_activation)
###Output
_____no_output_____
###Markdown
Training **Loss function** We first need to pick our loss function. As with other binary classification problems (including logistic regression), we'll use binary cross-entropy:$$\text{Loss, } L = -\sum_{i=1}^{N} \left[ y_i \log(p_i) + (1-y_i) \log(1-p_i) \right]$$where $y_i \in \{0,1\}$ are the labels and $p_i \in [0,1]$ are the probability predictions.
###Code
#look at all available losses (you can always write your own)
torch.nn.*Loss?
criterion = nn.BCELoss()
#get a feel for the loss function
#target = 1 (label = 1)
print(criterion(torch.tensor(1e-2), torch.tensor(1.))) #pred prob = 1e-2 -> BAD
print(criterion(torch.tensor(0.3), torch.tensor(1.))) #pred prob = 0.3 -> BAd
print(criterion(torch.tensor(0.5), torch.tensor(1.))) #pred prob = 0.5 -> Bad
print(criterion(torch.tensor(1.), torch.tensor(1.))) #pred prob = 1.0 -> GREAT!
###Output
_____no_output_____
###Markdown
**Optimizer**: So we have the data, the neural net architecture, and a loss function to measure how well the model does on our task. We also need a way to do gradient descent. Recall, we use gradient descent to minimize the loss by computing the first derivative (gradients) and taking a step in the direction opposite (since we are minimizing) to the gradient:$$w_{t+1} = w_{t} - \eta \frac{\partial L}{\partial w_{t}}$$where $w_t$ = weight at time-step t, $L$ = loss, $\eta$ = learning rate. For our neural network, we first need to calculate the gradients. Thankfully, this is done automatically by pytorch using a procedure called **backpropagation**. If you are interested in more calculation details, please check "automatic differentiation" and an analytical calculation for a feed-forward network (https://treeinrandomforest.github.io/deep-learning/2018/10/30/backpropagation.html). The gradients are calculated by calling a function **backward** on the network, as we'll see below. Once the gradients are calculated, we need to update the weights. In practice, there are many heuristics/variants of the update step above that lead to better optimization behavior. A great resource to dive into details is https://ruder.io/optimizing-gradient-descent/. We won't get into the details here. We'll choose what's called the **Adam** optimizer.
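To make the update rule concrete, here is a minimal hand-written sketch of one plain gradient-descent step; this is roughly what `optimizer.step()` automates for us, ignoring Adam's extra machinery, and it assumes `features` and `target` are already float tensors (as they will be a few cells below).
```
# A hand-written version of one plain gradient descent step (sketch only)
loss = criterion(net(features).reshape(target.shape), target)
loss.backward()                     # fills p.grad for every registered parameter
with torch.no_grad():
    for p in net.parameters():
        p -= 1e-2 * p.grad          # w <- w - eta * dL/dw
        p.grad.zero_()
```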
###Code
optim.*?
optimizer = optim.Adam(net.parameters(), lr=1e-2)
###Output
_____no_output_____
###Markdown
We picked a constant learning rate here (which is adjusted internally by Adam) and also passed all the tunable weights in the network by using: net.parameters()
###Code
list(net.parameters())
###Output
_____no_output_____
###Markdown
There are 9 free parameters:* A 2x2 matrix (4 parameters) mapping the input layer to the 1 hidden layer.* A 2x1 matrix (2 parameters) mapping the hidden layer to the output layer with one node.* 2 biases for the 2 nodes in the hidden layer.* 1 bias for the output node in the output layer.This is a good place to explain why we need to use nn.ModuleList. If we had just used a vanilla python list, net.parameters() would only show weights that are explicitly defined in our net architecture. The weights and biases associated with the layers would NOT show up in net.parameters(). This process of a module higher up in the hierarchy (ClassifierNet) subsuming the weights and biases of modules lower in the hierarchy (layers) is called **registering**. ModuleList ensures that all the weights/biases are registered as weights and biases of ClassifierNet. Let's combine all these elements and train our first neural net.
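You can verify this count directly from the registered parameters:
```
# Should print 9: (2x2 + 2) weights and biases for the hidden layer, (2x1 + 1) for the output layer
sum(p.numel() for p in net.parameters())
```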
###Code
#convert features and target to torch tensors
features = torch.from_numpy(features)
target = torch.from_numpy(target)
#if have gpu, throw the model, features and labels on it
net = net.to(device)
features = features.to(device).float()
target = target.to(device).float()
###Output
_____no_output_____
###Markdown
We need to do the following steps now:* Compute the gradients for our dataset.* Do gradient descent and update the weights.* Repeat till ??The problem is there's no way of knowing when we have converged or are close to the minimum of the loss function. In practice, this means we keep repeating the process above and monitor the loss as well as performance on a hold-out set. When we start over-fitting on the training set, we stop. There are various modifications to this procedure but this is the essence of what we are doing.Each pass through the whole dataset is called an **epoch**.
###Code
N_epochs = 100
for epoch in range(N_epochs):
    out = net(features).reshape(target.shape) #make predictions on the inputs, flattened to match the target's shape
    loss = criterion(out, target) #compute loss on our predictions
optimizer.zero_grad() #set all gradients to 0
loss.backward() #backprop to compute gradients
optimizer.step() #update the weights
if epoch % 10 == 0:
print(f'Loss = {loss:.4f}')
###Output
_____no_output_____
###Markdown
Let's combine all these elements into a function
###Code
def train_model(features, target, model, lr, N_epochs, criterion=nn.BCELoss(), shuffle=False):
#criterion = nn.BCELoss() #binary cross-entropy loss as before
optimizer = torch.optim.Adam(model.parameters(), lr=lr) #Adam optimizer
#if have gpu, throw the model, features and labels on it
model = model.to(device)
features = features.to(device)
target = target.to(device)
for epoch in range(N_epochs):
if shuffle: #should have no effect on gradients in this case
indices = torch.randperm(len(features))
features_shuffled = features[indices]
target_shuffled = target[indices]
else:
features_shuffled = features
target_shuffled = target
        out = model(features_shuffled).reshape(target_shuffled.shape) #match the target's shape so the loss is computed element-wise
        loss = criterion(out, target_shuffled)
if epoch % 1000 == 0:
print(f'epoch = {epoch} loss = {loss}')
optimizer.zero_grad()
loss.backward()
optimizer.step()
pred = model(features_shuffled).reshape(len(target))
pred[pred>0.5] = 1
pred[pred<=0.5] = 0
#print(f'Accuracy = {accuracy}')
model = model.to('cpu')
features = features.to('cpu')
target = target.to('cpu')
return model
###Output
_____no_output_____
###Markdown
**Exercise**: Train the model and vary the number of hidden nodes and see what happens to the loss. Can you explain this behavior?
###Code
N_inputs = 2
N_outputs = 1
N_hidden_layers = 1
N_hidden_nodes = 1 #<--- play with this
activation = nn.Sigmoid()
output_activation = nn.Sigmoid() #we want one probability between 0-1
net = ClassifierNet(N_inputs,
N_outputs,
N_hidden_layers,
N_hidden_nodes,
activation,
output_activation)
net = train_model(features, target, net, 1e-3, 10000)
N_inputs = 2
N_outputs = 1
N_hidden_layers = 1
N_hidden_nodes = 2 #<--- play with this
activation = nn.Sigmoid()
output_activation = nn.Sigmoid() #we want one probability between 0-1
net = ClassifierNet(N_inputs,
N_outputs,
N_hidden_layers,
N_hidden_nodes,
activation,
output_activation)
net = train_model(features, target, net, 1e-3, 10000)
N_inputs = 2
N_outputs = 1
N_hidden_layers = 1
N_hidden_nodes = 3 #<--- play with this
activation = nn.Sigmoid()
output_activation = nn.Sigmoid() #we want one probability between 0-1
net = ClassifierNet(N_inputs,
N_outputs,
N_hidden_layers,
N_hidden_nodes,
activation,
output_activation)
net = train_model(features, target, net, 1e-3, 10000)
###Output
_____no_output_____
###Markdown
There seems to be some "magic" behavior when we increase the number of nodes in the first (and only) hidden layer from 2 to 3. Loss suddenly goes down dramatically. At this stage, we should explore why that's happening.For every node in the hidden layer, we have a mapping from the input to that node:$$\sigma(w_1 x + w_2 y + b)$$where $w_1, w_2, b$ are specific to that hidden node. We can plot the decision line in this case:$$w_1 x + w_2 y + b = 0$$Unlike logistic regression, this is not actually a decision line. Points on one side are not classified as 0 and points on the other side as 1 (if the threshold = 0.5). Instead this line should be thought of as one defining a new coordinate-system. Instead of x and y coordinates, every hidden node induces a straight line and a new coordinate, say $\alpha_i$. So if we have 3 hidden nodes, we are mapping the 2-dimensional input space into a 3-dimensional space where the coordinates $\alpha_1, \alpha_2, \alpha_3$ for each point depend on which side of the 3 lines induced as mentioned above, it lies.
###Code
params = list(net.parameters())
print(params[0]) #3x2 matrix
print(params[1]) #3 biases
features = features.detach().cpu().numpy() #detach from pytorch computational graph, bring back to cpu, convert to numpy
target = target.detach().cpu().numpy()
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot()
#plot raw data
ax.plot(features[target==0][:,0], features[target==0][:,1], 'p', color='r', label='0')
ax.plot(features[target==1][:,0], features[target==1][:,1], 'p', color='g', label='1')
plt.xlabel('x')
plt.ylabel('y')
#get weights and biases
weights = params[0].detach().numpy()
biases = params[1].detach().numpy()
#plot straight lines
x_min, x_max = features[:,0].min(), features[:,0].max()
y_lim_min, y_lim_max = features[:,1].min(), features[:,1].max()
for i in range(weights.shape[0]): #loop over each hidden node in the one hidden layer
coef = weights[i]
intercept = biases[i]
y_min = (-intercept - coef[0]*x_min)/coef[1]
y_max = (-intercept - coef[0]*x_max)/coef[1]
ax.plot([x_min, x_max], [y_min, y_max])
ax.set_xlim(x_min, x_max)
ax.set_ylim(y_lim_min, y_lim_max)
ax.legend(framealpha=0)
###Output
_____no_output_____
###Markdown
This is the plot we showed in the lecture. For every hidden node in the hidden layer, we have a straight line. The colors of the three lines above are orange, green and blue and that's what we'll call our new coordinates.Suppose you pick a point in the red region:* It lies to the *right* of the orange line* It lies to the *bottom* of the green line* It lies to the *top* of the blue line.(These directions might change because of inherent randomness during training - weight initializations here).On the other hand, we have **6** green regions. If you start walking clockwise from the top green section, every time you cross a straight line, you walk into a new region. Each time you walk into a new region, you flip the coordinate of one of the 3 lines. Either you go from *right* to *left* of the orange line, *bottom* to *top* of the green line or *top* to *bottom* of the blue line.So instead of describing each point by two coordinates (x, y), we can describe it by (orange status, green status, blue status). We happen to have 7 such regions here - with 1 being purely occupied by the red points and the other 6 by green points. This might become clearer from a 3-dimensional plot.
###Code
from mpl_toolkits.mplot3d import Axes3D
#get hidden layer activations for all inputs
features_layer1_3d = net.activation(net.layer_list[0](torch.tensor(features))).detach().numpy()
print(features_layer1_3d[0:10])
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection='3d')
ax.plot(features_layer1_3d[target==0][:,0], features_layer1_3d[target==0][:,1], features_layer1_3d[target==0][:,2], 'p', color ='r', label='0')
ax.plot(features_layer1_3d[target==1][:,0], features_layer1_3d[target==1][:,1], features_layer1_3d[target==1][:,2], 'p', color ='g', label='1')
ax.legend(framealpha=0)
###Output
_____no_output_____
###Markdown
At this stage, a simple linear classifier can draw a linear decision boundary (a plane) to separate the red points from the green points. Also, these points lie in the unit cube (cube with sides of length=1) since we are using sigmoid activations. Whenever the activations get saturated (close to 0 or 1), then we see points on the edges and corners of the cube. **Question**: Switch the activation from sigmoid to relu (nn.ReLU()). Does the loss still essentially become zero on the train set? If not, try increasing N_hidden_nodes. At what point does the loss actually become close to 0?
###Code
N_inputs = 2
N_outputs = 1
N_hidden_layers = 1
N_hidden_nodes = 5 #<---- play with this
activation = nn.ReLU()
output_activation = nn.Sigmoid() #we want one probability between 0-1
net = ClassifierNet(N_inputs,
N_outputs,
N_hidden_layers,
N_hidden_nodes,
activation,
output_activation)
features = torch.tensor(features)
target = torch.tensor(target)
net = train_model(features, target, net, 1e-3, 10000)
###Output
_____no_output_____
###Markdown
**Question**: Remake the 3d plot but by trying 3 coordinates out of the N_hidden_nodes coordinates you found above?
###Code
features = features.detach().cpu().numpy() #detach from pytorch computational graph, bring back to cpu, convert to numpy
target = target.detach().cpu().numpy()
#get hidden layer activations for all inputs
features_layer1_3d = net.activation(net.layer_list[0](torch.tensor(features))).detach().numpy()
print(features_layer1_3d[0:10])
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection='3d')
COORD1 = 0
COORD2 = 1
COORD3 = 2
ax.plot(features_layer1_3d[target==0][:,COORD1], features_layer1_3d[target==0][:,COORD2], features_layer1_3d[target==0][:,COORD3], 'p', color ='r', label='0')
ax.plot(features_layer1_3d[target==1][:,COORD1], features_layer1_3d[target==1][:,COORD2], features_layer1_3d[target==1][:,COORD3], 'p', color ='g', label='1')
ax.legend(framealpha=0)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection='3d')
COORD1 = 0
COORD2 = 1
COORD3 = 3
ax.plot(features_layer1_3d[target==0][:,COORD1], features_layer1_3d[target==0][:,COORD2], features_layer1_3d[target==0][:,COORD3], 'p', color ='r', label='0')
ax.plot(features_layer1_3d[target==1][:,COORD1], features_layer1_3d[target==1][:,COORD2], features_layer1_3d[target==1][:,COORD3], 'p', color ='g', label='1')
ax.legend(framealpha=0)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection='3d')
COORD1 = 0
COORD2 = 2
COORD3 = 3
ax.plot(features_layer1_3d[target==0][:,COORD1], features_layer1_3d[target==0][:,COORD2], features_layer1_3d[target==0][:,COORD3], 'p', color ='r', label='0')
ax.plot(features_layer1_3d[target==1][:,COORD1], features_layer1_3d[target==1][:,COORD2], features_layer1_3d[target==1][:,COORD3], 'p', color ='g', label='1')
ax.legend(framealpha=0)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection='3d')
COORD1 = 1
COORD2 = 2
COORD3 = 3
ax.plot(features_layer1_3d[target==0][:,COORD1], features_layer1_3d[target==0][:,COORD2], features_layer1_3d[target==0][:,COORD3], 'p', color ='r', label='0')
ax.plot(features_layer1_3d[target==1][:,COORD1], features_layer1_3d[target==1][:,COORD2], features_layer1_3d[target==1][:,COORD3], 'p', color ='g', label='1')
ax.legend(framealpha=0)
###Output
_____no_output_____
###Markdown
Draw all the plots
###Code
import itertools
for comb in itertools.combinations(np.arange(N_hidden_nodes), 3):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection='3d')
COORD1 = comb[0]
COORD2 = comb[1]
COORD3 = comb[2]
ax.plot(features_layer1_3d[target==0][:,COORD1], features_layer1_3d[target==0][:,COORD2], features_layer1_3d[target==0][:,COORD3], 'p', color ='r', label='0')
ax.plot(features_layer1_3d[target==1][:,COORD1], features_layer1_3d[target==1][:,COORD2], features_layer1_3d[target==1][:,COORD3], 'p', color ='g', label='1')
ax.legend(framealpha=0)
plt.title(f'COORDINATES = {comb}')
###Output
_____no_output_____
###Markdown
**Note**: Generally it is a good idea to use a linear layer for the output layer and use BCEWithLogitsLoss to avoid numerical instabilities. We will do this later for multi-class classification; a minimal sketch of the same idea for this binary problem is shown below.
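The sketch reuses `ClassifierNet` and `train_model` from above; `logit_net` is just an illustrative name, and the training call is left commented because the toy features have already been converted back to numpy arrays at this point.
```
# Keep the output layer linear and let the loss apply the sigmoid internally
logit_net = ClassifierNet(N_inputs=2, N_outputs=1, N_hidden_layers=1, N_hidden_nodes=3,
                          activation=nn.Sigmoid(), output_activation=None)
# logit_net = train_model(features, target, logit_net, 1e-3, 10000, criterion=nn.BCEWithLogitsLoss())
```
Clear variables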
###Code
features = None
features_layer1_3d = None
target = None
net = None
###Output
_____no_output_____
###Markdown
Regression
###Code
def generate_regression_data(L=10, stepsize=0.1):
x = np.arange(-L, L, stepsize)
y = np.sin(3*x) * np.exp(-x / 8.)
return x, y
def plot_regression_data(x, y):
plt.figure(figsize=(10,10))
plt.plot(x, y)
plt.xlabel('x')
plt.ylabel('y')
x, y = generate_regression_data()
plot_regression_data(x, y)
###Output
_____no_output_____
###Markdown
This is a pretty different problem in some ways. We now have one input - x and one output - y. But looked at another way, we simply change the number of inputs in our neural network to 1 and we change the output activation to be a linear function. Why linear? Because in principle, the output (y) can be unbounded i.e. any real value.We also need to change the loss function. While binary cross-entropy is appropriate for a classification problem, we need something else for a regression problem. We'll use mean-squared error:$$\frac{1}{2}(y_{\text{target}} - y_{\text{pred}})^2$$ Try modifying N_hidden_nodes from 1 through 10 and see what happens to the loss
###Code
N_inputs = 1
N_outputs = 1
N_hidden_layers = 1
N_hidden_nodes = 10 #<--- play with this
activation = nn.Sigmoid()
output_activation = None #linear output: regression targets can take any real value
net = ClassifierNet(N_inputs,
N_outputs,
N_hidden_layers,
N_hidden_nodes,
activation,
output_activation)
features = torch.tensor(x).float().reshape(len(x), 1)
target = torch.tensor(y).float().reshape(len(y), 1)
net = train_model(features, target, net, 1e-2, 20000, criterion=nn.MSELoss())
pred = net(features).cpu().detach().numpy().reshape(len(features))
plt.plot(x, y)
plt.plot(x, pred)
###Output
_____no_output_____
###Markdown
As before, we need to understand what the model is doing. As before, let's consider the mapping from the input node to one node of the hidden layer. In this case, we have the mapping:$$\sigma(w_i x + b_i)$$where $w_i, b_i$ are the weight and bias associated with each node of the hidden layer. This defines a "decision" boundary where:$$w_i x + b_i = 0$$This is just a value $\delta_{i} \equiv -\frac{b_i}{w_i}$. For each hidden node $i$, we can calculate one such threshold, $\delta_i$.As we walk along the x-axis from the left to right, we will cross each threshold one by one. On crossing each threshold, one hidden node switches i.e. goes from $0 \rightarrow 1$ or $1 \rightarrow 0$. What effect does this have on the output or prediction?Since the last layer is linear, its output is:$y = v_1 h_1 + v_2 h_2 + \ldots + v_n h_n + c$where $v_i$ are the weights from the hidden layer to the output node, $c$ is the bias on the output node, and $h_i$ are the activations on the hidden nodes. These activations can smoothly vary between 0 and 1 according to the sigmoid function.So, when we cross a threshold, one of the $h_j$ values eithers turns off or turns on. This has the effect of adding or subtracting constant $v_k$ values from the output if the kth hidden node, $h_k$ is switching on/off.This means that as we add more hidden nodes, we can divide the domain (the x values) into more fine-grained intervals that can be assigned a single value by the neural network. In practice, there is a smooth interpolation. **Question**: Suppose instead of the sigmoid activations, we used a binary threshold:$$\sigma(x) = \begin{cases}1 & x > 0 \\0 & x \leq 0\end{cases}$$then we would get a piece-wise constant prediction from our trained network. Plot that piecewise function as a function of $x$.
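A quick way to look at these thresholds for the trained network (a small sketch using the weights already stored in `net`):
```
# Minimal sketch: the x-value at which each hidden node switches, delta_i = -b_i / w_i
w = net.layer_list[0].weight.detach().numpy().reshape(-1)   # one weight per hidden node
b = net.layer_list[0].bias.detach().numpy()
thresholds = -b / w
print(np.sort(thresholds))
```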
###Code
activations = net.activation(net.layer_list[0](features))
print(activations[0:10])
binary_activations = nn.Threshold(0.5, 0)(activations)/activations
print(binary_activations[0:10])
binary_pred = net.output_layer(binary_activations)
plt.figure(figsize=(10,10))
plt.plot(x,y, label='data')
plt.plot(x, binary_pred.cpu().detach().numpy(), label='binary')
plt.plot(x, pred, color='r', label='pred')
plt.legend()
###Output
_____no_output_____
###Markdown
**Question**: Why does the left part of the function fit so well but the right side is always compromised? Hint: think of the loss function. The most likely reason is that the loss function is sensitive to the scale of the $y$ values. A 10% deviation between the y-value and the prediction near x = -10 has a larger absolute value than a 10% deviation near say, x = 5. **Question**: Can you think of ways to test this hypothesis? There are a couple of things you could do. One is to flip the function from left to right and re-train the model. In this case, the right side should start fitting better.Another option is to change the loss function to percentage error i.e.:$$\frac{1}{2} \big(\frac{y_{\text{target}} - y_{\text{pred}}}{y_{\text{target}}}\big)^2$$but this is probably much harder to optimize.
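As a sketch of the second option (not run here; `relative_mse` is a name introduced for illustration), a relative-error criterion can be passed straight to `train_model`:
```
# Mean squared *relative* error; zero-crossings of the target make this unstable in practice
def relative_mse(pred, target):
    return (((target - pred) / target) ** 2).mean()

# net = train_model(features, target, net, 1e-2, 20000, criterion=relative_mse)
```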
###Code
y = copy.copy(y[::-1])
plt.plot(x, y)
N_inputs = 1
N_outputs = 1
N_hidden_layers = 1
N_hidden_nodes = 10
activation = nn.Sigmoid()
output_activation = None #linear output: regression targets can take any real value
net = ClassifierNet(N_inputs,
N_outputs,
N_hidden_layers,
N_hidden_nodes,
activation,
output_activation)
features = torch.tensor(x).float().reshape(len(x), 1)
target = torch.tensor(y).float().reshape(len(y), 1)
net = train_model(features, target, net, 1e-2, 14000, criterion=nn.MSELoss())
pred = net(features).cpu().detach().numpy().reshape(len(features))
plt.figure(figsize=(10,10))
plt.plot(x, y)
plt.plot(x, pred)
###Output
_____no_output_____
###Markdown
As expected, now the right side of the function fits well.
###Code
activations = net.activation(net.layer_list[0](features))
binary_activations = nn.Threshold(0.5, 0)(activations)/activations
binary_pred = net.output_layer(binary_activations)
plt.figure(figsize=(10,10))
plt.plot(x,y, label='data')
plt.plot(x, binary_pred.cpu().detach().numpy(), label='binary')
plt.plot(x, pred, color='r', label='pred')
plt.legend()
###Output
_____no_output_____
###Markdown
Clear Memory At this stage, you should restart the kernel and clear the output since we don't need anything from before. Image Classification One of the most successful applications of deep learning has been to computer vision. A central task of computer vision is **image classification**. This is the task of assigning exactly one of multiple labels to an image. pytorch provides a package called **torchvision** which includes datasets, some modern neural network architectures as well as helper functions for images.
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pylab as plt
import pandas as pd
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor
import copy
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
from torchvision.datasets import MNIST
from torchvision import transforms
DOWNLOAD_PATH = "../data/MNIST"
mnist_train = MNIST(DOWNLOAD_PATH,
train=True,
download=True,
transform = transforms.Compose([transforms.ToTensor()]))
mnist_test = MNIST(DOWNLOAD_PATH,
train=False,
download=True,
transform = transforms.Compose([transforms.ToTensor()]))
###Output
_____no_output_____
###Markdown
You will most likely run into memory issues between the data and the weights/biases of your neural network. Let's instead sample 1/10th the dataset.
###Code
print(mnist_train.data.shape)
print(mnist_train.targets.shape)
N_choose = 6000
chosen_ids = np.random.choice(np.arange(mnist_train.data.shape[0]), N_choose, replace=False) #sample without replacement so no image is duplicated
print(chosen_ids[0:10])
print(mnist_train.data[chosen_ids, :, :].shape)
print(mnist_train.targets[chosen_ids].shape)
mnist_train.data = mnist_train.data[chosen_ids, :, :]
mnist_train.targets = mnist_train.targets[chosen_ids]
print(mnist_test.data.shape)
print(mnist_test.targets.shape)
N_choose = 1000
chosen_ids = np.random.choice(np.arange(mnist_test.data.shape[0]), N_choose, replace=False) #sample without replacement so no image is duplicated
print(chosen_ids[0:10])
print(mnist_test.data[chosen_ids, :, :].shape)
print(mnist_test.targets[chosen_ids].shape)
mnist_test.data = mnist_test.data[chosen_ids, :, :]
mnist_test.targets = mnist_test.targets[chosen_ids]
###Output
torch.Size([10000, 28, 28])
torch.Size([10000])
[8114 643 5401 4385 169 5141 9096 3678 8051 95]
torch.Size([1000, 28, 28])
torch.Size([1000])
###Markdown
MNIST is one of the classic image datasets and consists of 28 x 28 pixel images of handwritten digits. We downloaded both the train and test sets. Transforms defined under transform will be applied to each example. In this example, we want tensors and not images, which is what the transforms do. The full train set consists of 60000 images; after the subsample above we keep 6000.
###Code
mnist_train.data.shape
mnist_train.data[0]
plt.imshow(mnist_train.data[0])
###Output
_____no_output_____
###Markdown
There are 10 unique labels - 0 through 9
###Code
mnist_train.targets[0:10]
###Output
_____no_output_____
###Markdown
The labels are roughly equally/uniformly distributed
###Code
np.unique(mnist_train.targets, return_counts=True)
###Output
_____no_output_____
###Markdown
The full test set consists of 10000 images; we kept 1000 of them above.
###Code
mnist_test.data.shape
plt.imshow(mnist_test.data[10])
###Output
_____no_output_____
###Markdown
Same labels
###Code
mnist_test.targets[0:10]
###Output
_____no_output_____
###Markdown
Pretty equally distributed.
###Code
np.unique(mnist_test.targets, return_counts=True)
###Output
_____no_output_____
###Markdown
**Image Classifier**:We first have to pick an architecture. The first one we'll pick is a feed-forward neural network like the one we used in the exercises above. This time I am going to use a higher abstraction to define the network.
###Code
#convert 28x28 image -> 784-dimensional flattened vector
class Flatten(nn.Module):
def __init__(self):
super(Flatten, self).__init__()
def forward(self, inp):
return inp.flatten(start_dim=1, end_dim=2)
Flatten()(mnist_train.data[0:10]).shape
###Output
_____no_output_____
###Markdown
Architecture definition using nn.Sequential. You can just list the layers in a sequence. We carry out the following steps:* Flatten each image into a 784-dimensional vector* Map the image to a 100-dimensional vector using a linear layer* Apply a relu non-linearity* Map the 100-dimensional vector into a 10-dimensional output layer since we have 10 possible targets.* Apply a softmax activation to convert the 10 numbers into a probability distribution that assigns the probability of the image belonging to each class (0 through 9). A softmax activation takes N numbers $a_1, \ldots, a_N$ and converts them to a probability distribution. The first step is to ensure the numbers are positive (since probabilities cannot be negative). This is done by exponentiation: $$a_i \rightarrow e^{a_i}$$ The next step is to normalize the numbers i.e. ensure they add up to 1. This is very straightforward. We just divide each score by the sum of scores: $$p_i = \frac{e^{a_i}}{e^{a_1} + e^{a_2} + \ldots + e^{a_N}}$$ This is the softmax function. If you have done statistical physics (physics of systems with a very large number of interacting constituents), you probably have seen the Boltzmann distribution: $$p_i = \frac{e^{-\beta E_i}}{e^{-\beta E_1} + e^{-\beta E_2} + \ldots + e^{-\beta E_N}}$$ which gives the probability that a system with N energy levels is in the state with energy $E_i$ when it is in equilibrium with a thermal bath at temperature $T = \frac{1}{k_B\beta}$. This is the only probability distribution that is invariant to constant shifts in energy: $E_i \rightarrow E_i + \Delta$. Network definition
###Code
image_ff_net = nn.Sequential(Flatten(),
nn.Linear(784, 100),
nn.ReLU(),
nn.Linear(100, 10),
nn.Softmax(dim=1) #convert 10-dim activation to probability distribution
)
###Output
_____no_output_____
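###Markdown
As a quick numerical check of the softmax formula above (my addition, not part of the original walkthrough), we can exponentiate and normalize a few raw scores by hand and compare against the built-in nn.Softmax layer.
###Code
#hedged sketch: manual softmax vs. the nn.Softmax layer used above
raw_scores = torch.tensor([[1.0, 2.0, 0.5]])
manual = torch.exp(raw_scores) / torch.exp(raw_scores).sum(dim=1, keepdim=True)
builtin = nn.Softmax(dim=1)(raw_scores)
print(manual)
print(builtin) #the two should match and each row should sum to 1
###Output
_____no_output_____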
###Markdown
Let's ensure the data flows through our neural network and check the dimensions. As before, the neural net object is a python callable.
###Code
image_ff_net(mnist_train.data[0:12].float()).shape
###Output
_____no_output_____
###Markdown
We get a 10-dimensional output as expected.**Question**: Check that the outputs for each image are actually a probability distribution (the numbers add up to 1).
###Code
image_ff_net(mnist_train.data[0:10].float()).sum(dim=1)
###Output
_____no_output_____
###Markdown
**Question**: We have an architecture for our neural network but we now need to decide what loss to pick. Unlike the classification problem earlier which had two classes, we have 10 classes here. Take a look at the pytorch documentation - what loss do you think we should pick to model this problem? We used cross-entropy loss on days 2 and 3. We need the same loss here. Pytorch provides NLLLoss (negative log likelihood) as well as CrossEntropyLoss.**Question**: Look at the documentation for both of these loss functions. Which one should we pick? Do we need to make any modifications to our architecture? We will use the Cross-entropy Loss which can work with the raw scores (without a softmax layer).
###Code
image_ff_net = nn.Sequential(Flatten(),
nn.Linear(784, 100),
nn.ReLU(),
nn.Linear(100, 10),
)
###Output
_____no_output_____
###Markdown
Now we'll get raw unnormalized scores that were used to compute the probabilities. We should use nn.CrossEntropyLoss in this case.
###Code
image_ff_net(mnist_train.data[0:12].float())
loss = nn.CrossEntropyLoss()
###Output
_____no_output_____
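###Markdown
A small check (my addition) that ties the two loss choices together: CrossEntropyLoss applied to raw scores should match NLLLoss applied to log-softmax outputs.
###Code
#hedged sketch: CrossEntropyLoss on raw scores vs. NLLLoss on log-probabilities
raw_scores = image_ff_net(mnist_train.data[0:12].float())
targets = mnist_train.targets[0:12]
ce_loss = nn.CrossEntropyLoss()(raw_scores, targets)
nll_loss = nn.NLLLoss()(nn.LogSoftmax(dim=1)(raw_scores), targets)
print(ce_loss.item(), nll_loss.item()) #should agree up to floating point error
###Output
_____no_output_____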
###Markdown
**Training**: We have an architecture, the data, an appropriate loss. Now we need to loop over the images, use the loss to compare the predictions to the targets, compute the gradients and update the weights.In our previous examples, we had N_epoch passes over our dataset and each time, we computed predictions for the full dataset. This is impractical as datasets gets larger. Instead, we need to split the data into **batches** of a fixed size, compute the loss, the gradients and update the weights for each batch.pytorch provides a DataLoader class that makes it easy to generate batches from your dataset.**Optional**:Let's analyze how using batches can be different from using the full dataset. Suppose our data has 10,000 rows but we use batches of size 100 (usually we pick powers of 2 for the GPU but this is just an example). Statistically, our goal is always to compute the gradient:$$\frac{\partial L}{\partial w_i}$$for all the weights $w_i$. By weights here, I mean both the weights and biases and any other free or tunable parameters in our model.In practice, the loss is a sum over all the examples in our dataset:$$L = \frac{1}{N}\Sigma_{i}^N l(p_i, t_i)$$where $p_i$ = prediction for ith example, $t_i$ = target/label for ith example. So the derivative is:$$\frac{\partial L}{\partial w_i} = \frac{1}{N}\Sigma_i^N \frac{\partial l(p_i, t_i)}{\partial w_i} $$In other words, our goal is to calculate this quantity but $N$ is too large. So we pick a randomly chosen subset of size 100 and only average the gradients over those examples. As an analogy, if our task was to measure the average height of all the people in the world which is impractical, we would pick randomly chosen subsets, say of 10,000 people and measure their average heights. Of course, as we make the subset smaller, the estimate we get will be noisier i.e. it has a greater chance of higher deviation from the actual value (height or gradient). Is this good or bad? It depends. In our case, we are optimizing a function (the loss) that has multiple local minima and saddle points. It is easy to get stuck in regions of the loss space/surface. Having noisy gradients can help with escaping those local minima just because we'll not always be moving in the direction of the true gradient but a noisy estimate.Some commonly used terminology in case you read papers/articles:* (Full) Gradient Descent - compute the gradients over the full dataset. Memory-intensive for larger datasets. This is what we did with our toy examples above.* Mini-batch Gradient Descent - use randomly chosen samples of fixed size as your data. Noisier gradients, more frequent updates to your model, memory efficient.* Stochastic Gradient Descent - Mini-batch gradient descent with batch size = 1. Very noisy estimate, "online" updates to your model, can be hard to converge.There are some fascinating papers on more theoretical investigations into the loss surface and the behavior of gradient descent. Here are some examples:* https://papers.nips.cc/paper/7875-visualizing-the-loss-landscape-of-neural-nets.pdf* https://arxiv.org/abs/1811.03804* https://arxiv.org/pdf/1904.06963.pdf**End of optional section**
###Code
BATCH_SIZE = 16 #number of examples to compute gradients over (a batch)
#python convenience classes to sample and create batches
train_dataloader = torch.utils.data.DataLoader(mnist_train,
batch_size=BATCH_SIZE,
shuffle=True, #shuffle data
num_workers=8,
pin_memory=True
)
test_dataloader = torch.utils.data.DataLoader(mnist_test,
batch_size=BATCH_SIZE,
shuffle=True, #shuffle data
num_workers=8,
pin_memory=True
)
idx, (data_example, target_example) = next(enumerate(train_dataloader))
print(idx)
print(data_example.shape)
print(target_example.shape)
###Output
0
torch.Size([16, 1, 28, 28])
torch.Size([16])
###Markdown
So we have batch 0 with 16 tensors of shape (1, 28, 28) and 16 targets. Let's ensure our network can forward propagate on this batch.
###Code
image_ff_net(data_example)
###Output
_____no_output_____
###Markdown
**Question**: Debug this error. The first shape in the error message (448 x 28) gives us a clue. We want the two 28-sized dimensions to be flattened, but it seems like the wrong dimensions are being flattened here: 448 = 16 * 28. We need to rewrite our Flatten layer.
###Code
#convert 28x28 image -> 784-dimensional flattened vector
class Flatten(nn.Module):
def __init__(self):
super(Flatten, self).__init__()
def forward(self, inp):
return inp.flatten(start_dim=1, end_dim=-1)
Flatten()(data_example).shape
image_ff_net = nn.Sequential(Flatten(),
nn.Linear(784, 100),
nn.ReLU(),
nn.Linear(100, 10),
)
image_ff_net(data_example).shape
###Output
_____no_output_____
###Markdown
Let's combine all the elements together now and write our training loop.
###Code
#convert 28x28 image -> 784-dimensional flattened vector
class Flatten(nn.Module):
def __init__(self):
super(Flatten, self).__init__()
def forward(self, inp):
return inp.flatten(start_dim=1, end_dim=-1)
#ARCHITECTURE
image_ff_net = nn.Sequential(Flatten(),
nn.Linear(784, 100),
nn.ReLU(),
nn.Linear(100, 10),
)
#LOSS CRITERION and OPTIMIZER
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(image_ff_net.parameters(), lr=1e-2)
#DATALOADERS
BATCH_SIZE = 16
train_dataloader = torch.utils.data.DataLoader(mnist_train,
batch_size=BATCH_SIZE,
shuffle=True, #shuffle data
num_workers=8,
pin_memory=True
)
test_dataloader = torch.utils.data.DataLoader(mnist_test,
batch_size=BATCH_SIZE,
shuffle=True, #shuffle data
num_workers=8,
pin_memory=True
)
image_ff_net.train() #don't worry about this (for this notebook)
image_ff_net.to(device)
N_EPOCHS = 20
for epoch in range(N_EPOCHS):
loss_list = []
for idx, (data_example, data_target) in enumerate(train_dataloader):
data_example = data_example.to(device)
data_target = data_target.to(device)
pred = image_ff_net(data_example)
loss = criterion(pred, data_target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_list.append(loss.item())
if epoch % 5 == 0:
print(f'Epoch = {epoch} Loss = {np.mean(loss_list)}')
###Output
Epoch = 0 Loss = 0.4620684621532758
Epoch = 5 Loss = 0.08651585492491722
Epoch = 10 Loss = 0.04343548361460368
Epoch = 15 Loss = 0.05812850828965505
###Markdown
**Question**: Use your trained network to compute the accuracy on both the train and test sets.
###Code
image_ff_net = image_ff_net.eval() #don't worry about this (for this notebook)
###Output
_____no_output_____
###Markdown
We'll use argmax to extract the label with the highest probability (equivalently, the highest raw score).
###Code
image_ff_net(data_example).argmax(dim=1)
train_pred, train_targets = torch.tensor([]), torch.tensor([])
with torch.no_grad(): #context manager for inference since we don't need the memory footprint of gradients
for idx, (data_example, data_target) in enumerate(train_dataloader):
data_example = data_example.to(device)
#make predictions
label_pred = image_ff_net(data_example).argmax(dim=1).float()
#concat and store both predictions and targets
label_pred = label_pred.to('cpu')
train_pred = torch.cat((train_pred, label_pred))
train_targets = torch.cat((train_targets, data_target.float()))
train_pred[0:10]
train_targets[0:10]
torch.sum(train_pred == train_targets).item() / train_pred.shape[0]
train_pred.shape[0]
assert(train_pred.shape == train_targets.shape)
train_accuracy = torch.sum(train_pred == train_targets).item() / train_pred.shape[0]
print(f'Train Accuracy = {train_accuracy:.4f}')
###Output
Train Accuracy = 0.9930
###Markdown
Here, I want to make an elementary remark about significant figures. While interpreting numbers like accuracy, it is important to realize how big your dataset is and what impact flipping one example from a wrong prediction to a right prediction would have. For the full MNIST train set of 60,000 examples, flipping one incorrectly predicted example to a correct one (by changing the model, retraining etc.) changes the accuracy, all other examples being the same, by $$\frac{1}{60,000} \approx 1.7 \times 10^{-5}$$ so any digits in the accuracy beyond the fifth decimal place have no meaning. With our 6,000-example subsample the granularity is ten times coarser, about $1.7 \times 10^{-4}$, so only the first three or four decimal places carry information. For the full test set of 10,000 examples we should care about at most the 4th decimal place (10,000 being a "nice" number i.e. a power of 10 ensures we never have more anyway); with our 1,000-example subsample, at most the 3rd.
###Code
test_pred, test_targets = torch.tensor([]), torch.tensor([])
with torch.no_grad(): #context manager for inference since we don't need the memory footprint of gradients
for idx, (data_example, data_target) in enumerate(test_dataloader):
data_example = data_example.to(device)
#make predictions
label_pred = image_ff_net(data_example).argmax(dim=1).float()
#concat and store both predictions and targets
label_pred = label_pred.to('cpu')
test_pred = torch.cat((test_pred, label_pred))
test_targets = torch.cat((test_targets, data_target.float()))
assert(test_pred.shape == test_targets.shape)
test_accuracy = torch.sum(test_pred == test_targets).item() / test_pred.shape[0]
print(f'Test Accuracy = {test_accuracy:.4f}')
###Output
Test Accuracy = 0.9440
###Markdown
Great! So our simple neural network already does a great job on our task. At this stage, we would do several things:* Look at the examples being classified incorrectly. Are these bad data examples? Would a person also have trouble classifying them?* Test stability - what happens if we rotate images? Translate them? Flip symmetric digits? What happens if we add some random noise to the pixel values?While we might add these to future iterations of this notebook, let's move on to some other architectural choices. One of the issues with flattening the input image is that of **locality**. Images have a notion of locality. If a pixel contains part of an object, its neighboring pixels are very likely to contain the same object. But when we flatten an image, we use all the pixels to map to each hidden node in the next layer. If we could impose locality by changing our layers, we might get much better performance.In addition, we would like image classification to be invariant to certain transformations like translation (move the digit up/down, left/right), scaling (zoom in and out without cropping the image), rotations (at least up to some angular width). Can we impose any of these by our choice of layers?The answer is yes! Convolutional layers are layers designed specifically to capture such locality and preserve translational invariance. There is a lot of material available describing what these are and we won't repeat it here. Instead, we'll repeat the training procedure above but with convolutional layers.FUTURE TODO: Add analysis of incorrectly predicted examplesFUTURE TODO: add a notebook for image filters, convolutions etc. Let's try a convolutional layer:nn.Conv2d, which takes in the number of input channels (grayscale), number of output channels (we'll choose 20), kernel size (3x3), and run the transformations on some images.
###Code
idx, (data_example, target_example) = next(enumerate(train_dataloader))
print(data_example.shape)
print(nn.Conv2d(1, 20, 3)(data_example).shape)
###Output
torch.Size([16, 1, 28, 28])
torch.Size([16, 20, 26, 26])
###Markdown
**Question**: If you do know what convolutions are and how filters work, justify these shapes. The first dimension is the batch size which remains unchanged, as expected. In the raw data, the second dimension is the number of channels i.e. grayscale only and the last two dimensions are the size of the image - 28x28.We choose 20 channels which explains the output's second dimension. Each filter is 3x3 and since we have no padding, it can only process 26 patches in each dimension.If we label the pixels along the columns as 1, 2, ..., 28, the patch can be applied from pixels 1-3 (inclusive of both end-points), 2-4, ..., 26-28. After that, the patch "falls off" the image unless we apply some padding. This explains the dimension 26 in both directions. We can then apply a ReLU activation to all these activations.
###Code
(nn.ReLU()((nn.Conv2d(1, 20, 3)(data_example)))).shape
###Output
_____no_output_____
###Markdown
We should also apply some kind of pooling or averaging now. This reduces noise by picking disjoint, consecutive patches on the image and replacing them by some aggregate statistic like max or mean.
###Code
(nn.MaxPool2d(kernel_size=2)(nn.ReLU()((nn.Conv2d(1, 20, 3)(data_example))))).shape
###Output
_____no_output_____
###Markdown
**A couple of notes**:* Pytorch's functions like nn.ReLU() and nn.MaxPool2d() return functions that can apply operations. So, nn.MaxPool2d(kernel_size=2) returns a function that is then applied to the argument above.* Chaining together the layers and activations and testing them out like above is very valuable as the first step in ensuring your network does what you want it to do.In general, we would suggest the following steps when you are expressing a new network architecture:* Build up your network using nn.Sequential if you are just assembling existing or user-defined layers, or by defining a new network class inheriting from nn.Module where you can define a custom forward function.* Pick a small tensor containing your features and pass it through each step/layer. Ensure the dimensions of the input and output tensors to each layer make sense.* Pick your loss and optimizer and train on a small batch. You should be able to overfit i.e. get almost zero loss on this small set. Neural networks are extremely flexible learners and if you can't overfit on a small batch, you either have a bug or need to add some more capacity (more nodes, more layers etc. -> more weights). A sketch of this single-batch check appears right after the full convolutional architecture below.* Now you should train on the full train set and follow the usual cross-validation practices.* Probe your model: add noise to the inputs, see where the model isn't performing well, make partial dependency plots etc. to understand characteristics of your model. This part can be very open-ended and it depends on what your final aim is. If you are building a model to predict the stock price so you can trade, you'll spend a lot of time in this step. If you are having fun predicting dogs vs cats, maybe you don't care so much. If your aim is to dive deeper into deep learning, looking at the weights, activations, effect of changing hyperparameters, removing edges/weights etc. are very valuable experiments. So we have seen one iteration of applying a convolutional layer followed by a non-linearity and then a max pooling layer. We can add more and more of these elements. As you can see, at each step, the number of channels increases but the spatial size of the feature maps decreases because of the convolutions and max pooling.**Question**: Feed a small batch through two or three sequences of Conv -> Relu -> Max pool. What is the output size now?
###Code
print(data_example.shape)
#1 channel in, 16 channels out
out1 = nn.MaxPool2d(kernel_size=2)(nn.ReLU()((nn.Conv2d(1, 16, 3)(data_example))))
print(out1.shape)
#16 channels in, 32 channels out
out2 = nn.MaxPool2d(kernel_size=2)(nn.ReLU()((nn.Conv2d(16, 32, 3)(out1))))
print(out2.shape)
#32 channels in, 128 channels out
out3 = nn.MaxPool2d(kernel_size=2)(nn.ReLU()((nn.Conv2d(32, 128, 3)(out2))))
print(out3.shape)
###Output
torch.Size([16, 1, 28, 28])
torch.Size([16, 16, 13, 13])
torch.Size([16, 32, 5, 5])
torch.Size([16, 128, 1, 1])
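###Markdown
A small helper (my addition) to justify these shapes from the standard output-size formula: a convolution with kernel k, padding p and stride s maps a side length H to floor((H + 2p - k)/s) + 1, and a max pool with kernel size 2 maps H to floor(H/2).
###Code
#hedged sketch: recompute the spatial sizes printed above from the output-size formula
def conv_out(H, k, p=0, s=1):
    return (H + 2*p - k)//s + 1
H = 28
for block in range(3):
    H = conv_out(H, k=3) #Conv2d with a 3x3 kernel, no padding, stride 1
    H = H // 2           #MaxPool2d with kernel_size=2
    print(f'spatial size after block {block+1}: {H}')
###Output
_____no_output_____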
###Markdown
Recall that we want the output layer to have 10 outputs. We can add a linear/dense layer to do that.
###Code
nn.Linear(128, 10)(out3)
###Output
_____no_output_____
###Markdown
**Question**: Debug and fix this error. Hint: look at dimensions.
###Code
nn.Linear(128, 10)(Flatten()(out3)).shape
###Output
_____no_output_____
###Markdown
It's time to put all these elements together.
###Code
#ARCHITECTURE
image_conv_net = nn.Sequential(nn.Conv2d(1, 16, 3),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(16, 64, 3),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
nn.Conv2d(64, 128, 3),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),
Flatten(),
nn.Linear(128, 10)
)
#LOSS CRITERION and OPTIMIZER
criterion = nn.CrossEntropyLoss() #ensure no softmax in the last layer above
optimizer = optim.Adam(image_conv_net.parameters(), lr=1e-2)
#DATALOADERS
BATCH_SIZE = 16
train_dataloader = torch.utils.data.DataLoader(mnist_train,
batch_size=BATCH_SIZE,
shuffle=True, #shuffle data
num_workers=8,
pin_memory=True
)
test_dataloader = torch.utils.data.DataLoader(mnist_test,
batch_size=BATCH_SIZE,
shuffle=True, #shuffle data
num_workers=8,
pin_memory=True
)
###Output
_____no_output_____
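###Markdown
Before launching the full training run, here is the "overfit a single batch" sanity check suggested earlier (my addition): a network with enough capacity should drive the loss close to zero on one fixed batch.
###Code
#hedged sketch: overfit one batch to sanity-check the architecture/loss/optimizer wiring
sanity_net = copy.deepcopy(image_conv_net)
sanity_optimizer = optim.Adam(sanity_net.parameters(), lr=1e-3)
_, (one_batch, one_target) = next(enumerate(train_dataloader))
for step in range(200):
    sanity_loss = criterion(sanity_net(one_batch), one_target)
    sanity_optimizer.zero_grad()
    sanity_loss.backward()
    sanity_optimizer.step()
print(f'loss on the single batch after 200 steps: {sanity_loss.item():.4f}')
###Output
_____no_output_____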
###Markdown
Train the model. Ideally, write a function so we don't have to repeat this cell again.
###Code
def train_image_model(model, train_dataloader, loss_criterion, optimizer, N_epochs = 20):
model.train() #don't worry about this (for this notebook)
model.to(device)
for epoch in range(N_epochs):
loss_list = []
for idx, (data_example, data_target) in enumerate(train_dataloader):
data_example = data_example.to(device)
data_target = data_target.to(device)
pred = model(data_example)
loss = loss_criterion(pred, data_target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_list.append(loss.item())
if epoch % 5 == 0:
print(f'Epoch = {epoch} Loss = {np.mean(loss_list)}')
return model
image_conv_net = train_image_model(image_conv_net,
train_dataloader,
criterion,
optimizer)
###Output
Epoch = 0 Loss = 0.8495251350800196
Epoch = 5 Loss = 0.23720959245165188
Epoch = 10 Loss = 0.18100028269489607
Epoch = 15 Loss = 0.165368815228343
###Markdown
Let's also add a function to do inference and compute accuracy
###Code
def predict_image_model(model, dataloader):
pred, targets = torch.tensor([]), torch.tensor([])
with torch.no_grad(): #context manager for inference since we don't need the memory footprint of gradients
for idx, (data_example, data_target) in enumerate(dataloader):
data_example = data_example.to(device)
#make predictions
label_pred = model(data_example).argmax(dim=1).float()
#concat and store both predictions and targets
label_pred = label_pred.to('cpu')
pred = torch.cat((pred, label_pred))
targets = torch.cat((targets, data_target.float()))
return pred, targets
train_pred, train_targets = predict_image_model(image_conv_net, train_dataloader)
test_pred, test_targets = predict_image_model(image_conv_net, test_dataloader)
assert(train_pred.shape == train_targets.shape)
train_accuracy = torch.sum(train_pred == train_targets).item() / train_pred.shape[0]
print(f'Train Accuracy = {train_accuracy:.4f}')
assert(test_pred.shape == test_targets.shape)
test_accuracy = torch.sum(test_pred == test_targets).item() / test_pred.shape[0]
print(f'Test Accuracy = {test_accuracy:.4f}')
###Output
Train Accuracy = 0.9653
Test Accuracy = 0.9240
###Markdown
In my case, the test accuracy went from 96.89% to 97.28% (these numbers come from a run on the full 10,000-image test set). You might see different numbers due to random initialization of weights and different stochastic batches. Is this significant?**Note**: If you chose a small sample of the data, a convolutional neural net might actually do worse than the feed-forward network.**Question**: Do you think the increase in accuracy is significant? Justify your answer. We have 10,000 examples in the full test set. With the feed-forward network, we predicted 9728 examples correctly and with the convolutional net, we predicted 9840 correctly. We can treat the model as a binomial distribution. Recall the binomial distribution describes the number of heads one gets on a coin which has probability $p$ of giving heads and $1-p$ of giving tails if the coin is tossed $N$ times. More formally, the average number of heads will be $$Np$$ and the standard deviation is $$\sqrt{Np(1-p)}$$ We'll do a rough back-of-the-envelope calculation. Suppose the true $p$ is what our feed-forward network gave us i.e. $p = 0.9728$ and $N = 10,000$. Then, the standard deviation is $$\sqrt{10000 * 0.9728 * (1-0.9728)} \approx 17$$ So, to go from 9728 to 9840, we would need ~6.6 standard deviations, which is very unlikely. This strongly suggests that the convolutional neural net does give us a significant boost in accuracy, as we expected. You can get a sense of the state-of-the-art on MNIST here: http://yann.lecun.com/exdb/mnist/ Note: MNIST is generally considered a "solved" dataset i.e. it is no longer challenging enough (and hasn't been for a few years) as a benchmark for image classification models. You can check out more datasets (CIFAR, Imagenet etc., MNIST on Kannada characters, fashion MNIST etc.) in torchvision.datasets. **A note about preprocessing**: Image pixels take values between 0 and 255 (inclusive). In the MNIST data here, all the values are scaled down to be between 0 and 1 by dividing by 255. Often it is helpful to subtract the mean for each pixel to help gradient descent converge faster. As an **exercise**, it is highly encouraged to re-train both the feed-forward and convolutional network with zero-mean images. Ensure that the means are computed only on the train set and applied to the test set (a minimal sketch of this appears right after the note below). Autoencoders We have come a long way but there's still a lot more to do and see. While we have a lot of labelled data, the vast majority of data is unlabelled. There can be various reasons for this. It might be hard to find experts who can label the data or it is very expensive to do so. So another question is whether we can learn something about a dataset without labels. This is a very broad and difficult field called **unsupervised learning** but we can explore it a bit. Suppose we had the MNIST images but no labels. We can no longer build a classification model with it. But we would still like to see if there are broad categories or groups or clusters within the data. Now, we didn't cover techniques like K-means clustering this week but they are definitely an option here. Since this is a class on deep learning, we want to use neural networks. One option is to use networks called **autoencoders**. Since we can't use the labels, we'll instead predict the image itself! In other words, the network takes an image as an input and tries to predict it again. This is the identity mapping: $$i(x) = x$$ The trick is to force the network to compress the input. In other words, if we have 784 pixels in the input (and the output), we want the hidden layers to use far fewer than 784 values. Let's try this. 
**Note**: I am being sloppy here by pasting the same training code several times. Ideally, I would abstract away the training and inference pieces in functions inside a module.
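(My addition, not part of the original notebook:) here is a minimal sketch of the zero-mean preprocessing exercise from the previous section, assuming torchvision's transforms.Normalize; the statistics are computed from the train split only and then reused for the test split.
###Code
#hedged sketch: zero-mean (and unit-variance) preprocessing computed on the train split only
from torchvision import transforms
train_mean = mnist_train.data.float().div(255).mean().item()
train_std = mnist_train.data.float().div(255).std().item()
normalized_transform = transforms.Compose([transforms.ToTensor(),
                                           transforms.Normalize((train_mean,), (train_std,))])
#e.g. MNIST(DOWNLOAD_PATH, train=True, download=True, transform=normalized_transform)
###Output
_____no_output_____
###Markdown
With that preprocessing aside out of the way, let's build the autoencoder described above.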
###Code
#convert 28x28 image -> 784-dimensional flattened vector
#redefining for convenience
class Flatten(nn.Module):
def __init__(self):
super(Flatten, self).__init__()
def forward(self, inp):
return inp.flatten(start_dim=1, end_dim=-1)
class AE(nn.Module):
def __init__(self, N_input, N_hidden_nodes):
super(AE, self).__init__()
self.net = nn.Sequential(Flatten(),
nn.Linear(N_input, N_hidden_nodes),
nn.ReLU(),
nn.Linear(N_hidden_nodes, N_input),
nn.Sigmoid()
)
def forward(self, inp):
out = self.net(inp)
out = out.view(-1, 28, 28).unsqueeze(1) #return [BATCH_SIZE, 1, 28, 28]
return out
image_ff_ae = AE(784, 50) #we are choosing 50 hidden activations
_, (data_example, _) = next(enumerate(train_dataloader))
print(data_example.shape)
print(image_ff_ae(data_example).shape)
criterion = nn.MSELoss()
optimizer = optim.Adam(image_ff_ae.parameters(), lr=1e-2)
criterion(image_ff_ae(data_example), data_example)
def train_image_ae(model, train_dataloader, loss_criterion, optimizer, N_epochs = 20):
model.train() #don't worry about this (for this notebook)
model.to(device)
for epoch in range(N_epochs):
loss_list = []
for idx, (data_example, _) in enumerate(train_dataloader):
#Note we don't need the targets/labels here anymore!
data_example = data_example.to(device)
pred = model(data_example)
loss = loss_criterion(pred, data_example)
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_list.append(loss.item())
if epoch % 5 == 0:
print(f'Epoch = {epoch} Loss = {np.mean(loss_list)}')
return model
image_ff_ae = train_image_ae(image_ff_ae, train_dataloader, criterion, optimizer, N_epochs=20)
###Output
Epoch = 0 Loss = 0.03667376519739628
Epoch = 5 Loss = 0.01722230292111635
Epoch = 10 Loss = 0.016654866362611452
Epoch = 15 Loss = 0.016621768022576967
###Markdown
Let's look at a few examples of outputs of our autoencoder.
###Code
image_ff_ae.to('cpu')
output_ae = image_ff_ae(data_example)
idx = 15 #change this to see different examples
plt.figure()
plt.imshow(data_example[idx][0].detach().numpy())
plt.figure()
plt.imshow(output_ae[idx][0].detach().numpy())
###Output
_____no_output_____
###Markdown
So, great - we have a neural network that can predict the input from the input. Is this useful? Recall that we had an intermediate layer with 50 activations. Feel free to change this number around and see what happens. We are compressing 784 pixel values into 50 activations and then reconstructing the image from those 50 values. In other words, we are forcing the neural network to capture only relevant non-linear features that can help it remember what image the input was. The compression is not perfect as you can see in the reconstructed image above but it's pretty good. Training for more time or better training methods might improve this. So how exactly is this useful? Maybe:* Using an autoencoder to do lossy compression. Imagine storing the 50 activations instead of each image, plus the last layers (the "decoder") that reconstruct the image from the 50 activations.* For search: suppose we wanted to search for a target image in a database of N images. We could do N pixel-by-pixel matches but these won't work because even a slight change in position or orientation or pixel intensities will give misleading distances between images. But if we use the vector of intermediate (50, in this case) activations, then maybe we can do a search in the space of activations. Let's try that.
###Code
#the mnist train data (our 6000-image subsample)
print(mnist_train.data.float().shape)
###Output
torch.Size([6000, 28, 28])
###Markdown
Generally it's a good idea to split the forward function into separate encoder and decoder methods (a sketch of that split follows the next cell). Here we instead index into the layers of the network by position.
###Code
image_ff_ae.net
###Output
_____no_output_____
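###Markdown
As an aside (my addition), here is a sketch of how the encoder/decoder split mentioned above could look as explicit methods, which avoids having to index into .net by position as we do next.
###Code
#hedged sketch: the same autoencoder with explicit encode/decode sub-modules
class AEWithEncoder(nn.Module):
    def __init__(self, N_input, N_hidden_nodes):
        super(AEWithEncoder, self).__init__()
        self.encoder = nn.Sequential(Flatten(),
                                     nn.Linear(N_input, N_hidden_nodes),
                                     nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(N_hidden_nodes, N_input),
                                     nn.Sigmoid())
    def encode(self, inp):
        return self.encoder(inp)
    def forward(self, inp):
        out = self.decoder(self.encode(inp))
        return out.view(-1, 28, 28).unsqueeze(1)
###Output
_____no_output_____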
###Markdown
Compute the activations after the hidden relu
###Code
with torch.no_grad():
mnist_ae_act = image_ff_ae.net[2](image_ff_ae.net[1](image_ff_ae.net[0](mnist_train.data.float())))
mnist_ae_act.shape
###Output
_____no_output_____
###Markdown
Let's pick some example image
###Code
img_idx = 15 #between 0 and 5999 (the size of our train subsample)
plt.imshow(mnist_train.data[img_idx])
###Output
_____no_output_____
###Markdown
Get the target image activation
###Code
target_img_act = mnist_ae_act[img_idx]
target_img_act
###Output
_____no_output_____
###Markdown
We will use the cosine distance between two vectors to find the nearest neighbors. **Question**: Can you think of an elegant matrix-operation way of implementing this (so it can also run on a GPU)?**Warning**: Always keep an eye out for memory usage. The full matrix of pairwise distances can be very large. Work with a subset of the data (even 100 images) if that's the case.
###Code
#to save memory, look at only first N images (1000 here)
mnist_ae_act = mnist_ae_act[0:1000, :]
###Output
_____no_output_____
###Markdown
The cosine similarity (loosely referred to as the cosine distance here) between two points $\vec{x}_i, \vec{x}_j$ is: $$d_{ij} = \frac{\vec{x}_i \cdot \vec{x}_j}{\lVert \vec{x}_i \rVert \lVert \vec{x}_j \rVert}$$ We can first normalize all the activation vectors so they have length 1.
###Code
torch.pow(mnist_ae_act, 2).sum(dim=1).shape
###Output
_____no_output_____
###Markdown
We can't directly divide a tensor of shape [1000, 50] (our subset of activations) by a tensor of shape [1000]. So first we have to unsqueeze (add an extra dimension) to get a shape [1000, 1] and then broadcast/expand it to the shape of the activation tensor. We should check that the first row contains the squared length of the first image's activations.
###Code
torch.pow(mnist_ae_act, 2).sum(dim=1).unsqueeze(1).expand_as(mnist_ae_act)
###Output
_____no_output_____
###Markdown
Now we can divide by the norm (don't forget the sqrt).
###Code
mnist_ae_act_norm = mnist_ae_act / torch.pow(torch.pow(mnist_ae_act, 2).sum(dim=1).unsqueeze(1).expand_as(mnist_ae_act), 0.5)
###Output
_____no_output_____
###Markdown
Let's check an example.
###Code
mnist_ae_act[10]
torch.pow(torch.pow(mnist_ae_act[10], 2).sum(), 0.5)
mnist_ae_act[10] / torch.pow(torch.pow(mnist_ae_act[10], 2).sum(), 0.5)
mnist_ae_act_norm[10]
###Output
_____no_output_____
###Markdown
Good! They are the same. We have confidence that we are normalizing the activation vectors correctly. So now the cosine similarity is just a dot product: $$d_{ij} = \vec{x}_i \cdot \vec{x}_j$$ since all the vectors are of unit length.**Question**: How would you compute this using matrix operations?
###Code
mnist_ae_act_norm.transpose(1, 0).shape
mnist_ae_act_norm.shape
ae_pairwise_cosine = torch.mm(mnist_ae_act_norm, mnist_ae_act_norm.transpose(1,0))
ae_pairwise_cosine.shape
ae_pairwise_cosine[0].shape
img_idx = 18 #between 0 and 999 (we only kept the first 1000 activations above)
plt.imshow(mnist_train.data[img_idx])
plt.title("Target image")
#find closest image
top5 = torch.sort(ae_pairwise_cosine[img_idx], descending=True) #or use argsort
top5_vals = top5.values[0:5]
top5_idx = top5.indices[0:5]
for i, idx in enumerate(top5_idx):
plt.figure()
plt.imshow(mnist_train.data[idx])
if i==0:
plt.title("Sanity check : same as input")
else:
plt.title(f"match {i} : cosine = {top5_vals[i]}")
###Output
_____no_output_____
###Markdown
While this is a simple dataset and a simple autoencoder, we already get some pretty good anecdotal similarity searches. There are many variations on autoencoders: switching the layer types, adding noise to the inputs (denoising autoencoders, sketched briefly below), adding sparsity penalties on the hidden activations to encourage sparse codes, and probabilistic graphical models called variational autoencoders. Delete activations and cosine distances to save memory
###Code
mnist_ae_act = None
mnist_ae_act_norm = None
ae_pairwise_cosine = None
###Output
_____no_output_____
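###Markdown
Before wrapping up, a tiny illustration (my addition) of one of the variations mentioned above: a denoising autoencoder is trained on corrupted inputs but asked to reconstruct the clean image.
###Code
#hedged sketch: a single training step of a denoising autoencoder, reusing the AE class above
denoising_ae = AE(784, 50)
den_criterion = nn.MSELoss()
den_optimizer = optim.Adam(denoising_ae.parameters(), lr=1e-3)
_, (clean_batch, _) = next(enumerate(train_dataloader))
noisy_batch = clean_batch + 0.2*torch.randn_like(clean_batch) #corrupt the input
recon = denoising_ae(noisy_batch)
den_loss = den_criterion(recon, clean_batch) #but reconstruct the clean image
den_optimizer.zero_grad()
den_loss.backward()
den_optimizer.step()
print(den_loss.item())
###Output
_____no_output_____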
###Markdown
Conclusion By now, you have had quite some experience with writing your own neural networks and introspecting into what they are doing. We still haven't touched topics like recurrent neural networks, seq2seq models and more modern applications. They will get added to this notebook so if you are interested, please revisit the repo. Future items:* Real problems: MNIST + autoencoder (convnet); trip classification (maybe?)* RNN toy problems: linear trend + noise, different data structuring strategies, quadratic trend + noise* LSTM/GRUs for the same problems* Seq2Seq examples* RNN autoencoder (what data?) Recurrent Neural Networks (In progress) **Note**: You might have run into memory issues by now. Everything below is self-contained so if you want to reset the notebook and start from the cell below, it should work.
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pylab as plt
import pandas as pd
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor
import copy
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
###Output
_____no_output_____
###Markdown
As before, let's generate some toy data.
###Code
def generate_rnn_data(N_examples=1000, noise_var = 0.1, lag=1, seed=None):
if seed is not None:
np.random.seed(seed)
    #draw per-point gaussian noise (one sample per time-step); noise_var is used as the standard deviation
    ts = 4 + 3*np.arange(N_examples) + np.random.normal(0, noise_var, N_examples)
features = ts[0:len(ts)-lag]
target = ts[lag:]
return features, target
features, target = generate_rnn_data()
###Output
_____no_output_____
###Markdown
This data is possibly the simplest time-series one could pick (apart from a constant value). It's a simple linear trend with a tiny bit of gaussian noise. Note that this is a **non-stationary** series!
###Code
plt.plot(features, 'p')
###Output
_____no_output_____
###Markdown
We want to predict the series at time t+1 given the value at time t (and history).Of course, we could try using a feed-forward network for this. But instead, we'll use this to introduce recurrent neural networks.Recall that the simplest possible recurrent neural network has a hidden layer that evolves in time, $h_t$, inputs $x_t$ and outputs $y_t$.$$h_t = \sigma(W_{hh} h_{t-1} + W_{hx} x_t + b_h)$$with outputs:$$y_t = W_{yh} h_t + b_y$$Since the output is an unbounded real value, we won't have an activation on the output.Let's write our simple RNN. This is not general - we don't have the flexibility of adding more layers (as discussed in the lecture), bidirectionality etc. but we are in experimental mode so it's okay. Eventually, you can use pytorch's in-built torch.nn.RNN class definition.
###Code
N_input = 1 #will pass only one value as input
N_output = 1 #will predict one value
N_hidden = 32 #number of hidden dimensions to use
hidden_activation = nn.ReLU()
#define weights and biases
w_hh = nn.Parameter(data = torch.Tensor(N_hidden, N_hidden), requires_grad = True)
w_hx = nn.Parameter(data = torch.Tensor(N_hidden, N_input), requires_grad = True)
w_yh = nn.Parameter(data = torch.Tensor(N_output, N_hidden), requires_grad = True)
b_h = nn.Parameter(data = torch.Tensor(N_hidden, 1), requires_grad = True)
b_y = nn.Parameter(data = torch.Tensor(N_output, 1), requires_grad = True)
#initialize weights and biases (in-place)
nn.init.kaiming_uniform_(w_hh)
nn.init.kaiming_uniform_(w_hx)
nn.init.kaiming_uniform_(w_yh)
nn.init.zeros_(b_h)
nn.init.zeros_(b_y)
hidden_act = hidden_activation(torch.mm(w_hx, torch.ones(N_input, 1)) + \
torch.mm(w_hh, torch.ones(N_hidden, 1)) + \
b_h)
print(hidden_act.shape)
output = (torch.mm(w_yh, hidden_act) + b_y)
print(output.shape)
###Output
_____no_output_____
###Markdown
But the input we'll be passing will be a time-series
###Code
inp_ts = torch.Tensor([1,2,3]).unsqueeze(1).unsqueeze(2)
print(inp_ts.shape)
inp_ts[0]
inp_ts[0].shape
hidden_act = torch.zeros(N_hidden, 1)
#-----------first iter--------
hidden_act = hidden_activation(torch.mm(w_hx, inp_ts[0]) + \
torch.mm(w_hh, hidden_act) + \
b_h)
print(hidden_act.shape)
output = (torch.mm(w_yh, hidden_act) + b_y)
print(output)
#-----------second iter--------
hidden_act = hidden_activation(torch.mm(w_hx, inp_ts[1]) + \
torch.mm(w_hh, hidden_act) + \
b_h)
print(hidden_act.shape)
output = (torch.mm(w_yh, hidden_act) + b_y)
print(output)
#-----------third iter--------
hidden_act = hidden_activation(torch.mm(w_hx, inp_ts[2]) + \
torch.mm(w_hh, hidden_act) + \
b_h)
print(hidden_act.shape)
output = (torch.mm(w_yh, hidden_act) + b_y)
print(output)
hidden_act = torch.zeros(N_hidden, 1)
for x in inp_ts: #input time-series
hidden_act = hidden_activation(torch.mm(w_hx, x) + \
torch.mm(w_hh, hidden_act) + \
b_h)
print(hidden_act.shape)
output = (torch.mm(w_yh, hidden_act) + b_y)
print(output)
class RNN(nn.Module):
def __init__(self, N_input, N_hidden, N_output, hidden_activation):
super(RNN, self).__init__()
self.N_input = N_input
self.N_hidden = N_hidden
self.N_output = N_output
self.hidden_activation = hidden_activation
#define weights and biases
self.w_hh = nn.Parameter(data = torch.Tensor(N_hidden, N_hidden), requires_grad = True)
self.w_hx = nn.Parameter(data = torch.Tensor(N_hidden, N_input), requires_grad = True)
self.w_yh = nn.Parameter(data = torch.Tensor(N_output, N_hidden), requires_grad = True)
self.b_h = nn.Parameter(data = torch.Tensor(N_hidden, 1), requires_grad = True)
self.b_y = nn.Parameter(data = torch.Tensor(N_output, 1), requires_grad = True)
self.init_weights()
def init_weights(self):
nn.init.kaiming_uniform_(self.w_hh)
nn.init.kaiming_uniform_(self.w_hx)
nn.init.kaiming_uniform_(self.w_yh)
nn.init.zeros_(self.b_h)
nn.init.zeros_(self.b_y)
def forward(self, inp_ts, hidden_act=None):
if hidden_act is None:
#initialize to zero if hidden not passed
hidden_act = torch.zeros(self.N_hidden, 1)
output_vals = torch.tensor([])
for x in inp_ts: #input time-series
hidden_act = self.hidden_activation(torch.mm(self.w_hx, x) + \
torch.mm(self.w_hh, hidden_act) + \
self.b_h)
output = (torch.mm(self.w_yh, hidden_act) + self.b_y)
output_vals = torch.cat((output_vals, output))
return output_vals, hidden_act
rnn = RNN(N_input, N_hidden, N_output, hidden_activation)
output_vals, hidden_act = rnn(inp_ts)
print(output_vals)
print("---------")
print(hidden_act)
###Output
_____no_output_____
###Markdown
So far so good. Now how do we actually tune the weights? As before, we want to compute a loss between the predictions from the RNN and the labels. Once we have a loss, we can do the usual backpropagation and gradient descent. Recall that our "features" are: $$x_1, x_2, x_3\ldots$$ Our "targets" are: $$x_2, x_3, x_4 \ldots$$ if the lag argument in generate_rnn_data is 1. More generally, it would be: $$x_{1+\text{lag}}, x_{2+\text{lag}}, x_{3+\text{lag}}, \ldots$$ Now, let's focus on the operational aspects for a second. In principle, you would first feed $x_1$ as an input and generate an **estimate** $\hat{x}_2$ as the output. Ideally, this would be close to the actual value $x_2$ but that doesn't have to be the case, especially when the weights haven't been tuned yet. Now, for the second step, we need to input $x_2$ to the RNN. The question is whether we should use $\hat{x}_2$ or $x_2$. In real life, one can imagine forecasting a time-series into the future given values till time t. In this case, we would have to feed our prediction $\hat{x}_{t+1}$ back in as the input at the next time-step since we don't know $x_{t+1}$. The problem with this approach is that errors start compounding really fast. While we might be a bit off at $t+1$, if our prediction $\hat{x}_{t+1}$ is inaccurate, then our prediction $\hat{x}_{t+2}$ will be even worse and so on. In our case, we'll use what's called **teacher forcing**: we'll always feed the actual known $x_t$ at time-step t instead of the prediction from the previous time-step, $\hat{x}_t$ (a sketch of the alternative, autoregressive rollout appears at the end of this section). **Question**: Split the features and target into train and test sets.
###Code
N_examples = len(features)
TRAIN_PERC = 0.70
TRAIN_SPLIT = int(TRAIN_PERC * N_examples)
features_train = features[:TRAIN_SPLIT]
target_train = target[:TRAIN_SPLIT]
features_test = features[TRAIN_SPLIT:]
target_test = target[TRAIN_SPLIT:]
plt.plot(np.concatenate([features_train, features_test]))
plt.plot(features_train, label='train')
plt.plot(np.arange(len(features_train)+1, len(features)+1), features_test, label='test')
plt.legend()
criterion = nn.MSELoss()
optimizer = optim.Adam(rnn.parameters(), lr=1e-3)
N_input = 1 #will pass only one value as input
N_output = 1 #will predict one value
N_hidden = 32 #number of hidden dimensions to use
hidden_activation = nn.ReLU()
rnn = RNN(N_input, N_hidden, N_output, hidden_activation)
features_train = torch.tensor(features_train).unsqueeze(1).unsqueeze(2)
target_train = torch.tensor(target_train).unsqueeze(1).unsqueeze(2)
features_test = torch.tensor(features_test).unsqueeze(1).unsqueeze(2)
target_test = torch.tensor(target_test).unsqueeze(1).unsqueeze(2)
output_vals, hidden_act = rnn(features_train.float())
print(len(output_vals))
print(len(target_train))
#use output_vals directly: wrapping it in torch.tensor() would detach it from the autograd graph
loss = criterion(output_vals, target_train.squeeze(1).float())
print(loss)
optimizer.zero_grad()
loss.backward()
optimizer.step()
###Output
_____no_output_____
###Markdown
We can now put all these ingredients together
###Code
N_input = 1 #will pass only one value as input
N_output = 1 #will predict one value
N_hidden = 4 #number of hidden dimensions to use
hidden_activation = nn.Tanh()
rnn = RNN(N_input, N_hidden, N_output, hidden_activation)
criterion = nn.MSELoss()
optimizer = optim.Adam(rnn.parameters(), lr=1e-1)
N_epochs = 10000
hidden_act = None
for n in range(N_epochs):
output_vals, hidden_act = rnn(features_train.float(), hidden_act = None)
loss = criterion(output_vals, target_train.squeeze(1).float())
#loss.requires_grad = True
optimizer.zero_grad()
loss.backward()
optimizer.step()
if n % 100 == 0:
print(rnn.w_yh.grad)
print(f'loss = {loss}')
print(output_vals.requires_grad)
output_vals.shape
criterion(output_vals, target_train.squeeze(1).float())
features_train[0:10]
plt.plot([i.item() for i in output_vals])
plt.plot([i[0] for i in target_train.numpy()])
rnn.w_hh.grad
#leftover inspection cell: the loss from the loop above has already been backpropagated,
#so recompute it before calling backward() again
output_vals, hidden_act = rnn(features_train.float(), hidden_act=None)
loss = criterion(output_vals, target_train.squeeze(1).float())
optimizer.zero_grad()
loss.backward()
optimizer.step()
rnn.w_hh
rnn.w_hx.grad
###Output
_____no_output_____
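###Markdown
A possible next step (my addition, not from the original notebook): at inference time, forecasting past the end of the train series means feeding each prediction back in as the next input instead of using teacher forcing. A minimal autoregressive rollout with the rnn trained above:
###Code
#hedged sketch: autoregressive forecasting with the trained RNN
with torch.no_grad():
    #warm up the hidden state on the known (train) part of the series
    _, hidden = rnn(features_train.float(), hidden_act=None)
    x = features_train[-1].float().unsqueeze(0) #last known value, shape [1, 1, 1]
    forecast = []
    for _ in range(len(features_test)):
        out, hidden = rnn(x, hidden_act=hidden)
        forecast.append(out.item())
        x = out.view(1, 1, 1) #feed the prediction back in as the next input
plt.plot(target_test.squeeze().numpy(), label='actual')
plt.plot(forecast, label='autoregressive forecast')
plt.legend()
###Output
_____no_output_____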
|
Real_estate_analysis/real_estate_market.ipynb
|
###Markdown
Research on apartment sale listings**Project work No. 2, Yandex.Practicum - Data Science** Project description**Input data:**An archive of listings for apartments for sale in Saint Petersburg and the surrounding settlements over several years. For each apartment on sale, two kinds of data are available: the first are entered by the user, the second are obtained automatically from map data.**Project goal:**Process and explore the input data in order to establish the parameters needed for determining the price. Project structure* 1. [Loading and reviewing general information about the dataset](start)* 2. [Data preprocessing](preprocessing) * 2.1 [Handling missing values](nan) * 2.2 [Casting data to the required types](typecast)* 3. [Feature augmentation](augmentation) * 3.1 [Price per square metre](price_meter) * 3.2 [Day of week, month and year of publication](date) * 3.3 [Apartment floor](floor) * 3.4 [Ratio of living area to total area and of kitchen area to total area](ratio)* 4. [Exploratory data analysis](eda) * 4.1 [Studying area, price, number of rooms and ceiling height](4.1) * 4.2 [Studying how long apartments take to sell](4.2) * 4.3 [Removing rare and outlying values](4.3) * 4.4 [Studying the factors that affect apartment price](4.4) * 4.5 [Defining the central zone of Saint Petersburg](4.5) * 4.6 [Analysis of apartments in the centre of Saint Petersburg](4.6)* 5. [Overall conclusion](conclusion) 1. Loading and reviewing general information about the dataset Import the required libraries
###Code
import sys as sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Read the data file
###Code
df = pd.read_csv('../datasets/real_estate_data.csv', sep='\t')
###Output
_____no_output_____
###Markdown
Get general information about the dataset
###Code
df.info()
list_null_columns = []
for col in df.columns:
if df[col].isna().values.any():
list_null_columns.append(col)
print("Параметры квартир по которым пропущены значения:\n", list_null_columns)
df.head(5)
###Output
_____no_output_____
###Markdown
Conclusions:* we have data on 23699 apartments with 22 parameters* there are missing values in the columns: ceiling_height, floors_total, living_area, is_apartment, kitchen_area, balcony, locality_name, airports_nearest, cityCenters_nearest, parks_around3000, parks_nearest, ponds_around3000, ponds_nearest, days_exposition 2. Data preprocessing 2.1 Handling missing values
###Code
# The list of columns with missing values was determined earlier
count = []
percent = []
for col in list_null_columns:
count.append(df[col].isna().sum())
percent.append((count[-1] / df.shape[0])*100)
null_values_df = pd.DataFrame(data={'count':count, 'percent':percent}, index=list_null_columns)
null_values_df.sort_values('count', ascending=False)
###Output
_____no_output_____
###Markdown
As a result we see that some apartment parameters have a lot of missing values, while others have only a few. Let's look at the missing values for each parameter separately. **Locality name** Rows with a missing locality name make up only 0.2% of the dataset. Location is one of the most important parameters determining an apartment's price, so we drop rows where the locality name is missing, since we don't know what to replace it with.
###Code
df = df[~df['locality_name'].isna()]
###Output
_____no_output_____
###Markdown
**Total number of floors in the building** 86 rows are missing the total number of floors. We replace the missing value with the current floor + 1, to distort the data as little as possible and to avoid turning the current floor into the top one.
###Code
df['floors_total'] = df['floors_total'].fillna(df['floor'] + 1)
###Output
_____no_output_____
###Markdown
**Distance to the nearest park and distance to the nearest body of water** These distances were obtained automatically from map data, so let's assume the gaps appear when the distance could not be computed for some reason (for example, the search radius is smaller than the distance to the nearest park/pond). In that case we replace the gaps with 0, but only if the number of parks/ponds within a 3 km radius is also missing or 0.
###Code
park_condition = (df['parks_nearest'].isna() & (df['parks_around3000'].isna() | (df['parks_around3000'] < 1.0)))
df.loc[park_condition, ['parks_nearest', 'parks_around3000']] = df.loc[park_condition, ['parks_nearest', 'parks_around3000']].fillna(0)
pond_condition = (df['ponds_nearest'].isna() & (df['ponds_around3000'].isna() | (df['ponds_around3000'] < 1.0)))
df.loc[pond_condition, ['ponds_nearest', 'ponds_around3000']] = df.loc[pond_condition, ['ponds_nearest', 'ponds_around3000']].fillna(0)
###Output
_____no_output_____
###Markdown
Let's check whether any gaps remain in these columns
###Code
if df['ponds_nearest'].isna().sum() == 0 and df['parks_nearest'].isna().sum() == 0:
print("Пропусков в столбцах ponds_nearest и parks_nearest не осталось")
else:
print("Осталось {} пропусков в столбце ponds_nearest и {} пропусков в столбце parks_nearest".
format(df['ponds_nearest'].isna().sum(), df['parks_nearest'].isna().sum()))
###Output
No missing values remain in the ponds_nearest and parks_nearest columns
###Markdown
**Kitchen area and living area** Let's check how many rows are missing both the kitchen area and the living area at the same time
###Code
df[df['living_area'].isna() & df['kitchen_area'].isna()].shape[0]
###Output
_____no_output_____
###Markdown
Living area and kitchen area may be left out by owners of open-plan apartments.
###Code
print('Number of open-plan apartments with no living area specified:', df[df['living_area'].isna() & df['open_plan']].shape[0])
print('Number of open-plan apartments with no kitchen area specified:', df[df['kitchen_area'].isna() & df['open_plan']].shape[0])
###Output
Number of open-plan apartments with no living area specified: 5
Number of open-plan apartments with no kitchen area specified: 67
###Markdown
Let's check the number of rows where both the kitchen area and the living area are missing, while the apartment has more than one room and is not marked as open-plan
###Code
df[df['living_area'].isna() & df['kitchen_area'].isna() & (df['rooms'] > 1) & (df['open_plan'] == False)].shape[0]
###Output
_____no_output_____
###Markdown
The missing kitchen or living area may be caused by user input errors. We won't fill these gaps, since we cannot predict the correct values. **Number of days the listing was posted (from publication to removal)** Let's check whether there are any listings with 0 days of exposure
###Code
print('Number of listings with 0 days of exposure:', df[df['days_exposition'] < 1.0].shape[0])
###Output
Number of listings with 0 days of exposure: 0
###Markdown
A possible reason for NaN is that the number of days could not be computed when it is less than 1. In that case we replace NaN with 0.
###Code
df['days_exposition'].fillna(0, inplace=True)
###Output
_____no_output_____
###Markdown
**Number of balconies** We replace missing balcony counts with 0, since the most likely reason for the gap is a field left unfilled by the user
###Code
df['balcony'].fillna(0, inplace=True)
###Output
_____no_output_____
###Markdown
Conclusions:* filled gaps in: floors_total, balcony, locality_name, parks_around3000, parks_nearest, ponds_around3000, ponds_nearest, days_exposition* did not fill gaps in: * is_apartment: we cannot verify the correctness of a replacement and do not use it in the further analysis * ceiling_height: filling it would distort the results of the further analysis; this parameter is examined in more detail below * airports_nearest: we assume the gaps mean there is no airport nearby, so we leave them * cityCenters_nearest: we assume the gaps mean the apartment is too far from the centre, so we leave them * kitchen_area and living_area: we avoid distorting the data, since the correctness of a replacement cannot be verified 2.2 Casting data to the required types Cast the columns describing the total number of floors in the building and the number of balconies to the integer type
###Code
df['balcony'] = df['balcony'].astype('int')
df['floors_total'] = df['floors_total'].astype('int')
###Output
_____no_output_____
###Markdown
Cast the columns describing the number of parks and the number of ponds within a 3 km radius to the integer type
###Code
df['parks_around3000'] = df['parks_around3000'].astype('int')
df['ponds_around3000'] = df['ponds_around3000'].astype('int')
###Output
_____no_output_____
###Markdown
Cast the price to integer, since precision to the nearest rouble is more than enough for us
###Code
df['last_price'] = df['last_price'].astype('int')
###Output
_____no_output_____
###Markdown
Cast the number of days of exposure to the integer type
###Code
df['days_exposition'] = df['days_exposition'].astype('int')
###Output
_____no_output_____
###Markdown
Cast the is_apartment field to the boolean type
###Code
df['is_apartment'] = df['is_apartment']==True
###Output
_____no_output_____
###Markdown
Convert the publication date to the datetime64 type
###Code
df['first_day_exposition'] = pd.to_datetime(df['first_day_exposition'], format='%Y-%m-%d')
###Output
_____no_output_____
###Markdown
Let's check the resulting new data types
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 23650 entries, 0 to 23698
Data columns (total 22 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 total_images 23650 non-null int64
1 last_price 23650 non-null int64
2 total_area 23650 non-null float64
3 first_day_exposition 23650 non-null datetime64[ns]
4 rooms 23650 non-null int64
5 ceiling_height 14490 non-null float64
6 floors_total 23650 non-null int64
7 living_area 21752 non-null float64
8 floor 23650 non-null int64
9 is_apartment 23650 non-null bool
10 studio 23650 non-null bool
11 open_plan 23650 non-null bool
12 kitchen_area 21381 non-null float64
13 balcony 23650 non-null int64
14 locality_name 23650 non-null object
15 airports_nearest 18116 non-null float64
16 cityCenters_nearest 18139 non-null float64
17 parks_around3000 23650 non-null int64
18 parks_nearest 23650 non-null float64
19 ponds_around3000 23650 non-null int64
20 ponds_nearest 23650 non-null float64
21 days_exposition 23650 non-null int64
dtypes: bool(3), datetime64[ns](1), float64(8), int64(9), object(1)
memory usage: 3.7+ MB
###Markdown
Conclusions:* cast to integer type: balcony, floors_total, parks_around3000, ponds_around3000, last_price, days_exposition* cast to boolean type: is_apartment* cast to datetime64: first_day_exposition* some fields could also be cast to single-precision float to save memory, but that would only be worthwhile for a very large dataset 3. Feature augmentation 3.1 Price per square metre
###Code
df['price_meter'] = df['last_price'] / df['total_area']
###Output
_____no_output_____
###Markdown
3.2 Day of week, month and year of publication
###Code
df['dayofweek'] = pd.DatetimeIndex(df['first_day_exposition']).dayofweek
df['month'] = pd.DatetimeIndex(df['first_day_exposition']).month
df['year'] = pd.DatetimeIndex(df['first_day_exposition']).year
###Output
_____no_output_____
###Markdown
3.3 Apartment floor We add a new column with a categorical floor label: 'первый' (first), 'последний' (last), 'другой' (other); the Russian string values are kept as data values so downstream code remains unchanged
###Code
# Function returning a floor category based on the floor and the total number of floors in the building
def categorization_floor(data):
    if data['floor'] == 1:
        return 'первый'  # first floor
    elif data['floor'] == data['floors_total']:
        return 'последний'  # last (top) floor
    elif data['floor'] < data['floors_total']:
        return 'другой'  # other
    else:
        return np.NaN
df['categorical_floor'] = df[['floor', 'floors_total']].apply(categorization_floor, axis=1)
###Output
_____no_output_____
###Markdown
3.4 Ratio of living area to total area, and of kitchen area to total area
###Code
df['living_area_ratio'] = df['living_area'] / df['total_area']
df['kitchen_area_ratio'] = df['kitchen_area'] / df['total_area']
###Output
_____no_output_____
###Markdown
Display the first 5 rows of the dataframe with the new columns
###Code
df[['dayofweek', 'month', 'year', 'categorical_floor', 'living_area_ratio', 'kitchen_area_ratio']].head(5)
###Output
_____no_output_____
###Markdown
4. Exploratory data analysis Let's write a general-purpose function for plotting a histogram
###Code
def plot_my_hist(data=None, custom_bins=None):
plt.figure(figsize=(10,4))
plt.title('Histogram of '+data.name, fontsize=16)
plt.xlabel(data.name, fontsize=14)
plt.ylabel("count", fontsize=14)
    if custom_bins is None:
plt.hist(data, bins=50)
else:
plt.hist(data, bins=custom_bins)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
4.1 Studying area, price, number of rooms and ceiling height Let's plot a histogram of apartment area
###Code
plot_my_hist(df['total_area'])
###Output
_____no_output_____
###Markdown
For clarity, plot the chart only for apartments with an area below 150 square metres, though there are apartments larger than 800 square metres
###Code
plot_my_hist(df.loc[df['total_area'] < 150, 'total_area'])
###Output
_____no_output_____
###Markdown
Conclusion: most apartments are in the 30-60 square metre range, with a peak at 40-45 square metres. Plot a histogram of apartment prices
###Code
plot_my_hist(df['last_price'])
###Output
_____no_output_____
###Markdown
Most of the values fall into the first few histogram bins, so plot the chart only for values below 30,000,000
###Code
plot_my_hist(df.loc[df['last_price'] < 30000000, 'last_price'])
###Output
_____no_output_____
###Markdown
Conclusion: the chart roughly resembles a normal distribution with a long tail towards abnormally expensive apartments. The number of listings peaks at a price of about 3.75 million roubles. Plot a histogram of the number of rooms
###Code
plot_my_hist(df['rooms'])
###Output
_____no_output_____
###Markdown
There are very few apartments with more than 4 rooms and almost none with more than 6. Counts above 9 rooms are very rare and look like outliers. Let's make the chart clearer
###Code
plot_my_hist(df.loc[df['rooms'] <= 6, 'rooms'], custom_bins=7)
###Output
_____no_output_____
###Markdown
Conclusion: mostly 1-2 room apartments are sold; there are also outliers with up to 20 rooms, and some listings state 0 rooms (possibly open-plan, but we will not verify that now). Plot a histogram of ceiling height
###Code
plot_my_hist(df.loc[~df['ceiling_height'].isna(),'ceiling_height'])
###Output
_____no_output_____
###Markdown
There are obviously some outliers here. Let's look at the ceiling-height values above 5 m
###Code
df[df['ceiling_height'] > 5]['ceiling_height'].sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Some special apartments, perhaps duplexes, may well have 5 m ceilings, but 10 m, 27 m or 100 m is clearly something odd. Plot the histogram without the anomalous values, i.e. excluding ceiling heights above 5 m and below 2.3 m
###Code
plot_my_hist(df.loc[(df['ceiling_height'].notna()) & (df['ceiling_height'] < 5) &(df['ceiling_height'] > 2.3), 'ceiling_height'], custom_bins=20)
###Output
_____no_output_____
###Markdown
Conclusion: most apartments have ceilings between 2.4 m and 2.8 m; apartments with ceilings above 3 m are much rarer, and above 4 m almost absent. Some listings also contain anomalous values such as 1 m or 25 m. 4.2 Studying time to sale. Recall that missing values in the time-to-sale column were earlier filled with 0. It is hard to imagine an apartment selling on the day the listing was posted, so we exclude such listings
###Code
days_exposition = df.loc[df['days_exposition'] > 0, 'days_exposition']
plot_my_hist(days_exposition)
###Output
_____no_output_____
###Markdown
Let's take a closer look at the data
###Code
plot_my_hist(days_exposition[days_exposition < 800])
print('Mean time to sell an apartment: {:0.0f} days'.format(days_exposition.mean()))
print('Median time to sell an apartment: {:0.0f} days'.format(days_exposition.median()))
###Output
Mean time to sell an apartment: 181 days
Median time to sell an apartment: 95 days
###Markdown
Conclusion: the mean and the median differ by almost a factor of 2, because some listings stay on the site for a very long time, although there are not many of them. For estimating time to sale it is better to use the median value of 95 days. Judging by the chart, a sale taking more than a year is unusually long, while one taking less than 50 days is fairly fast. 4.3 Removing rare and outlying values. Drop listings with:* 0 rooms* more than 6 rooms* ceiling height above 4 m* ceiling height below 2.4 m* price above 20,000,000* time to sale above 365 days
###Code
df = df.query(
'rooms <= 6 and rooms > 0 and '
'ceiling_height < 4.0 and ceiling_height > 2.3 and '
'last_price < 20000000 and '
'days_exposition < 365'
)
print('{} listings remain after cleaning'.format(df.shape[0]))
###Output
11988 listings remain after cleaning
###Markdown
4.4 Studying factors that affect apartment price. Price vs. total area
###Code
def plot_my_scatter(data=None, x=None, y=None, alpha=None):
plt.figure(figsize=(10,4))
plt.title('Does '+x+' affect '+y+'?', fontsize=16)
plt.xlabel(x, fontsize=14)
plt.ylabel(y, fontsize=14)
plt.scatter(data[x], data[y], alpha=alpha)
plt.grid(True)
plt.show()
def plot_my_pivot(data):
plt.figure(figsize=(10,4))
plt.title('Does '+data.index.name+' affect '+data.columns[0]+'?', fontsize=16)
plt.xlabel(data.index.name, fontsize=14)
plt.ylabel(data.columns[0], fontsize=14)
ax = plt.axes()
ax.plot(data.index, data.values)
plt.grid(True)
plt.show()
plot_my_scatter(df, 'total_area', 'last_price')
###Output
_____no_output_____
###Markdown
There is a clear positive relationship between price and area, although some listings deviate from it. Price vs. number of rooms
###Code
plot_my_scatter(df, 'rooms', 'last_price')
###Output
_____no_output_____
###Markdown
This chart is not very informative. Let's group the data by the number of rooms.
###Code
plot_my_pivot(
data= df.pivot_table(index='rooms', values='last_price', aggfunc='median')
)
###Output
_____no_output_____
###Markdown
There is a clear positive relationship between price and the number of rooms. For apartments with more than 4 rooms the price grows faster, since such apartments more often belong to the premium segment. Price vs. distance from the city centre. We did not fill the missing values in the distance-to-centre column, so plot the chart excluding rows where these values are missing
###Code
plot_my_scatter(df[df['cityCenters_nearest'].notna()], 'cityCenters_nearest', 'last_price')
###Output
_____no_output_____
###Markdown
A relationship between distance and price is visible: in general, the farther from the centre, the cheaper the apartment. However, the relationship may differ by locality, so let's look at Saint Petersburg separately.
###Code
plot_my_scatter(
df[df['cityCenters_nearest'].notna() & (df['locality_name'] == 'Санкт-Петербург')],
'cityCenters_nearest',
'last_price'
)
###Output
_____no_output_____
###Markdown
Let's try adjusting the transparency of the points on the chart
###Code
plot_my_scatter(
df[df['cityCenters_nearest'].notna() & (df['locality_name'] == 'Санкт-Петербург')],
'cityCenters_nearest',
'last_price',
alpha=0.1
)
###Output
_____no_output_____
###Markdown
We can conclude that price depends on distance from the centre, but the spread of prices at each distance is very large. It may be better to use this parameter after grouping by another parameter that affects price more strongly (for example, area). Price vs. floor
###Code
plot_my_pivot(
df.pivot_table(index='categorical_floor', values='last_price', aggfunc='median')
)
###Output
_____no_output_____
###Markdown
Conclusion: other things being equal, apartments on the last floor are cheaper, and apartments on the first floor are the cheapest. Price vs. listing date. Plot price against:* day of week* month* year
###Code
plot_my_pivot(
df.pivot_table(index='dayofweek', values='last_price', aggfunc='median')
)
###Output
_____no_output_____
###Markdown
Apartments listed on Tuesday are the most expensive, and apartments listed on weekends are cheaper than those listed on weekdays.
###Code
plot_my_pivot(
df.pivot_table(index='month', values='last_price', aggfunc='median')
)
###Output
_____no_output_____
###Markdown
Overall, no clear relationship is visible, though there is a slight dip in the median price in June
###Code
plot_my_pivot(
df.pivot_table(index='year', values='last_price', aggfunc='median')
)
###Output
_____no_output_____
###Markdown
The price-by-year chart shows a sharp drop in the average price starting in 2014 and a gradual recovery from 2017. 4.5 Average apartment price in the top-10 localities by number of listings. Select the top-10 localities by number of listings
###Code
top10_locality = df.groupby('locality_name')['last_price'].count().sort_values(ascending=False).head(10)
top10_locality
###Output
_____no_output_____
###Markdown
Compute the average apartment price in the 10 selected localities
###Code
pd.set_option('display.float_format', lambda x: '%.3f' % x)
df.query('locality_name in @top10_locality.index').pivot_table(index='locality_name', values='last_price', aggfunc='mean').sort_values(by='last_price')
###Output
_____no_output_____
###Markdown
Conclusion:* the most expensive housing is in Saint Petersburg* the cheapest housing is in Gatchina 4.6 Determining the central zone of Saint Petersburg. Select listings in Saint Petersburg that have a distance to the centre specified
###Code
df_sp = df[(df['locality_name'] == 'Санкт-Петербург') & df['cityCenters_nearest'].notna()]
###Output
_____no_output_____
###Markdown
Create a new column with the distance rounded to the nearest kilometre
###Code
df_sp['cityCenters_nearest_km'] = round(df_sp['cityCenters_nearest'] / 1000)
df_sp['cityCenters_nearest_km'] = df_sp['cityCenters_nearest_km'].astype('int')
###Output
_____no_output_____
###Markdown
Build a table mapping each distance (to the nearest km) to the average apartment price
###Code
pivot_sp = df_sp.pivot_table(index='cityCenters_nearest_km', values='last_price', aggfunc='mean')
###Output
_____no_output_____
###Markdown
Plot the chart
###Code
plot_my_pivot(pivot_sp)
###Output
_____no_output_____
###Markdown
Check whether the chart differs when using the median price
###Code
plot_my_pivot(
df_sp.pivot_table(index='cityCenters_nearest_km', values='last_price', aggfunc='median')
)
###Output
_____no_output_____
###Markdown
Conclusion:* take distances up to 8 km from the centre as the central zone* an anomalous rise in the average price was found at 26-27 km from the centre 4.7 Analysis of apartments in the centre of Saint Petersburg
###Code
df_sp_center = df_sp[df_sp['cityCenters_nearest_km'] <= 8]
###Output
_____no_output_____
###Markdown
Plot a histogram of total area
###Code
plot_my_hist(df_sp_center['total_area'])
###Output
_____no_output_____
###Markdown
Plot a histogram of the number of rooms
###Code
plot_my_hist(df_sp_center.loc[df_sp_center['rooms'] <= 6, 'rooms'])
###Output
_____no_output_____
###Markdown
Plot a histogram of ceiling height
###Code
plot_my_hist(df_sp_center.loc[~df_sp_center['ceiling_height'].isna(),'ceiling_height'])
###Output
_____no_output_____
###Markdown
Check how the number of rooms affects the apartment price
###Code
plot_my_pivot(
df_sp_center.pivot_table(index='rooms', values='last_price', aggfunc='median')
)
###Output
_____no_output_____
###Markdown
Check how the floor affects the listed apartment price
###Code
plot_my_pivot(
df_sp_center.pivot_table(index='categorical_floor', values='last_price', aggfunc='median')
)
###Output
_____no_output_____
###Markdown
Check how the distance from the centre affects the apartment price
###Code
plot_my_scatter(
df_sp_center,
'cityCenters_nearest',
'last_price',
alpha=0.5
)
###Output
_____no_output_____
###Markdown
Check how the listing date affects the apartment price
###Code
plot_my_pivot(
df_sp_center.pivot_table(index='dayofweek', values='last_price', aggfunc='median')
)
plot_my_pivot(
df_sp_center.pivot_table(index='month', values='last_price', aggfunc='median')
)
plot_my_pivot(
df_sp_center.pivot_table(index='year', values='last_price', aggfunc='median')
)
###Output
_____no_output_____
|
calculations/spatial_analysis/spatial_model_sketch.ipynb
|
###Markdown
Standard style figure
###Code
fig, ax = plt.subplots(2, figsize = (3.0, 1.5), sharex=True)
for a in ax:
a.spines['right'].set_color('none')
a.spines['top'].set_color('none')
a.yaxis.set_ticks_position('left')
a.xaxis.set_ticks_position('bottom')
a.set_xticks([])
a.set_yticks([])
ax[0].set_ylim([0.75, 1.0])
ax[0].set_ylabel('growth\nfraction')
ax[0].plot(space, GF, 'k')
ax[1].set_ylim([0.02, 0.07])
ax[1].set_xlabel('AP position')
ax[1].set_ylabel('mitotic\nindex')
ax[1].plot(space, mi, 'k')
plt.show()
###Output
_____no_output_____
###Markdown
xkcd style
###Code
plt.xkcd()
mpl.rcParams['axes.labelsize'] = 8
mpl.rcParams['xtick.labelsize'] = 8
mpl.rcParams['ytick.labelsize'] = 8
mpl.rcParams['legend.fontsize'] = 8
mpl.rcParams['text.usetex'] = False
mpl.rcParams['svg.fonttype'] = 'none'
fig, ax = plt.subplots(2, figsize = (3.0, 1.5), sharex=True)
for a in ax:
a.spines['right'].set_color('none')
a.spines['top'].set_color('none')
a.yaxis.set_ticks_position('left')
a.xaxis.set_ticks_position('bottom')
a.set_xticks([])
a.set_yticks([])
ax[0].set_ylim([0.75, 1.0])
ax[0].set_ylabel('growth\nfraction')
ax[0].plot(space, GF, 'k')
ax[1].set_ylim([0.02, 0.07])
ax[1].set_xlabel('AP position')
ax[1].set_ylabel('mitotic\nindex')
ax[1].plot(space, mi, 'k')
plt.show()
###Output
_____no_output_____
###Markdown
xkcd style with Helvetica
###Code
plt.xkcd()
mpl.rcParams['axes.labelsize'] = 8
mpl.rcParams['xtick.labelsize'] = 8
mpl.rcParams['ytick.labelsize'] = 8
mpl.rcParams['legend.fontsize'] = 8
mpl.rcParams['text.usetex'] = False
mpl.rcParams['svg.fonttype'] = 'none'
mpl.rcParams['font.family'] = ['Helvetica LT Std']
mpl.rcParams['font.sans-serif'] = ['Helvetica LT Std']
mpl.rcParams['path.effects'] = None
fig, ax = plt.subplots(2, figsize = (2.0, 1.5), sharex=True)
for a in ax:
a.spines['right'].set_color('none')
a.spines['top'].set_color('none')
a.yaxis.set_ticks_position('left')
a.xaxis.set_ticks_position('bottom')
a.set_xticks([])
a.set_yticks([])
ax[0].set_ylim([0.75, 1.0])
ax[0].set_ylabel('growth\nfraction')
ax[0].plot(space, GF, 'k')
ax[1].set_ylim([0.02, 0.07])
ax[1].set_xlabel('AP position')
ax[1].set_ylabel('mitotic\nindex')
ax[1].plot(space, mi, 'k')
plt.savefig('../../figure_plots/Fig2_spatial_model_sketch.svg')
plt.show()
###Output
_____no_output_____
|
BankMarketCampaignOperationalization.ipynb
|
###Markdown
Market Campaign Prediction NoteBook SummaryMarket campaign prediction aims to predict the success of telemarketing. Description Use Case DescriptionA company, such as bank, wants to do market campaign prediction. The bank collects customer demographic data, bank account information, history telemarketing activity record from various data sources. The task is to build a pipeline that automatically analyze the bank market dataset, to predict the success of telemarketing calls for selling bank long-term deposits. The aim is to provide market intelligence for the bank and better target valuable customers and hence reduce marketing cost. Use Case DataThe data used in this use case is [BankMarket dataset](https://archive.ics.uci.edu/ml/datasets/Bank+Marketing), a publicly available data set collected from UCI Machine Learning repository. The data contains 17 variables and 4521 rows. We shared the market data in the data folder. You can use this shared data to follow the steps in this template, or you can access the full dataset from UCI website.Each instance in the data set has 17 fields:* 1 - age (numeric)* 2 - job : type of job (categorical: "admin.","unknown","unemployed","management","housemaid","entrepreneur","student", "blue-collar","self-employed","retired","technician","services") * 3 - marital : marital status (categorical: "married","divorced","single"; note: "divorced" means divorced or widowed)* 4 - education (categorical: "unknown","secondary","primary","tertiary")* 5 - default: has credit in default? (binary: "yes","no")* 6 - balance: average yearly balance, in euros (numeric) * 7 - housing: has housing loan? (binary: "yes","no")* 8 - loan: has personal loan? (binary: "yes","no")* 9 - contact: contact communication type (categorical: "unknown","telephone","cellular") * 10 - day: last contact day of the month (numeric)* 11 - month: last contact month of year (categorical: "jan", "feb", "mar", ..., "nov", "dec")* 12 - duration: last contact duration, in seconds (numeric)* 13 - campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)* 14 - pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric, -1 means client was not previously contacted)* 15 - previous: number of contacts performed before this campaign and for this client (numeric)* 16 - poutcome: outcome of the previous marketing campaign (categorical: "unknown","other","failure","success")Target variable:* 17 - y - has the client subscribed a term deposit? (binary: "yes","no") Market Campaign Operationalization Schema GenerationIn order to deploy the model as a web-service, we need first define functions to generate schema file for the service.
###Code
# This script generates the scoring and schema files
# necessary to operationalize the Market Campaign prediction sample
# Init and run functions
from azureml.api.schema.dataTypes import DataTypes
from azureml.api.schema.sampleDefinition import SampleDefinition
from azureml.api.realtime.services import generate_schema
import pandas as pd
# Prepare the web service definition by authoring
# init() and run() functions. Test the functions
# before deploying the web service.
def init():
from sklearn.externals import joblib
# load the model file
global model
model = joblib.load('./code/marketcampaign/dt.pkl')
def run(input_df):
import json
df = df1.append(input_df, ignore_index=True)
columns_to_encode = list(df.select_dtypes(include=['category','object']))
for column_to_encode in columns_to_encode:
dummies = pd.get_dummies(df[column_to_encode])
one_hot_col_names = []
for col_name in list(dummies.columns):
one_hot_col_names.append(column_to_encode + '_' + col_name)
dummies.columns = one_hot_col_names
df = df.drop(column_to_encode, axis=1)
df = df.join(dummies)
pred = model.predict(df)
return json.dumps(str(pred[12]))
#return pred[12]
print('executed')
df1 = pd.DataFrame(data=[[30,'admin.','divorced','unknown','yes',1787,'no','no','telephone',19,'oct',79,1,-1,0,'unknown'],[33,'blue-collar','married','secondary','no',4789,'yes','yes','cellular',11,'may',220,1,339,4,'success'],[35,'entrepreneur','single','tertiary','no',1350,'yes','no','cellular',16,'apr',185,1,330,1,'failure'],[30,'housemaid','married','tertiary','no',1476,'yes','yes','unknown',3,'jun',199,4,-1,0,'unknown'],[59,'management','married','secondary','no',0,'yes','no','unknown',5,'jan',226,1,-1,0,'unknown'],[35,'retired','single','tertiary','no',747,'no','no','cellular',23,'feb',141,2,176,3,'failure'],[36,'self-employed','married','tertiary','no',307,'yes','no','cellular',14,'mar',341,1,330,2,'other'],[39,'services','married','secondary','no',147,'yes','no','cellular',6,'jul',151,2,-1,0,'unknown'],[41,'student','married','tertiary','no',221,'yes','no','unknown',14,'aug',57,2,-1,0,'unknown'],[43,'technician','married','primary','no',-88,'yes','yes','cellular',17,'sep',313,1,147,2,'failure'],[39,'unemployed','married','secondary','no',9374,'yes','no','unknown',20,'nov',273,1,-1,0,'unknown'],[43,'unknown','married','secondary','no',264,'yes','no','cellular',17,'dec',113,2,-1,0,'unknown']], columns=['age','job','marital','education','default','balance','housing','loan','contact','day','month','duration','campaign','pdays','previous','poutcome'])
df1.dtypes
df1
df = pd.DataFrame([[30,'unemployed','married','primary','no',1787,'no','no','cellular',19,'oct',79,1,-1,0,'unknown']], columns=['age','job','marital','education','default','balance','housing','loan','contact','day','month','duration','campaign','pdays','previous','poutcome'])
df.dtypes
df
init()
input1 = pd.DataFrame([[30,'unemployed','married','primary','no',1787,'no','no','cellular',19,'oct',79,1,-1,0,'unknown']], columns=['age','job','marital','education','default','balance','housing','loan','contact','day','month','duration','campaign','pdays','previous','poutcome'])
input1.head()
run(input1)
inputs = {"input_df": SampleDefinition(DataTypes.PANDAS, df)}
# The prepare statement writes the scoring file (main.py) and
# the schema file (service_schema.json) the the output folder.
generate_schema(run_func=run, inputs=inputs, filepath='market_service_schema.json')
###Output
_____no_output_____
###Markdown
Scoring FunctionThen, we will need to define a scoring function to score on the new instance.
###Code
import pandas as pd
def init():
from sklearn.externals import joblib
# load the model file
global model
model = joblib.load('./code/marketcampaign/dt.pkl')
def run(input_df):
import json
df = df1.append(input_df, ignore_index=True)
columns_to_encode = list(df.select_dtypes(include=['category','object']))
for column_to_encode in columns_to_encode:
dummies = pd.get_dummies(df[column_to_encode])
one_hot_col_names = []
for col_name in list(dummies.columns):
one_hot_col_names.append(column_to_encode + '_' + col_name)
dummies.columns = one_hot_col_names
df = df.drop(column_to_encode, axis=1)
df = df.join(dummies)
pred = model.predict(df)
return json.dumps(str(pred[12]))
#return pred[12]
print('executed')
df1 = pd.DataFrame(data=[[30,'admin.','divorced','unknown','yes',1787,'no','no','telephone',19,'oct',79,1,-1,0,'unknown'],[33,'blue-collar','married','secondary','no',4789,'yes','yes','cellular',11,'may',220,1,339,4,'success'],[35,'entrepreneur','single','tertiary','no',1350,'yes','no','cellular',16,'apr',185,1,330,1,'failure'],[30,'housemaid','married','tertiary','no',1476,'yes','yes','unknown',3,'jun',199,4,-1,0,'unknown'],[59,'management','married','secondary','no',0,'yes','no','unknown',5,'jan',226,1,-1,0,'unknown'],[35,'retired','single','tertiary','no',747,'no','no','cellular',23,'feb',141,2,176,3,'failure'],[36,'self-employed','married','tertiary','no',307,'yes','no','cellular',14,'mar',341,1,330,2,'other'],[39,'services','married','secondary','no',147,'yes','no','cellular',6,'jul',151,2,-1,0,'unknown'],[41,'student','married','tertiary','no',221,'yes','no','unknown',14,'aug',57,2,-1,0,'unknown'],[43,'technician','married','primary','no',-88,'yes','yes','cellular',17,'sep',313,1,147,2,'failure'],[39,'unemployed','married','secondary','no',9374,'yes','no','unknown',20,'nov',273,1,-1,0,'unknown'],[43,'unknown','married','secondary','no',264,'yes','no','cellular',17,'dec',113,2,-1,0,'unknown']], columns=['age','job','marital','education','default','balance','housing','loan','contact','day','month','duration','campaign','pdays','previous','poutcome'])
# Implement test code to run in IDE or Azure ML Workbench
if __name__ == '__main__':
init()
input = input1
print(run(input))
#input = "{}"
#run(input)
###Output
"0"
|
examples/colab/component_examples/classifiers/sentiment_classification.ipynb
|
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/nlu/blob/master/examples/colab/component_examples/classifiers/sentiment_classification.ipynb) Sentiment Classification with NLU for Twitter 1. Setup Java 8 and NLU
###Code
import os
! apt-get update -qq > /dev/null
# Install java
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! pip install nlu pyspark==2.4.7 > /dev/null
###Output
_____no_output_____
###Markdown
2. Load pipeline and get sample predictions
###Code
import nlu
sentiment_pipe = nlu.load('en.sentiment.twitter')
sentiment_pipe.predict('@elonmusk Tesla stock price is too high imo')
###Output
analyze_sentimentdl_use_twitter download started this may take some time.
Approx size to download 928.3 MB
[OK!]
###Markdown
3. Define a list of String for predictions
###Code
example_tweets = [
"@VirginAmerica Hi, Virgin! I'm on hold for 40-50 minutes -- are there any earlier flights from LA to NYC tonight; earlier than 11:50pm?",
"@VirginAmerica is there special assistance if I travel alone w/2 kids and 1 infant? Priority boarding?",
"@VirginAmerica thank you for checking in. tickets are purchased and customer is happy",
"@VirginAmerica is your website ever coming back online?",
"@VirginAmerica - Is Flight 713 from Love Field to SFO definitely Cancelled Flightled for Monday, February 23?",
"@VirginAmerica Is flight 0769 out of LGA to DFW on time?",
"@VirginAmerica my drivers license is expired by a little over a month. Can I fly Friday morning using my expired license?",
"@VirginAmerica having problems Flight Booking Problems on the web site. keeps giving me an error and to contact by phone. phone is 30 minute wait.",
"@VirginAmerica How do I reschedule my Cancelled Flightled flights online? The change button is greyed out!",
"@VirginAmerica I rang, but there is a wait for 35 minutes!! I can book the same ticket through a vendor, fix your site",
"@VirginAmerica got a flight (we were told) for 4:50 today..,checked my email and its for 4;50 TOMORROW. This is unacceptable.",
"@VirginAmerica our flight into lga was Cancelled Flighted. We're stuck in Dallas. I called to reschedule, told I could get a flight for today...(1/2)",
"@virginamerica why don't any of the pairings include red wine?! Only white is offered :( #redwineisbetter"
]
###Output
_____no_output_____
###Markdown
4. Get predictions for list of strings
###Code
sentiment_pipe.predict(example_tweets)
###Output
_____no_output_____
|
spark-training/spark-python/jupyter-pyspark/PySpark DataFrame Skeleton.ipynb
|
###Markdown
0 Spark SessionBefore working with Spark, we need an entry point. The so called *Spark Session* allows us to create DataFrames etc.
###Code
from pyspark.sql import SparkSession
if not 'spark' in locals():
spark = SparkSession.builder \
.master("local[*]") \
.config("spark.driver.memory","64G") \
.getOrCreate()
spark
###Output
_____no_output_____
###Markdown
1 Creating a DataFrameFirst, let's create some DataFrame from Python objects. While this is probably not the most common thing to do, it is easy and helpful in some situations where you already have some Python objects. It is also possible to create a PySpark DataFrame from an existing PySpark RDD.
###Code
rdd = sc.parallelize([('Alice', 13), ('Bob', 12)])
df = spark.createDataFrame(rdd, ['name', 'age'])
print(df.collect())
###Output
_____no_output_____
###Markdown
PySpark also contains a small method for displaying the contents of a DataFrame. 1.1 Inspect SchemaThe `spark` object has different methods for creating a so called Spark DataFrame object. This object is similar to a table, it contains rows of records, which all conform to a common schema with named columns and specific types. On the surface it heavily borrows concepts from Pandas DataFrames or R DataFrames, although the syntax and many operations are syntactically very different.As the first step, we want to see the contents of the DataFrame. This can be easily done by using the show method. 1.2 Explicitly specify SchemaIt is also possible to explicitly specify the schema, not only by names, but also by types. This is quite useful in the next steps when we want to load data from CSV files.
###Code
from pyspark.sql.types import *
data = [('Alice', 13), ('Bob', 12)]
schema = StructType([
StructField('name', StringType(), True),
StructField('age', IntegerType(), True),
])
# YOUR CODE HERE
from pyspark.sql import Row
Person = Row('name','age')
alice = Person('Alice',23)
bob = Person('Bob',21)
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
2 Reading Data Of course manually creating DataFrames from a couple of records is not the real use case. Instead we want to read data from files. Spark supports various file formats; we will use JSON in the following example. The entry point for creating Spark objects is an object called spark, which is provided in the notebook and ready to use. We will read a file containing some information on a couple of persons, which will serve as the basis for the next examples. 2.1 Reading JSON Data
###Code
# YOUR CODE HERE
persons = ...
###Output
_____no_output_____
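###Markdown
For reference, a minimal sketch of reading the JSON file (the S3 path below is the one used later in this notebook; adjust it to your environment):
###Code
# Sketch: read a JSON file into a Spark DataFrame
# (assumption: the file is available under this S3 path, as used later in this notebook)
persons = spark.read.json("s3://dimajix-training/data/persons.json")
persons.show()
###Output
_____no_output_____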
###Markdown
2.2 Reading CSV Data
###Code
# YOUR CODE HERE
persons = ...
persons.show()
persons.printSchema()
###Output
_____no_output_____
###Markdown
2.3 Explicit SchemaWe already saw that we can explicitly specify a schema when we create a DataFrame from a Python list of objects. We can also specify a schema when we read data from a storage. This is highly recommended, otherwise Spark would use automatic schema inference, which might hide problems of a changed data delivery.
###Code
from pyspark.sql.types import *
schema = StructType([
StructField("age", LongType(), False),
StructField("height", LongType(), False),
StructField("name", StringType(), False),
StructField("sex", StringType(), False),
])
persons = ...
persons.toPandas()
###Output
_____no_output_____
###Markdown
3 Interacting with Pandas Now that we can create and read Spark DataFrames, we also want to convert them into Pandas DataFrames. Pandas is a Python package which also offers a concept called *DataFrame*, but that has nothing to do with Spark. Actually Pandas is much older than Spark, and Spark borrowed many ideas and concepts from Pandas (and from R of course). Nevertheless it is quite useful to convert a Spark DataFrame into a Pandas DataFrame, specifically because the Jupyter Notebook directly supports Pandas DataFrames and renders them much nicer. Pandas also supports graphics, so we can create a (in this case meaningless) graph of a Pandas DataFrame.
###Code
%matplotlib inline
pdf.plot()
###Output
_____no_output_____
###Markdown
Attention: Beware of huge DataFrames! Do not forget that Apache Spark has been designed and built to handle really huge data sets, which do not need to fit into memory. Spark DataFrames can contain billions of rows and are stored in a distributed way on many nodes in a cluster. Actually the contents do not even need to be physically present at all, as long as the input data is accessible. But calling the toPandas() method will transfer all the records to a single machine (where the Jupyter Notebook runs) - but maybe this computer does not have enough memory to hold all the data. In this case, you risk that the notebook process will crash with an Out-Of-Memory error (OOM). So you should only use toPandas() when you are really sure that the DataFrame contains a limited amount of records. 4 Simple Transformations 4.1 Projections The simplest thing to do is to create a new DataFrame with a subset of the available columns
###Code
from pyspark.sql.functions import *
# YOUR CODE HERE
###Output
_____no_output_____
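###Markdown
A minimal sketch of a projection, assuming the `persons` DataFrame has been loaded as above (columns `age`, `height`, `name`, `sex`):
###Code
# Sketch: keep only the name and age columns
result = persons.select("name", "age")
result.show()
###Output
_____no_output_____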
###Markdown
4.2 Addressing Columns Spark supports multiple different ways for addressing a column. We just saw one way, but also the following methods are supported for specifying a column:* `df.column_name`* `df['column_name']`* `col('column_name')`* `df[idx]`All these methods return a Column object, which is an abstract representative of the data in the column. As we will see soon, transformations can be applied to a Column in order to derive new values. Beware of Lowercase and Uppercase While PySpark itself is case insensitive concerning column names, Python itself is case sensitive. Since the first method for addressing columns by treating them as fields of a Python object *is* Python syntax, this is also case sensitive!
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
4.3 Transformations The `select` method actually accepts any column object. A column object conceptually represents a column in a DataFrame. The column may either refer directly to an existing column of the input DataFrame, or it may represent the result of a calculation or transformation of one or multiple columns of the input DataFrame. For example if we simply want to transform the name into upper case, we can do so by using a function `upper` provided by PySpark.
###Code
from pyspark.sql.functions import *
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Defining new Column Names The resulting DataFrame again has a schema, but the column names do not look very nice. But by using the `alias` method of a `Column` object, you can immediately rename the newly created column like you are already used to in SQL with `SELECT complex_operation(...) AS nice_name FROM ...`. Technically specifying a new name for the resulting column is not required (as we already saw above); if the name is not specified, PySpark will generate a name from the expression. But since this generated name tends to be rather long and contains the logic instead of the intention, it is highly recommended to always explicitly specify the name of the resulting column using `alias`.
###Code
# YOUR CODE HERE
###Output
_____no_output_____
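###Markdown
A minimal sketch combining a simple transformation with `alias`, assuming `persons` has been loaded:
###Code
from pyspark.sql.functions import upper

# Sketch: convert the name to upper case and give the resulting column a readable name
result = persons.select(upper(persons.name).alias("upper_name"))
result.show()
###Output
_____no_output_____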
###Markdown
You can also perform simple mathematical calculations like addition, multiplication etc.
###Code
# YOUR CODE HERE
result = ...
result.toPandas()
numbers = spark.createDataFrame([(x,) for x in range(0,10)], ['number'])
result = ...
result.toPandas()
###Output
_____no_output_____
###Markdown
Common FunctionsYou can find the full list of available functions at [PySpark SQL Module](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlmodule-pyspark.sql.functions). Commonly used functions for example are as follows:* [`concat(*cols)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.concat) - Concatenates multiple input columns together into a single column.* [`substring(col,start,len)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.substring) - Substring starts at pos and is of length len when str is String type or returns the slice of byte array that starts at pos in byte and is of length len when str is Binary type.* [`instr(col,substr)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.instr) - Locate the position of the first occurrence of substr column in the given string. Returns null if either of the arguments are null.* [`locate(col,substr, pos)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.locate) - Locate the position of the first occurrence of substr in a string column, after position pos.* [`length(col)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.length) - Computes the character length of string data or number of bytes of binary data. * [`upper(col)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.upper) - Converts a string column to upper case.* [`lower(col)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.lower) - Converts a string column to lower case.* [`coalesce(*cols)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.coalesce) - Returns the first column that is not null.* [`isnull(col)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.isnull) - An expression that returns true iff the column is null.* [`isnan(col)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.isnan) - An expression that returns true iff the column is NaN.* [`hash(cols*)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.hash) - Calculates the hash code of given columns.Spark also supports conditional expressions, like the SQL `CASE WHEN` construct* [`when(condition, value)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.when) - Evaluates a list of conditions and returns one of multiple possible result expressions.There are also some special functions often required* [`col(str)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.col) - Returns a Column based on the given column name.* [`lit(val)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.lit) - Creates a Column of literal value.* [`expr(str)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.expr) - Parses the expression string into the column that it represents User Defined FunctionsUnfortunately you cannot directly use normal Python functions for transforming DataFrame columns. Although PySpark already provides many useful functions, this might not always sufficient. But fortunately you can *convert* a standard Python function into a PySpark function, thereby defining a so called *user defined function* (UDF). Details will be explained in detail in the training. 
4.4 Injecting Literal Values Sometimes it is required to inject literal values (i.e. strings or numbers) into a transformation expression. Since simply using a string could mean either a column with that name or the literal itself, Spark offers the function `lit` to explicitly mark a string (or any other value) as a literal. `lit` creates a PySpark column object from the value. This means that all column methods are available for the literal.
###Code
result = persons.select(concat(lit('Name:'), persons.name, lit(' Age:'), persons.age).alias('text'))
result.toPandas()
###Output
_____no_output_____
###Markdown
4.5 ExercisesWrite a small `select` statement, which puts either "Mr" or "Mrs" into a new column called "salutation" depending on the sex of the person
###Code
## YOU CODE HERE
###Output
_____no_output_____
###Markdown
4.6 SQL ExpressionsUsing the `expr` function, Spark also allows to use SQL expressions for calculating columns.
###Code
result = persons.select(
# YOUR CODE HERE
)
result.toPandas()
###Output
_____no_output_____
###Markdown
4.7 Adding Columns A special variant of a `select` statement is the `withColumn` method. While the `select` statement requires all resulting columns to be defined as arguments, the `withColumn` method keeps all existing columns and adds a new one. This operation is quite useful since in many cases new columns are derived from the existing ones, while the old ones still should be contained in the result. Let us have a look at a simple example, which only adds the salutation as a new column:
###Code
result = ...
result.toPandas()
###Output
_____no_output_____
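###Markdown
A minimal sketch of such a `withColumn` call, assuming `persons` has been loaded and assuming the `sex` column contains the values 'male' and 'female':
###Code
from pyspark.sql.functions import when

# Sketch: add a salutation column derived from the sex column
# (assumption: sex contains the values 'male' and 'female')
result = persons.withColumn("salutation", when(persons.sex == "male", "Mr").otherwise("Mrs"))
result.show()
###Output
_____no_output_____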
###Markdown
As you can see from the example above, `withColumn` always takes two arguments: The first one is the name of the new column (and it has to be a string), and the second argument is the expression containing the logic for calculating the actual contents. 4.8 Dropping a ColumnPySpark also supports the opposite operation which simply removes some columns from a dataframe. This is useful if you need to remove some sensitive data before saving it to disk:
###Code
result2 = ...
result2.toPandas()
###Output
_____no_output_____
###Markdown
4.9 ExerciseUsing the `persons` DataFrame, perform the following operations:* Add a new column `status` which should be `minor` if the person is younger than 21 and `adult` otherwise* Replace the column `name` by a new column `hashed_name` containing the hash value of the name* Drop the column `sex`
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
5 Filtering *Filtering* denotes the process of keeping only rows which meet a certain filter criterion. PySpark supports two different approaches. The first approach specifies the filtering expression as a PySpark expression using columns:
###Code
result = ...
result.toPandas()
###Output
_____no_output_____
###Markdown
The second approach simply uses a string containing an SQL expression:
###Code
result = ...
result.toPandas()
###Output
_____no_output_____
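###Markdown
A minimal sketch showing both filtering styles side by side, assuming `persons` has been loaded:
###Code
# Sketch: the same filter expressed as a column expression and as a SQL string
result_expr = persons.where(persons.age > 20)
result_sql = persons.where("age > 20")
result_expr.show()
result_sql.show()
###Output
_____no_output_____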
###Markdown
5.1 ExercisePerform two different filter operations:1. Select all women with a height of at least 1602. Select all persons which are younger than 23 or older than 30
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
5.2 Limit OperationsWhen working with large datasets, it may be helpful to limit the amount of records (like an SQL LIMIT operation). 6 AggregationsSpark supports simple (non-grouped) aggregation, which will return a DataFrame containing a single row containing aggregated values from all source rows.
###Code
# Create a simple dataframe containing some numbers
df = spark.createDataFrame([(x,) for x in range(0,100)], ['value'])
df.limit(10).toPandas()
###Output
_____no_output_____
###Markdown
The simplest aggregation is the number of records in a DataFrame
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Spark supports the usual aggregations as we know from SQL:
###Code
# YOUR CODE HERE
###Output
_____no_output_____
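###Markdown
A minimal sketch of some common aggregations on the `df` DataFrame created above:
###Code
from pyspark.sql import functions as F

# Sketch: count, minimum, maximum and average of the single 'value' column
result = df.select(
    F.count("value").alias("count"),
    F.min("value").alias("min"),
    F.max("value").alias("max"),
    F.avg("value").alias("avg"),
)
result.show()
###Output
_____no_output_____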
###Markdown
7 Making Data Distinct
###Code
df = spark.createDataFrame([('Bob',),('Alice',),('Bob',)], ['name'])
df.toPandas()
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
8 Grouping & Aggregating An important class of operation is grouping and aggregation, which is equivalent to an SQL `SELECT aggregation GROUP BY grouping` statement. In PySpark, grouping and aggregation is always performed by first creating groups using `groupBy` immediately followed by aggregation expressions inside an `agg` method. (Actually there are also some predefined aggregations which can be used instead of `agg`, but they do not offer the flexibility which is required most of the time.) Note that in the `agg` method you only need to specify the aggregation expression, the grouping columns are added automatically by PySpark to the resulting DataFrame.
###Code
# YOUR CODE HERE
###Output
_____no_output_____
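###Markdown
A minimal sketch of a grouped aggregation, assuming the `persons` DataFrame is still available:
###Code
from pyspark.sql import functions as F

# Sketch: group by sex and compute the average age per group
result = persons.groupBy("sex").agg(F.avg("age").alias("avg_age"))
result.show()
###Output
_____no_output_____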
###Markdown
Sometimes it may be useful to access all elements of a group as a list. But since `groupBy` does not return a normal DataFrame and requires an aggregate function as the next step, this requires a small trick. Using the `collect_list` function, you can put all elemenets of a single column of every group into a new column.
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Aggregation FunctionsPySpark supports many aggregation functions, they can be found in the documentation at [PySpark Function Documentation](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlmodule-pyspark.sql.functions). Aggregation functions are marked as such in the documentation, unfortunately there is no simple overview. Among common aggregation functions, there are for example:* min / max* count* sum* avg* stddev* variance* corr* first* last 8.1 ExerciseUsing the `persons` DataFrame, calculate the average height and the number of records per sex.
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
9 Sorting DataPySpark also supports sorting data with the `orderBy` method. For example we can sort all persons by their height as follows:
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
If nothing else is specified, PySpark will sort the records in increasing order of the sort columns. If you require descending order, this can be specified by manipulating the sort column with the `desc()` method as follows:
###Code
# YOUR CODE HERE
###Output
_____no_output_____
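###Markdown
A minimal sketch of sorting, assuming `persons` has been loaded - first ascending, then descending by height:
###Code
# Sketch: sort by height, ascending and descending
persons.orderBy("height").show()
persons.orderBy(persons.height.desc()).show()
###Output
_____no_output_____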
###Markdown
9.1 Exercise As an exercise we want to sort all persons first by their sex and then by their descending age. Sorting by multiple columns can easily be achieved by specifying multiple columns as arguments in the `orderBy` method.
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
10 Joining Data Every relational algebra also contains join operations which let you combine multiple tables by a matching criterion. PySpark also supports joins of multiple DataFrames. In order to shed some light on that, we need a second DataFrame in addition to the `persons` DataFrame. Therefore we load some address data as follows:
###Code
addresses = spark.read.json("s3://dimajix-training/data/addresses.json")
addresses.toPandas()
###Output
_____no_output_____
###Markdown
Now that we have the addresses DataFrame, we want to combine it with the persons DataFrame such that the city of every person is added as a new column. This is achieved by the join method which essentially takes two parameters: The first parameter specifies the second DataFrame to join with, and the second parameter specifies the join condition. In this case we want to join all records, where the name column matches.
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Let me make some relevant remarks:* The resulting DataFrame now contains two `name` columns - one comes from the `persons` DataFrame, the other from the `addresses` DataFrame. Since the join condition could have used some more complex expression, this behaviour is only logical since PySpark cannot assume that all joins simply use directly some column value. For example we could also have transformed the column on the fly by converting the name to upper case directly inside the join condition.* The result contains only persons where an address was found, although the original `persons` DataFrame contained more persons.* There are no records of addresses without any person, although the `addresses` DataFrame contains information about some persons not available in the `persons` DataFrame.So let us first address the first observation. We can easily get rid of the copied `name` column by either performing an explicit select of the desired columns, or by dropping the duplicate columns. Since PySpark records the lineage of every column, the duplicate `name` columns can be addressed by their original DataFrame even after the join operation:
###Code
result = persons.join(addresses,persons.name == addresses.name).select(persons.name,persons.age,addresses.city)
result.toPandas()
###Output
_____no_output_____
###Markdown
10.1 Join TypesNow let us explain the last two observations. These are due to the used join type, which was a so called *inner* join. In this case, only records with information from both DataFrames are included in the result.In addition to the *inner* join, PySpark also supports some additional joins:* *outer join* will contain records for all elements from both DataFrames. If either the left or right DataFrames doesn't contain any information, the result will contain `None` values (= `NULL` values) for the corresponding columns.* In a *right join*, the second DataFrame (the right DataFrame) as specified as an argument is the leading element. The result will contain records for every record in that DataFrame.* In a *left join*, the first DataFrame (the left DataFrame) as specified as the object iteself is the leading element. The result will contain records for every record in that DataFrame.
###Code
result = persons.join(addresses,persons.name == addresses.name, how="outer")
result.toPandas()
result = persons.join(addresses,persons.name == addresses.name, how="right")
result.toPandas()
result = persons.join(addresses,persons.name == addresses.name, how="left")
result.toPandas()
###Output
_____no_output_____
###Markdown
10.2 Join on ColumnAs a convenience, Spark also supports directly joining on a column that is present in both DataFrames. 10.3 ExerciseAs an exercise, we use another DataFrame loaded from a file called `lastnames.json`, which can be joined to the persons DataFrame again:
###Code
lastnames = spark.read.json("s3://dimajix-training/data/lastnames.json")
lastnames.toPandas()
###Output
_____no_output_____
###Markdown
Now join the lastnames DataFrame to the `persons` DataFrame whenever the `name` column of both DataFrames matches. Note what happens due to the fact that we have two last names for "Bob".
###Code
# Your Code Here
###Output
_____no_output_____
###Markdown
11 Set OperationsSpark also offers some set operations, like `UNION`, `INTERSECT` and `SUBTRACT`.First let us create two simple data frames for experiments.
###Code
df1 = spark.createDataFrame([
['Alice', 23],
['Bob', 44],
['Charlie', 31]
], ["name", "age"])
df2 = spark.createDataFrame([
['Alice', 23],
['Bob', 44],
['Henry', 31]
], ["name", "age"])
###Output
_____no_output_____
###Markdown
11.1 UnionsThe most well known operation is a `union` which actually corresponds to an SQL `UNION ALL`, i.e. it will keep duplicate records. When you do not want to keep duplicate records, you can simply run a `distinct()` transformation after the `union()`. Union and UnionByNameA simple `union` operation simply takes the schema of the first DataFrame and appends the records of the second data frame. The columns will be matched by their position and types will be changed if required.
###Code
df3 = spark.createDataFrame([
[23, 'Alice'],
[44, 'Bob'],
[31, 'Henry']
], ["age", "name"])
###Output
_____no_output_____
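###Markdown
A minimal sketch of the union operations discussed above, using `df1`, `df2` and `df3`:
###Code
# Sketch: a plain union matches columns by position, unionByName matches them by name
all_rows = df1.union(df2)             # keeps duplicates, like SQL UNION ALL
distinct_rows = df1.union(df2).distinct()
by_name = df1.unionByName(df3)        # aligns the swapped columns of df3 by name
all_rows.show()
distinct_rows.show()
by_name.show()
###Output
_____no_output_____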
###Markdown
11.2 Intersect and SubtractSpark also supports additional set operations like `INTERSECT` and `SUBTRACT` 12 Caching DataIn some situations, you may want to persist intermediate results. For example iterative algorithms may benefit from *caching* intermediate results, if the same DataFrame is transformed again and again. Spark provides some capabilities to persist intermediate results using the methods `cache()` or `persist(storageLevel)`.Note that also caching is lazy, which means that records will not be created at the time when you call `cache()` or `persist()` but at the first time when the DataFrame is evaluated. This could be even a simple `count()` action. 13 Using SQLPySpark also directly supports SQL. In order to work with SQL, you only need to register a PySpark DataFrame as a *temporary view*, which provides a name to a DataFrame which can be referenced in SQL queries later.
###Code
persons = spark.read.json("s3://dimajix-training/data/persons.json")
addresses = spark.read.json("s3://dimajix-training/data/addresses.json")
# YOUR CODE HERE
###Output
_____no_output_____
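###Markdown
A minimal sketch of registering a temporary view and querying it with SQL:
###Code
# Sketch: register the persons DataFrame as a temporary view and query it with SQL
persons.createOrReplaceTempView("persons")
result = spark.sql("SELECT name, age FROM persons")
result.show()
###Output
_____no_output_____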
###Markdown
13.1 ExercisePerform the following tasks, in order to join `persons` with `addresses`in SQL:* Register `addresses` DataFrame as `addresses`* Join `persons` with `addresses`* Only select persons which are 21 years or older
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
14 User Defined FunctionsFrom time to time you hit a wall where you need a simple transformation, but Spark does not offer an appropriate function in the `pyspark.sql.functions` module. Fortunately you can simply define new functions, so called *user defined functions* or short *UDFs*.
###Code
from pyspark.sql.functions import *
from pyspark.sql.types import *
df = spark.createDataFrame([('Alice & Bob',12),('Thelma & Louise',17)],['name','age'])
import html
html.escape("Thelma & Louise")
###Output
_____no_output_____
###Markdown
14.1 Classic Python UDFFirst we will create a classical Python UDF (as opposed to a Pandas UDF).
###Code
html_encode = udf(html.escape, StringType())
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
As an alternative, you can also use a Python decorator for declaring a UDF:
###Code
@udf(StringType())
def html_encode(s):
return html.escape(s)
result = df.select(html_encode('name').alias('html_name'))
result.toPandas()
###Output
_____no_output_____
###Markdown
If you wanto to use the Python UDF inside a SQL query, you also need to register it, so PySpark knows its name.
###Code
html_encode = spark.udf.register("html_encode", html.escape, StringType())
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
14.2 Pandas UDFs"Normal" Python UDFs are pretty expensive (in terms of execution time), since for every record the following steps need to be performed:* record is serialized inside JVM* record is sent to an external Python process* record is deserialized inside Python* record is Processed in Python* result is serialized in Python* result is sent back to JVM* result is deserialized and stored inside result DataFrameThis does not only sound like a lot of work, it actually is. Therefore Python UDFs are a magnitude slower than native UDFs written in Scala or Java, which run directly inside the JVM.But since Spark 2.3 an alternative approach is available for defining Python UDFs with so called *Pandas UDFs*. Pandas is a commonly used Python framework which also offers DataFrames (but Pandas DataFrames, not Spark DataFrames). Spark 2.3 now can convert inside the JVM a Spark DataFrame into a shareable memory buffer by using a library called *Arrow*. Python then can also treat this memory buffer as a Pandas DataFrame and can directly work on this shared memory.This approach has two major advantages:* No need for serialization and deserialization, since data is shared directly in memory between the JVM and Python* Pandas has lots of very efficient implementations in C for many functionsDue to these two facts, Pandas UDFs are much faster and should be preferred over traditional Python UDFs whenever possible.
###Code
from pyspark.sql.functions import udf
# Use udf to define a row-at-a-time udf
# Input/output are both a single double value
# YOUR CODE HERE
def cm_to_inch(v):
return v*0.393701
result = ...
result.toPandas()
###Output
_____no_output_____
###Markdown
Increment a value using a Pandas UDF. The Pandas UDF receives a `pandas.Series` object and also has to return a `pandas.Series` object.
###Code
from pyspark.sql.functions import pandas_udf, PandasUDFType
# Use pandas_udf to define a Pandas UDF
# Input/output are both a Pandas Series
# YOUR CODE HERE
def pandas_cm_to_inch(v):
return v*0.393701
result = ...
result.toPandas()
###Output
_____no_output_____
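###Markdown
A minimal sketch of the same conversion as a scalar Pandas UDF, operating on a pandas.Series (assuming `persons` is loaded and pyarrow is available):
###Code
from pyspark.sql.functions import pandas_udf

# Sketch: a scalar Pandas UDF receives and returns a pandas.Series
@pandas_udf('double')
def pandas_cm_to_inch(v):
    return v * 0.393701

result = persons.select(pandas_cm_to_inch(persons.height).alias('height_inch'))
result.toPandas()
###Output
_____no_output_____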
###Markdown
14.3 Grouped Pandas Aggregate UDFsSince version 2.4.0, Spark also supports Pandas aggregation functions. This is the only way to implement custom aggregation functions in Python. Note that this type of UDF does not support partial aggregation and all data for a group or window will be loaded into memory.
###Code
# Use pandas_udf to define a Pandas UDF
# YOUR CODE HERE
def mean_udf(v):
return v.mean()
result = ...
result.toPandas()
###Output
_____no_output_____
###Markdown
14.4 Grouped Pandas Map UDFsWhile the example above transforms all records independently, but only one column at a time, Spark also offers a so called *grouped Pandas UDF* which operates on complete groups of records (as created by a `groupBy` method).For example let's subtract the mean of a group from all entries of a group. In Spark this could be achieved directly by using windowed aggregations, but let's use Pandas instead:
###Code
schema = ...
# Input/output are both a pandas.DataFrame
# YOUR CODE HERE
def with_mean_height_diff(pdf):
return pdf.assign(avg_height_diff=pdf.height - pdf.height.mean())
result = ...
result.toPandas()
###Output
_____no_output_____
###Markdown
15 Writing Data Of course Spark also supports writing data. Conceptually this works similarly to loading data - you get a `DataFrameReader` via `spark.read`. For writing data, you can get a `DataFrameWriter` by using `DataFrame.write`. The writer supports various formats like CSV, text, Parquet, ORC and JSON.
###Code
# YOUR CODE HERE
###Output
_____no_output_____
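###Markdown
For reference, a minimal sketch of writing a DataFrame, assuming `persons` has been loaded (the output locations are assumptions, adjust them to your environment):
###Code
# Sketch: write the persons DataFrame as JSON and as Parquet
# (assumption: /tmp/persons_* are writable locations in your environment)
persons.write.mode("overwrite").json("/tmp/persons_json")
persons.write.mode("overwrite").parquet("/tmp/persons_parquet")
###Output
_____no_output_____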
###Markdown
16 Accessing Hive Tables PySpark supports accessing data in Hive tables. This makes it possible to use Hive as a central database and removes the burden of specifying the schema for a file over and over again. First let's retrieve the catalog containing all tables of a specific Hive database:
###Code
tables = spark.catalog.listTables(dbName='training')
tables.toPandas()
###Output
_____no_output_____
###Markdown
Now let's read in one table:
###Code
# YOUR CODE HERE
###Output
_____no_output_____
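###Markdown
For reference, a minimal sketch of reading a Hive table via the metastore (the table name below is a placeholder for one of the tables listed above):
###Code
# Sketch: read a Hive table by name
# (assumption: 'some_table' is a placeholder for an existing table in the training database)
df = spark.table("training.some_table")
df.show()
###Output
_____no_output_____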
###Markdown
0 Spark SessionBefore working with Spark, we need an entry point. The so called *Spark Session* allows us to create DataFrames etc.
###Code
from pyspark.sql import SparkSession
if not 'spark' in locals():
spark = (
SparkSession.builder.master("local[*]")
.config("spark.driver.memory", "64G")
.getOrCreate()
)
spark
###Output
_____no_output_____
###Markdown
1 Creating a DataFrameFirst, let's create some DataFrame from Python objects. While this is probably not the most common thing to do, it is easy and helpful in some situations where you already have some Python objects. It is also possible to create a PySpark DataFrame from an existing PySpark RDD.
###Code
rdd = sc.parallelize([('Alice', 13), ('Bob', 12)])
df = spark.createDataFrame(rdd, ['name', 'age'])
print(df.collect())
###Output
_____no_output_____
###Markdown
PySpark also contains a small method for displaying the contents of a DataFrame. 1.1 Inspect SchemaThe `spark` object has different methods for creating a so called Spark DataFrame object. This object is similar to a table, it contains rows of records, which all conform to a common schema with named columns and specific types. On the surface it heavily borrows concepts from Pandas DataFrames or R DataFrames, although the syntax and many operations are syntactically very different.As the first step, we want to see the contents of the DataFrame. This can be easily done by using the show method. 1.2 Explicitly specify SchemaIt is also possible to explicitly specify the schema, not only by names, but also by types. This is quite useful in the next steps when we want to load data from CSV files.
###Code
from pyspark.sql.types import *
data = [('Alice', 13), ('Bob', 12)]
schema = StructType(
[
StructField('name', StringType(), True),
StructField('age', IntegerType(), True),
]
)
# YOUR CODE HERE
from pyspark.sql import Row
Person = Row('name', 'age')
alice = Person('Alice', 23)
bob = Person('Bob', 21)
# YOUR CODE HERE
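# Illustrative sketch: both variants create a DataFrame with the explicit schema
df = spark.createDataFrame(data, schema)
df.show()
df_rows = spark.createDataFrame([alice, bob], schema)
df_rows.show()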
###Output
_____no_output_____
###Markdown
2 Reading Data Of course manually creating DataFrames from a couple of records is not the real use case. Instead we want to read data from files. Spark supports various file formats; we will use JSON in the following example. The entry point for creating Spark objects is an object called spark, which is provided in the notebook and ready to use. We will read a file containing some information on a couple of persons, which will serve as the basis for the next examples. 2.1 Reading JSON Data
###Code
# YOUR CODE HERE
persons = ...
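# Illustrative sketch (the S3 path is the same one used later in this notebook):
persons = spark.read.json("s3://dimajix-training/data/persons.json")
persons.toPandas()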
###Output
_____no_output_____
###Markdown
2.2 Reading CSV Data
###Code
# YOUR CODE HERE
persons = ...
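# Illustrative sketch only -- the CSV path is an assumption, adjust to your data:
persons = (spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://dimajix-training/data/persons.csv"))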
persons.show()
persons.printSchema()
###Output
_____no_output_____
###Markdown
2.3 Explicit Schema We already saw that we can explicitly specify a schema when we create a DataFrame from a Python list of objects. We can also specify a schema when we read data from storage. This is highly recommended, since otherwise Spark would use automatic schema inference, which might hide problems when the delivered data changes.
###Code
from pyspark.sql.types import *
schema = StructType(
[
StructField("age", LongType(), False),
StructField("height", LongType(), False),
StructField("name", StringType(), False),
StructField("sex", StringType(), False),
]
)
persons = ...
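# Illustrative sketch: read the JSON file with the explicit schema defined above
persons = spark.read.schema(schema).json("s3://dimajix-training/data/persons.json")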
persons.toPandas()
###Output
_____no_output_____
###Markdown
3 Interacting with Pandas Now that we can create and read Spark DataFrames, we also want to convert them into Pandas DataFrames. Pandas is a Python package which also offers a concept called *DataFrame*, but that has nothing to do with Spark. Actually Pandas is much older than Spark, and Spark borrowed many ideas and concepts from Pandas (and from R of course). Nevertheless it is quite useful to convert a Spark DataFrame into a Pandas DataFrame, specifically because the Jupyter Notebook directly supports Pandas DataFrames and renders them much nicer. Pandas also supports graphics, so we can create a (in this case meaningless) graph of a Pandas DataFrame.
###Code
%matplotlib inline
pdf = persons.toPandas()  # convert the Spark DataFrame into a Pandas DataFrame first
pdf.plot()
###Output
_____no_output_____
###Markdown
Attention: Beware of huge DataFrames! Do not forget that Apache Spark has been designed and built to handle really huge data sets, which do not need to fit into memory. Spark DataFrames can contain billions of rows and are stored in a distributed way on many nodes in a cluster. Actually the contents do not even need to be physically present at all, as long as the input data is accessible. But calling the toPandas() method will transfer all the records to a single machine (where the Jupyter Notebook runs) - and maybe this computer does not have enough memory to hold all the data. In this case, you risk that the notebook process will crash with an Out-Of-Memory error (OOM). So you should only use toPandas() when you are really sure that the DataFrame contains a limited amount of records. 4 Simple Transformations 4.1 Projections The simplest thing to do is to create a new DataFrame with a subset of the available columns
###Code
from pyspark.sql.functions import *
# YOUR CODE HERE
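# Illustrative sketch: keep only a subset of the available columns
result = persons.select(persons.name, persons.age)
result.toPandas()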
###Output
_____no_output_____
###Markdown
4.2 Addressing Columns Spark supports multiple different ways for addressing columns. We just saw one way, but also the following methods are supported for specifying a column:* `df.column_name`* `df['column_name']`* `col('column_name')`* `df[idx]` All these methods return a Column object, which is an abstract representative of the data in the column. As we will see soon, transformations can be applied to a Column in order to derive new values. Beware of Lowercase and Uppercase While PySpark itself is case insensitive concerning column names, Python itself is case sensitive. Since the first method for addressing columns by treating them as fields of a Python object *is* Python syntax, this is also case sensitive!
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
4.3 Transformations The `select` method actually accepts any column object. A column object conceptually represents a column in a DataFrame. The column may either refer directly to an existing column of the input DataFrame, or it may represent the result of a calculation or transformation of one or multiple columns of the input DataFrame. For example if we simply want to transform the name into upper case, we can do so by using a function `upper` provided by PySpark.
###Code
from pyspark.sql.functions import *
# YOUR CODE HERE
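# Illustrative sketch: derive a new column from an existing one
result = persons.select(upper(persons.name))
result.toPandas()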
###Output
_____no_output_____
###Markdown
Defining new Column Names The resulting DataFrame again has a schema, but the column names do not look very nice. But by using the `alias` method of a `Column` object, you can immediately rename the newly created column like you are already used to in SQL with `SELECT complex_operation(...) AS nice_name FROM ...`. Technically specifying a new name for the resulting column is not required (as we already saw above); if the name is not specified, PySpark will generate a name from the expression. But since this generated name tends to be rather long and contains the logic instead of the intention, it is highly recommended to always explicitly specify the name of the resulting column using `alias`.
###Code
# YOUR CODE HERE
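# Illustrative sketch: the same transformation, but with a nice column name
result = persons.select(upper(persons.name).alias('name'))
result.toPandas()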
###Output
_____no_output_____
###Markdown
You can also perform simple mathematical calculations like addition, multiplication etc.
###Code
# YOUR CODE HERE
result = ...
result.toPandas()
numbers = spark.createDataFrame([(x,) for x in range(0, 10)], ['number'])
result = ...
result.toPandas()
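# Illustrative sketch of simple arithmetic on columns:
example1 = persons.select((persons.age + 1).alias('age_next_year'))
example1.toPandas()
example2 = numbers.select((numbers.number * 2).alias('doubled'))
example2.toPandas()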
###Output
_____no_output_____
###Markdown
Common FunctionsYou can find the full list of available functions at [PySpark SQL Module](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlmodule-pyspark.sql.functions). Commonly used functions for example are as follows:* [`concat(*cols)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.concat) - Concatenates multiple input columns together into a single column.* [`substring(col,start,len)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.substring) - Substring starts at pos and is of length len when str is String type or returns the slice of byte array that starts at pos in byte and is of length len when str is Binary type.* [`instr(col,substr)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.instr) - Locate the position of the first occurrence of substr column in the given string. Returns null if either of the arguments are null.* [`locate(col,substr, pos)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.locate) - Locate the position of the first occurrence of substr in a string column, after position pos.* [`length(col)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.length) - Computes the character length of string data or number of bytes of binary data. * [`upper(col)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.upper) - Converts a string column to upper case.* [`lower(col)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.lower) - Converts a string column to lower case.* [`coalesce(*cols)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.coalesce) - Returns the first column that is not null.* [`isnull(col)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.isnull) - An expression that returns true iff the column is null.* [`isnan(col)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.isnan) - An expression that returns true iff the column is NaN.* [`hash(cols*)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.hash) - Calculates the hash code of given columns.Spark also supports conditional expressions, like the SQL `CASE WHEN` construct* [`when(condition, value)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.when) - Evaluates a list of conditions and returns one of multiple possible result expressions.There are also some special functions often required* [`col(str)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.col) - Returns a Column based on the given column name.* [`lit(val)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.lit) - Creates a Column of literal value.* [`expr(str)`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlpyspark.sql.functions.expr) - Parses the expression string into the column that it represents User Defined FunctionsUnfortunately you cannot directly use normal Python functions for transforming DataFrame columns. Although PySpark already provides many useful functions, this might not always sufficient. But fortunately you can *convert* a standard Python function into a PySpark function, thereby defining a so called *user defined function* (UDF). Details will be explained in detail in the training. 
4.4 Injecting Literal Values Sometimes it is required to inject literal values (i.e. strings or numbers) into a transformation expression. Since simply using a string could mean either a column with that name or the literal itself, Spark offers the function `lit` to explicitly mark a string (or any other value) as a literal. `lit` creates a PySpark column object from the value. This means that all column methods are available for the literal.
###Code
result = persons.select(
concat(lit('Name:'), persons.name, lit(' Age:'), persons.age).alias('text')
)
result.toPandas()
###Output
_____no_output_____
###Markdown
4.5 ExercisesWrite a small `select` statement, which puts either "Mr" or "Mrs" into a new column called "salutation" depending on the sex of the person
###Code
## YOU CODE HERE
###Output
_____no_output_____
###Markdown
4.6 SQL Expressions Using the `expr` function, Spark also allows you to use SQL expressions for calculating columns.
###Code
result = persons.select(
# YOUR CODE HERE
)
result.toPandas()
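# Illustrative sketch of SQL expressions inside a select:
example = persons.select(expr("upper(name) AS upper_name"), expr("age + 1 AS age_next_year"))
example.toPandas()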
###Output
_____no_output_____
###Markdown
4.7 Adding Columns A special variant of a `select` statement is the `withColumn` method. While the `select` statement requires all resulting columns to be defined as arguments, the `withColumn` method keeps all existing columns and adds a new one. This operation is quite useful since in many cases new columns are derived from the existing ones, while the old ones still should be contained in the result. Let us have a look at a simple example, which only adds the salutation as a new column:
###Code
result = ...
result.toPandas()
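# Illustrative sketch: add a salutation column. The literal 'female'/'male'
# values of the sex column are an assumption here.
example = persons.withColumn('salutation', when(persons.sex == 'female', 'Mrs').otherwise('Mr'))
example.toPandas()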
###Output
_____no_output_____
###Markdown
As you can see from the example above, `withColumn` always takes two arguments: The first one is the name of the new column (and it has to be a string), and the second argument is the expression containing the logic for calculating the actual contents. 4.8 Dropping a ColumnPySpark also supports the opposite operation which simply removes some columns from a dataframe. This is useful if you need to remove some sensitive data before saving it to disk:
###Code
result2 = ...
result2.toPandas()
###Output
_____no_output_____
###Markdown
4.9 ExerciseUsing the `persons` DataFrame, perform the following operations:* Add a new column `status` which should be `minor` if the person is younger than 21 and `adult` otherwise* Replace the column `name` by a new column `hashed_name` containing the hash value of the name* Drop the column `sex`
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
5 Filtering *Filtering* denotes the process of keeping only rows which meet a certain filter criterion. PySpark supports two different approaches. The first approach specifies the filtering expression as a PySpark expression using columns:
###Code
result = ...
result.toPandas()
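# Illustrative sketch: filter with a PySpark column expression
example = persons.filter(persons.age > 20)
example.toPandas()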
###Output
_____no_output_____
###Markdown
The second approach simply uses a string containing an SQL expression:
###Code
result = ...
result.toPandas()
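# Illustrative sketch: the same kind of filter as an SQL expression string
example = persons.filter("age > 20")
example.toPandas()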
###Output
_____no_output_____
###Markdown
5.1 ExercisePerform two different filter operations:1. Select all women with a height of at least 1602. Select all persons which are younger than 23 or older than 30
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
5.2 Limit Operations When working with large datasets, it may be helpful to limit the number of records (like an SQL LIMIT operation). 6 Aggregations Spark supports simple (non-grouped) aggregations, which return a DataFrame with a single row containing aggregated values computed from all source rows.
###Code
# Create a simple dataframe containing some numbers
df = spark.createDataFrame([(x,) for x in range(0, 100)], ['value'])
df.limit(10).toPandas()
###Output
_____no_output_____
###Markdown
The simplest aggregation is the number of records in a DataFrame
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Spark supports the usual aggregations as we know from SQL:
###Code
# YOUR CODE HERE
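# Illustrative sketch of a few common (non-grouped) aggregations on `df`:
df.agg(
    count('value').alias('count'),
    sum('value').alias('sum'),
    avg('value').alias('avg'),
    min('value').alias('min'),
    max('value').alias('max'),
).toPandas()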
###Output
_____no_output_____
###Markdown
7 Making Data Distinct
###Code
df = spark.createDataFrame([('Bob',), ('Alice',), ('Bob',)], ['name'])
df.toPandas()
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
8 Grouping & Aggregating An important class of operation is grouping and aggregation, which is equivalent to an SQL `SELECT aggregation GROUP BY grouping` statement. In PySpark, grouping and aggregation is always performed by first creating groups using `groupBy`, immediately followed by aggregation expressions inside an `agg` method. (Actually there are also some predefined aggregations which can be used instead of `agg`, but they do not offer the flexibility which is required most of the time.) Note that in the `agg` method you only need to specify the aggregation expression, the grouping columns are added automatically by PySpark to the resulting DataFrame.
###Code
# YOUR CODE HERE
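# Illustrative sketch: group the persons by sex and aggregate within each group
persons.groupBy(persons.sex).agg(
    avg(persons.height).alias('avg_height'),
    count(persons.name).alias('num_persons'),
).toPandas()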
###Output
_____no_output_____
###Markdown
Sometimes it may be useful to access all elements of a group as a list. But since `groupBy` does not return a normal DataFrame and requires an aggregate function as the next step, this requires a small trick. Using the `collect_list` function, you can put all elements of a single column of every group into a new column.
###Code
# YOUR CODE HERE
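# Illustrative sketch: collect all names of each group into a list column
persons.groupBy(persons.sex).agg(collect_list(persons.name).alias('names')).toPandas()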
###Output
_____no_output_____
###Markdown
Aggregation FunctionsPySpark supports many aggregation functions, they can be found in the documentation at [PySpark Function Documentation](http://spark.apache.org/docs/latest/api/python/pyspark.sql.htmlmodule-pyspark.sql.functions). Aggregation functions are marked as such in the documentation, unfortunately there is no simple overview. Among common aggregation functions, there are for example:* min / max* count* sum* avg* stddev* variance* corr* first* last 8.1 ExerciseUsing the `persons` DataFrame, calculate the average height and the number of records per sex.
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
9 Sorting DataPySpark also supports sorting data with the `orderBy` method. For example we can sort all persons by their height as follows:
###Code
# YOUR CODE HERE
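# Illustrative sketch: sort all persons by their height (ascending by default)
persons.orderBy(persons.height).toPandas()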
###Output
_____no_output_____
###Markdown
If nothing else is specified, PySpark will sort the records in increasing order of the sort columns. If you require descending order, this can be specified by manipulating the sort column with the `desc()` method as follows:
###Code
# YOUR CODE HERE
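# Illustrative sketch: sort in descending order of height
persons.orderBy(persons.height.desc()).toPandas()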
###Output
_____no_output_____
###Markdown
9.1 Exercise As an exercise we want to sort all persons first by their sex and then by their age in descending order. Sorting by multiple columns can easily be achieved by specifying multiple columns as arguments in the `orderBy` method.
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
10 Joining Data Every relational algebra also contains join operations which let you combine multiple tables by a matching criterion. PySpark also supports joins of multiple DataFrames. In order to shed some light on that, we need a second DataFrame in addition to the `persons` DataFrame. Therefore we load some address data as follows:
###Code
addresses = spark.read.json("s3://dimajix-training/data/addresses.json")
addresses.toPandas()
###Output
_____no_output_____
###Markdown
Now that we have the addresses DataFrame, we want to combine it with the persons DataFrame such that the city of every person is added as a new column. This is achieved by the join method which essentially takes two parameters: The first parameter specifies the second DataFrame to join with, and the second parameter specifies the join condition. In this case we want to join all records, where the name column matches.
###Code
# YOUR CODE HERE
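# Illustrative sketch: inner join persons and addresses on matching names
# (the same join appears with an explicit select a few cells below)
result = persons.join(addresses, persons.name == addresses.name)
result.toPandas()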
###Output
_____no_output_____
###Markdown
Let me make some relevant remarks:* The resulting DataFrame now contains two `name` columns - one comes from the `persons` DataFrame, the other from the `addresses` DataFrame. Since the join condition could have used some more complex expression, this behaviour is only logical since PySpark cannot assume that all joins simply use directly some column value. For example we could also have transformed the column on the fly by converting the name to upper case directly inside the join condition.* The result contains only persons where an address was found, although the original `persons` DataFrame contained more persons.* There are no records of addresses without any person, although the `addresses` DataFrame contains information about some persons not available in the `persons` DataFrame.So let us first address the first observation. We can easily get rid of the copied `name` column by either performing an explicit select of the desired columns, or by dropping the duplicate columns. Since PySpark records the lineage of every column, the duplicate `name` columns can be addressed by their original DataFrame even after the join operation:
###Code
result = persons.join(addresses, persons.name == addresses.name).select(
persons.name, persons.age, addresses.city
)
result.toPandas()
###Output
_____no_output_____
###Markdown
10.1 Join Types Now let us explain the last two observations. These are due to the used join type, which was a so-called *inner* join. In this case, only records with information from both DataFrames are included in the result. In addition to the *inner* join, PySpark also supports some additional joins:* An *outer join* will contain records for all elements from both DataFrames. If either the left or right DataFrame doesn't contain any information, the result will contain `None` values (= `NULL` values) for the corresponding columns.* In a *right join*, the second DataFrame (the right DataFrame), specified as an argument, is the leading element. The result will contain records for every record in that DataFrame.* In a *left join*, the first DataFrame (the left DataFrame), specified as the object itself, is the leading element. The result will contain records for every record in that DataFrame.
###Code
result = persons.join(addresses, persons.name == addresses.name, how="outer")
result.toPandas()
result = persons.join(addresses, persons.name == addresses.name, how="right")
result.toPandas()
result = persons.join(addresses, persons.name == addresses.name, how="left")
result.toPandas()
###Output
_____no_output_____
###Markdown
10.2 Join on ColumnAs a convenience, Spark also supports directly joining on a column that is present in both DataFrames. 10.3 ExerciseAs an exercise, we use another DataFrame loaded from a file called `lastnames.json`, which can be joined to the persons DataFrame again:
###Code
lastnames = spark.read.json("s3://dimajix-training/data/lastnames.json")
lastnames.toPandas()
###Output
_____no_output_____
###Markdown
Now join the lastnames DataFrame to the `persons` DataFrame whenever the `name` column of both DataFrames matches. Note what happens due to the fact that we have two last names for "Bob".
###Code
# Your Code Here
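# Illustrative sketch only, one possible solution: join on the common `name` column
example = persons.join(lastnames, on='name')
example.toPandas()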
###Output
_____no_output_____
###Markdown
11 Set OperationsSpark also offers some set operations, like `UNION`, `INTERSECT` and `SUBTRACT`.First let us create two simple data frames for experiments.
###Code
df1 = spark.createDataFrame(
[['Alice', 23], ['Bob', 44], ['Charlie', 31]], ["name", "age"]
)
df2 = spark.createDataFrame([['Alice', 23], ['Bob', 44], ['Henry', 31]], ["name", "age"])
###Output
_____no_output_____
###Markdown
11.1 Unions The most well-known operation is a `union`, which actually corresponds to an SQL `UNION ALL`, i.e. it will keep duplicate records. When you do not want to keep duplicate records, you can simply run a `distinct()` transformation after the `union()`. Union and UnionByName A simple `union` operation takes the schema of the first DataFrame and appends the records of the second DataFrame. The columns will be matched by their position and types will be changed if required.
###Code
df3 = spark.createDataFrame([[23, 'Alice'], [44, 'Bob'], [31, 'Henry']], ["age", "name"])
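# Illustrative sketch: positional union vs. name-based union
df1.union(df2).toPandas()        # matches columns by position
df1.unionByName(df3).toPandas()  # matches columns by name (df3 has swapped column order)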
###Output
_____no_output_____
###Markdown
11.2 Intersect and Subtract Spark also supports additional set operations like `INTERSECT` and `SUBTRACT`. 12 Caching Data In some situations, you may want to persist intermediate results. For example iterative algorithms may benefit from *caching* intermediate results, if the same DataFrame is transformed again and again. Spark provides some capabilities to persist intermediate results using the methods `cache()` or `persist(storageLevel)`. Note that caching is also lazy, which means that records will not be created at the time when you call `cache()` or `persist()` but at the first time the DataFrame is evaluated. This could even be a simple `count()` action. 13 Using SQL PySpark also directly supports SQL. In order to work with SQL, you only need to register a PySpark DataFrame as a *temporary view*, which provides a name for a DataFrame which can be referenced in SQL queries later.
###Code
persons = spark.read.json("s3://dimajix-training/data/persons.json")
addresses = spark.read.json("s3://dimajix-training/data/addresses.json")
# YOUR CODE HERE
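# Illustrative sketch: register a temporary view and query it with SQL
persons.createOrReplaceTempView('persons')
spark.sql("SELECT name, age FROM persons WHERE age >= 21").toPandas()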
###Output
_____no_output_____
###Markdown
13.1 Exercise Perform the following tasks, in order to join `persons` with `addresses` in SQL:* Register `addresses` DataFrame as `addresses`* Join `persons` with `addresses`* Only select persons which are 21 years or older
###Code
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
14 User Defined FunctionsFrom time to time you hit a wall where you need a simple transformation, but Spark does not offer an appropriate function in the `pyspark.sql.functions` module. Fortunately you can simply define new functions, so called *user defined functions* or short *UDFs*.
###Code
from pyspark.sql.functions import *
from pyspark.sql.types import *
df = spark.createDataFrame(
[('Alice & Bob', 12), ('Thelma & Louise', 17)], ['name', 'age']
)
import html
html.escape("Thelma & Louise")
###Output
_____no_output_____
###Markdown
14.1 Classic Python UDFFirst we will create a classical Python UDF (as opposed to a Pandas UDF).
###Code
html_encode = udf(html.escape, StringType())
# YOUR CODE HERE
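# Illustrative sketch: apply the UDF like any other column function
result = df.select(html_encode(df.name).alias('html_name'))
result.toPandas()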
###Output
_____no_output_____
###Markdown
As an alternative, you can also use a Python decorator for declaring a UDF:
###Code
@udf(StringType())
def html_encode(s):
return html.escape(s)
result = df.select(html_encode('name').alias('html_name'))
result.toPandas()
###Output
_____no_output_____
###Markdown
If you want to use the Python UDF inside a SQL query, you also need to register it, so PySpark knows its name.
###Code
html_encode = spark.udf.register("html_encode", html.escape, StringType())
# YOUR CODE HERE
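# Illustrative sketch: use the registered function from SQL
# (the temporary view name is an assumption)
df.createOrReplaceTempView('people')
spark.sql("SELECT html_encode(name) AS html_name FROM people").toPandas()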
###Output
_____no_output_____
###Markdown
14.2 Pandas UDFs "Normal" Python UDFs are pretty expensive (in terms of execution time), since for every record the following steps need to be performed:* record is serialized inside the JVM* record is sent to an external Python process* record is deserialized inside Python* record is processed in Python* result is serialized in Python* result is sent back to the JVM* result is deserialized and stored inside the result DataFrame. This does not only sound like a lot of work, it actually is. Therefore Python UDFs are an order of magnitude slower than native UDFs written in Scala or Java, which run directly inside the JVM. But since Spark 2.3 an alternative approach is available for defining Python UDFs, so-called *Pandas UDFs*. Pandas is a commonly used Python framework which also offers DataFrames (but Pandas DataFrames, not Spark DataFrames). Since Spark 2.3, Spark can convert a Spark DataFrame inside the JVM into a shareable memory buffer by using a library called *Arrow*. Python can then also treat this memory buffer as a Pandas DataFrame and directly work on this shared memory. This approach has two major advantages:* No need for serialization and deserialization, since data is shared directly in memory between the JVM and Python* Pandas has lots of very efficient implementations in C for many functions. Due to these two facts, Pandas UDFs are much faster and should be preferred over traditional Python UDFs whenever possible.
###Code
from pyspark.sql.functions import udf
# Use udf to define a row-at-a-time udf
# Input/output are both a single double value
# YOUR CODE HERE
def cm_to_inch(v):
return v * 0.393701
result = ...
result.toPandas()
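# Illustrative sketch: wrap the plain Python function as a row-at-a-time UDF
# and apply it to the `persons` DataFrame from earlier
cm_to_inch_udf = udf(cm_to_inch, DoubleType())
example = persons.withColumn('height_inch', cm_to_inch_udf(persons.height))
example.toPandas()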
###Output
_____no_output_____
###Markdown
Increment a value using a Pandas UDF. The Pandas UDF receives a `pandas.Series` object and also has to return a `pandas.Series` object.
###Code
from pyspark.sql.functions import PandasUDFType, pandas_udf
# Use pandas_udf to define a Pandas UDF
# Input/output are both a Pandas Series
# YOUR CODE HERE
def pandas_cm_to_inch(v):
return v * 0.393701
result = ...
result.toPandas()
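# Illustrative sketch: declare the function as a (scalar) Pandas UDF and apply it
pandas_cm_to_inch_udf = pandas_udf(pandas_cm_to_inch, 'double')
example = persons.withColumn('height_inch', pandas_cm_to_inch_udf(persons.height))
example.toPandas()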
###Output
_____no_output_____
|
notebooks/process_data_to_csv_basepair.ipynb
|
###Markdown
Part 1: Initialize
###Code
host = 'tgtg_21mer'
time_interval = '0_1us' # '0_1us', '1_2us', '2_3us', '3_4us', '4_5us'
bp_agent = BasePairAgent(rootfolder, host, time_interval)
###Output
mkdir /home/ytcdata/x3dna_data/tgtg_21mer
mkdir /home/ytcdata/x3dna_data/tgtg_21mer/0_1us
###Markdown
Part 2: Download bdna+bdna.ensemble.out from server
###Code
server_ip = '140.113.120.131'
bp_agent.download_ensesmble_out(server_ip)
###Output
Please excute the following in the terminal:
scp [email protected]:/home/yizaochen/x3dna/paper_2021/tgtg_21mer/0_1us/bdna+bdna.ensemble.out /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/bdna+bdna.ensemble.out
###Markdown
Part 3: Extract BasePair Parameters
###Code
bp_agent.extract_parameters()
###Output
/home/yizaochen/opt/x3dna-v2.3/bin/x3dna_ensemble extract -f /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/bdna+bdna.ensemble.out -p shear -o /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/shear.dat
/home/yizaochen/opt/x3dna-v2.3/bin/x3dna_ensemble extract -f /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/bdna+bdna.ensemble.out -p buckle -o /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/buckle.dat
/home/yizaochen/opt/x3dna-v2.3/bin/x3dna_ensemble extract -f /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/bdna+bdna.ensemble.out -p stretch -o /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/stretch.dat
/home/yizaochen/opt/x3dna-v2.3/bin/x3dna_ensemble extract -f /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/bdna+bdna.ensemble.out -p propeller -o /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/propeller.dat
/home/yizaochen/opt/x3dna-v2.3/bin/x3dna_ensemble extract -f /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/bdna+bdna.ensemble.out -p stagger -o /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/stagger.dat
/home/yizaochen/opt/x3dna-v2.3/bin/x3dna_ensemble extract -f /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/bdna+bdna.ensemble.out -p opening -o /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/opening.dat
###Markdown
Part 4: Convert dat to csv
###Code
bp_agent.convert_dat_to_csv()
###Output
Dataframe to csv: /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/shear.csv
Dataframe to csv: /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/buckle.csv
Dataframe to csv: /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/stretch.csv
Dataframe to csv: /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/propeller.csv
Dataframe to csv: /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/stagger.csv
Dataframe to csv: /home/ytcdata/x3dna_data/tgtg_21mer/0_1us/opening.csv
###Markdown
Additional Part 1: Clean all the dat file
###Code
bp_agent.clean_dat_files()
###Output
_____no_output_____
|
_notebooks/2022-01-25-dqn-ieee.ipynb
|
###Markdown
DQN RL Model on IEEE 2021 RecSys dataset Setup
###Code
import os
project_name = "ieee21cup-recsys"; branch = "main"; account = "sparsh-ai"
project_path = os.path.join('/content', branch)
if not os.path.exists(project_path):
!cp -r /content/drive/MyDrive/git_credentials/. ~
!mkdir "{project_path}"
%cd "{project_path}"
!git init
!git remote add origin https://github.com/"{account}"/"{project_name}".git
!git pull origin "{branch}"
!git checkout -b "{branch}"
else:
%cd "{project_path}"
%cd /content
!cd /content/main && git add . && git commit -m 'commit' && git push origin main
!pip install -q wget
import io
import copy
import sys
import wget
import os
import random
import logging
import pandas as pd
from os import path as osp
import numpy as np
from tqdm.notebook import tqdm
from pathlib import Path
import math
from copy import deepcopy
from collections import OrderedDict
import multiprocessing as mp
import functools
from sklearn.preprocessing import MinMaxScaler
import pdb
from prettytable import PrettyTable
import bz2
import pickle
import _pickle as cPickle
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
%matplotlib inline
class Args:
# Paths
datapath_bronze = '/content/main/data/bronze'
datapath_silver = '/content/main/data/silver/T304746'
datapath_gold = '/content/main/data/gold/T304746'
filename_trainset = 'train.csv'
filename_iteminfo = 'item_info.csv'
filename_track1_testset = 'track1_testset.csv'
filename_track2_testset = 'track2_testset.csv'
data_sep = ' '
args = Args()
logging.basicConfig(stream=sys.stdout,
level = logging.INFO,
format='%(asctime)s [%(levelname)s] : %(message)s',
datefmt='%d-%b-%y %H:%M:%S')
logger = logging.getLogger('IEEE21 Logger')
###Output
_____no_output_____
###Markdown
Utilities
###Code
def save_pickle(data, title):
with bz2.BZ2File(title, 'w') as f:
cPickle.dump(data, f)
def load_pickle(path):
data = bz2.BZ2File(path, 'rb')
data = cPickle.load(data)
return data
def download_dataset():
# create bronze folder if not exist
Path(args.datapath_bronze).mkdir(parents=True, exist_ok=True)
# also creating silver and gold folder for later use
Path(args.datapath_silver).mkdir(parents=True, exist_ok=True)
Path(args.datapath_gold).mkdir(parents=True, exist_ok=True)
# for each of the file, download if not exist
datasets = ['train.parquet.snappy', 'item_info.parquet.snappy',
'track1_testset.parquet.snappy', 'track2_testset.parquet.snappy']
for filename in datasets:
file_savepath = osp.join(args.datapath_bronze,filename)
if not osp.exists(file_savepath):
logger.info('Downloading {}'.format(filename))
wget.download(url='https://github.com/sparsh-ai/ieee21cup-recsys/raw/main/data/bronze/{}'.format(filename),
out=file_savepath)
else:
logger.info('{} file already exists, skipping!'.format(filename))
def parquet_to_csv(path):
savepath = osp.join(str(Path(path).parent),str(Path(path).name).split('.')[0]+'.csv')
pd.read_parquet(path).to_csv(savepath, index=False, sep=args.data_sep)
def convert_dataset():
# for each of the file, convert into csv, if csv not exist
datasets = ['train.parquet.snappy', 'item_info.parquet.snappy',
'track1_testset.parquet.snappy', 'track2_testset.parquet.snappy']
datasets = {x:str(Path(x).name).split('.')[0]+'.csv' for x in datasets}
for sfilename, tfilename in datasets.items():
file_loadpath = osp.join(args.datapath_bronze,sfilename)
file_savepath = osp.join(args.datapath_bronze,tfilename)
if not osp.exists(file_savepath):
logger.info('Converting {} to {}'.format(sfilename, tfilename))
parquet_to_csv(file_loadpath)
else:
logger.info('{} file already exists, skipping!'.format(tfilename))
def normalize(array, axis=0):
_min = array.min(axis=axis, keepdims=True)
_max = array.max(axis=axis, keepdims=True)
factor = _max - _min
return (array - _min) / np.where(factor != 0, factor, 1)
def parse_click_history(history_list):
clicks = list(map(lambda user_click: list(map(lambda item: item.split(':')[0],
user_click.split(','))),
history_list))
_max_len = max(len(items) for items in clicks)
clicks = [items + [0] * (_max_len - len(items)) for items in clicks]
clicks = torch.tensor(np.array(clicks, dtype=np.long)) - 1
return clicks
def parse_user_protrait(protrait_list):
return torch.tensor(normalize(np.array(list(map(lambda x: x.split(','),
protrait_list)),
dtype=np.float32)))
def process_item():
readfilepath = osp.join(args.datapath_bronze,args.filename_iteminfo)
outfilepath = osp.join(args.datapath_silver,'items_info.pt')
if not osp.exists(outfilepath):
logger.info('processing items ...')
item_info = pd.read_csv(readfilepath, sep=args.data_sep)
item2id = np.array(item_info['item_id']) - 1
item2loc = torch.tensor(np.array(item_info['location'], dtype=np.float32)[item2id])
item2price = torch.tensor(normalize(np.array(item_info['price'], dtype=np.float32)[item2id]) * 10, dtype=torch.float32)
item2feature = torch.tensor(normalize(np.array(list(map(lambda x: x.split(','),
item_info['item_vec'])),
dtype=np.float32)[item2id]))
item2info = torch.cat([item2feature, item2price[:, None], item2loc[:, None]], dim=-1)
torch.save([item2info, item2price, item2loc], outfilepath)
logger.info('processed data saved at {}'.format(outfilepath))
else:
logger.info('{} already exists, skipping ...'.format(outfilepath))
def process_data(readfilepath, outfilepath):
if not osp.exists(outfilepath):
logger.info('processing data ...')
logger.info('loading raw file {} ...'.format(readfilepath))
dataset = pd.read_csv(readfilepath, sep=args.data_sep)
click_items = parse_click_history(dataset['user_click_history'])
user_protrait = parse_user_protrait(dataset['user_protrait'])
exposed_items = None
if 'exposed_items' in dataset.columns:
exposed_items = torch.tensor(np.array(list(map(lambda x: x.split(','),
dataset['exposed_items'])),
dtype=np.long) - 1)
torch.save([user_protrait, click_items, exposed_items], outfilepath)
logger.info('processed data saved at {}'.format(outfilepath))
else:
logger.info('{} already exists, skipping ...'.format(outfilepath))
def process_data_wrapper():
ds = {
args.filename_trainset:'train.pt',
args.filename_track1_testset:'dev.pt',
args.filename_track2_testset:'test.pt',
}
process_item()
for k,v in ds.items():
readfilepath = osp.join(args.datapath_bronze,k)
outfilepath = osp.join(args.datapath_silver,v)
process_data(readfilepath, outfilepath)
class Dataset:
def __init__(self, filename, batch_size=1024):
self.user_protrait, self.click_items, self.exposed_items \
= torch.load(filename)
self.click_mask = self.click_items != -1
self.click_items[self.click_items == -1] = 0
self.all_indexs = list(range(len(self.user_protrait)))
self.cur_index = 0
self.bz = batch_size
def reset(self):
random.shuffle(self.all_indexs)
self.cur_index = 0
def __len__(self):
return len(self.all_indexs) // self.bz + int(bool(len(self.all_indexs) % self.bz))
def __iter__(self):
return self
def __next__(self):
if self.cur_index >= len(self.all_indexs):
raise StopIteration
i = self.all_indexs[self.cur_index:self.cur_index + self.bz]
user, click_items, click_mask = \
self.user_protrait[i], self.click_items[i], self.click_mask[i]
exposed_items = self.exposed_items[i] if self.exposed_items is not None else None
self.cur_index += self.bz
return user, click_items, click_mask, exposed_items if exposed_items is not None else None
class Env:
def __init__(self, value, K=3):
self.K = K - 1
self.value = np.asarray(value)
def done(self, obs):
return obs[3] is not None and obs[3].size(1) == self.K
def __recall(self, s, t):
return sum(i in t for i in s) / len(t)
def __reward(self, s, t):
return self.__recall(s, t) * self.value[s].sum()
def new_obs(self, batch_obs, batch_actions):
batch_users, batch_click_items, batch_click_mask, \
batch_exposed_items, batch_exposed_mask = batch_obs
batch_new_exposed_items = torch.cat(
[batch_exposed_items, batch_actions.unsqueeze(1)], dim=1
) if batch_exposed_items is not None else batch_actions.unsqueeze(1)
_add_mask = torch.tensor([[True]]).expand(batch_users.size(0), -1)
batch_new_exposed_mask = torch.cat(
[batch_exposed_mask, _add_mask], dim=1
) if batch_exposed_mask is not None else _add_mask
batch_new_obs = (batch_users, batch_click_items, batch_click_mask,
batch_new_exposed_items, batch_new_exposed_mask)
return batch_new_obs
def step(self, batch_obs, batch_actions, batch_target_bundles, time):
batch_rews = torch.tensor([self.__reward(action, bundle) \
for action, bundle in zip(batch_actions, batch_target_bundles[:, time])],
dtype=torch.float32)
batch_users, batch_click_items, batch_click_mask, \
batch_exposed_items, batch_exposed_mask = batch_obs
done = batch_exposed_mask is not None and batch_exposed_mask[0].sum() == self.K
if done:
batch_new_obs = [None] * batch_users.size(0)
else:
batch_new_obs = self.new_obs(batch_obs, batch_actions)
return batch_new_obs, batch_rews, torch.tensor([done] * batch_actions.size(0))
def table_format(data, field_names=None, title=None):
tb = PrettyTable()
if field_names is not None:
tb.field_names = field_names
for i, name in enumerate(field_names):
tb.align[name] = 'r' if i else 'l'
if title is not None:
tb.title = title
tb.add_rows(data)
return tb.get_string()
def recall(batch_pred_bundles, batch_target_bundles):
rec, rec1, rec2, rec3 = [], [], [], []
for pred_bundle, target_bundle in zip(batch_pred_bundles, batch_target_bundles):
recs = []
for bundle_a, bundle_b in zip(pred_bundle, target_bundle):
recs.append(len(set(bundle_a.tolist()) & set(bundle_b.tolist())) / len(bundle_b))
rec1.append(recs[0])
rec2.append(recs[1])
rec3.append(recs[2])
rec.append((rec1[-1] + rec2[-1] + rec3[-1]) / 3)
return np.mean(rec), np.mean(rec1), np.mean(rec2), np.mean(rec3)
def nan2num(tensor, num=0):
tensor[tensor != tensor] = num
def inf2num(tensor, num=0):
tensor[tensor == float('-inf')] = num
tensor[tensor == float('inf')] = num
def tensor2device(tensors, device):
return [tensor.to(device) if tensor is not None else None \
for tensor in tensors]
def _calc_q_value(obs, net, act_mask, device):
batch_users, batch_encoder_item_ids, encoder_mask, \
batch_decoder_item_ids, decoder_mask = tensor2device(obs, device)
return net(batch_users,
batch_encoder_item_ids,
encoder_mask,
batch_decoder_item_ids,
decoder_mask,
act_mask.unsqueeze(0).expand(batch_users.size(0), -1) if act_mask is not None else None)
def build_train(q_net,
optimizer,
grad_norm_clipping,
act_mask,
gamma=0.99,
is_gpu=False):
device = torch.device('cuda') if is_gpu else torch.device('cpu')
q_net.to(device)
t_net = deepcopy(q_net)
t_net.eval()
t_net.to(device)
optim = optimizer(q_net.parameters())
act_mask = act_mask.to(device)
if is_gpu and torch.cuda.device_count() > 1:
q_net = torch.nn.DataParallel(q_net)
t_net = torch.nn.DataParallel(t_net)
def save_model(filename,
epoch,
episode_rewards,
saved_mean_reward):
torch.save({
'epoch': epoch,
'episode_rewards': episode_rewards,
'saved_mean_reward': saved_mean_reward,
'model': q_net.state_dict(),
'optim': optim.state_dict()
}, filename)
def load_model(filename):
checkpoint = torch.load(filename,
map_location=torch.device('cpu'))
#q_net.load_state_dict(checkpoint['model'])
new_state_dict = OrderedDict()
for k, v in checkpoint['model'].items():
if k.find('module.') != -1:
k = k[7:]
new_state_dict[k] = v
q_net.load_state_dict(new_state_dict)
optim.load_state_dict(checkpoint['optim'])
return checkpoint['epoch'], checkpoint['episode_rewards'], checkpoint['saved_mean_reward']
def train(obs,
act,
rew,
next_obs,
isweights,
done_mask,
topk=3):
act, rew, isweights = act.to(device), rew.to(device), isweights.to(device)
# q value at t+1 in double q
with torch.no_grad():
q_net.eval()
next_q_val = _calc_q_value(next_obs, q_net, act_mask, device).detach()
q_net.train()
_next_mask = next_obs[4].to(device).sum(dim=1, keepdim=True) + 1 == act_mask.unsqueeze(0)
assert next_q_val.size() == _next_mask.size()
next_q_val[_next_mask == False] = float('-inf')
next_action_max = next_q_val.argsort(dim=1, descending=True)[:, :topk]
next_q_val_max = _calc_q_value(next_obs, t_net, act_mask, device) \
.detach() \
.gather(dim=1, index=next_action_max) \
.sum(dim=1)
_next_q_val_max = next_q_val_max.new_zeros(done_mask.size())
_next_q_val_max[done_mask == False] = next_q_val_max
# q value at t
q_val = _calc_q_value(obs, q_net, act_mask, device)
q_val_t = q_val.gather(dim=1, index=act.to(device)).sum(dim=1)
assert q_val_t.size() == _next_q_val_max.size()
#print('done')
# Huber Loss
loss = F.smooth_l1_loss(q_val_t,
rew + gamma * _next_q_val_max,
reduction='none')
assert loss.size() == isweights.size()
#wloss = (loss * isweights).mean()
wloss = loss.mean()
wloss.backward()
torch.nn.utils.clip_grad_norm_(q_net.parameters(), grad_norm_clipping)
optim.step()
q_net.zero_grad()
return wloss.detach().data.item(), (loss.detach().mean().data.item()), loss.cpu().detach().abs()
def act(obs,
eps_greedy,
topk=3,
is_greedy=False):
return build_act(obs, act_mask, q_net, eps_greedy, topk,
is_greedy=is_greedy, device=device)
def update_target():
for target_param, local_param in zip(t_net.parameters(), q_net.parameters()):
target_param.data.copy_(local_param.data)
return q_net, act, train, update_target, save_model, load_model
def build_act(obs,
act_mask,
net,
eps_greedy,
topk=3,
is_greedy=False,
device=None):
devcie = torch.device('cpu') if device is None else device
act_mask = act_mask.to(device)
def _epsilon_greedy(size):
return torch.rand(size).to(device) < eps_greedy
def _gen_act_mask():
#if obs[3] is not None:
if obs[4] is not None:
#length = torch.tensor([len(o) + 1 if o is not None else 1 for o in obs[3]],
# dtype=torch.float).view(-1, 1).to(device)
length = obs[4].to(device).sum(dim=1, keepdim=True) + 1
else:
length = act_mask.new_ones((1,)).view(-1, 1)
return act_mask.unsqueeze(0) == length
net.eval()
with torch.no_grad():
q_val = _calc_q_value(obs, net, act_mask, device).detach()
_act_mask = _gen_act_mask()
if q_val.size() != _act_mask.size():
assert _act_mask.size(0) == 1
_act_mask = _act_mask.expand(q_val.size(0), -1)
q_val[_act_mask == False] = float('-inf')
_deterministic_acts = q_val.argsort(dim=1, descending=True)[:, :topk]
if not is_greedy:
_stochastic_acts = _deterministic_acts.new_empty(_deterministic_acts.size())
chose_random = _epsilon_greedy(_stochastic_acts.size(0))
_tmp = torch.arange(0, _act_mask.size(1), dtype=_deterministic_acts.dtype)
for i in range(_act_mask.size(0)):
_available_acts = _act_mask[i].nonzero().view(-1)
_stochastic_acts[i] = _available_acts[torch.randperm(_available_acts.size(0))[:topk]]
#if chose_random.sum() != len(chose_random):
# pdb.set_trace()
_acts = torch.where(chose_random.unsqueeze(1).expand(-1, _stochastic_acts.size(1)),
_stochastic_acts,
_deterministic_acts)
            # TODO: deduplicate
else:
_acts = _deterministic_acts
eps_greedy = 0.
net.train()
return _acts, eps_greedy
###Output
_____no_output_____
###Markdown
Jobs
###Code
logger.info('JOB START: DOWNLOAD_RAW_DATASET')
download_dataset()
logger.info('JOB END: DOWNLOAD_RAW_DATASET')
logger.info('JOB START: DATASET_CONVERSION_PARQUET_TO_CSV')
convert_dataset()
logger.info('JOB END: DATASET_CONVERSION_PARQUET_TO_CSV')
logger.info('JOB START: DATASET_PREPROCESSING')
process_data_wrapper()
logger.info('JOB END: DATASET_PREPROCESSING')
###Output
_____no_output_____
|
[03 - Results]/dos results ver 4/models/iter-2/fft_r15-less-features.ipynb
|
###Markdown
Module Imports for Data Fetching and Visualization
###Code
import time
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
Module Imports for Data Processing
###Code
from sklearn import preprocessing
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
import pickle
###Output
_____no_output_____
###Markdown
Importing Dataset from GitHub Train Data
###Code
df1 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-15-m-1-r15.csv?token=AKVFSOGFDC75YKXKWNBM67C63JJVE')
df2 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-15-m-11-r15.csv?token=AKVFSOAFEHDGNJ6UICOJZQS63JJVK')
df3 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-4-m-1-r15.csv?token=AKVFSOBNA22AKZ5A7GS2KHC63JJVM')
df4 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-4-m-11-r15.csv?token=AKVFSOHBRFY5B2I4JOQNCKK63JJVS')
df5 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-6-m-1-r15.csv?token=AKVFSOAO6OGH5SOSF4HPWXK63JJVW')
df6 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-6-m-11-r15.csv?token=AKVFSOHAYSAW7BRAVDOG6TS63JJV4')
df7 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-9-m-1-r15.csv?token=AKVFSOHEWRSRFLKWFEVUENC63JJV6')
df8 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-9-m-11-r15.csv?token=AKVFSOCWPRWAMASKNUZN2UK63JJWC')
df9 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-normal-n-0-15-r15.csv?token=AKVFSOCRVU5HHXZ5Z37GBOK63JJWI')
df10 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-normal-n-0-4-r15.csv?token=AKVFSOC5RYQJMU2AIZ6UGJC63JJWM')
df11 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-normal-n-0-6-r15.csv?token=AKVFSODRSWN6WZDROVLWPPK63JJWQ')
df12 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-normal-n-0-9-r15.csv?token=AKVFSOFSNSJUT6DCGV5BNT263JJWY')
print(df1.shape)
print(df2.shape)
print(df3.shape)
print(df4.shape)
print(df5.shape)
print(df6.shape)
print(df7.shape)
print(df8.shape)
print(df9.shape)
print(df10.shape)
print(df11.shape)
print(df12.shape)
df = df1.append(df2, ignore_index=True,sort=False)
df = df.append(df3, ignore_index=True,sort=False)
df = df.append(df4, ignore_index=True,sort=False)
df = df.append(df5, ignore_index=True,sort=False)
df = df.append(df6, ignore_index=True,sort=False)
df = df.append(df7, ignore_index=True,sort=False)
df = df.append(df8, ignore_index=True,sort=False)
df = df.append(df9, ignore_index=True,sort=False)
df = df.append(df10, ignore_index=True,sort=False)
df = df.append(df11, ignore_index=True,sort=False)
df = df.append(df12, ignore_index=True,sort=False)
df = df.sort_values('timestamp')
df.to_csv('fft-r12-train.csv',index=False)
df = pd.read_csv('fft-r12-train.csv')
df
df.shape
###Output
_____no_output_____
###Markdown
Test Data
###Code
df13 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-15-m-12-r15.csv?token=AKVFSOFCIIATN53BXPZZYAS63JK5C')
df14 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-15-m-7-r15.csv?token=AKVFSOGNEHOF57IWO2H7FXC63JK5G')
df15 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-4-m-12-r15.csv?token=AKVFSOCPIIFOLQNDQMMWWZK63JK5I')
df16 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-4-m-7-r15.csv?token=AKVFSOCB4H2XK6OOQSGGKMK63JK5M')
df17 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-6-m-12-r15.csv?token=AKVFSOF6O7SLN7ZGKZCOWK263JK5S')
df18 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-6-m-7-r15.csv?token=AKVFSOBBCKN5ZT36DWFVJ5263JK5W')
df19 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-9-m-12-r15.csv?token=AKVFSODMBWR2W3K6FF2UXBK63JK52')
df20 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r15/2-fft-malicious-n-0-9-m-7-r15.csv?token=AKVFSOFKHHH63FKPM45JO6S63JK6A')
print(df13.shape)
print(df14.shape)
print(df15.shape)
print(df16.shape)
print(df17.shape)
print(df18.shape)
print(df19.shape)
print(df20.shape)
df5
###Output
_____no_output_____
###Markdown
Processing
###Code
df.isnull().sum()
df = df.drop(columns=['timestamp','src_ni','src_router','dst_ni','dst_router'])
df.corr()
plt.figure(figsize=(25,25))
sns.heatmap(df.corr(), annot = True)
plt.show()
def find_correlation(data, threshold=0.9):
corr_mat = data.corr()
corr_mat.loc[:, :] = np.tril(corr_mat, k=-1)
already_in = set()
result = []
for col in corr_mat:
perfect_corr = corr_mat[col][abs(corr_mat[col])> threshold].index.tolist()
if perfect_corr and col not in already_in:
already_in.update(set(perfect_corr))
perfect_corr.append(col)
result.append(perfect_corr)
select_nested = [f[1:] for f in result]
select_flat = [i for j in select_nested for i in j]
return select_flat
columns_to_drop = find_correlation(df.drop(columns=['target']))
columns_to_drop
df = df.drop(columns=['inport','cache_coherence_type','flit_id','flit_type','vnet','current_hop','hop_percentage','port_index','cache_coherence_vnet_index','vnet_vc_cc_index'])
plt.figure(figsize=(11,11))
sns.heatmap(df.corr(), annot = True)
plt.show()
plt.figure(figsize=(11,11))
sns.heatmap(df.corr())
plt.show()
###Output
_____no_output_____
###Markdown
Processing Dataset for Training
###Code
train_X = df.drop(columns=['target'])
train_Y = df['target']
#standardization
x = train_X.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = train_X.columns
x_scaled = min_max_scaler.fit_transform(x)
train_X = pd.DataFrame(x_scaled)
train_X.columns = columns
train_X
train_X[train_X.duplicated()].shape
test_X = df13.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router','inport','cache_coherence_type','flit_id','flit_type','vnet','current_hop','hop_percentage','port_index','cache_coherence_vnet_index','vnet_vc_cc_index'])
test_Y = df13['target']
x = test_X.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X = pd.DataFrame(x_scaled)
test_X.columns = columns
print(test_X[test_X.duplicated()].shape)
test_X
test_X1 = df14.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router','inport','cache_coherence_type','flit_id','flit_type','vnet','current_hop','hop_percentage','port_index','cache_coherence_vnet_index','vnet_vc_cc_index'])
test_Y1 = df14['target']
x = test_X1.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X1.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X1 = pd.DataFrame(x_scaled)
test_X1.columns = columns
print(test_X1[test_X1.duplicated()].shape)
test_X2 = df15.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router','inport','cache_coherence_type','flit_id','flit_type','vnet','current_hop','hop_percentage','port_index','cache_coherence_vnet_index','vnet_vc_cc_index'])
test_Y2 = df15['target']
x = test_X2.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X2.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X2 = pd.DataFrame(x_scaled)
test_X2.columns = columns
print(test_X2[test_X2.duplicated()].shape)
test_X3 = df16.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router','inport','cache_coherence_type','flit_id','flit_type','vnet','current_hop','hop_percentage','port_index','cache_coherence_vnet_index','vnet_vc_cc_index'])
test_Y3 = df16['target']
x = test_X3.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X3.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X3 = pd.DataFrame(x_scaled)
test_X3.columns = columns
print(test_X3[test_X3.duplicated()].shape)
test_X4 = df17.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router','inport','cache_coherence_type','flit_id','flit_type','vnet','current_hop','hop_percentage','port_index','cache_coherence_vnet_index','vnet_vc_cc_index'])
test_Y4 = df17['target']
x = test_X4.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X4.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X4 = pd.DataFrame(x_scaled)
test_X4.columns = columns
print(test_X4[test_X4.duplicated()].shape)
test_X5 = df18.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router','inport','cache_coherence_type','flit_id','flit_type','vnet','current_hop','hop_percentage','port_index','cache_coherence_vnet_index','vnet_vc_cc_index'])
test_Y5 = df18['target']
x = test_X5.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X5.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X5 = pd.DataFrame(x_scaled)
test_X5.columns = columns
print(test_X5[test_X5.duplicated()].shape)
test_X6 = df19.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router','inport','cache_coherence_type','flit_id','flit_type','vnet','current_hop','hop_percentage','port_index','cache_coherence_vnet_index','vnet_vc_cc_index'])
test_Y6 = df19['target']
x = test_X6.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X6.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X6 = pd.DataFrame(x_scaled)
test_X6.columns = columns
print(test_X6[test_X6.duplicated()].shape)
test_X7 = df20.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router','inport','cache_coherence_type','flit_id','flit_type','vnet','current_hop','hop_percentage','port_index','cache_coherence_vnet_index','vnet_vc_cc_index'])
test_Y7 = df20['target']
x = test_X7.values
min_max_scaler = preprocessing.MinMaxScaler()
columns = test_X7.columns
x_scaled = min_max_scaler.fit_transform(x)
test_X7 = pd.DataFrame(x_scaled)
test_X7.columns = columns
print(test_X7[test_X7.duplicated()].shape)
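# Editor's sketch (hypothetical, not part of the original notebook): the eight blocks
# above repeat the same drop/scale/rename steps for df13-df20. A helper like this could
# replace them, assuming every dataframe has the same 'target' column and drop list.
def min_max_scale_features(df, drop_cols):
    """Drop the given columns, min-max scale the remaining features, return (X, y)."""
    X = df.drop(columns=drop_cols)
    y = df['target']
    scaled = preprocessing.MinMaxScaler().fit_transform(X.values)
    return pd.DataFrame(scaled, columns=X.columns), y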
###Output
(0, 10)
###Markdown
Machine Learning Models: Module Imports for Data Processing and Report Generation
###Code
import pickle  # used below to persist the trained models
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, chi2  # used in the feature-selection cell
import statsmodels.api as sm
from sklearn import metrics
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import mean_squared_error
###Output
_____no_output_____
###Markdown
Labels: `0` - malicious, `1` - good
###Code
train_Y = df['target']
train_Y.value_counts()
###Output
_____no_output_____
###Markdown
Training and Validation Splitting of the Dataset
###Code
seed = 5
np.random.seed(seed)
X_train, X_test, y_train, y_test = train_test_split(train_X, train_Y, test_size=0.2, random_state=seed, shuffle=True)
###Output
_____no_output_____
###Markdown
Feature Selection
###Code
#SelectKBest for feature selection
bf = SelectKBest(score_func=chi2, k='all')
fit = bf.fit(X_train,y_train)
dfscores = pd.DataFrame(fit.scores_)
dfcolumns = pd.DataFrame(columns)
featureScores = pd.concat([dfcolumns,dfscores],axis=1)
featureScores.columns = ['Specs','Score']
print(featureScores.nlargest(10,'Score'))
featureScores.plot(kind='barh')
###Output
Specs Score
2 traversal_id 5075.526899
9 traversal_index 1863.504212
1 vc 669.218004
8 packet_count_index 177.704138
7 max_packet_count 164.498233
6 packet_count_incr 83.700013
5 packet_count_decr 80.810432
3 hop_count 53.455620
0 outport 28.257010
4 enqueue_time 10.310229
###Markdown
Decision Tree Classifier
###Code
# decision tree classifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV
dt = DecisionTreeClassifier(max_depth=20,max_features=10,random_state = 42)
dt.fit(X_train,y_train)
pickle.dump(dt, open("dt-r14.pickle.dat", 'wb'))
y_pred_dt= dt.predict(X_test)
dt_score_train = dt.score(X_train,y_train)
print("Train Prediction Score",dt_score_train*100)
dt_score_test = accuracy_score(y_test,y_pred_dt)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X)
dt_score_test = accuracy_score(test_Y,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X1)
dt_score_test = accuracy_score(test_Y1,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X2)
dt_score_test = accuracy_score(test_Y2,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X3)
dt_score_test = accuracy_score(test_Y3,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X4)
dt_score_test = accuracy_score(test_Y4,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X5)
dt_score_test = accuracy_score(test_Y5,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X6)
dt_score_test = accuracy_score(test_Y6,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
y_pred_dt_test= dt.predict(test_X7)
dt_score_test = accuracy_score(test_Y7,y_pred_dt_test)
print("Test Prediction Score",dt_score_test*100)
feat_importances = pd.Series(dt.feature_importances_, index=columns)
feat_importances.plot(kind='barh')
cm = confusion_matrix(y_test, y_pred_dt)
class_label = ["Anomalous", "Normal"]
df_cm = pd.DataFrame(cm, index=class_label,columns=class_label)
sns.heatmap(df_cm, annot=True, fmt='d')
plt.title("Confusion Matrix")
plt.xlabel("Predicted Label")
plt.ylabel("True Label")
plt.show()
print(classification_report(y_test,y_pred_dt))
dt_roc_auc = roc_auc_score(y_test, y_pred_dt)
fpr, tpr, thresholds = roc_curve(y_test, dt.predict_proba(X_test)[:,1])
plt.figure()
plt.plot(fpr, tpr, label='DTree (area = %0.2f)' % dt_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.savefig('DT_ROC')
plt.show()
###Output
_____no_output_____
###Markdown
XGB Classifier
###Code
from xgboost import XGBClassifier
from xgboost import plot_importance
xgbc = XGBClassifier(max_depth=20,min_child_weight=1,n_estimators=500,random_state=42,learning_rate=0.2)
xgbc.fit(X_train,y_train)
pickle.dump(xgbc, open("xgbc-r15l.pickle.dat", 'wb'))
y_pred_xgbc= xgbc.predict(X_test)
xgbc_score_train = xgbc.score(X_train,y_train)
print("Train Prediction Score",xgbc_score_train*100)
xgbc_score_test = accuracy_score(y_test,y_pred_xgbc)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X)
xgbc_score_test = accuracy_score(test_Y,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X1)
xgbc_score_test = accuracy_score(test_Y1,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X2)
xgbc_score_test = accuracy_score(test_Y2,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X3)
xgbc_score_test = accuracy_score(test_Y3,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X4)
xgbc_score_test = accuracy_score(test_Y4,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X5)
xgbc_score_test = accuracy_score(test_Y5,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X6)
xgbc_score_test = accuracy_score(test_Y6,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
y_pred_xgbc_test= xgbc.predict(test_X7)
xgbc_score_test = accuracy_score(test_Y7,y_pred_xgbc_test)
print("Test Prediction Score",xgbc_score_test*100)
plot_importance(xgbc)
plt.show()
cm = confusion_matrix(y_test, y_pred_xgbc)
class_label = ["Anomalous", "Normal"]
df_cm = pd.DataFrame(cm, index=class_label,columns=class_label)
sns.heatmap(df_cm, annot=True, fmt='d')
plt.title("Confusion Matrix")
plt.xlabel("Predicted Label")
plt.ylabel("True Label")
plt.show()
print(classification_report(y_test,y_pred_xgbc))
xgb_roc_auc = roc_auc_score(y_test, y_pred_xgbc)
fpr, tpr, thresholds = roc_curve(y_test, xgbc.predict_proba(X_test)[:,1])
plt.figure()
plt.plot(fpr, tpr, label='XGBoost (area = %0.2f)' % xgb_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.savefig('XGB_ROC')
plt.show()
###Output
_____no_output_____
notebook/Unit5-1-GCC_Make.ipynb
###Markdown
GNU GCC and Make **The working folder of the demo GCC projects is `./demo`.** In this and the following notebooks, we move to the **`./demo`** folder from the Jupyter notebook folder **`/notebook`**. Magic commands: * `%cd`: change the current working directory. * `%pwd`: return the absolute path of the current working directory.
###Code
%cd ./demo
%pwd
###Output
_____no_output_____
###Markdown
1 The Brief Introduction to GCC Background: GNU and Free Software The original GNU C Compiler (GCC) was developed by **Richard Stallman(理查德·斯托曼)**, the founder of the `GNU Project`. * [The GNU Project](https://www.gnu.org/) GNU is a Unix-like operating system that is **free software**, that is, it **respects** users' **freedom**. >The name **GNU** is a recursive acronym for “GNU's Not Unix.” “GNU” is pronounced *g'noo*, as one syllable, like saying “grew” but replacing the r with n. The development of GNU, started in January 1984, is known as the **GNU Project**. **The GNU Project**'s aim is to give computer users **freedom** and control in their use of their computers and computing devices, by collaboratively developing and providing software that is based on the following freedom rights: * users are free to run the software, share it (copy, distribute), study it and modify it. GNU software guarantees these freedom rights legally (via its license), and is therefore free software; the word **free** always refers to **freedom**. Thus, `free software` is a matter of liberty, `not price`. The development of GNU made it possible to **use a computer without software that would trample your freedom.** Many of the programs in GNU are released under the auspices of the GNU Project. **The Free Software Foundation**: https://www.fsf.org/ * The Free Software Foundation (FSF) is a nonprofit with a worldwide mission to promote computer user `freedom`. We defend the rights of all software users. Free Software and Education https://www.gnu.org/education/education.html **How Does Free Software Relate to Education?** Software freedom plays a **fundamental role** in education. Educational institutions of all levels **should use and teach Free Software** because it is the only software that allows them to **accomplish their essential missions**: * to disseminate human `knowledge` and to prepare students to be good members of their `community`. The **source code and the methods** of Free Software are part of human knowledge. On the contrary, **proprietary software** is `secret, restricted knowledge`, which is the opposite of the mission of educational institutions. Free Software supports education; `proprietary software forbids education.` Free Software is not just a technical question; it is an **ethical, social, and political question**. It is a question of the **human rights** that the **users of software ought to have**. **Freedom** and **cooperation** are **essential values** of Free Software. The GNU System implements these values and the principle of **sharing**, since **sharing** is good and beneficial to **human progress**. 1.1 Installing GCC **GCC: GNU Compiler Collection**: http://gcc.gnu.org/ GCC, formerly "GNU C Compiler", has grown over time to support many languages such as `C++`, Objective-C, Java, `Fortran` and Ada. It is now referred to as the **"GNU Compiler Collection"**. GCC is portable and runs on many operating platforms. GCC (and the GNU Toolchain) is currently available on all Unixes. They are also ported to **Windows** by `MinGW` and Cygwin. Linux: GCC (GNU Toolchain) is included in all Linux (Unixes). 
Windows **TDM-GCC** TDM-GCC is a compiler suite for Windows. It combines the most recent stable release of the GCC compiler, a few patches for Windows-friendliness, and the free and open-source `MinGW.org` or `MinGW-w64` runtime APIs, to create a more lightweight open-source alternative to Microsoft's compiler and platform SDK. https://jmeubank.github.io/tdm-gcc/ * https://github.com/jmeubank 1.2 Getting Started The GNU C and C++ compilers are gcc and g++, respectively. * **gcc** compiles `C` programs * **g++** compiles `C++` programs **The Simple Demo: Compile/Link a Simple C Program - hello.c** 1.2.1 Make the folders for the GCC projects Generally, we should put all files of `one project under one folder` and use the project folder as the **current working folder**. Within it, we should set up `meaningful sub-folders` to manage the `different types` of files conveniently. We set the folder **./demo/** for the **C/C++ programming**. In `./demo/`, we set the sub-folders `src`, `include`, `obj` and `bin` for the different file types: * ./src: the source code * ./include: the header files * ./obj: the compiled object code * ./bin: the linked output ```bash demo/ ├── src/ (*.c/cpp, the source code) ├── include/ (*.h, the headers) ├── obj/ (*.o, the compiled output) └── bin/ (*.exe, the linked output) ``` 1.2.2 Compile/Link a Simple C Program We save the code to the source-code location: * source code `hello.c` in `./src` Below is the Hello-world C program `hello.c`
###Code
%%file ./src/hello.c
/*
gcc -o hello hello.c
*/
#include <stdio.h>
int main() {
printf("C says Hello, world!\n");
return 0;
}
###Output
Overwriting ./src/hello.c
###Markdown
You need to use **gcc** to `compile` the C program `hello.c`, then `link` to build the output. * **-c**: `compiles` source files without linking. * **-o**: writes the build output to the specified output file name. Use `-o` to set the specified output file locations: * the compiled output `hello.o` in `./obj/` * the linked output `hello.exe` in `./bin/`
###Code
!gcc -c -o ./obj/hello.o ./src/hello.c
###Output
_____no_output_____
###Markdown
We have put `hello.o` in the `.\obj\` folder.
###Code
!gcc -o ./bin/hello ./obj/hello.o
###Output
_____no_output_____
###Markdown
We have put `hello.exe` in the `.\bin\` folder. **Compile and link to build the output in one command**
###Code
!gcc -o ./bin/hello ./src/hello.c
###Output
_____no_output_____
###Markdown
Running Under Windows
###Code
!.\bin\hello
###Output
_____no_output_____
###Markdown
1.2.3 Compile/Link a Simple C++ Program - hello.cpp Compile/Link the C++ Program: g++ Below is the Hello-world C++ program hello.cpp:
###Code
%%file ./src/hello.cpp
/*
g++ -o hello hello.cpp
*/
#include <iostream>
using namespace std;
int main() {
cout << "C++ Hello, world!" << endl;
return 0;
}
###Output
Overwriting ./src/hello.cpp
###Markdown
Use **g++** to compile and link the C++ program in one command, as follows
###Code
!g++ -o ./bin/hello ./src/hello.cpp
###Output
_____no_output_____
###Markdown
Running Under Windows
###Code
!.\bin\hello
###Output
_____no_output_____
###Markdown
2. GNU Make https://www.gnu.org/software/make/ The **"make"** utility automates the mundane aspects of building executables from source code. **"make"** uses a so-called **makefile**, which contains **rules** on how to build the executables. You can issue "make --help" to list the command-line options.
###Code
!make --help
###Output
_____no_output_____
###Markdown
2.1 Create the `makefile` file Create the following file named **"makefile"**: it contains the rules and is saved in the `current` directory, * **`without` any file `extension`**. A makefile consists of `a set of rules`(规则) to build the executable. 2.1.1 The rule **A rule** consists of 3 parts: * **a target**(目标) * **a list of pre-requisites**(条件) * **a command**(命令) as follows: ```bash target: pre-req-1 pre-req-2 ... command ``` * The **target** and **pre-requisites** are separated by a colon **:** * The **command** must be preceded by **a Tab** (NOT spaces). The `target` is the file or thing that `must be made`. The `prerequisites or dependents` are those files that must exist before the target can be successfully created. And the `commands` are those shell commands that will create the target from the prerequisites. The `first rule` seen by make is used as the `default` rule. * The standard first target in many makefiles is called **all**. 2.1.2 Comments A **`#`** in a line of a makefile starts a comment. It and the rest of the line are ignored, except that `a trailing backslash` (`\`) not escaped by another backslash will continue the comment across multiple lines; see the small example below. 2.1.3 Makefile for building hello Let's begin to build the same Hello-world program (`hello.c`) into an executable (hello.exe) via the make utility.
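For illustration only (a minimal sketch, not one of the course makefiles), here is a comment continued across two lines with a trailing backslash:

```makefile
# this comment continues onto the next line \
  so make treats both lines as one comment
hello:
	@echo building hello
```

Both comment lines are ignored by make; only the `hello` rule is read.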
###Code
%%file ./makefile
# makefile for the hello
all: helloexe
helloexe: helloobj
gcc -o ./bin/hello ./obj/hello.o
helloobj: ./src/hello.c
gcc -c -o ./obj/hello.o ./src/hello.c
###Output
Overwriting ./makefile
###Markdown
Here is a rule for compiling a C file, `./src/hello.c`, into an object file, `./obj/hello.o`: ```bash helloobj: ./src/hello.c gcc -c -o ./obj/hello.o ./src/hello.c ``` * 1 The `target` helloobj appears before the colon (:) * 2 The `prerequisite` `./src/hello.c` appears after the colon (:) * 3 The `command script` appears on the following line and is preceded by a tab character: tab gcc -c -o ./obj/hello.o ./src/hello.c **NOTE** * [Makefile separator errors in VS Code](https://gitee.com/thermalogic/home/blob/B2021/guide/doc/Problem_Solution.mdvs-code%E4%B8%ADmakefile%E6%8A%A5%E9%94%99%E5%88%86%E9%9A%94%E7%AC%A6) 2.2 Invoking Make When make is invoked, it automatically creates the **first** `target` it sees in the makefile. 2.2.1 make without arguments The default makefile name is **makefile**, `Makefile`, or `GNUmakefile`. The `makefile` resides in the user's `current` directory when executing the `make` command. When make is invoked under these conditions, it automatically creates the `default` first target `all` it sees. **Invoking make in the terminal of `/demo/` without arguments**
###Code
!make
!.\bin\hello
###Output
C says Hello, world!
###Markdown
2.2.3 Running make with target argument We start the target **helloobj** in the makefile
###Code
!make helloobj
###Output
gcc -c -o ./obj/hello.o ./src/hello.c
###Markdown
2.2.4 Compile, Link, `Run` and `clean` **Add a `clean` target** ```bash clean: del .\obj\hello.o ``` **The target `all`** * Add the prerequisite `clean` * Add the `running command` ```bash all: helloexe clean ./bin/hello ``` then save the makefile as `./makerun.mk`
###Code
%%file ./makerun.mk
all: helloexe clean
./bin/hello
helloexe: helloobj
gcc -o ./bin/hello ./obj/hello.o
helloobj: ./src/hello.c
gcc -c -o ./obj/hello.o ./src/hello.c
clean:
del .\obj\hello.o
###Output
Writing ./makerun.mk
###Markdown
2.2.5 Invoking make with a specified FILE as the makefile * `-f FILE`: read FILE as a makefile.
###Code
!make -f makerun.mk
###Output
gcc -c -o ./obj/hello.o ./src/hello.c
gcc -o ./bin/hello ./obj/hello.o
del .\obj\hello.o
./bin/hello
C says Hello, world!
###Markdown
3 Compile, Link: Multiple Source Files * 1) The code for computing the **mean** of an array: * statistics.h * statistics.c * 2) The code file of the caller: * statdemo.c **Reference** GNU GSL * https://github.com/CNMAT/gsl/blob/master/statistics/mean_source.c
###Code
%%file ./include/statistics.h
#ifndef STATISTICS_H
#define STATISTICS_H
double mean(double data[], int size);
//double mean(double *data, int size);
#endif
%%file ./src/statistics.c
#include "statistics.h"
//double mean(double *data, int size)
double mean(double data[], int size)
{
/*
Compute the arithmetic mean of a dataset using the recurrence relation
mean_(n) = mean(n-1) + (data[n] - mean(n-1))/(n+1)
*/
double mean = 0;
for(int i = 0; i < size; i++)
{
mean += (data[i] - mean) / (i + 1);
}
return mean;
}
%%file ./src/statdemo.c
#include <stdio.h>
#include "statistics.h"
int main() {
double a[] = {8, 4, 5, 3, 2};
int length = sizeof(a)/sizeof(double);
printf("mean is %f\n", mean(a,length));
return 0;
}
###Output
Overwriting ./src/statdemo.c
###Markdown
3.3 Preprocessor Directives & Once-Only Headers Preprocessor Directives http://www.cplusplus.com/doc/tutorial/preprocessor/ Preprocessor directives(预处理指令) are lines included in the code of programs preceded by a hash sign (**`#`**). These lines are not program statements but directives for the preprocessor. The preprocessor examines the code **before actual compilation** begins and resolves all these directives before any code is actually generated by regular statements. Once-Only Headers Because header files sometimes include one another, it can easily happen that the same file is included **more than once.** For example, suppose the file `statistics.h` contains the line: ```c #include <stdio.h> ``` Then the source file `statdemo.c` that contains the following `#include` directives would include the file `stdio.h` twice, once directly and once indirectly: ```c #include <stdio.h> #include "statistics.h" ``` If a header file happens to be included **twice**, the compiler will process its contents twice. * This is very likely to cause an error, e.g. when the compiler sees the same structure definition twice. * Even if it does not, it will certainly waste time. You can easily guard the contents of a header file against **multiple inclusions** using the directives for `conditional compiling`. The standard way to prevent this is to enclose the entire real contents of the file in a conditional, like this: ```c #ifndef STATISTICS_H #define STATISTICS_H /* ... The actual contents of the header file statistics.h are here... */ #endif /* !STATISTICS_H */ ``` This construct is commonly known as a wrapper **#ifndef**. At the first occurrence of a directive to include the file `statistics.h`, the macro `STATISTICS_H` is not yet defined. The preprocessor therefore inserts the contents of the block between `#ifndef` and `#endif`, including the definition of the macro `STATISTICS_H`. When the header is included again, the `#ifndef` condition is false, because **STATISTICS_H** is defined. The preprocessor will skip over the entire contents of the file, and the compiler will not see it twice. > **All header files should have `#ifndef` and `#endif` guards to prevent multiple inclusion** Compile, Link and Run We usually compile each of the source files **separately** into object files, and link them together in a later stage
###Code
!gcc -c -o ./obj/statdemo.o ./src/statdemo.c -I./include/
!gcc -c -o ./obj/statistics.o ./src/statistics.c -I./include/
!gcc -o ./bin/statdemo.exe ./obj/statdemo.o ./obj/statistics.o
###Output
_____no_output_____
###Markdown
You could compile **all of them** in a single command:
###Code
!gcc -o ./bin/statdemo ./src/statdemo.c ./src/statistics.c -I./include/
!.\bin\statdemo
###Output
mean is 4.400000
###Markdown
Header Files `-I./include/` When compiling the program, the compiler needs the header files to compile the source code. For each of the headers used in your source (via `#include` directives), since the header's filename is known, the compiler only needs the `directories (path)`. The compiler searches the so-called **include-paths** for these headers, specified via the `-Idir` option (or the environment variable CPATH). * `-Idir`: the **include-paths** are specified with an uppercase `I` followed by the directory path. List the **default** include-paths in your system used by the "GNU C Preprocessor" via `gcc -print-search-dirs`
###Code
!gcc -print-search-dirs
###Output
_____no_output_____
###Markdown
3.4 Makefile
###Code
%%file ./makestatdemo.mk
all: statdemo
statdemo: statobj
gcc -o ./bin/statdemo ./obj/statdemo.o ./obj/statistics.o
statobj:
gcc -c -o ./obj/statdemo.o ./src/statdemo.c -I./include/
gcc -c -o ./obj/statistics.o ./src/statistics.c -I./include/
!make -f ./makestatdemo.mk
!.\bin\statdemo
###Output
mean is 4.400000
###Markdown
Using `variables` in a makefile A `variable` reference begins with a **`$`** and is enclosed within parentheses **(...)**: * $(...) Makefile with variables
###Code
%%file ./makestatdemo-var.mk
CC=gcc
SRCDIR= ./src/
OBJDIR= ./obj/
BINDIR= ./bin/
INCDIR=./include/
all: statdemo
statdemo: statobj
$(CC) -o $(BINDIR)statdemo $(OBJDIR)statdemo.o $(OBJDIR)statistics.o
statobj:
$(CC) -c -o $(OBJDIR)statdemo.o $(SRCDIR)statdemo.c -I$(INCDIR)
$(CC) -c -o $(OBJDIR)statistics.o $(SRCDIR)statistics.c -I$(INCDIR)
!make -f makestatdemo-var.mk
!.\bin\statdemo
###Output
mean is 4.400000
###Markdown
4 The Advanced Makefile 4.1 Automatic Variables **Automatic variables** are set by make after a rule is matched. These include: ```bash $@: the target filename. $<: the first prerequisite filename. $^: all prerequisites, with duplicates eliminated ``` 4.2 Pattern Rules **A pattern rule**, which uses the `pattern matching character` **`%`** in the `filename`, can be applied to create a target if there is no explicit rule. ```bash '%' matches any filename. ``` 4.3 Functions 4.3.1 Functions for File Names http://www.gnu.org/software/make/manual/html_node/File-Name-Functions.html **$(notdir names…)** Extracts all `but the directory-part` of each file name in names. * removes the `path` of a file name For example, ```$(notdir src/statistics.c src/statdemo.c)``` produces the result without the `path`: * `statistics.c statdemo.c`. 4.3.2 String Substitution and Analysis http://www.gnu.org/software/make/manual/html_node/Text-Functions.html **$(patsubst pattern, replacement, text)** Finds `whitespace-separated` words in text that match pattern and replaces them with replacement. * `text` (whitespace-separated words) -> **match** `pattern` -> **replace** with `replacement` Here pattern may contain a **`%`** which acts as a `wildcard`, matching any number of any characters within a word. For example, ```$(patsubst %.c, %.o, x.c y.c)``` * pattern: %.c * replacement: %.o * text: `x.c y.c` produces the value `x.o y.o`. 4.5 Example The Advanced makefile for statistics
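As a minimal sketch tying 4.1 and 4.2 together (illustrative only, assuming the same `SRCDIR`/`OBJDIR`/`INCDIR` layout used above), one pattern rule with automatic variables can compile any `./src/*.c` into `./obj/*.o`:

```makefile
CC=gcc
SRCDIR=./src/
OBJDIR=./obj/
INCDIR=./include/

# pattern rule: '%' matches the stem, $@ is the target, $< is the first prerequisite
$(OBJDIR)%.o: $(SRCDIR)%.c
	$(CC) -c -o $@ $< -I$(INCDIR)
```

With such a rule, `make ./obj/statistics.o` would rebuild just that object file whenever `./src/statistics.c` changes.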
###Code
%%file ./makestatdemo-adv.mk
CC=gcc
SRCDIR= ./src/
OBJDIR= ./obj/
BINDIR= ./bin/
INCDIR= ./include/
#SRCS=$(SRCDIR)statistics.c \
# $(SRCDIR)statdemo.c
SRCS=$(wildcard $(SRCDIR)stat*.c)
# non-path filename
filename=$(notdir $(SRCS))
# the obj target of a source code using the pattern rule
OBJS=$(patsubst %.c,$(OBJDIR)%.o,$(filename))
all:statdemo
statdemo: $(OBJS)
$(CC) -o $(BINDIR)$@ $^
del .\obj\*.o
$(OBJS):$(SRCS)
$(CC) -o $@ -c $(SRCDIR)$(patsubst %.o,%.c,$(notdir $@)) -I$(INCDIR)
!make -f makestatdemo-adv.mk
!.\bin\statdemo
###Output
mean is 4.400000
###Markdown
GNU GCC and Make**The working folder of demo GCC projects is the `./demo`**In this and following noetebooks,we move to the folder of **`./demo`** from the current folders of Jupyter notebook **`/notebook.`**magic command * `%cd` : Change the current working directory.* `%pwd`: Return the absolute path of current working directory
###Code
%pwd
%cd ./demo
%pwd
###Output
_____no_output_____
###Markdown
1The Brief Introduction to GCC Background: GNU and Free SoftwareThe original GNU C Compiler (GCC) is developed by **Richard Stallman(理查德·斯托曼)**, the founder of the `GNU Project`. * [The GNU Project](https://www.gnu.org/) GNU is a Unit-like operating system that is >The name **GNU** is a recursive acronym for “GNU's Not Unix.” “GNU” is pronounced *g'noo*, as one syllable, like saying “grew” but replacing the r with n.* **free software**—that is, it **respects** users' **freedom**.The development of GNU, started in January 1984, is known as the **GNU Project**.**The GNU Project** aim is to give computer users **freedom** and control in their use of their computers and computing devices, by collaboratively developing and providing software that is based on the following freedom rights: * users are free to run the software, share it (copy, distribute), study it and modify it.GNU software guarantees these freedom-rights legally (via its license), and is therefore free software; the use of the word **free** always being taken to refer to **freedom**. Thus. `free software` is a matter of liberty, `not price`.The development of GNU made it possible to **use a computer without software that would trample your freedom.** Many of the programs in GNU are released under the auspices of the GNU Project. **The Free Software Foundation** : https://www.fsf.org/ * The Free Software Foundation (FSF) is a nonprofit with a worldwide mission to promote computer user `freedom`. We defend the rights of all software users. Free Software and Educationhttps://www.gnu.org/education/education.html**How Does Free Software Relate to Education?**Software freedom plays a **fundamental role** in education. Educational institutions of all levels **should use and teach Free Software** because It is the only software that allows them to **accomplish their essential missions**: * to disseminate human `knowledge` and to prepare students to be good members of their `community`. The **source code and the methods** of Free Software are part of human knowledge.On the contrary, **proprietary software** is `secret, restricted knowledge`, which is the opposite of the mission of educational institutions.Free Software supports education, `proprietary software forbids education.`Free Software is not just a technical question; it is an **ethical, social, and political question**.It is a question of the **human rights** that the **users of software ought to have**.**Freedom** and **cooperation** are **essential values** of Free Software.The GNU System implements these values and the principle of **sharing**, since **sharing** is good and beneficial to **human progress**. 1.1 Installing GCC**GCC: GNU Compiler Collection** : http://gcc.gnu.org/ GCC, formerly for "GNU C Compiler", has grown over times to support many languages such as `C++`, Objective-C, Java, `Fortran` and Ada. It is now referred to as **"GNU Compiler Collection"**. GCC is portable and run in many operating platforms. GCC (and GNU Toolchain) is currently available on all Unixes. They are also ported to **Windows** by `MinGW` and Cygwin. LinuxGCC (GNU Toolchain) is included in all Linux(Unixes). 
Windows **TDM-GCC**TDM-GCC is a compiler suite for Windows.It combines the most recent stable release of the GCC compiler, a few patches for Windows-friendliness, and the free and open-source `MinGW.org` or `MinGW-w64` runtime APIs, to create a more lightweight open-source alternative to Microsoft’s compiler and platform SDK.https://jmeubank.github.io/tdm-gcc/* https://github.com/jmeubank 1.2 Getting StartedThe GNU C and C++ compiler are gcc and g++, respectively.* **gcc** to compile `C` program* **g++** to compile `C++` program **The Simple Demo: Compile/Link a Simple C Program - hello.c** 1.2.1 Make the folders for the GCC projectsGeneraly,We should put all files of `one project under one folder.`and use the folder of project as **current working folder**In the folder,we should setup `the meaningful folders` to management of `different types` of documents conveniently.We set the folder **./demo/** for the **C/C++ programming** .In the `./demo/`,we set the sub-folder of `src`,`obj` and `bin` for the different type files* ./src: the source code * ./include: the header file* ./obj: the compiled output object code/* ./bin: the linked output ```bash ├── │ │ │ │── │ │── │ │ │ │─ *.c/cpp (the Source code) │ │── │ │ │ │─ *.h (the header ) │ │── │ │ │ │─ *.obj/o (the compiled output) │ │── │ │─ *.exe (the linked output) ``` 1.2.2 Compile/Link a Simple C ProgramWe save the code to the location of source code* source code `hello.c` in `./src` Below is the Hello-world C program `hello.c`
###Code
%pwd
%%file ./src/hello.c
/*
gcc -o hello hello.c
*/
#include <stdio.h>
int main() {
printf("C says Hello, world!\n");
return 0;
}
###Output
Overwriting ./src/hello.c
###Markdown
You need to use **gcc** to `compile` C program:`hello.c`,then `link` to build the output* **-c: source files** `compiles` source files without linking.* **-o: output file** writes the link to build output to the specifies output file name.Use `-o` to set `the specifie output file` to the location`* the Compiled output `hello.o in ./obj/`* the Linked output `hello.exe in the ./bin/`
###Code
!gcc -c -o ./obj/hello.o ./src/hello.c
###Output
_____no_output_____
###Markdown
We have put `hello.o` in `.\obj\` folder.`
###Code
!gcc -o ./bin/hello ./obj/hello.o
###Output
_____no_output_____
###Markdown
We have put `hello.exe` in the `\bin\` folder.` **Compile and Link to build output at the command**
###Code
!gcc -o ./bin/hello ./src/hello.c
###Output
_____no_output_____
###Markdown
Running Under Windows
###Code
!.\bin\hello
###Output
_____no_output_____
###Markdown
1.2.3 Compile/Link a Simple C++ Program - hello.cpp Compile/Link the C++ Program : g++Below is the Hello-world C++ program hello.cpp:
###Code
%%file ./src/hello.cpp
/*
g++ -o hello hello.cpp
*/
#include <iostream>
using namespace std;
int main() {
cout << "C++ Hello, world!" << endl;
return 0;
}
###Output
_____no_output_____
###Markdown
use **g++** to compile and link C++ program at one command, as follows
###Code
!g++ -o ./bin/hello ./src/hello.cpp
###Output
_____no_output_____
###Markdown
Running Under Windows
###Code
!.\bin\hello
###Output
_____no_output_____
###Markdown
2. GNU Makehttps://www.gnu.org/software/make/The **"make"** utility automates the mundane aspects of building executable from source code.**"make"** uses a so-called **makefile**, which contains **rules** on how to build the executables.You can issue "make --help" to list the command-line options.
###Code
!make --help
###Output
_____no_output_____
###Markdown
2.1 Create `makefile` fileCreate the following file named **"makefile"** : contains rules and save in the `current` directory. * **`without` any file `extension`**A makefile consists of `a set of rules`(规则) to build the executable. 2.1.1 The rule**A rule** consists of 3 parts:* **a target**(目标), * **a list of pre-requisites**(条件) * **a command**(命令)as follows:```bashtarget: pre-req-1 pre-req-2 ... command```* The **target** and **pre-requisites** are separated by a colon ** : ** .* The **command** must be preceded by **a Tab** (NOT spaces).```bashtarget: pre-req-1 pre-req-2 ...command```The `target` is the file or thing that `must be made`. The `prerequisites or dependents` are those files that must exist before the target can be successfully created. And the `commands` are those shell commands that will create the target from the prerequisites. The `first rule` seen by make is used as the `default` rule.* The standard first target in many makefiles is called **all**. 2.1.2 Comments **``** in a line of a makefile starts a comment. It and the rest of the line are ignored, except that `a trailing backslash`(`\`) not escaped by another backslash will continue the comment across multiple lines. 2.1.3 Makeflie for building helloLet's begin to build the same Hello-world program (`hello.c`) into executable (hello.exe) via make utility.
###Code
%%file ./makefile
# makefile for the hello
all: helloexe
helloexe: helloobj
gcc -o ./bin/hello ./obj/hello.o
helloobj: ./src/hello.c
gcc -c -o ./obj/hello.o ./src/hello.c
###Output
_____no_output_____
###Markdown
Here is a rule for compiling a C file, `./src/hello.c` into an object file, `./obj/hello.o````bashhelloobj: ./src/hello.c gcc -c -o ./obj/hello.o ./src/hello.c```* 1 The `target` helloobj appears before the colon(: ) * 2 The `prerequisites` `./src/hello.c` appear after the colon(:) * 3 The `command script` appears on the following lines and is preceded by a tab character. tab gcc -c -o ./obj/hello.o ./src/hello.c **NOTE** * [VS Code中makefile报错分隔符](https://gitee.com/thermalogic/sees/blob/B2022/guide/doc/Problem_Solution.mdvs-code%E4%B8%ADmakefile%E6%8A%A5%E9%94%99%E5%88%86%E9%9A%94%E7%AC%A6) 2.2 Invoking Make When make is invoked, it automatically creates the **first** `target` it sees in makefile 2.2.1 make without argumentThe default nakefile name is: **makefile**, `Makefile`, or `GNUMakefile`. The `makefile` resides in the **same** user’s `current` directory when executing the `make` command. When make is invoked under these conditions, it automatically creates the `default` first target `all` it sees. **invoking make in the terminal of `/demo/` without argument**
###Code
!make
!.\bin\hello
###Output
_____no_output_____
###Markdown
2.2.3 Running make with target argument We start the target **helloobj** in the makefile
###Code
!make helloobj
###Output
_____no_output_____
###Markdown
2.2.4 Compile, Link , `Run` and `clean` **Add `clean` target**```bashclean: del .\obj\hello.o ``` **The target `all`*** Add prerequisites `clean`* Add `running command` ```bashall: helloexe clean ./bin/hello``` then save the makefile as `./makerun.mk`
###Code
%%file ./makerun.mk
all: helloexe clean
./bin/hello
helloexe: helloobj
gcc -o ./bin/hello ./obj/hello.o
helloobj: ./src/hello.c
gcc -c -o ./obj/hello.o ./src/hello.c
clean:
del .\obj\hello.o
###Output
_____no_output_____
###Markdown
2.2.5 Invoking make with the specified FILE as a makefile* `-f FILE`: Read FILE as a makefile.
###Code
!make -f makerun.mk
###Output
_____no_output_____
###Markdown
3 Compile, Link : Multiple Source Files* 1) The codes of Computing the **mean** of an array * statistics.h * statistics.c * 2) The code file of the caller * statdemo.c **Reference** GNU:GSL* https://github.com/CNMAT/gsl/blob/master/statistics/mean_source.c
###Code
%%file ./include/statistics.h
#ifndef STATISTICS_H
#define STATISTICS_H
double mean(double data[], int size);
//double mean(double *data, int size);
#endif
%%file ./src/statistics.c
#include "statistics.h"
//double mean(double *data, int size)
double mean(double data[], int size)
{
/*
Compute the arithmetic mean of a dataset using the recurrence relation
mean_(n) = mean(n-1) + (data[n] - mean(n-1))/(n+1)
*/
double mean = 0;
for(int i = 0; i < size; i++)
{
mean += (data[i] - mean) / (i + 1);
}
return mean;
}
%%file ./src/statdemo.c
#include <stdio.h>
#include "statistics.h"
int main() {
double a[] = {8, 4, 5, 3, 2};
int length = sizeof(a)/sizeof(double);
printf("mean is %f\n", mean(a,length));
return 0;
}
###Output
_____no_output_____
###Markdown
3.3 Preprocessor Directives & Once-Only Headers Preprocessor Directiveshttp://www.cplusplus.com/doc/tutorial/preprocessor/Preprocessor directives(预处理指令) are lines included in the code of programs preceded by a hash sign (****). These lines are not program statements but directives for the preprocessor. The preprocessor examines the code **before actual compilation** of code begins and resolves all these directives before any code is actually generated by regular statements. Once-Only HeadersBecause header files sometimes include one another, it can easily happen that the same file is included **more than once.**For example, suppose the file `statistics.h` contains the line:```cinclude ```Then the source file `statdemo.c` that contains the following `include` directives would include the file `stdio.h` twice, once directly and once indirectly:```cinclude include "statistics.h````If a header file happens to be included **twice**, the compiler will process its contents twice. * This is very likely to cause an error, e.g. when the compiler sees the same structure definition twice. * Even if it does not, it will certainly waste time. you can easily guard the contents of a header file against **multiple inclusions**using the directives for `conditional compiling `The standard way to prevent this is to enclose the entire real contents of the file in a conditional, like this:```cifndef STATISTICS_Hdefine STATISTICS_H/* ... The actual contents of the header file statistics.h are here… */endif /* !STATISTICS_H */```This construct is commonly known as a wrapper **ifndef**. At the first occurrence of a directive to include the file `SumArray.h`, the macro `SUMARRAY_H` isnot yet defined. The preprocessor therefore inserts the contents of the block between`ifndef and endif` — including the definition of the macro `SUMARRAY_H`.When the header is included again, the `ifndef` condition is false, because **SUMARRAY_H** is defined. The preprocessor will skip over the entire contents of the file, and the compiler will not see it twice.> **All header files should have `ifndef` and `endif` guards to prevent multiple inclusion** Compile,Link and RunWe usually compile each of the source files **separately** into object file, and link them together in the later stage
###Code
!gcc -c -o ./obj/statdemo.o ./src/statdemo.c -I./include/
!gcc -c -o ./obj/statistics.o ./src/statistics.c -I./include/
!gcc -o ./bin/statdemo.exe ./obj/statdemo.o ./obj/statistics.o
###Output
_____no_output_____
###Markdown
You could compile **all of them** in a single command:
###Code
!gcc -o ./bin/statdemo ./src/statdemo.c ./src/statistics.c -I./include/
!.\bin\statdemo
###Output
_____no_output_____
###Markdown
Header Files `-I./include/`When compiling the program, the compiler needs the header files to compile the source codes;For each of the headers used in your source (via `include directives`), Since the header's filename is known, the compiler only needs the `directories(path)`.The compiler searches the so-called **include-paths** for these headers. specified via `-Idir` option (or environment variable CPATH). * `-Idir`: The **include-paths** are specified (uppercase `I` followed by the directory path). List the **default** include-paths in your system used by the "GNU C Preprocessor" via `gcc -print-search-dirs`
###Code
!gcc -print-search-dirs
###Output
_____no_output_____
###Markdown
3.4 Makefile
###Code
%%file ./makestatdemo.mk
all: statdemo
statdemo: statobj
gcc -o ./bin/statdemo ./obj/statdemo.o ./obj/statistics.o
statobj:
gcc -c -o ./obj/statdemo.o ./src/statdemo.c -I./include/
gcc -c -o ./obj/statistics.o ./src/statistics.c -I./include/
!make -f ./makestatdemo.mk
!.\bin\statdemo
###Output
_____no_output_____
###Markdown
Using the `variable` in makefileA `variable` begins with a **`$`** and is enclosed within parentheses **(...)** * $(...) Makefile with variable
###Code
%%file ./makestatdemo-var.mk
CC=gcc
SRCDIR= ./src/
OBJDIR= ./obj/
BINDIR= ./bin/
INCDIR=./include/
all: statdemo
statdemo: statobj
$(CC) -o $(BINDIR)statdemo $(OBJDIR)statdemo.o $(OBJDIR)statistics.o
statobj:
$(CC) -c -o $(OBJDIR)statdemo.o $(SRCDIR)statdemo.c -I$(INCDIR)
$(CC) -c -o $(OBJDIR)statistics.o $(SRCDIR)statistics.c -I$(INCDIR)
!make -f makestatdemo-var.mk
!.\bin\statdemo
###Output
_____no_output_____
###Markdown
4 The Advenced Makefile 4.1 Automatic Variables**Automatic variables** are set by make after a rule is matched. There include:```bash$@: the target filename.$<: the first prerequisite filename.$^: All prerequisites with duplicates eliminated``` 4.2 Pattern Rules**A pattern rule**, which uses `pattern matching character` **`%`** as the `filename`, can be applied to create a target, if there is no explicit rule.```bash'%' matches filename.``` 4.3 Functions 4.3.1 Functions for File Nameshttp://www.gnu.org/software/make/manual/html_node/File-Name-Functions.html**$(notdir names…)**Extracts all `but the directory-part` of each file name in names. * remove the `path` of a file nameFor example, ```$(notdir src/statistics.c src/statdemo.c)```produces the result without `path`* `statistics.c statdemo.c`. 4.3.2 String Substitution and Analysishttp://www.gnu.org/software/make/manual/html_node/Text-Functions.html**$(patsubst pattern, replacement, text)**Finds `whitespace-separated` words in text that match pattern and replaces them with replacement. * `text`(whitespace-separated words) -> **match** `pattern` -> **replaces** with `replacement`Here pattern may contain a **`%`** which acts as a `wildcard`, matching any number of any characters within a word. For example, ```$(patsubst %.c, %.o, x.c y.c)```* pattern: %.c* replacement to: %.o* text: `x.c y.c` produces the value `x.o y.o`. 4.5 Example The Advanced makefile for statistics
###Code
%%file ./makestatdemo-adv.mk
CC=gcc
SRCDIR= ./src/
OBJDIR= ./obj/
BINDIR= ./bin/
INCDIR= ./include/
#SRCS=$(SRCDIR)statistics.c \
# $(SRCDIR)statdemo.c
SRCS=$(wildcard $(SRCDIR)stat*.c)
# non-path filename
filename=$(notdir $(SRCS))
# the obj target of a source code using the pattern rule
OBJS=$(patsubst %.c,$(OBJDIR)%.o,$(filename))
all:statdemo
statdemo: $(OBJS)
$(CC) -o $(BINDIR)$@ $^
del .\obj\*.o
$(OBJS):$(SRCS)
$(CC) -o $@ -c $(SRCDIR)$(patsubst %.o,%.c,$(notdir $@)) -I$(INCDIR)
!make -f makestatdemo-adv.mk
!.\bin\statdemo
###Output
_____no_output_____
###Markdown
GNU GCC and Make**The working folder of demo GCC projects is the `./demo`**In this and following noetebooks,we move to the folder of **`./demo`** from the current folders of Jupyter notebook **`/notebook.`**magic command * `%cd` : Change the current working directory.* `%pwd`: Return the absolute path of current working directory
###Code
%cd ./demo
%pwd
###Output
_____no_output_____
###Markdown
1The Brief Introduction to GCC Background: GNU and Free SoftwareThe original GNU C Compiler (GCC) is developed by **Richard Stallman(理查德·斯托曼)**, the founder of the `GNU Project`. * [The GNU Project](https://www.gnu.org/) GNU is a Unit-like operating system that is >The name **GNU** is a recursive acronym for “GNU's Not Unix.” “GNU” is pronounced *g'noo*, as one syllable, like saying “grew” but replacing the r with n.* **free software**—that is, it **respects** users' **freedom**.The development of GNU, started in January 1984, is known as the **GNU Project**.**The GNU Project** aim is to give computer users **freedom** and control in their use of their computers and computing devices, by collaboratively developing and providing software that is based on the following freedom rights: * users are free to run the software, share it (copy, distribute), study it and modify it.GNU software guarantees these freedom-rights legally (via its license), and is therefore free software; the use of the word **free** always being taken to refer to **freedom**. Thus. `free software` is a matter of liberty, `not price`.The development of GNU made it possible to **use a computer without software that would trample your freedom.** Many of the programs in GNU are released under the auspices of the GNU Project. **The Free Software Foundation** : https://www.fsf.org/ * The Free Software Foundation (FSF) is a nonprofit with a worldwide mission to promote computer user `freedom`. We defend the rights of all software users. Free Software and Educationhttps://www.gnu.org/education/education.html**How Does Free Software Relate to Education?**Software freedom plays a **fundamental role** in education. Educational institutions of all levels **should use and teach Free Software** because It is the only software that allows them to **accomplish their essential missions**: * to disseminate human `knowledge` and to prepare students to be good members of their `community`. The **source code and the methods** of Free Software are part of human knowledge.On the contrary, **proprietary software** is `secret, restricted knowledge`, which is the opposite of the mission of educational institutions.Free Software supports education, `proprietary software forbids education.`Free Software is not just a technical question; it is an **ethical, social, and political question**.It is a question of the **human rights** that the **users of software ought to have**.**Freedom** and **cooperation** are **essential values** of Free Software.The GNU System implements these values and the principle of **sharing**, since **sharing** is good and beneficial to **human progress**. 1.1 Installing GCC**GCC: GNU Compiler Collection** : http://gcc.gnu.org/ GCC, formerly for "GNU C Compiler", has grown over times to support many languages such as `C++`, Objective-C, Java, `Fortran` and Ada. It is now referred to as **"GNU Compiler Collection"**. GCC is portable and run in many operating platforms. GCC (and GNU Toolchain) is currently available on all Unixes. They are also ported to **Windows** by `MinGW` and Cygwin. LinuxGCC (GNU Toolchain) is included in all Linux(Unixes). 
Windows **TDM-GCC**TDM-GCC is a compiler suite for Windows.It combines the most recent stable release of the GCC compiler, a few patches for Windows-friendliness, and the free and open-source `MinGW.org` or `MinGW-w64` runtime APIs, to create a more lightweight open-source alternative to Microsoft’s compiler and platform SDK.https://jmeubank.github.io/tdm-gcc/* https://github.com/jmeubank 1.2 Getting StartedThe GNU C and C++ compiler are gcc and g++, respectively.* **gcc** to compile `C` program* **g++** to compile `C++` program **The Simple Demo: Compile/Link a Simple C Program - hello.c** 1.2.1 Make the folders for the GCC projectsGeneraly,We should put all files of `one project under one folder.`and use the folder of project as **current working folder**In the folder,we should setup `the meaningful folders` to management of `different types` of documents conveniently.We set the folder **./demo/** for the **C/C++ programming** .In the `./demo/`,we set the sub-folder of `src`,`obj` and `bin` for the different type files* ./src: the source code * ./include: the header file* ./obj: the compiled output object code/* ./bin: the linked output ```bash ├── │ │ │ │── │ │── │ │ │ │─ *.c/cpp (the Source code) │ │── │ │ │ │─ *.h (the header ) │ │── │ │ │ │─ *.obj/o (the compiled output) │ │── │ │─ *.exe (the linked output) ``` 1.2.2 Compile/Link a Simple C ProgramWe save the code to the location of source code* source code `hello.c` in `./src` Below is the Hello-world C program `hello.c`
###Code
%%file ./src/hello.c
/*
gcc -o hello hello.c
*/
#include <stdio.h>
int main() {
printf("C says Hello, world!\n");
return 0;
}
###Output
Overwriting ./src/hello.c
###Markdown
You need to use **gcc** to `compile` C program:`hello.c`,then `link` to build the output* **-c: source files** `compiles` source files without linking.* **-o: output file** writes the link to build output to the specifies output file name.Use `-o` to set `the specifie output file` to the location`* the Compiled output `hello.o in ./obj/`* the Linked output `hello.exe in the ./bin/`
###Code
!gcc -c -o ./obj/hello.o ./src/hello.c
###Output
_____no_output_____
###Markdown
We have put `hello.o` in `.\obj\` folder.`
###Code
!gcc -o ./bin/hello ./obj/hello.o
###Output
_____no_output_____
###Markdown
We have put `hello.exe` in the `\bin\` folder.` **Compile and Link to build output at the command**
###Code
!gcc -o ./bin/hello ./src/hello.c
###Output
_____no_output_____
###Markdown
Running Under Windows
###Code
!.\bin\hello
###Output
_____no_output_____
###Markdown
1.2.3 Compile/Link a Simple C++ Program - hello.cpp Compile/Link the C++ Program : g++Below is the Hello-world C++ program hello.cpp:
###Code
%%file ./src/hello.cpp
/*
g++ -o hello hello.cpp
*/
#include <iostream>
using namespace std;
int main() {
cout << "C++ Hello, world!" << endl;
return 0;
}
###Output
Overwriting ./src/hello.cpp
###Markdown
use **g++** to compile and link C++ program at one command, as follows
###Code
!g++ -o ./bin/hello ./src/hello.cpp
###Output
_____no_output_____
###Markdown
Running Under Windows
###Code
!.\bin\hello
###Output
_____no_output_____
###Markdown
2. GNU Makehttps://www.gnu.org/software/make/The **"make"** utility automates the mundane aspects of building executable from source code.**"make"** uses a so-called **makefile**, which contains **rules** on how to build the executables.You can issue "make --help" to list the command-line options.
###Code
!make --help
###Output
_____no_output_____
###Markdown
2.1 Create `makefile` fileCreate the following file named **"makefile"** : contains rules and save in the `current` directory. * **`without` any file `extension`**A makefile consists of `a set of rules`(规则) to build the executable. 2.1.1 The rule**A rule** consists of 3 parts:* **a target**(目标), * **a list of pre-requisites**(条件) * **a command**(命令)as follows:```bashtarget: pre-req-1 pre-req-2 ... command```* The **target** and **pre-requisites** are separated by a colon ** : ** .* The **command** must be preceded by **a Tab** (NOT spaces).```bashtarget: pre-req-1 pre-req-2 ...command```The `target` is the file or thing that `must be made`. The `prerequisites or dependents` are those files that must exist before the target can be successfully created. And the `commands` are those shell commands that will create the target from the prerequisites. The `first rule` seen by make is used as the `default` rule.* The standard first target in many makefiles is called **all**. 2.1.2 Comments **``** in a line of a makefile starts a comment. It and the rest of the line are ignored, except that `a trailing backslash`(`\`) not escaped by another backslash will continue the comment across multiple lines. 2.1.3 Makeflie for building helloLet's begin to build the same Hello-world program (`hello.c`) into executable (hello.exe) via make utility.
###Code
%%file ./makefile
# makefile for the hello
all: helloexe
helloexe: helloobj
gcc -o ./bin/hello ./obj/hello.o
helloobj: ./src/hello.c
gcc -c -o ./obj/hello.o ./src/hello.c
###Output
Overwriting ./makefile
###Markdown
Here is a rule for compiling a C file, `./src/hello.c` into an object file, `./obj/hello.o````bashhelloobj: ./src/hello.c gcc -c -o ./obj/hello.o ./src/hello.c```* 1 The `target` helloobj appears before the colon(: ) * 2 The `prerequisites` `./src/hello.c` appear after the colon(:) * 3 The `command script` appears on the following lines and is preceded by a tab character. tab gcc -c -o ./obj/hello.o ./src/hello.c **NOTE** * [VS Code中makefile报错分隔符](https://gitee.com/thermalogic/home/blob/B2021/guide/doc/Problem_Solution.mdvs-code%E4%B8%ADmakefile%E6%8A%A5%E9%94%99%E5%88%86%E9%9A%94%E7%AC%A6) 2.2 Invoking Make When make is invoked, it automatically creates the **first** `target` it sees in makefile 2.2.1 make without argumentThe default nakefile name is: **makefile**, `Makefile`, or `GNUMakefile`. The `makefile` resides in the **same** user’s `current` directory when executing the `make` command. When make is invoked under these conditions, it automatically creates the `default` first target `all` it sees. **invoking make in the terminal of `/demo/` without argument**
###Code
!make
!.\bin\hello
###Output
C says Hello, world!
###Markdown
2.2.3 Running make with target argument We start the target **helloobj** in the makefile
###Code
!make helloobj
###Output
gcc -c -o ./obj/hello.o ./src/hello.c
###Markdown
2.2.4 Compile, Link , `Run` and `clean` **Add `clean` target**```bashclean: del .\obj\hello.o ``` **The target `all`*** Add prerequisites `clean`* Add `running command` ```bashall: helloexe clean ./bin/hello``` then save the makefile as `./makerun.mk`
###Code
%%file ./makerun.mk
all: helloexe clean
./bin/hello
helloexe: helloobj
gcc -o ./bin/hello ./obj/hello.o
helloobj: ./src/hello.c
gcc -c -o ./obj/hello.o ./src/hello.c
clean:
del .\obj\hello.o
###Output
Writing ./makerun.mk
###Markdown
2.2.5 Invoking make with the specified FILE as a makefile* `-f FILE`: Read FILE as a makefile.
###Code
!make -f makerun.mk
###Output
gcc -c -o ./obj/hello.o ./src/hello.c
gcc -o ./bin/hello ./obj/hello.o
del .\obj\hello.o
./bin/hello
C says Hello, world!
###Markdown
3 Compile, Link : Multiple Source Files* 1) The codes of Computing the **mean** of an array * statistics.h * statistics.c * 2) The code file of the caller * statdemo.c **Reference** GNU:GSL* https://github.com/CNMAT/gsl/blob/master/statistics/mean_source.c
###Code
%%file ./include/statistics.h
#ifndef STATISTICS_H
#define STATISTICS_H
double mean(double data[], int size);
//double mean(double *data, int size);
#endif
%%file ./src/statistics.c
#include "statistics.h"
//double mean(double *data, int size)
double mean(double data[], int size)
{
/*
Compute the arithmetic mean of a dataset using the recurrence relation
mean_(n) = mean(n-1) + (data[n] - mean(n-1))/(n+1)
*/
double mean = 0;
for(int i = 0; i < size; i++)
{
mean += (data[i] - mean) / (i + 1);
}
return mean;
}
%%file ./src/statdemo.c
#include <stdio.h>
#include "statistics.h"
int main() {
double a[] = {8, 4, 5, 3, 2};
int length = sizeof(a)/sizeof(double);
printf("mean is %f\n", mean(a,length));
return 0;
}
###Output
Overwriting ./src/statdemo.c
###Markdown
3.3 Preprocessor Directives & Once-Only Headers Preprocessor Directiveshttp://www.cplusplus.com/doc/tutorial/preprocessor/Preprocessor directives(预处理指令) are lines included in the code of programs preceded by a hash sign (****). These lines are not program statements but directives for the preprocessor. The preprocessor examines the code **before actual compilation** of code begins and resolves all these directives before any code is actually generated by regular statements. Once-Only HeadersBecause header files sometimes include one another, it can easily happen that the same file is included **more than once.**For example, suppose the file `statistics.h` contains the line:```cinclude ```Then the source file `statdemo.c` that contains the following `include` directives would include the file `stdio.h` twice, once directly and once indirectly:```cinclude include "statistics.h````If a header file happens to be included **twice**, the compiler will process its contents twice. * This is very likely to cause an error, e.g. when the compiler sees the same structure definition twice. * Even if it does not, it will certainly waste time. you can easily guard the contents of a header file against **multiple inclusions**using the directives for `conditional compiling `The standard way to prevent this is to enclose the entire real contents of the file in a conditional, like this:```cifndef STATISTICS_Hdefine STATISTICS_H/* ... The actual contents of the header file statistics.h are here… */endif /* !STATISTICS_H */```This construct is commonly known as a wrapper **ifndef**. At the first occurrence of a directive to include the file `SumArray.h`, the macro `SUMARRAY_H` isnot yet defined. The preprocessor therefore inserts the contents of the block between`ifndef and endif` — including the definition of the macro `SUMARRAY_H`.When the header is included again, the `ifndef` condition is false, because **SUMARRAY_H** is defined. The preprocessor will skip over the entire contents of the file, and the compiler will not see it twice.> **All header files should have `ifndef` and `endif` guards to prevent multiple inclusion** Compile,Link and RunWe usually compile each of the source files **separately** into object file, and link them together in the later stage
###Code
!gcc -c -o ./obj/statdemo.o ./src/statdemo.c -I./include/
!gcc -c -o ./obj/statistics.o ./src/statistics.c -I./include/
!gcc -o ./bin/statdemo.exe ./obj/statdemo.o ./obj/statistics.o
###Output
_____no_output_____
###Markdown
You could compile **all of them** in a single command:
###Code
!gcc -o ./bin/statdemo ./src/statdemo.c ./src/statistics.c -I./include/
!.\bin\statdemo
###Output
mean is 4.400000
###Markdown
Header Files `-I./include/` When compiling the program, the compiler needs the header files to compile the source code. For each of the headers used in your source (via `#include` directives), the header's filename is already known, so the compiler only needs the `directories (paths)` in which to look for it. The compiler searches the so-called **include-paths** for these headers, specified via the `-Idir` option (or the environment variable CPATH). * `-Idir`: an **include-path** is specified as an uppercase `I` followed by the directory path. List the **default** include-paths in your system used by the "GNU C Preprocessor" via `gcc -print-search-dirs`
###Code
!gcc -print-search-dirs
###Output
_____no_output_____
###Markdown
3.4 Makefile
###Code
%%file ./makestatdemo.mk
all: statdemo
statdemo: statobj
gcc -o ./bin/statdemo ./obj/statdemo.o ./obj/statistics.o
statobj:
gcc -c -o ./obj/statdemo.o ./src/statdemo.c -I./include/
gcc -c -o ./obj/statistics.o ./src/statistics.c -I./include/
!make -f ./makestatdemo.mk
!.\bin\statdemo
###Output
mean is 4.400000
###Markdown
Using `variables` in the makefile A `variable` is referenced with a **`$`** and enclosed within parentheses **(...)**, i.e. `$(...)`. Makefile with variables
###Code
%%file ./makestatdemo-var.mk
CC=gcc
SRCDIR= ./src/
OBJDIR= ./obj/
BINDIR= ./bin/
INCDIR=./include/
all: statdemo
statdemo: statobj
$(CC) -o $(BINDIR)statdemo $(OBJDIR)statdemo.o $(OBJDIR)statistics.o
statobj:
$(CC) -c -o $(OBJDIR)statdemo.o $(SRCDIR)statdemo.c -I$(INCDIR)
$(CC) -c -o $(OBJDIR)statistics.o $(SRCDIR)statistics.c -I$(INCDIR)
!make -f makestatdemo-var.mk
!.\bin\statdemo
###Output
mean is 4.400000
###Markdown
4 The Advanced Makefile 4.1 Automatic Variables**Automatic variables** are set by make after a rule is matched. These include:```bash$@: the target filename.$<: the first prerequisite filename.$^: all prerequisites with duplicates eliminated``` 4.2 Pattern Rules**A pattern rule**, which uses the pattern-matching character **`%`** in the file name, can be applied to create a target if there is no explicit rule.```bash'%' matches the stem of the file name.``` 4.3 Functions 4.3.1 Functions for File Names http://www.gnu.org/software/make/manual/html_node/File-Name-Functions.html**$(notdir names…)**Extracts all `but the directory-part` of each file name in names. * removes the `path` of a file name. For example, ```$(notdir src/statistics.c src/statdemo.c)```produces the result without `path`* `statistics.c statdemo.c`. 4.3.2 String Substitution and Analysis http://www.gnu.org/software/make/manual/html_node/Text-Functions.html**$(patsubst pattern, replacement, text)**Finds `whitespace-separated` words in text that match pattern and replaces them with replacement. * `text`(whitespace-separated words) -> **match** `pattern` -> **replace** with `replacement`Here pattern may contain a **`%`** which acts as a `wildcard`, matching any number of any characters within a word. For example, ```$(patsubst %.c, %.o, x.c y.c)```* pattern: %.c* replacement: %.o* text: `x.c y.c` produces the value `x.o y.o`. 4.5 Example The Advanced makefile for statistics (a small sketch using a pattern rule and these automatic variables is also shown after the example's output below)
###Code
%%file ./makestatdemo-adv.mk
CC=gcc
SRCDIR= ./src/
OBJDIR= ./obj/
BINDIR= ./bin/
INCDIR= ./include/
#SRCS=$(SRCDIR)statistics.c \
# $(SRCDIR)statdemo.c
SRCS=$(wildcard $(SRCDIR)stat*.c)
# non-path filename
filename=$(notdir $(SRCS))
# the obj target of a source code using the pattern rule
OBJS=$(patsubst %.c,$(OBJDIR)%.o,$(filename))
all:statdemo
statdemo: $(OBJS)
$(CC) -o $(BINDIR)$@ $^
del .\obj\*.o
$(OBJS):$(SRCS)
$(CC) -o $@ -c $(SRCDIR)$(patsubst %.o,%.c,$(notdir $@)) -I$(INCDIR)
!make -f makestatdemo-adv.mk
!.\bin\statdemo
###Output
mean is 4.400000
###Markdown
GNU GCC and Make**The working folder of the demo GCC projects is `./demo`**In this and the following notebooks, we move to the folder **`./demo`** from the Jupyter notebook's current folder **`/notebook`**. Magic commands: * `%pwd`: Return the absolute path of the current working directory * `%cd` : Change the current working directory.
###Code
%pwd
%cd ./demo
###Output
_____no_output_____
###Markdown
1 The Brief Introduction to GCC Background: GNU and Free Software The original GNU C Compiler (GCC) was developed by **Richard Stallman**, the founder of the `GNU Project`. * [The GNU Project](https://www.gnu.org/) GNU is a Unix-like operating system that is **free software**—that is, it **respects** users' **freedom**. >The name **GNU** is a recursive acronym for “GNU's Not Unix.” “GNU” is pronounced *g'noo*, as one syllable, like saying “grew” but replacing the r with n. The development of GNU, started in January 1984, is known as the **GNU Project**. **The GNU Project's** aim is to give computer users **freedom** and control in their use of their computers and computing devices, by collaboratively developing and providing software that is based on the following freedom rights: * users are free to run the software, share it (copy, distribute), study it and modify it. GNU software guarantees these freedom rights legally (via its license), and is therefore free software; the word **free** here always refers to **freedom**. Thus, `free software` is a matter of liberty, `not price`. The development of GNU made it possible to **use a computer without software that would trample your freedom.** Many of the programs in GNU are released under the auspices of the GNU Project. **The Free Software Foundation** : https://www.fsf.org/ * The Free Software Foundation (FSF) is a nonprofit with a worldwide mission to promote computer user `freedom`. We defend the rights of all software users. Free Software and Education https://www.gnu.org/education/education.html**How Does Free Software Relate to Education?**Software freedom plays a **fundamental role** in education. Educational institutions of all levels **should use and teach Free Software** because it is the only software that allows them to **accomplish their essential missions**: * to disseminate human `knowledge` and to prepare students to be good members of their `community`. The **source code and the methods** of Free Software are part of human knowledge. On the contrary, **proprietary software** is `secret, restricted knowledge`, which is the opposite of the mission of educational institutions. Free Software supports education; `proprietary software forbids education.` Free Software is not just a technical question; it is an **ethical, social, and political question**. It is a question of the **human rights** that the **users of software ought to have**. **Freedom** and **cooperation** are **essential values** of Free Software. The GNU System implements these values and the principle of **sharing**, since **sharing** is good and beneficial to **human progress**. 1.1 Installing GCC**GCC: GNU Compiler Collection** : http://gcc.gnu.org/ GCC, formerly known as the "GNU C Compiler", has grown over time to support many languages such as `C++`, Objective-C, Java, `Fortran` and Ada. It is now referred to as the **"GNU Compiler Collection"**. GCC is portable and runs on many operating platforms. GCC (and the GNU Toolchain) is currently available on all Unixes. It has also been ported to **Windows** by `MinGW` and Cygwin. Linux GCC (the GNU Toolchain) is included in all Linux (Unix-like) distributions.
Windows **TDM-GCC** TDM-GCC is a compiler suite for Windows. It combines the most recent stable release of the GCC compiler, a few patches for Windows-friendliness, and the free and open-source `MinGW.org` or `MinGW-w64` runtime APIs, to create a more lightweight open-source alternative to Microsoft’s compiler and platform SDK. https://jmeubank.github.io/tdm-gcc/ * https://github.com/jmeubank 1.2 Getting Started The GNU C and C++ compilers are gcc and g++, respectively. * **gcc** to compile a `C` program * **g++** to compile a `C++` program **The Simple Demo: Compile/Link a Simple C Program - hello.c** 1.2.1 Make the folders for the GCC projects Generally, we should put all files of `one project under one folder` and use the project folder as the **current working folder**. In that folder, we should set up `meaningful sub-folders` to manage the `different types` of documents conveniently. We set the folder **./demo/** for the **C/C++ programming**. In `./demo/`, we set the sub-folders `src`, `include`, `obj` and `bin` for the different file types: * ./src: the source code * ./include: the header files * ./obj: the compiled object code * ./bin: the linked output
```bash
demo/
├── src/      *.c/cpp (the source code)
├── include/  *.h (the headers)
├── obj/      *.o/obj (the compiled output)
└── bin/      *.exe (the linked output)
```
1.2.2 Compile/Link a Simple C Program We save the code to the source-code location: * source code `hello.c` in `./src` Below is the Hello-world C program `hello.c`
###Code
%%file ./src/hello.c
/*
gcc -o hello hello.c
*/
#include <stdio.h>
int main() {
printf("C says Hello, world!\n");
return 0;
}
###Output
_____no_output_____
###Markdown
You need to use **gcc** to `compile` the C program `hello.c`, then `link` to build the output. * **-c**: `compiles` the source files **without** linking. * **-o**: writes the build output to the specified output file name. Use `-o` to place the output files in the right locations: * the compiled output `hello.o` in `./obj/` * the linked output `hello.exe` in `./bin/`
###Code
!gcc -c -o ./obj/hello.o ./src/hello.c
###Output
_____no_output_____
###Markdown
We have put `hello.o` in the `.\obj\` folder.
###Code
!gcc -o ./bin/hello ./obj/hello.o
###Output
_____no_output_____
###Markdown
We have put `hello.exe` in the `.\bin\` folder. **Compile and link to build the output in one command**
###Code
!gcc -o ./bin/hello ./src/hello.c
###Output
_____no_output_____
###Markdown
Running Under Windows
###Code
!.\bin\hello
###Output
_____no_output_____
###Markdown
1.2.3 Compile/Link a Simple C++ Program - hello.cpp Compile/Link the C++ Program : g++Below is the Hello-world C++ program hello.cpp:
###Code
%%file ./src/hello.cpp
/*
g++ -o hello hello.cpp
*/
#include <iostream>
using namespace std;
int main() {
cout << "C++ Hello, world!" << endl;
return 0;
}
###Output
_____no_output_____
###Markdown
use **g++** to compile and link C++ program at one command, as follows
###Code
!g++ -o ./bin/hello ./src/hello.cpp
###Output
_____no_output_____
###Markdown
Running Under Windows
###Code
!.\bin\hello
###Output
_____no_output_____
###Markdown
2. GNU Make https://www.gnu.org/software/make/ The **"make"** utility automates the mundane aspects of building executables from source code. **"make"** uses a so-called **makefile**, which contains **rules** on how to build the executables. You can issue "make --help" to list the command-line options.
###Code
!make --help
###Output
_____no_output_____
###Markdown
2.1 Create the `makefile` file Create the following file named **"makefile"** : it contains the rules; save it in the `current` directory. * **`without` any file `extension`** A makefile consists of `a set of rules` to build the executable. 2.1.1 The rule **A rule** consists of 3 parts: * **a target**, * **a list of pre-requisites**, * **a command**, as follows:
```bash
target: pre-req-1 pre-req-2 ...
	command
	command
```
* The **target** and **pre-requisites** are separated by a colon ( : ). * The **command** must be preceded by **a Tab** (NOT spaces). The `target` is the file or thing that `must be made`. The `prerequisites or dependents` are those files that must exist before the target can be successfully created. And the `commands` are those shell commands that will create the target from the prerequisites. The `first rule` seen by make is used as the `default` rule. * The standard first target in many makefiles is called **all**. 2.1.2 Comments A **`#`** in a line of a makefile starts a comment. It and the rest of the line are ignored, except that `a trailing backslash` (`\`) not escaped by another backslash will continue the comment across multiple lines. 2.1.3 Makefile for building hello Let's begin to build the same Hello-world program (`hello.c`) into an executable (hello.exe) via the make utility.
###Code
%%file ./makefile
# makefile for the hello
all: helloexe
helloexe: helloobj
gcc -o ./bin/hello ./obj/hello.o
helloobj: ./src/hello.c
gcc -c -o ./obj/hello.o ./src/hello.c
###Output
_____no_output_____
###Markdown
Here is a rule for compiling a C file, `./src/hello.c`, into an object file, `./obj/hello.o`:
```bash
helloobj: ./src/hello.c
	gcc -c -o ./obj/hello.o ./src/hello.c
```
* 1 The `target` helloobj appears before the colon (:) * 2 The `prerequisite` `./src/hello.c` appears after the colon (:) * 3 The `command script` appears on the following line and is preceded by a tab character: tab gcc -c -o ./obj/hello.o ./src/hello.c **NOTE** * [Makefile "missing separator" error in VS Code](https://gitee.com/thermalogic/sees/blob/B2022/guide/doc/Problem_Solution.mdvs-code%E4%B8%ADmakefile%E6%8A%A5%E9%94%99%E5%88%86%E9%9A%94%E7%AC%A6)```*** missing separator. Stop.``` 2.2 Invoking Make When make is invoked, it automatically creates the **first** `target` it sees in the makefile. 2.2.1 make without arguments The default makefile name is **makefile**, `Makefile`, or `GNUMakefile`. The `makefile` must reside in the user's `current` directory when executing the `make` command. When make is invoked under these conditions, it automatically creates the `default` first target `all` it sees. **Invoking make in the terminal of `/demo/` without arguments**
###Code
!make
!.\bin\hello
###Output
_____no_output_____
###Markdown
2.2.3 Running make with a target argument We build the target **helloobj** in the makefile
###Code
!make helloobj
###Output
_____no_output_____
###Markdown
2.2.4 Compile, Link , `Run` and `clean` **Add `clean` target**```bashclean: del .\obj\hello.o ``` **The target `all`*** Add prerequisites `clean`* Add `running command` ```bashall: helloexe clean ./bin/hello``` then save the makefile as `./makerun.mk`
###Code
%%file ./makerun.mk
all: helloexe clean
./bin/hello
helloexe: helloobj
gcc -o ./bin/hello ./obj/hello.o
helloobj: ./src/hello.c
gcc -c -o ./obj/hello.o ./src/hello.c
clean:
del .\obj\hello.o
###Output
_____no_output_____
###Markdown
2.2.5 Invoking make with the specified FILE as a makefile* `-f FILE`: Read FILE as a makefile.
###Code
!make -f makerun.mk
###Output
_____no_output_____
###Markdown
3 Compile, Link : Multiple Source Files* 1) The codes of Computing the **mean** of an array * statistics.h * statistics.c * 2) The code file of the caller * statdemo.c **Reference** GNU:GSL* https://github.com/CNMAT/gsl/blob/master/statistics/mean_source.c
###Code
%%file ./include/statistics.h
#ifndef STATISTICS_H
#define STATISTICS_H
double mean(double data[], int size);
#endif
%%file ./src/statistics.c
#include "statistics.h"
double mean(double data[], int size)
{
/*
Compute the arithmetic mean of a dataset using the recurrence relation
mean_(n) = mean(n-1) + (data[n] - mean(n-1))/(n+1)
*/
double mean = 0;
for(int i = 0; i < size; i++)
{
mean += (data[i] - mean) / (i + 1);
}
return mean;
}
%%file ./src/statdemo.c
#include <stdio.h>
#include "statistics.h"
int main() {
double a[] = {8, 4, 5, 3, 2};
int length = sizeof(a)/sizeof(double);
printf("mean is %f\n", mean(a,length));
return 0;
}
###Output
_____no_output_____
###Markdown
3.3 Preprocessor Directives & Once-Only Headers Preprocessor Directives http://www.cplusplus.com/doc/tutorial/preprocessor/ Preprocessor directives are lines included in the code of programs preceded by a hash sign (`#`). These lines are not program statements but directives for the preprocessor. The preprocessor examines the code **before actual compilation** begins and resolves all these directives before any code is actually generated by regular statements. Once-Only Headers Because header files sometimes include one another, it can easily happen that the same file is included **more than once.** For example, suppose the file `statistics.h` contains the line
```c
#include <stdio.h>
```
Then the source file `statdemo.c` that contains the following `#include` directives would include the file `stdio.h` twice, once directly and once indirectly:
```c
#include <stdio.h>
#include "statistics.h"
```
If a header file happens to be included **twice**, the compiler will process its contents twice. * This is very likely to cause an error, e.g. when the compiler sees the same structure definition twice. * Even if it does not, it will certainly waste time. You can easily guard the contents of a header file against **multiple inclusions** using the directives for `conditional compilation`. The standard way to prevent this is to enclose the entire real contents of the file in a conditional, like this:
```c
#ifndef STATISTICS_H
#define STATISTICS_H
/* ... The actual contents of the header file statistics.h are here ... */
#endif /* !STATISTICS_H */
```
This construct is commonly known as a wrapper **#ifndef**. At the first occurrence of a directive to include the file `statistics.h`, the macro `STATISTICS_H` is not yet defined. The preprocessor therefore inserts the contents of the block between `#ifndef` and `#endif` — including the definition of the macro `STATISTICS_H`. When the header is included again, the `#ifndef` condition is false, because **STATISTICS_H** is defined. The preprocessor will skip over the entire contents of the file, and the compiler will not see it twice.> **All header files should have `#ifndef` and `#endif` guards to prevent multiple inclusion** Compile, Link and Run We usually compile each of the source files **separately** into object files, and link them together in a later stage
###Code
!gcc -c -o ./obj/statdemo.o ./src/statdemo.c -I./include/
!gcc -c -o ./obj/statistics.o ./src/statistics.c -I./include/
!gcc -o ./bin/statdemo.exe ./obj/statdemo.o ./obj/statistics.o
###Output
_____no_output_____
###Markdown
Alternatively, you could compile **all of them** into an executable in a single command:
###Code
!gcc -o ./bin/statdemo ./src/statdemo.c ./src/statistics.c -I./include/
!.\bin\statdemo
###Output
_____no_output_____
###Markdown
Header Files `-I./include/` When compiling the program, the compiler needs the header files to compile the source code. For each of the headers used in your source (via `#include` directives), the header's filename is already known, so the compiler only needs the `directories (paths)` in which to look for it. The compiler searches the so-called **include-paths** for these headers, specified via the `-Idir` option (or the environment variable CPATH). * `-Idir`: an **include-path** is specified as an uppercase `I` followed by the directory path. List the **default** include-paths in your system used by the "GNU C Preprocessor" via `gcc -print-search-dirs`
###Code
!gcc -print-search-dirs
###Output
_____no_output_____
###Markdown
3.4 Makefile
###Code
%%file ./makestatdemo.mk
all: statdemo
statdemo: statobj
gcc -o ./bin/statdemo ./obj/statdemo.o ./obj/statistics.o
statobj: ./src/statistics.c ./src/statdemo.c
gcc -c -o ./obj/statdemo.o ./src/statdemo.c -I./include/
gcc -c -o ./obj/statistics.o ./src/statistics.c -I./include/
!make -f ./makestatdemo.mk
!.\bin\statdemo
###Output
_____no_output_____
###Markdown
Variables A `variable` is referenced with a **`$`** and enclosed within parentheses **(...)**, i.e. `$(...)`
###Code
%%file ./makestatdemo-var.mk
CC=gcc
SRCDIR= ./src/
OBJDIR= ./obj/
BINDIR= ./bin/
INCDIR=./include/
all: statdemo
statdemo: statobj
$(CC) -o $(BINDIR)statdemo $(OBJDIR)statdemo.o $(OBJDIR)statistics.o
statobj: $(SRCDIR)statdemo.c $(SRCDIR)statistics.c
$(CC) -c -o $(OBJDIR)statistics.o $(SRCDIR)statistics.c -I$(INCDIR)
$(CC) -c -o $(OBJDIR)statdemo.o $(SRCDIR)statdemo.c -I$(INCDIR)
!make -f makestatdemo-var.mk
!.\bin\statdemo
###Output
_____no_output_____
###Markdown
Automatic Variables **Automatic variables** are set by make after a rule is matched. These include:```bash$@: the target filename.$^: all prerequisites with duplicates eliminated```
###Code
%%file ./makestatdemo-autovar.mk
CC=gcc
SRCDIR= ./src/
BINDIR= ./bin/
INCDIR=./include/
SRC=$(SRCDIR)statdemo.c $(SRCDIR)statistics.c
#SRCS=$(SRCDIR)statistics.c \
# $(SRCDIR)statdemo.c
EXE=$(BINDIR)statdemo
all: $(EXE)
$(EXE): $(SRC)
$(CC) -o $@ $^ -I$(INCDIR)
!make -f makestatdemo-autovar.mk
!.\bin\statdemo
###Output
_____no_output_____
###Markdown
Wildcard ```$(wildcard pattern)``` The argument **pattern** is a file name pattern, typically containing **wildcard** characters. The result of wildcard is a space-separated list of the names of existing files that match the **pattern**. For example, ```SRCS=$(wildcard ./src/stat*.c)``` produces ```./src/statistics.c ./src/statdemo.c```
###Code
%%file ./makestatdemo-wildcard.mk
CC=gcc
SRCDIR= ./src/
BINDIR= ./bin/
INCDIR= ./include/
#SRCS=$(SRCDIR)statistics.c \
# $(SRCDIR)statdemo.c
SRCS=$(wildcard $(SRCDIR)stat*.c)
all:statdemo
statdemo: $(SRCS)
$(CC) -o $(BINDIR)$@ $^ -I$(INCDIR)
!make -f makestatdemo-wildcard.mk
!.\bin\statdemo
###Output
_____no_output_____
|
code/7_Decision_tree.ipynb
|
###Markdown
Decision tree vs Logistic Regression for classification problemshttps://dzone.com/articles/logistic-regression-vs-decision-tree 
###Code
import pandas as pd
from sklearn import tree
from sklearn.preprocessing import LabelEncoder

df = pd.read_csv('../datasets/titanic.csv')
df.head()
inputs = df[['Pclass','Sex','Age','Fare']]
target = df['Survived']
le_sex = LabelEncoder()
inputs['sex_n'] = le_sex.fit_transform(inputs['Sex'])
inputs.head()
inputs_n = inputs.drop('Sex',axis='columns')
inputs_n.head()
model_ex = tree.DecisionTreeClassifier()
from sklearn.model_selection import train_test_split
inputs_n.isna().any()
inputs_n['Age'].fillna(inputs_n['Age'].mean(),inplace=True)
inputs_n.isna().any()
X_train, X_test, y_train, y_test = train_test_split(inputs_n,target,test_size=0.2)
model_ex.fit(X_train,y_train)
model_ex.score(X_test,y_test)
model_entr = tree.DecisionTreeClassifier(criterion='entropy')
model_entr.fit(X_train,y_train)
model_entr.score(X_test,y_test)
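# Hypothetical extension (not in the original notebook): fit a logistic regression
# on the same train/test split, to make the decision-tree vs logistic-regression
# comparison discussed in the linked article concrete.
from sklearn.linear_model import LogisticRegression
model_lr = LogisticRegression(max_iter=1000)
model_lr.fit(X_train, y_train)
model_lr.score(X_test, y_test)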
###Output
_____no_output_____
|
BFS/1218/127. Word Ladder.ipynb
|
###Markdown
Transform each letter of beginWord in turn; the candidate replacement words are taken from wordList and then pushed onto a deque for BFS.
###Code
from collections import deque, defaultdict
class Solution:
def ladderLength(self, beginWord: str, endWord: str, wordList):
def check(a, b):
            # check whether the two words differ by exactly one letter
diff = 0
for i in range(len(a)):
if a[i] != b[i]:
diff += 1
if diff > 1:
return False
return True
if endWord not in wordList:
return 0
        # find, for every word, all words reachable by changing exactly one letter
word_dict = defaultdict(list)
for w in wordList:
if check(beginWord, w):
word_dict[beginWord].append(w)
for i in range(len(wordList)):
for j in range(i+1, len(wordList)):
if check(wordList[i], wordList[j]):
word_dict[wordList[i]].append(wordList[j])
word_dict[wordList[j]].append(wordList[i])
n = len(beginWord)
dq = deque([beginWord])
cnt = 1
seen = set()
while dq:
for _ in range(len(dq)):
word = dq.popleft()
seen.add(word)
for w in word_dict[word]:
if w in seen:
continue
if w == endWord:
return cnt + 1
dq.append(w)
if dq:
cnt += 1
return 0
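# Hypothetical quick check (not part of the original notebook), using the classic
# LeetCode example: "hit" -> "hot" -> "dot" -> "dog" -> "cog" has ladder length 5.
print(Solution().ladderLength("hit", "cog", ["hot", "dot", "dog", "lot", "log", "cog"]))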
from collections import deque, defaultdict
class Solution:
def ladderLength(self, beginWord: str, endWord: str, wordList):
if endWord not in wordList:
return 0
        word_dict = defaultdict(list)  # maps each word to all words reachable by changing one letter
n = len(beginWord)
k = len(wordList)
word_set = set(wordList)
beg_word = list(beginWord)
for i in range(n):
for j in range(97, 123):
if chr(j) == beginWord[i]:
continue
beg_word[i] = chr(j)
res = ''.join(beg_word)
if res in word_set:
word_dict[beginWord].append(res)
beg_word[i] = beginWord[i]
for word in wordList:
w_list = list(word)
for i in range(n):
for j in range(97, 123):
if chr(j) == word[i]:
continue
w_list[i] = chr(j)
res = ''.join(w_list)
if res in word_set:
word_dict[word].append(res)
w_list[i] = word[i]
dq = deque([beginWord])
cnt = 1
seen = set()
while dq:
for _ in range(len(dq)):
word = dq.popleft()
seen.add(word)
for w in word_dict[word]:
if w in seen:
continue
if w == endWord:
return cnt + 1
dq.append(w)
if dq:
cnt += 1
print(cnt)
return 0
from collections import deque, defaultdict
class Solution:
def ladderLength(self, beginWord: str, endWord: str, wordList):
if endWord not in wordList:
return 0
        word_dict = defaultdict(list)  # maps each word to all words reachable by changing one letter
n = len(beginWord)
k = len(wordList)
word_set = set(wordList)
wordList.append(beginWord)
for word in wordList:
w_list = list(word)
for i in range(n):
for j in range(97, 123):
if chr(j) == word[i]:
continue
w_list[i] = chr(j)
res = ''.join(w_list)
if res in word_set:
word_dict[word].append(res)
w_list[i] = word[i]
dq = deque([beginWord])
cnt = 1
seen = set()
while dq:
for _ in range(len(dq)):
word = dq.popleft()
seen.add(word)
for w in word_dict[word]:
if w in seen:
continue
if w == endWord:
return cnt + 1
dq.append(w)
if dq:
cnt += 1
return 0
solution = Solution()
solution.ladderLength("sand",
"acne",
["slit","bunk","wars","ping","viva","wynn","wows","irks","gang","pool","mock","fort","heel","send","ship","cols","alec","foal","nabs","gaze","giza","mays","dogs","karo","cums","jedi","webb","lend","mire","jose","catt","grow","toss","magi","leis","bead","kara","hoof","than","ires","baas","vein","kari","riga","oars","gags","thug","yawn","wive","view","germ","flab","july","tuck","rory","bean","feed","rhee","jeez","gobs","lath","desk","yoko","cute","zeus","thus","dims","link","dirt","mara","disc","limy","lewd","maud","duly","elsa","hart","rays","rues","camp","lack","okra","tome","math","plug","monk","orly","friz","hogs","yoda","poop","tick","plod","cloy","pees","imps","lead","pope","mall","frey","been","plea","poll","male","teak","soho","glob","bell","mary","hail","scan","yips","like","mull","kory","odor","byte","kaye","word","honk","asks","slid","hopi","toke","gore","flew","tins","mown","oise","hall","vega","sing","fool","boat","bobs","lain","soft","hard","rots","sees","apex","chan","told","woos","unit","scow","gilt","beef","jars","tyre","imus","neon","soap","dabs","rein","ovid","hose","husk","loll","asia","cope","tail","hazy","clad","lash","sags","moll","eddy","fuel","lift","flog","land","sigh","saks","sail","hook","visa","tier","maws","roeg","gila","eyes","noah","hypo","tore","eggs","rove","chap","room","wait","lurk","race","host","dada","lola","gabs","sobs","joel","keck","axed","mead","gust","laid","ends","oort","nose","peer","kept","abet","iran","mick","dead","hags","tens","gown","sick","odis","miro","bill","fawn","sumo","kilt","huge","ores","oran","flag","tost","seth","sift","poet","reds","pips","cape","togo","wale","limn","toll","ploy","inns","snag","hoes","jerk","flux","fido","zane","arab","gamy","raze","lank","hurt","rail","hind","hoot","dogy","away","pest","hoed","pose","lose","pole","alva","dino","kind","clan","dips","soup","veto","edna","damp","gush","amen","wits","pubs","fuzz","cash","pine","trod","gunk","nude","lost","rite","cory","walt","mica","cart","avow","wind","book","leon","life","bang","draw","leek","skis","dram","ripe","mine","urea","tiff","over","gale","weir","defy","norm","tull","whiz","gill","ward","crag","when","mill","firs","sans","flue","reid","ekes","jain","mutt","hems","laps","piss","pall","rowe","prey","cull","knew","size","wets","hurl","wont","suva","girt","prys","prow","warn","naps","gong","thru","livy","boar","sade","amok","vice","slat","emir","jade","karl","loyd","cerf","bess","loss","rums","lats","bode","subs","muss","maim","kits","thin","york","punt","gays","alpo","aids","drag","eras","mats","pyre","clot","step","oath","lout","wary","carp","hums","tang","pout","whip","fled","omar","such","kano","jake","stan","loop","fuss","mini","byrd","exit","fizz","lire","emil","prop","noes","awed","gift","soli","sale","gage","orin","slur","limp","saar","arks","mast","gnat","port","into","geed","pave","awls","cent","cunt","full","dint","hank","mate","coin","tars","scud","veer","coax","bops","uris","loom","shod","crib","lids","drys","fish","edit","dick","erna","else","hahs","alga","moho","wire","fora","tums","ruth","bets","duns","mold","mush","swop","ruby","bolt","nave","kite","ahem","brad","tern","nips","whew","bait","ooze","gino","yuck","drum","shoe","lobe","dusk","cult","paws","anew","dado","nook","half","lams","rich","cato","java","kemp","vain","fees","sham","auks","gish","fire","elam","salt","sour","loth","whit","yogi","shes","scam","yous","lucy","inez","geld","whig","thee","kelp","loaf","harm","tomb","ever","airs","page","laud","stun","paid","goop","cobs","judy","grab
","doha","crew","item","fogs","tong","blip","vest","bran","wend","bawl","feel","jets","mixt","tell","dire","devi","milo","deng","yews","weak","mark","doug","fare","rigs","poke","hies","sian","suez","quip","kens","lass","zips","elva","brat","cosy","teri","hull","spun","russ","pupa","weed","pulp","main","grim","hone","cord","barf","olav","gaps","rote","wilt","lars","roll","balm","jana","give","eire","faun","suck","kegs","nita","weer","tush","spry","loge","nays","heir","dope","roar","peep","nags","ates","bane","seas","sign","fred","they","lien","kiev","fops","said","lawn","lind","miff","mass","trig","sins","furl","ruin","sent","cray","maya","clog","puns","silk","axis","grog","jots","dyer","mope","rand","vend","keen","chou","dose","rain","eats","sped","maui","evan","time","todd","skit","lief","sops","outs","moot","faze","biro","gook","fill","oval","skew","veil","born","slob","hyde","twin","eloy","beat","ergs","sure","kobe","eggo","hens","jive","flax","mons","dunk","yest","begs","dial","lodz","burp","pile","much","dock","rene","sago","racy","have","yalu","glow","move","peps","hods","kins","salk","hand","cons","dare","myra","sega","type","mari","pelt","hula","gulf","jugs","flay","fest","spat","toms","zeno","taps","deny","swag","afro","baud","jabs","smut","egos","lara","toes","song","fray","luis","brut","olen","mere","ruff","slum","glad","buds","silt","rued","gelt","hive","teem","ides","sink","ands","wisp","omen","lyre","yuks","curb","loam","darn","liar","pugs","pane","carl","sang","scar","zeds","claw","berg","hits","mile","lite","khan","erik","slug","loon","dena","ruse","talk","tusk","gaol","tads","beds","sock","howe","gave","snob","ahab","part","meir","jell","stir","tels","spit","hash","omit","jinx","lyra","puck","laue","beep","eros","owed","cede","brew","slue","mitt","jest","lynx","wads","gena","dank","volt","gray","pony","veld","bask","fens","argo","work","taxi","afar","boon","lube","pass","lazy","mist","blot","mach","poky","rams","sits","rend","dome","pray","duck","hers","lure","keep","gory","chat","runt","jams","lays","posy","bats","hoff","rock","keri","raul","yves","lama","ramp","vote","jody","pock","gist","sass","iago","coos","rank","lowe","vows","koch","taco","jinn","juno","rape","band","aces","goal","huck","lila","tuft","swan","blab","leda","gems","hide","tack","porn","scum","frat","plum","duds","shad","arms","pare","chin","gain","knee","foot","line","dove","vera","jays","fund","reno","skid","boys","corn","gwyn","sash","weld","ruiz","dior","jess","leaf","pars","cote","zing","scat","nice","dart","only","owls","hike","trey","whys","ding","klan","ross","barb","ants","lean","dopy","hock","tour","grip","aldo","whim","prom","rear","dins","duff","dell","loch","lava","sung","yank","thar","curl","venn","blow","pomp","heat","trap","dali","nets","seen","gash","twig","dads","emmy","rhea","navy","haws","mite","bows","alas","ives","play","soon","doll","chum","ajar","foam","call","puke","kris","wily","came","ales","reef","raid","diet","prod","prut","loot","soar","coed","celt","seam","dray","lump","jags","nods","sole","kink","peso","howl","cost","tsar","uric","sore","woes","sewn","sake","cask","caps","burl","tame","bulk","neva","from","meet","webs","spar","fuck","buoy","wept","west","dual","pica","sold","seed","gads","riff","neck","deed","rudy","drop","vale","flit","romp","peak","jape","jews","fain","dens","hugo","elba","mink","town","clam","feud","fern","dung","newt","mime","deem","inti","gigs","sosa","lope","lard","cara","smug","lego","flex","doth","paar","moon","wren","tale","kant","eels","muck","tog
a","zens","lops","duet","coil","gall","teal","glib","muir","ails","boer","them","rake","conn","neat","frog","trip","coma","must","mono","lira","craw","sled","wear","toby","reel","hips","nate","pump","mont","died","moss","lair","jibe","oils","pied","hobs","cads","haze","muse","cogs","figs","cues","roes","whet","boru","cozy","amos","tans","news","hake","cots","boas","tutu","wavy","pipe","typo","albs","boom","dyke","wail","woke","ware","rita","fail","slab","owes","jane","rack","hell","lags","mend","mask","hume","wane","acne","team","holy","runs","exes","dole","trim","zola","trek","puma","wacs","veep","yaps","sums","lush","tubs","most","witt","bong","rule","hear","awry","sots","nils","bash","gasp","inch","pens","fies","juts","pate","vine","zulu","this","bare","veal","josh","reek","ours","cowl","club","farm","teat","coat","dish","fore","weft","exam","vlad","floe","beak","lane","ella","warp","goth","ming","pits","rent","tito","wish","amps","says","hawk","ways","punk","nark","cagy","east","paul","bose","solo","teed","text","hews","snip","lips","emit","orgy","icon","tuna","soul","kurd","clod","calk","aunt","bake","copy","acid","duse","kiln","spec","fans","bani","irma","pads","batu","logo","pack","oder","atop","funk","gide","bede","bibs","taut","guns","dana","puff","lyme","flat","lake","june","sets","gull","hops","earn","clip","fell","kama","seal","diaz","cite","chew","cuba","bury","yard","bank","byes","apia","cree","nosh","judo","walk","tape","taro","boot","cods","lade","cong","deft","slim","jeri","rile","park","aeon","fact","slow","goff","cane","earp","tart","does","acts","hope","cant","buts","shin","dude","ergo","mode","gene","lept","chen","beta","eden","pang","saab","fang","whir","cove","perk","fads","rugs","herb","putt","nous","vane","corm","stay","bids","vela","roof","isms","sics","gone","swum","wiry","cram","rink","pert","heap","sikh","dais","cell","peel","nuke","buss","rasp","none","slut","bent","dams","serb","dork","bays","kale","cora","wake","welt","rind","trot","sloe","pity","rout","eves","fats","furs","pogo","beth","hued","edam","iamb","glee","lute","keel","airy","easy","tire","rube","bogy","sine","chop","rood","elbe","mike","garb","jill","gaul","chit","dons","bars","ride","beck","toad","make","head","suds","pike","snot","swat","peed","same","gaza","lent","gait","gael","elks","hang","nerf","rosy","shut","glop","pain","dion","deaf","hero","doer","wost","wage","wash","pats","narc","ions","dice","quay","vied","eons","case","pour","urns","reva","rags","aden","bone","rang","aura","iraq","toot","rome","hals","megs","pond","john","yeps","pawl","warm","bird","tint","jowl","gibe","come","hold","pail","wipe","bike","rips","eery","kent","hims","inks","fink","mott","ices","macy","serf","keys","tarp","cops","sods","feet","tear","benz","buys","colo","boil","sews","enos","watt","pull","brag","cork","save","mint","feat","jamb","rubs","roxy","toys","nosy","yowl","tamp","lobs","foul","doom","sown","pigs","hemp","fame","boor","cube","tops","loco","lads","eyre","alta","aged","flop","pram","lesa","sawn","plow","aral","load","lied","pled","boob","bert","rows","zits","rick","hint","dido","fist","marc","wuss","node","smog","nora","shim","glut","bale","perl","what","tort","meek","brie","bind","cake","psst","dour","jove","tree","chip","stud","thou","mobs","sows","opts","diva","perm","wise","cuds","sols","alan","mild","pure","gail","wins","offs","nile","yelp","minn","tors","tran","homy","sadr","erse","nero","scab","finn","mich","turd","then","poem","noun","oxus","brow","door","saws","eben","wart","wand","rosa","le
ft","lina","cabs","rapt","olin","suet","kalb","mans","dawn","riel","temp","chug","peal","drew","null","hath","many","took","fond","gate","sate","leak","zany","vans","mart","hess","home","long","dirk","bile","lace","moog","axes","zone","fork","duct","rico","rife","deep","tiny","hugh","bilk","waft","swig","pans","with","kern","busy","film","lulu","king","lord","veda","tray","legs","soot","ells","wasp","hunt","earl","ouch","diem","yell","pegs","blvd","polk","soda","zorn","liza","slop","week","kill","rusk","eric","sump","haul","rims","crop","blob","face","bins","read","care","pele","ritz","beau","golf","drip","dike","stab","jibs","hove","junk","hoax","tats","fief","quad","peat","ream","hats","root","flak","grit","clap","pugh","bosh","lock","mute","crow","iced","lisa","bela","fems","oxes","vies","gybe","huff","bull","cuss","sunk","pups","fobs","turf","sect","atom","debt","sane","writ","anon","mayo","aria","seer","thor","brim","gawk","jack","jazz","menu","yolk","surf","libs","lets","bans","toil","open","aced","poor","mess","wham","fran","gina","dote","love","mood","pale","reps","ines","shot","alar","twit","site","dill","yoga","sear","vamp","abel","lieu","cuff","orbs","rose","tank","gape","guam","adar","vole","your","dean","dear","hebe","crab","hump","mole","vase","rode","dash","sera","balk","lela","inca","gaea","bush","loud","pies","aide","blew","mien","side","kerr","ring","tess","prep","rant","lugs","hobo","joke","odds","yule","aida","true","pone","lode","nona","weep","coda","elmo","skim","wink","bras","pier","bung","pets","tabs","ryan","jock","body","sofa","joey","zion","mace","kick","vile","leno","bali","fart","that","redo","ills","jogs","pent","drub","slaw","tide","lena","seep","gyps","wave","amid","fear","ties","flan","wimp","kali","shun","crap","sage","rune","logs","cain","digs","abut","obit","paps","rids","fair","hack","huns","road","caws","curt","jute","fisk","fowl","duty","holt","miss","rude","vito","baal","ural","mann","mind","belt","clem","last","musk","roam","abed","days","bore","fuze","fall","pict","dump","dies","fiat","vent","pork","eyed","docs","rive","spas","rope","ariz","tout","game","jump","blur","anti","lisp","turn","sand","food","moos","hoop","saul","arch","fury","rise","diss","hubs","burs","grid","ilks","suns","flea","soil","lung","want","nola","fins","thud","kidd","juan","heps","nape","rash","burt","bump","tots","brit","mums","bole","shah","tees","skip","limb","umps","ache","arcs","raft","halo","luce","bahs","leta","conk","duos","siva","went","peek","sulk","reap","free","dubs","lang","toto","hasp","ball","rats","nair","myst","wang","snug","nash","laos","ante","opal","tina","pore","bite","haas","myth","yugo","foci","dent","bade","pear","mods","auto","shop","etch","lyly","curs","aron","slew","tyro","sack","wade","clio","gyro","butt","icky","char","itch","halt","gals","yang","tend","pact","bees","suit","puny","hows","nina","brno","oops","lick","sons","kilo","bust","nome","mona","dull","join","hour","papa","stag","bern","wove","lull","slip","laze","roil","alto","bath","buck","alma","anus","evil","dumb","oreo","rare","near","cure","isis","hill","kyle","pace","comb","nits","flip","clop","mort","thea","wall","kiel","judd","coop","dave","very","amie","blah","flub","talc","bold","fogy","idea","prof","horn","shoo","aped","pins","helm","wees","beer","womb","clue","alba","aloe","fine","bard","limo","shaw","pint","swim","dust","indy","hale","cats","troy","wens","luke","vern","deli","both","brig","daub","sara","sued","bier","noel","olga","dupe","look","pisa","knox","murk","dame","matt","g
old","jame","toge","luck","peck","tass","calf","pill","wore","wadi","thur","parr","maul","tzar","ones","lees","dark","fake","bast","zoom","here","moro","wine","bums","cows","jean","palm","fume","plop","help","tuba","leap","cans","back","avid","lice","lust","polo","dory","stew","kate","rama","coke","bled","mugs","ajax","arts","drug","pena","cody","hole","sean","deck","guts","kong","bate","pitt","como","lyle","siam","rook","baby","jigs","bret","bark","lori","reba","sups","made","buzz","gnaw","alps","clay","post","viol","dina","card","lana","doff","yups","tons","live","kids","pair","yawl","name","oven","sirs","gyms","prig","down","leos","noon","nibs","cook","safe","cobb","raja","awes","sari","nerd","fold","lots","pete","deal","bias","zeal","girl","rage","cool","gout","whey","soak","thaw","bear","wing","nagy","well","oink","sven","kurt","etna","held","wood","high","feta","twee","ford","cave","knot","tory","ibis","yaks","vets","foxy","sank","cone","pius","tall","seem","wool","flap","gird","lore","coot","mewl","sere","real","puts","sell","nuts","foil","lilt","saga","heft","dyed","goat","spew","daze","frye","adds","glen","tojo","pixy","gobi","stop","tile","hiss","shed","hahn","baku","ahas","sill","swap","also","carr","manx","lime","debs","moat","eked","bola","pods","coon","lacy","tube","minx","buff","pres","clew","gaff","flee","burn","whom","cola","fret","purl","wick","wigs","donn","guys","toni","oxen","wite","vial","spam","huts","vats","lima","core","eula","thad","peon","erie","oats","boyd","cued","olaf","tams","secs","urey","wile","penn","bred","rill","vary","sues","mail","feds","aves","code","beam","reed","neil","hark","pols","gris","gods","mesa","test","coup","heed","dora","hied","tune","doze","pews","oaks","bloc","tips","maid","goof","four","woof","silo","bray","zest","kiss","yong","file","hilt","iris","tuns","lily","ears","pant","jury","taft","data","gild","pick","kook","colt","bohr","anal","asps","babe","bach","mash","biko","bowl","huey","jilt","goes","guff","bend","nike","tami","gosh","tike","gees","urge","path","bony","jude","lynn","lois","teas","dunn","elul","bonn","moms","bugs","slay","yeah","loan","hulk","lows","damn","nell","jung","avis","mane","waco","loin","knob","tyke","anna","hire","luau","tidy","nuns","pots","quid","exec","hans","hera","hush","shag","scot","moan","wald","ursa","lorn","hunk","loft","yore","alum","mows","slog","emma","spud","rice","worn","erma","need","bags","lark","kirk","pooh","dyes","area","dime","luvs","foch","refs","cast","alit","tugs","even","role","toed","caph","nigh","sony","bide","robs","folk","daft","past","blue","flaw","sana","fits","barr","riot","dots","lamp","cock","fibs","harp","tent","hate","mali","togs","gear","tues","bass","pros","numb","emus","hare","fate","wife","mean","pink","dune","ares","dine","oily","tony","czar","spay","push","glum","till","moth","glue","dive","scad","pops","woks","andy","leah","cusp","hair","alex","vibe","bulb","boll","firm","joys","tara","cole","levy","owen","chow","rump","jail","lapp","beet","slap","kith","more","maps","bond","hick","opus","rust","wist","shat","phil","snow","lott","lora","cary","mote","rift","oust","klee","goad","pith","heep","lupe","ivan","mimi","bald","fuse","cuts","lens","leer","eyry","know","razz","tare","pals","geek","greg","teen","clef","wags","weal","each","haft","nova","waif","rate","katy","yale","dale","leas","axum","quiz","pawn","fend","capt","laws","city","chad","coal","nail","zaps","sort","loci","less","spur","note","foes","fags","gulp","snap","bogs","wrap","dane","melt","ease","felt","shea","
calm","star","swam","aery","year","plan","odin","curd","mira","mops","shit","davy","apes","inky","hues","lome","bits","vila","show","best","mice","gins","next","roan","ymir","mars","oman","wild","heal","plus","erin","rave","robe","fast","hutu","aver","jodi","alms","yams","zero","revs","wean","chic","self","jeep","jobs","waxy","duel","seek","spot","raps","pimp","adan","slam","tool","morn","futz","ewes","errs","knit","rung","kans","muff","huhs","tows","lest","meal","azov","gnus","agar","sips","sway","otis","tone","tate","epic","trio","tics","fade","lear","owns","robt","weds","five","lyon","terr","arno","mama","grey","disk","sept","sire","bart","saps","whoa","turk","stow","pyle","joni","zinc","negs","task","leif","ribs","malt","nine","bunt","grin","dona","nope","hams","some","molt","smit","sacs","joan","slav","lady","base","heck","list","take","herd","will","nubs","burg","hugs","peru","coif","zoos","nick","idol","levi","grub","roth","adam","elma","tags","tote","yaws","cali","mete","lula","cubs","prim","luna","jolt","span","pita","dodo","puss","deer","term","dolt","goon","gary","yarn","aims","just","rena","tine","cyst","meld","loki","wong","were","hung","maze","arid","cars","wolf","marx","faye","eave","raga","flow","neal","lone","anne","cage","tied","tilt","soto","opel","date","buns","dorm","kane","akin","ewer","drab","thai","jeer","grad","berm","rods","saki","grus","vast","late","lint","mule","risk","labs","snit","gala","find","spin","ired","slot","oafs","lies","mews","wino","milk","bout","onus","tram","jaws","peas","cleo","seat","gums","cold","vang","dewy","hood","rush","mack","yuan","odes","boos","jami","mare","plot","swab","borg","hays","form","mesh","mani","fife","good","gram","lion","myna","moor","skin","posh","burr","rime","done","ruts","pays","stem","ting","arty","slag","iron","ayes","stub","oral","gets","chid","yens","snub","ages","wide","bail","verb","lamb","bomb","army","yoke","gels","tits","bork","mils","nary","barn","hype","odom","avon","hewn","rios","cams","tact","boss","oleo","duke","eris","gwen","elms","deon","sims","quit","nest","font","dues","yeas","zeta","bevy","gent","torn","cups","worm","baum","axon","purr","vise","grew","govs","meat","chef","rest","lame"])
ord('a')
ord('z')
chr(122)
import collections
class Solution:
def ladderLength(self, beginWord: str, endWord: str, wordList) -> int:
transform_hash = collections.defaultdict(list)
for word in wordList:
for idx in range(len(word)):
hash_word = word[:idx] + "*" + word[idx+1:]
transform_hash[hash_word].append(word)
print(transform_hash)
seen = set()
queue = [(beginWord, 1)]
while len(queue):
word, length = queue[0]
for idx in range(len(word)):
hash_word = word[:idx]+ "*" + word[idx+1:]
mapped_words = transform_hash[hash_word]
for m_word in mapped_words:
if m_word == endWord:
return length+1
if m_word not in seen:
queue.append((m_word, length+1))
seen.add(m_word)
return 0
solution = Solution()
solution.ladderLength("sand", "acne", ["slit","bunk", "elms","deon","sims","quit","nest","font","dues"])
from collections import defaultdict, deque
from typing import List
class Solution:
def ladderLength(self, beginWord: str, endWord: str, wordList: List[str]) -> int:
if endWord not in wordList:
return 0
hash_word = defaultdict(list)
for word in wordList:
for i in range(len(word)):
form_word = word[:i] + '*' + word[i+1:]
hash_word[form_word].append(word)
cnt = 1
dq = deque([beginWord])
seen = set()
while dq:
for _ in range(len(dq)):
word = dq.popleft()
for i in range(len(word)):
form_word = word[:i] + '*' + word[i+1:]
for w in hash_word[form_word]:
if w not in seen:
seen.add(w)
dq.append(w)
if w == endWord:
return cnt + 1
if dq:
cnt += 1
return 0
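# Small illustration (not in the original notebook) of the '*' masking used above:
# every word maps to the patterns obtained by replacing one letter with '*',
# so neighbouring words such as "hot" and "hat" share the pattern "h*t".
word = "hot"
print([word[:i] + "*" + word[i+1:] for i in range(len(word))])  # ['*ot', 'h*t', 'ho*']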
###Output
_____no_output_____
|
docs/ipython/OpenCV_test.ipynb
|
###Markdown
OpenCV TestKevin J. Walchko, created 12 Nov 2016
###Code
%matplotlib inline
from __future__ import print_function
from __future__ import division
import numpy as np
from matplotlib import pyplot as plt
import cv2
print('OpenCV version {}'.format(cv2.__version__))
###Output
OpenCV version 3.1.0
###Markdown
Gif test
###Code
cam = cv2.VideoCapture(0)
ret, frame = cam.read()
if ret:
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
plt.imshow(frame, interpolation = 'bicubic')
cam.release()
plt.imshow(frame);
###Output
_____no_output_____
|
Prediction using Supervised ML/ Linear Regression TSF.ipynb
|
###Markdown
Linear Regression Project Using Python A simple linear regression task with only 2 variables.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Importing Data from URL
###Code
url="http://bit.ly/w-data"
study = pd.read_csv(url)
study.head()
###Output
_____no_output_____
###Markdown
Extracting Dependent and Independent values
###Code
x=study.iloc[:, :-1].values
y=study.iloc[:, 1].values
###Output
_____no_output_____
###Markdown
Visualising Data The heatmap shows the correlation between the variables. Strongly correlated pairs appear lighter, while darker cells indicate a weaker connection between the variables. Example: Hours against Hours (the diagonal) gives the lightest color, since the correlation is 1. Note: depending on the colormap, the color scheme may be reversed.
###Code
sns.heatmap(study.corr())
###Output
_____no_output_____
###Markdown
Splitting the Dataset into the Training set and Test set
###Code
#test size = 20% and train set = 80%
from sklearn.model_selection import train_test_split
X_train,X_test, y_train, y_test = train_test_split(x, y,test_size = 0.2,random_state= 0)
#y_test
###Output
_____no_output_____
###Markdown
Modelling
###Code
#creating linear regression model
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
Getting the Prediction
###Code
y_pred = regressor.predict(X_test)
y_pred
###Output
_____no_output_____
###Markdown
Checking Behind the Scene
###Code
# coefficient calculation
# what goes on behind the scenes
print(regressor.coef_)
#calculating the intercept
print(regressor.intercept_)
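# Hypothetical check (not in the original notebook): reproduce the model's
# predictions by hand with y = coef * x + intercept.
manual_pred = regressor.coef_[0] * X_test.flatten() + regressor.intercept_
print(manual_pred[:5])
print(y_pred[:5])  # should match the manual computation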
###Output
2.018160041434683
###Markdown
Plotting the Regression Line(Visualisation)
###Code
line = regressor.coef_*x+regressor.intercept_
# Plotting for the test data
plt.scatter(x, y)
plt.plot(x, line);
plt.show()
###Output
_____no_output_____
###Markdown
Actual Value and Predicted Value
###Code
act = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
act
###Output
_____no_output_____
###Markdown
Evaluating the Model
###Code
# The R squared value is about 0.945 (94.5%), so the model fits well but is not perfect.
# The mean absolute error of about 4.18 score points also shows the model is not perfect.
# A perfect model would have an R squared of 1 and an error of 0.
from sklearn import metrics
r2 = metrics.r2_score(y_test, y_pred)
mae = metrics.mean_absolute_error(y_test, y_pred)
print('R2 Score:', r2)
print('Mean Absolute Error:', mae)
###Output
R2 Score: 0.9454906892105356
Mean Absolute Error: 4.183859899002975
###Markdown
The predicted score if a student studies 9.25 hours per day
###Code
#You can also test with your own data
hours = 9.25
own_pred = regressor.predict(np.array([hours]).reshape(-1, 1))
print("No of Hours = {}".format(hours))
print("Predicted Score = {}".format(own_pred[0]))
###Output
No of Hours = 9.25
Predicted Score = 93.69173248737538
|
xrayvis/xrayvis.datahub.ipynb
|
###Markdown
X-Ray Microbeam Database visualizationThis app is designed to visualize the datafiles in the University of Wisconsin X-Ray Microbeam Speech Production Database:```Westbury, John with Greg Turner and Jim Dembowski (1994) X-Ray Microbeam Speech Production Database User’s Handbook, v. 1.0, Waisman Center on Mental Retardation & Human Development, Univ. of Wisconsin, Madison, WI.```[Time-aligned word and phone labels](https://github.com/rsprouse/xray_microbeam_database) have been added to the audio and articulation data by a project led by Keith Johnson at UC Berkeley. Run the following cell to pull in the time-aligned labels and speaker demographics used in the visualization.
###Code
%%bash
bash ../datahub_start.sh
bash xrayvis_start.sh
###Output
_____no_output_____
###Markdown
Run the next cell to launch the visualization.
###Code
import os
os.environ['BOKEH_RESOURCES'] = 'inline' # To ensure we load monkeypatched version of bokeh rather than from cdn
from xrayvis_app import xrayvis_app
from bokeh_phon.utils import remote_jupyter_proxy_url_callback, set_default_jupyter_url
from bokeh.io import show
set_default_jupyter_url('https://datahub.berkeley.edu/')
show(xrayvis_app, notebook_url=remote_jupyter_proxy_url_callback)
###Output
_____no_output_____
|
IntroEstatistica.ipynb
|
###Markdown
###Code
###Output
_____no_output_____
|
src/applylut.ipynb
|
###Markdown
Table of Contents 1 Function applylut1.1 Synopse1.2 Description1.3 Examples1.3.1 Example 11.3.2 Example 21.3.3 Example 31.3.4 Example 41.4 Equation1.5 See Also: Function applylut SynopseIntensity image transform.- **g = applylut(fi, it)** - **g**: Image. - **fi**: Image. input image, gray scale or index image. - **it**: Image. Intensity transform. Table of one or three columns.
###Code
import numpy as np
def applylut(fi, it):
g = it[fi]
if len(g.shape) == 3:
g = np.swapaxes(g, 0,2)
g = np.swapaxes(g, 1,2)
return g
###Output
_____no_output_____
###Markdown
DescriptionApply an intensity image transform to the input image. The input image can be seen as a gray scale image or an index image. The intensity transform is represented by a table where the input (gray scale) color addresses the table line and the column contents indicate the output (gray scale) image color. The table can have one or three columns. If it has three columns, the output image is a three color band image. This intensity image transformation is very powerful and can be used in many applications involving gray scale and color images. If the input image has an index (gray scale color) that is greater than the size of the intensity table, an error is reported. Examples
###Code
testing = (__name__ == "__main__")
if testing:
import numpy as np
import sys,os
import matplotlib.image as mpimg
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
###Output
_____no_output_____
###Markdown
Example 1This first example shows a simple numeric 2 lines, 3 columns image with sequential pixel values. First the identitytable is applied and image g is generated with same values of f. Next, a new table, itn = 5 - it is generated creatinga negation table. The resultant image gn has the values of f negated.
###Code
if testing:
f = np.array([[0,1,2],
[3,4,5]])
print('f=\n',f)
it = np.array(list(range(6))) # identity transform
print('it=',it)
g = ia.applylut(f, it)
print('g=\n',g)
itn = 5 - it # negation
print('itn=',itn)
gn = ia.applylut(f, itn)
print('gn=\n',gn)
###Output
f=
[[0 1 2]
[3 4 5]]
it= [0 1 2 3 4 5]
g=
[[0 1 2]
[3 4 5]]
itn= [5 4 3 2 1 0]
gn=
[[5 4 3]
[2 1 0]]
###Markdown
Example 2This example shows the negation operation applying the intensity transform through a negation grayscale table: it = 255 - i.
###Code
if testing:
f = mpimg.imread('../data/cameraman.tif')
it = (255 - np.arange(256)).astype('uint8')
g = ia.applylut(f, it)
ia.adshow(f,'f')
ia.adshow(g,'g')
###Output
_____no_output_____
###Markdown
Example 3In this example, the colortable has 3 columns and the application of the colortable to an scalar image results in an image with 3 bands.
###Code
if testing:
f = np.array([[0,1,2],
[2,0,1]])
ct = np.array([[100,101,102],
[110,111,112],
[120,121,122]])
#print iaimginfo(ct)
g = ia.applylut(f,ct)
print(g)
###Output
[[[100 110 120]
[120 100 110]]
[[101 111 121]
[121 101 111]]
[[102 112 122]
[122 102 112]]]
###Markdown
Example 4In this example, the colortable has 3 columns, R, G and B, where G and B are zeros and R is identity.
###Code
if testing:
f = mpimg.imread('../data/cameraman.tif')
aux = np.resize(np.arange(256).astype('uint8'), (256,1))
ct = np.concatenate((aux, np.zeros((256,2),'uint8')), 1)
g = ia.applylut(f, ct) # generate (bands,H,W)
g = g.transpose(1,2,0) # convert to (H,W,bands)
ia.adshow(f)
ia.adshow(g)
###Output
(256, 1) uint8
(256, 3) uint8
(256, 256, 3) uint8
###Markdown
Equation$$ g(r,c) = IT( f(r,c) ) $$$$ g_{R}(r,c) = IT_R( f(r,c))\\ g_{G}(r,c) = IT_G( f(r,c))\\ g_{B}(r,c) = IT_B( f(r,c)) $$ See Also:- [ia636:colormap](colormap.ipynb) Pseudocolor maps
###Code
if testing:
print('testing applylut')
print(repr(ia.applylut(np.array([0,1,2,3]),np.array([0,1,2,3]))) == repr(np.array([0,1,2,3])))
print(repr(ia.applylut(np.array([0,1,2,3]),np.array([[0,0,0],[1,1,1],[2,2,2],[3,3,3]]))) == repr(np.array([[0,0,0], [1,1,1], [2,2,2],[3,3,3]])))
###Output
testing applylut
True
True
###Markdown
Function applylut SynopseIntensity image transform.- **g = applylut(fi, it)** - **g**: Image. - **fi**: Image. input image, gray scale or index image. - **it**: Image. Intensity transform. Table of one or three columns.
###Code
import numpy as np
def applylut(fi, it):
g = it[fi]
if len(g.shape) == 3:
g = np.swapaxes(g, 0,2)
g = np.swapaxes(g, 1,2)
return g
###Output
_____no_output_____
###Markdown
DescriptionApply an intensity image transform to the input image. The input image can be seen as a gray scale image or an index image. The intensity transform is represented by a table where the input (gray scale) color addresses the table line and the column contents indicate the output (gray scale) image color. The table can have one or three columns. If it has three columns, the output image is a three color band image. This intensity image transformation is very powerful and can be used in many applications involving gray scale and color images. If the input image has an index (gray scale color) that is greater than the size of the intensity table, an error is reported. Examples
###Code
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python applylut.ipynb
import numpy as np
import sys,os
import matplotlib.image as mpimg
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
###Output
[NbConvertApp] Converting notebook applylut.ipynb to python
[NbConvertApp] Writing 5436 bytes to applylut.py
###Markdown
Example 1This first example shows a simple numeric image with 2 lines, 3 columns and sequential pixel values. First the identity table is applied and image g is generated with the same values as f. Next, a new table, itn = 5 - it, is generated, creating a negation table. The resultant image gn has the values of f negated.
###Code
if testing:
f = np.array([[0,1,2],
[3,4,5]])
print('f=\n',f)
it = np.array(list(range(6))) # identity transform
print('it=',it)
g = ia.applylut(f, it)
print('g=\n',g)
itn = 5 - it # negation
print('itn=',itn)
gn = ia.applylut(f, itn)
print('gn=\n',gn)
###Output
f=
[[0 1 2]
[3 4 5]]
it= [0 1 2 3 4 5]
g=
[[0 1 2]
[3 4 5]]
itn= [5 4 3 2 1 0]
gn=
[[5 4 3]
[2 1 0]]
###Markdown
Example 2This example shows the negation operation applying the intensity transform through a negation grayscale table: it = 255 - i.
###Code
if testing:
f = mpimg.imread('../data/cameraman.tif')
it = (255 - np.arange(256)).astype('uint8')
g = ia.applylut(f, it)
ia.adshow(f,'f')
ia.adshow(g,'g')
###Output
_____no_output_____
###Markdown
Example 3In this example, the colortable has 3 columns and the application of the colortable to a scalar image results in an image with 3 bands.
###Code
if testing:
f = np.array([[0,1,2],
[2,0,1]])
ct = np.array([[100,101,102],
[110,111,112],
[120,121,122]])
#print iaimginfo(ct)
g = ia.applylut(f,ct)
print(g)
###Output
[[[100 110 120]
[120 100 110]]
[[101 111 121]
[121 101 111]]
[[102 112 122]
[122 102 112]]]
###Markdown
Example 4In this example, the colortable has 3 columns, R, G and B, where G and B are zeros and R is identity.
###Code
if testing:
f = mpimg.imread('../data/cameraman.tif')
aux = np.resize(np.arange(256).astype('uint8'), (256,1))
ct = np.concatenate((aux, np.zeros((256,2),'uint8')), 1)
g = ia.applylut(f, ct) # generate (bands,H,W)
g = g.transpose(1,2,0) # convert to (H,W,bands)
ia.adshow(f)
ia.adshow(g)
###Output
_____no_output_____
###Markdown
Equation$$ g(r,c) = IT( f(r,c) ) $$$$ g_{R}(r,c) = IT_R( f(r,c))\\ g_{G}(r,c) = IT_G( f(r,c))\\ g_{B}(r,c) = IT_B( f(r,c)) $$ See Also:- [ia636:colormap](colormap.ipnb) Pseudocolor maps
###Code
if testing:
print('testing applylut')
print(repr(ia.applylut(np.array([0,1,2,3]),np.array([0,1,2,3]))) == repr(np.array([0,1,2,3])))
print(repr(ia.applylut(np.array([0,1,2,3]),np.array([[0,0,0],[1,1,1],[2,2,2],[3,3,3]]))) == repr(np.array([[0,0,0], [1,1,1], [2,2,2],[3,3,3]])))
###Output
testing applylut
True
True
###Markdown
Function applylut SynopseIntensity image transform.- **g = applylut(fi, it)** - **g**: Image. - **fi**: Image. input image, gray scale or index image. - **it**: Image. Intensity transform. Table of one or three columns.
###Code
import numpy as np
def applylut(fi, it):
g = it[fi]
if len(g.shape) == 3:
g = np.swapaxes(g, 0,2)
g = np.swapaxes(g, 1,2)
return g
###Output
_____no_output_____
###Markdown
DescriptionApply an intensity image transform to the input image. The input image can be seen as a gray scale image or an index image. The intensity transform is represented by a table where the input (gray scale) color addresses the table line and the column contents indicate the output (gray scale) image color. The table can have one or three columns. If it has three columns, the output image is a three color band image. This intensity image transformation is very powerful and can be used in many applications involving gray scale and color images. If the input image has an index (gray scale color) that is greater than the size of the intensity table, an error is reported. Examples
###Code
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python applylut.ipynb
import numpy as np
import sys,os
import matplotlib.image as mpimg
ea979path = os.path.abspath('../../')
if ea979path not in sys.path:
sys.path.append(ea979path)
import ea979.src as ia
###Output
[NbConvertApp] Converting notebook applylut.ipynb to python
[NbConvertApp] Writing 3952 bytes to applylut.py
###Markdown
Example 1This first example shows a simple numeric image with 2 lines, 3 columns and sequential pixel values. First the identity table is applied and image g is generated with the same values as f. Next, a new table, itn = 5 - it, is generated, creating a negation table. The resultant image gn has the values of f negated.
###Code
if testing:
f = np.array([[0,1,2],
[3,4,5]])
print('f=\n',f)
it = np.array(list(range(6))) # identity transform
print('it=',it)
g = ia.applylut(f, it)
print('g=\n',g)
itn = 5 - it # negation
print('itn=',itn)
gn = ia.applylut(f, itn)
print('gn=\n',gn)
###Output
f=
[[0 1 2]
[3 4 5]]
it= [0 1 2 3 4 5]
g=
[[0 1 2]
[3 4 5]]
itn= [5 4 3 2 1 0]
gn=
[[5 4 3]
[2 1 0]]
###Markdown
Example 2This example shows the negation operation applying the intensity transform through a negation grayscale table: it = 255 - i.
###Code
if testing:
f = mpimg.imread('../data/cameraman.tif')
it = (255 - np.arange(256)).astype('uint8')
g = ia.applylut(f, it)
ia.adshow(f,'f')
ia.adshow(g,'g')
###Output
_____no_output_____
###Markdown
Example 3In this example, the colortable has 3 columns and the application of the colortable to a scalar image results in an image with 3 bands.
###Code
if testing:
f = np.array([[0,1,2],
[2,0,1]])
ct = np.array([[100,101,102],
[110,111,112],
[120,121,122]])
#print iaimginfo(ct)
g = ia.applylut(f,ct)
print(g)
###Output
[[[100 110 120]
[120 100 110]]
[[101 111 121]
[121 101 111]]
[[102 112 122]
[122 102 112]]]
###Markdown
Example 4In this example, the colortable has 3 columns, R, G and B, where G and B are zeros and R is identity.
###Code
if testing:
f = mpimg.imread('../data/cameraman.tif')
aux = np.resize(np.arange(256).astype('uint8'), (256,1))
ct = np.concatenate((aux, np.zeros((256,2),'uint8')), 1)
g = ia.applylut(f, ct) # generate (bands,H,W)
g = g.transpose(1,2,0) # convert to (H,W,bands)
ia.adshow(f)
ia.adshow(g)
###Output
_____no_output_____
###Markdown
Equation$$ g(r,c) = IT( f(r,c) ) $$$$ g_{R}(r,c) = IT_R( f(r,c))\\ g_{G}(r,c) = IT_G( f(r,c))\\ g_{B}(r,c) = IT_B( f(r,c)) $$ See Also:- [ia636:colormap](colormap.ipnb) Pseudocolor maps
###Code
if testing:
print('testing applylut')
print(repr(ia.applylut(np.array([0,1,2,3]),np.array([0,1,2,3]))) == repr(np.array([0,1,2,3])))
print(repr(ia.applylut(np.array([0,1,2,3]),np.array([[0,0,0],[1,1,1],[2,2,2],[3,3,3]]))) == repr(np.array([[0,0,0], [1,1,1], [2,2,2],[3,3,3]])))
###Output
testing applylut
True
True
|
ADR_DictionaryLearning.ipynb
|
###Markdown
Advanced Dimensionality Reduction: Dictionary Learning Misael M. Morales Executive SummarySubsurface spatial data is often very large and difficult to process and utilize in machine learning workflows. Moreover, there tend to be high degrees of correlation and structure in 2D subsurface data, which makes it a perfect candidate for latent structure analysis and modeling. Here, we will utilize a 2D multivariate subsurface dataset and demonstrate the usage of **Dictionary Learning** to encode the data into the latent space and then reconstruct using a fraction of the original size. In Dictionary Learning, we construct a set of atoms/words that sparsely represent the data through a singular value-type representation, and can use this set of atoms selectively to reconstruct our images, thus reducing the latent dimension of the problem.We learn that using Dictionary Learning, different latent dimensions will result in different degrees of lossy reconstruction, but that this method is efficient, economical, and simple enough to bring our large data into a reduced-dimensionality form. We recommend using this autoencoder structure whenever dealing with image or volume problems in order to reduce redundancy and increase the efficiency of our machine learning workflows. Table of Contents1. Import Packages2. Declare Functions3. Load & Preprocess Data a) MNIST Data b) Subsurface Data4. Dimensionality Reduction: Dictionary Learning *** 1. Import Packages We start by importing our most basic packages:
###Code
%matplotlib inline
import numpy as np #arrays and matrix math
import pandas as pd #DataFrames management and indexing
import matplotlib.pyplot as plt #plotting and visualization
import matplotlib.gridspec as gridspec #enhanced subplot referencing
import tensorflow as tf #deep learning functionality and MNIST data
###Output
_____no_output_____
###Markdown
Import other important packages for preprocessing, metrics, etc., and project-specific packages and functions
###Code
# Feature Engineering/Preprocessing
from sklearn.preprocessing import MinMaxScaler #Normalize variables to be in [0,1]
from scipy.interpolate import Rbf as Rbf_interpolation #Inteprolate 2D map from sparse data
# Goodness-of-Fit Metrics
from sklearn.metrics import mean_squared_error #Mean squared error (MSE)
from skimage.metrics import structural_similarity as SSIM #Structural Similarity Index (SSIM)
# Project-specific
from sklearn.decomposition import DictionaryLearning #Sparse encoding using dictionary of atoms
from sklearn.decomposition import MiniBatchDictionaryLearning #Sparse encoding using dictionary (mini-batch formulation)
from sklearn.decomposition import SparseCoder #Sparse reconstruction using dictionary of atoms
###Output
_____no_output_____
###Markdown
2. Declare FunctionsThe following functions will be used in the workflow.
###Code
# Plot function for sample images
def plot_sample_matrix(samples, my_cmap):
num_samples, x_dim, y_dim, _ = samples.shape
axes = (np.round(np.sqrt(num_samples))).astype(int)
fig = plt.figure(figsize=(axes, axes))
gs = gridspec.GridSpec(axes, axes)
gs.update(wspace=0.05, hspace=0.05)
for i, sample in enumerate(samples):
ax = plt.subplot(gs[i])
plt.axis('off'); ax.set_aspect('equal')
plt.imshow(sample, cmap=my_cmap)
###Output
_____no_output_____
###Markdown
This variable will help us when inserting text boxes into our matplotlib plots
###Code
# Define arguments for text box in PLT.TEXT()
my_box = dict(boxstyle='round', facecolor='wheat', alpha=0.5)
###Output
_____no_output_____
###Markdown
This next cell is optional; it simply checks whether your current version of tensorflow on your Python kernel is running on a GPU and whether tensorflow is built with CUDA.
###Code
# Check tensorflow GPU settings
print('Tensorflow built with CUDA? ', tf.test.is_built_with_cuda())
tf.config.list_physical_devices()
###Output
Tensorflow built with CUDA? True
###Markdown
3. Load & Preprocess DataWe will deal with two different datasets, both of which need preprocessing.(1) MNIST dataset: handwritten digits as $28x28$ images from *tensorflow* (2) Subsurface multivariate data: 2D spatial (sparse) data 3. a) MNIST DataThis is a set of $60,000$ images of handwritten digits $0$-$9$. We load it directly from *tensorflow* datasets ([link](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/mnist)), and will preprocess to center and flatten as needed for our techniques.
###Code
# Load the Dataset and split into train/test
(x_train_all, y_train_all), (x_test_all, y_test_all) = tf.keras.datasets.mnist.load_data()
# Choose to work with ALL or only a few (N) MNIST images (full size is 60,000)
#N = len(x_train_all)
N = 5000
x_train, x_test = x_train_all[:N], x_test_all[:N]
y_train, y_test = y_train_all[:N], y_test_all[:N]
# Normalize the Images
x_train = np.expand_dims(x_train/255.0, axis=-1)
x_test = np.expand_dims(x_test/255.0, axis=-1)
# Define the labels
class_names = ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine']
# Print the shapes of the training and testing sets + check that training images are normalized
print('MNIST dataset properties:')
print('Train || Shapes: X={}, Y={} | min={}, max={}'.format(x_train.shape, y_train.shape, x_train.min(), x_train.max()))
print('Test || Shapes: X={}, Y={} | min={}, max={}'.format(x_test.shape, y_test.shape, x_test.min(), x_test.max()))
# Flatten and Center the images
print('Flattened and Center Images:')
# Flatten the images into NxM array
x_train_f = np.transpose(np.reshape(x_train, [x_train.shape[0], -1]))
x_test_f = np.transpose(np.reshape(x_test, [x_test.shape[0], -1]))
# Center the Flattened images
x_train_f_c = x_train_f - np.expand_dims(np.mean(x_train_f, axis=1), axis=1)
x_test_f_c = x_test_f - np.expand_dims(np.mean(x_test_f, axis=1), axis=1)
print('Train || Shapes: X={}, Y={} | min={:.3f}, max={:.3f}'.format(x_train_f_c.shape, y_train.shape, x_train_f_c.min(), x_train_f_c.max()))
print('Test || Shapes: X={}, Y={} | min={:.3f}, max={:.3f}'.format(x_test_f_c.shape, y_test.shape, x_test_f_c.min(), x_test_f_c.max()))
###Output
Flattened and Center Images:
Train || Shapes: X=(784, 5000), Y=(5000,) | min=-0.549, max=1.000
Test || Shapes: X=(784, 5000), Y=(5000,) | min=-0.531, max=1.000
###Markdown
For improved visualization, we will define a new colormap that uses the 10 individual digits ($0$-$9$) and implement the '*jet*' colormap.
###Code
# Define a colormap for the 10-class classification system
import matplotlib.cm as cm
from matplotlib.colors import Normalize
my_cmap = cm.get_cmap('jet')
my_norm = Normalize(vmin=0, vmax=9)
cs = my_cmap(my_norm(y_train))
###Output
_____no_output_____
###Markdown
Next, we will count the number of items that is in each of the 10 digit categories, and also visualize the first few samples from the training and testing dataset.
###Code
# Count the number of occurrences for each digit within the training/testing datasets
digit_count = {}
for i in np.arange(len(class_names)):
digit_count[i] = x_train[y_train==i].shape[0]
digit_count = pd.DataFrame(list(digit_count.values()), columns=['Count']).T
print('Count per Digit:')
digit_count.head()
# Visualize a few Train/Test samples from mnist
fig = plt.figure(figsize=(10, 3), constrained_layout=False)
fig.suptitle('Train samples'+60*' '+'Test samples')
outer_grid = fig.add_gridspec(1, 2, wspace=0.1, hspace=0)
left_grid = outer_grid[0, 0].subgridspec(5, 10, wspace=0, hspace=0)
axs = left_grid.subplots()
for (c,d), ax in np.ndenumerate(axs):
ax.imshow(x_train[y_train==d][c]); ax.set(xticks=[], yticks=[])
right_grid = outer_grid[0, 1].subgridspec(5, 10, wspace=0, hspace=0)
axs = right_grid.subplots()
for (c,d), ax in np.ndenumerate(axs):
ax.imshow(x_test[y_test==d][c]); ax.set(xticks=[], yticks=[])
plt.show();
###Output
_____no_output_____
###Markdown
3. b) Subsurface DataThe following workflow applies the .csv file 'spatial_nonlinear_MV_facies_v1.csv', a synthetic dataset calculated with geostatistical cosimulation by Dr. Michael Pyrcz, The University of Texas at Austin. The dataset is publicly available [here](https://github.com/GeostatsGuy/GeoDataSets). From this site, other datasets can also be used for this workflow including but not limited to: {'spatial_nonlinear_MV_facies_v5.csv', 'sample_data_MV_biased.csv', 'PGE383_Dataset_13_Wells.csv', '12_sample_data.csv'}.We will work with the following features:* **X** and **Y** - the spatial coordinates (in meters) for the subsurface data* **Porosity** - fraction of rock void in units of percentage* **Permeability** - ability of a fluid to flow through the rock in milliDarcy* **Acoustic Impedance** - product of sonic velocity and rock density (in $kg/m^2s*10^3$)* **Facies** - binary indicator of sand or shale facies
###Code
# Select a subsurface Dataset for image reconstruction
df = pd.read_csv("https://raw.githubusercontent.com/GeostatsGuy/GeoDataSets/master/spatial_nonlinear_MV_facies_v1.csv")
df.head() #visualize first few rows of the DataFrame
###Output
_____no_output_____
###Markdown
We perform normalization of the features by applying Min-Max scaling of the features such that:$$ x^* = \frac{x-min(x)}{max(x)-min(x)} $$where $min(x)$ and $max(x)$ are the minimum and maximum values for each of the features in the dataset.This is done by the *scikitlearn* built-in function``` pythonmin_max_scaler = sklearn.preprocessing.MinMaxScaler()scaled_array = min_max_scaler.fit_transform(float_array)```
###Code
scaler = MinMaxScaler() #instantiate the normalization function
df_s = pd.DataFrame(scaler.fit_transform(df), columns=df.columns) #apply min-max scaling
df_s.describe().T #show summary statistics of the new DataFrame
###Output
_____no_output_____
###Markdown
For simplicity, we specifically name our subsurface features/properties.We also specifically name the *matplotlib* colormaps that we want to use for each of the features/properties.
###Code
features = ['Porosity','Perm','AI','Facies'] # names of our features
my_maps = ['magma', 'jet', 'seismic', 'viridis'] # names of the corresponding colormaps
###Output
_____no_output_____
###Markdown
However, this is a **sparse** dataset of the subsurface, with 457 wells in an area of approximately $1 km^2$. Therefore, we must interpolate the spatial properties so that we obtain a full image of the subsurface properties. The 2D interpolation is done through *scipy*'s RBF interpolation function. This generates a radial basis function interpolation from $(N,D)$ arrays to an $(M,D)$ domain.We will interpolate the subsurface 2D data into $(28,28)$ images. These are the standard dimensions of the MNIST dataset, a generic dataset of handwritten digits that we will use later for our workflow.
###Code
# Interpolate spatial properties
ti = np.linspace(start=0, stop=1, num=28) #an array of 28 discrete points
XI, YI = np.meshgrid(ti,ti) #a mesh of 28x28 discrete points
ZI = {}; ZI_s = {}
for i in features:
# RBF interpolation
ZI[i] = Rbf_interpolation(df_s['X'], df_s['Y'], df[i], function='thin_plate')(XI, YI)[::-1]
# Normalize our interpolated features
ZI_s[i] = scaler.fit_transform(ZI[i])
for i in np.arange(len(features)):
print(features[i]+': Shape={}, min={:.3f}, max={:.3f}'.format(ZI_s[features[i]].shape,
ZI_s[features[i]].min(),
ZI_s[features[i]].max()))
fig, axs = plt.subplots(2,4, figsize=(15,6))
for i in range(len(features)):
axs[0,i].set_title(features[i]); axs[1,i].set_title('Interpolated '+features[i])
# plot original data scatterplots
im1 = axs[0,i].scatter(x=df_s['X'], y=df_s['Y'], s=8, c=df_s[features[i]], cmap=my_maps[i])
fig.colorbar(im1, ax=axs[0,i])
# plot interpolated images
im2 = axs[1,i].imshow(ZI_s[features[i]], cmap=my_maps[i])
fig.colorbar(im2, ax=axs[1,i])
# remove ticks, set square box ratios
for k in range(2):
axs[k,i].set_xticks([]); axs[k,i].set_yticks([]);
axs[k,i].set_aspect('equal', adjustable='box')
plt.show();
###Output
_____no_output_____
###Markdown
Select one of the subsurface features to be used in the remainder of the notebook for image reconstruction.
###Code
# Select one of the subsurface features to work with
feature_selected = 'AI'
###Output
_____no_output_____
###Markdown
*** 4. Advanced Dimensionality ReductionDimensionality Reduction is quite ubiqutuous in modern machine learning. PCA has been widely-studied and applied in theoretical and applied setting for data science, including reservoir characterization, modeling, and simulation. However, PCA assumes linearity and no outliers, while SVD will simply decompose the matrix into left-and-right singular vectors and a diagonal singular value matrix. With this, we can project our 2D data onto the vectors and work in latent space.Another consideration is the idea of using the latent space for a generic dataset as the basis for reconstruction of a more complex dataset. For instance, using the SVD decomposition of the $60,000$ MNIST images, we can reconstruct our 2D subsurface maps from a latent represenation! 4. b) Dictionary LearningWith dictionary learning, we attempt to find a "dictionary" or "set of atoms" that performs well at sparsely encoding the fitted data. Essentially, this is a method for sparse encoding of our data, such that we can create a dictionary, or a repertoire of models, from which we can selectively choose a few representative samples to best predict a new data, thus promoting sparsity in the encoding. We use a linear combination of the "words" or "elements" or "atoms" in the dictionary. Dictionary Learning has direct relationship with Compressed Sensing and Sparse Coding, where a high-dimensional signal can be reconstructed with only a few linear measurements of the data assuming that the signal is relatively sparse. Regarding this branch of signal processing theory, one can for instance use Fourier or Wavelet Transforms are generic bases to create a sparse encoding of the data; however, Dictionary Learning provides tailored basis unique to the fitted/training data in order to provide the best sparse encoding. Mathematically: Given an input dataset $ X = [x_1, x_2, ..., x_k], \forall{x} \in \mathbb{R^d} $ we will find a Dictionary $ \mathbf{D} \in \mathbb{R}^{(d\times n)} := [d_1, d_2, ..., d_n] $ and a Representation vector $ R = [r_1, r_2, ..., r_k], r_i \in \mathbb{R}^n $ such that $ \|{X-\mathbf{D}R}\|_F^2 $ is minimized and the representations $r_i$ are sparse enough. As an optimization problem:$$ (U^*,V^*) = \min_{ D\in{C}; r_i\in{\mathbb{R}^n} } {\sum\limits_{i=1}^{k} \|x_i- \mathbf{D}r_i\|_2^2 + \lambda\|r_i\|_0} $$ $$ \text{subject to:} \text{ } \lambda > 0 \text{ , } C\equiv {\{\mathbf{D}\in{\mathbb{R}^{(d\times n)}} : \|d_i\|_2 \leq 1, \forall{i}\in[1,n] \}}$$where $(U^*,V^*)$ denote the components of the dictionary. Also, $\|\bullet\|_F$ denotes the Frobenius norm, $C$ constrains $\mathbf{D}$ so its atoms don't become arbitrarily large so $r_i$ is not arbitrarily small, and $\lambda$ controls the trade-off between sparsity and error minimization. In general, this problem is NP-hard and sometimes unsolvable due to the $l_0$ norm in the penalty term, and can be solved by convex relaxation from Compressed Sensing using the $l_1$ norm instead to ensure sparsity (this is the default implementation in *sklearn*).This problems leads to many possible numerical solution schemes, including the LASSO or $l_1$ regularization mentioned above. Another solution scheme is through the $k$-SVD algorithm, where we update the atoms of the dictionary one by one. 
This can be thought of as a generalization of the $k$-Means clustering algorithm, where we enforce each input data $x_i$ to be encoded by a linear combination of not more than $T_0$ elements in a way such that:$$ \min_{\mathbf{D},R} \|X-\mathbf{D}R\|_F^2 $$$$ \text{subject to:} \text{ } \|r_i\|_0 \leq T_0, \forall{i}$$With this, we are fixing the first dictionary generated, finding the best possible $R$ given the constraint, and then iteratively updating the atoms of dictionary $\mathbf{D}$ as denoted below:$$ \|X-\mathbf{D}R\|_F^2 = \Big\| X-\sum\limits_{i=1}^{k} d_ix_T^i\Big\|_F^2 = \|E_k-d_kx_T^k\|_F^2 $$where at each step we compute the rank-$1$ approximation of the residual matrix $E_k$, update $d_k$, and enforce sparsity of $x_k$.In general, choosing a tailored dictionary for a dataset is a non-convex problem, and $k$-SVD operates iteratively, which does not guarantee convergence to the global optimum. We also take advantage of the fact that this technique is posed as an optimization problem, and utilize the Mini-Batch version in *sklearn* to improve computational efficiency and reduce run time. Essentially, at each iteration we only use a small subset of the training data to try and solve the optimization problem. This introduces stochasticity and accelerates the iterations by having more but smaller and quicker runs per iteration. For this implementation of Dictionary Learning, we consider the following hyperparameters:- *n_components*: the number of dictionary elements to extract - *alpha*: the sparsity-controlling parameter ($\lambda$) {default=1}. Fair warning: fitting the dictionary learner may take up to a minute or two.
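To make the $k$-SVD atom update described above concrete, here is a small NumPy sketch (our own toy illustration with made-up data, not the *sklearn* solver used below; the sparse-coding step is replaced by a plain least-squares fit for brevity):
```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))              # data: 20-dimensional signals, 50 samples
D = rng.normal(size=(20, 5))               # dictionary with 5 atoms (columns)
D /= np.linalg.norm(D, axis=0)             # constrain atoms to unit norm
R = np.linalg.lstsq(D, X, rcond=None)[0]   # stand-in for the sparse coding step

k = 2                                      # update atom k
omega = np.abs(R[k]) > 1e-8                # samples that actually use atom k
E_k = X[:, omega] - D @ R[:, omega] + np.outer(D[:, k], R[k, omega])  # residual without atom k
U, s, Vt = np.linalg.svd(E_k, full_matrices=False)
D[:, k] = U[:, 0]                          # new atom: first left singular vector
R[k, omega] = s[0] * Vt[0]                 # new coefficients for atom k (rank-1 update)
```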
###Code
# Instatiate and fit the model to generic (MNIST) training data
n_atoms = 120
#DL = DictionaryLearning(n_components=n_atoms, alpha=2)
DL = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=2, batch_size=10, n_jobs=4)
dictionary = DL.fit(x_train_f_c.T, y_train)
atoms = dictionary.components_
print('Dictionary Shape={} | Iterations={}'.format(atoms.shape, dictionary.n_iter_))
###Output
Dictionary Shape=(120, 784) | Iterations=1000
###Markdown
Below, we visualize the Dictionary that we created. We see that some digits are very nicely encoded, while other digits are not, and some digits are even missing from the encoding. This is highly dependent on the sparsity parameter selected ($\alpha$, a.k.a. $\lambda$). However, this dictionary can still be used to reconstruct the original dataset.
###Code
n_display_atoms = 36
atoms_ = np.reshape(atoms, (-1, x_train.shape[1], x_train.shape[2], 1)) #reshape dictionary
atoms_k_ = atoms_[:n_display_atoms,:] #select only a few atoms to visualize
plot_sample_matrix(atoms_k_, my_cmap='cividis')
plt.suptitle('First '+str(n_display_atoms)+' Dictionary Learning (MNIST) Bases')
plt.show();
###Output
_____no_output_____
###Markdown
Below we can see the latent space representations on the first 2D projection for the sparse Dictionary Learning representation of the MNIST data.
###Code
# Project the training data
zDL = x_train_f_c.T @ atoms.T
# Visualize feature space projections
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.scatter(zDL[:,0], zDL[:,1], s=30, c=cs, alpha=0.6)
plt.title(str(n_atoms)+'-atom Dictionary feature space 1-2')
plt.xlabel('$z_1$'); plt.ylabel('$z_2$')
plt.subplot(122)
plt.scatter(zDL[:,1], zDL[:,2], s=30, c=cs, alpha=0.6)
plt.title(str(n_atoms)+'-atom Dictionary feature space 2-3')
plt.xlabel('$z_2$'); plt.ylabel('$z_3$')
plt.show();
###Output
_____no_output_____
###Markdown
Using the learned dictionary, we can then reconstruct images through the *SparseCoder* function, which takes the learned dictionary and finds the sparse representation of the data against it. With this, each row of the product is a solution of the sparse coding problem, and the algorithm finds the optimal code such that: $ X \approx \hat{X} = Coder * Dictionary $. Thus, let's denote it as:$$ \hat{X} = \Phi * D $$
###Code
# Generate the sparse encoder and project onto the atoms of the dictionary
sparse_code = SparseCoder(atoms).fit_transform(x_train_f_c.T, y_train)
sparse_recs = (atoms.T @ sparse_code.T)
print('Sparse Encoder shape = {}'.format(sparse_code.shape))
print('Sparse Reconstruction dictionary shape = {}'.format(sparse_recs.shape))
###Output
Sparse Encoder shape = (5000, 120)
Sparse Reconstruction dictionary shape = (784, 5000)
###Markdown
Next, we have to restore the reconstruction by uncentering and unflattening the vectors into images, and below we plot a few of the original MNIST training images against the sparse Dictionary Learning reconstructions.
###Code
DL_img_f_hat = sparse_recs + np.expand_dims(np.mean(x_train_f, axis=1), axis=1) #uncenter
DL_img_hat = np.reshape(DL_img_f_hat.T, [-1, 28, 28]) #unflatten
num_rows, num_cols = 3, 3
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_rows*num_cols):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plt.grid(False); plt.xticks([]); plt.yticks([])
plt.imshow(x_train[i], vmin=0, vmax=1, cmap='viridis')
if i < 3: plt.title('Image')
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plt.grid(False); plt.xticks([]); plt.yticks([])
plt.imshow(DL_img_hat[i], vmin=0, vmax=1, cmap='turbo')
if i < 3: plt.title('Reconstruction')
plt.show();
###Output
_____no_output_____
###Markdown
We now have predicted images $\hat{x}$ for our MNIST dataset. Essentially, the decoded images from a sparse dictionary representation of the MNIST training set.We will visualize these reconstructed images, and compare their quality by means of pixel-wise MSE and SSIM. Where: $$ MSE = \frac{1}{n} \sum\limits_{i=1}^{n}(y_i-\hat{y}_i)^2$$and $$ SSIM(x,y) = \frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\sigma^2_x+\sigma^2_y+c_2)}$$where $y_i$ are the true images and $\hat{y}_i$ are the reconstructed images for the MSE computation. On the other hand, we have that for SSIM, $x$ and $y$ are the two images to be compared, and $c_1 = (k_1L)^2$ and $c_2=(k_2L)^2$ are two variables to stabilize the division with weak denominators. Usually, $k_1=0.01$ and $k_2=0.03$, and $L=2^{\text{bits per pixel}}-1$ is typically the dynamic range of the pixel values.For MSE calculation, we will use the flattened, centered arrays $N \times M$ as opposed to the images $N \times (M,D)$ used in the SSIM calculation.
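As a quick sanity check of the pixel-wise MSE formula above (an illustrative snippet with made-up numbers, separate from the actual comparison below), writing it out directly with NumPy matches *sklearn*'s `mean_squared_error`:
```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([[0.0, 0.5], [1.0, 0.2]])
y_hat  = np.array([[0.1, 0.4], [0.9, 0.2]])
mse_manual = np.mean((y_true - y_hat)**2)                          # (1/n) * sum of squared errors
print(np.isclose(mse_manual, mean_squared_error(y_true, y_hat)))   # True
```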
###Code
# Error Metrics for selected level of k
mse = mean_squared_error(x_train_f_c, sparse_recs)
ssim = SSIM(x_train.squeeze(), DL_img_hat)
print('MSE={:.3f} | SSIM={:.3f}'.format(mse,ssim))
###Output
MSE=0.008 | SSIM=0.927
###Markdown
We see an extremely good MSE and SSIM, so the reconstructed images from the sparse dictionary are quite good! **Subsurface Dictionary Learning:**The next step is to use this sparse dictionary from MNIST to try to reconstruct our subsurface quantity of interest. Again, this is testing the limit of our autoencoder structure, and the limits of the latent space representation of a generic dataset to reconstruct a new, unrelated image (e.g., transfer learning).This can be written as:$$ \hat{X}_{subsurface} = \Phi_{subsurface} * D_{generic} $$where $D_{generic}$ is the learned dictionary from MNIST data, and $\Phi_{subsurface}$ is the optimal sparse coder for the subsurface image using the learned generic dictionary.
###Code
A_gsc_r_f = np.reshape(ZI_s[feature_selected], (28*28, 1)) #flatten
A_gsc_r_f_c = A_gsc_r_f - np.expand_dims(np.mean(x_train_f, axis=1), axis=1) #center
print('Image shape:', A_gsc_r_f_c.shape)
# Plot the processed 2D map
plt.figure(figsize=(4, 4))
plt.imshow(np.reshape(A_gsc_r_f_c, (28, 28)), vmin=0, vmax=1,
cmap=my_maps[features.index(feature_selected)])
plt.title('Original '+feature_selected+' Map'); plt.xticks([]); plt.yticks([])
plt.colorbar(shrink=0.85)
plt.show();
sparse_code_subsurf = SparseCoder(atoms).fit_transform(A_gsc_r_f_c.T)
sparse_rec_subsurf = atoms.T @ sparse_code_subsurf.T
print('Subsurface Sparse Encoder shape = {}'.format(sparse_code_subsurf.shape))
print('Subsurface Sparse Reconstruction shape = {}'.format(sparse_rec_subsurf.shape))
###Output
Subsurface Sparse Encoder shape = (1, 120)
Subsurface Sparse Reconstruction shape = (784, 1)
###Markdown
Let's uncenter and unflatten the reconstructed subsurface image, and visualize the result. We can also compare to the true subsurface image and compute the MSE and SSIM from the reconstruction to the true image.
###Code
sparse_rec_subsurf_f = sparse_rec_subsurf + np.expand_dims(np.mean(x_train_f, axis=1), axis=1) #uncenter
sparse_rec_subsurf = np.reshape(sparse_rec_subsurf_f.T, [-1,28,28]) #unflatten
# Compute MSE and SSIM
mse = mean_squared_error(A_gsc_r_f_c, sparse_rec_subsurf_f)
ssim = SSIM(ZI_s[feature_selected], sparse_rec_subsurf.squeeze())
print('MSE={:.3f} | SSIM={:.3f}'.format(mse, ssim))
plt.figure(figsize=(10,4))
plt.subplot(121)
plt.imshow(ZI_s[feature_selected], cmap=my_maps[features.index(feature_selected)])
plt.title('Original '+feature_selected+' Map')
plt.colorbar(fraction=0.046, pad=0.04); plt.xticks([]); plt.yticks([])
plt.subplot(122)
plt.imshow(sparse_rec_subsurf.squeeze(), vmin=0, vmax=1,
cmap=my_maps[features.index(feature_selected)])
plt.title('Reconstructed '+feature_selected+' Map')
plt.xlabel('using a Dictionary of '+str(n_atoms)+' atoms')
plt.colorbar(fraction=0.046, pad=0.04); plt.xticks([]); plt.yticks([])
plt.show();
###Output
_____no_output_____
|
mobile-strategy-games.ipynb
|
###Markdown
IMPORTING MODULES
###Code
import warnings as wrn
wrn.filterwarnings('ignore', category = DeprecationWarning)
wrn.filterwarnings('ignore', category = FutureWarning)
wrn.filterwarnings('ignore', category = UserWarning)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
%matplotlib inline
###Output
_____no_output_____
###Markdown
CLEANING DATA
###Code
'''Reading the data from csv files'''
data = pd.read_csv('appstore_games.csv')
display(data.head(3))
print('Dimension of data:', data.shape)
data.drop(['URL', 'ID'], axis = 1, inplace = True)
data.shape  # check dimensions after dropping URL and ID
data=data.drop_duplicates()
data=data.drop(['Subtitle','Languages'],axis=1)
data1 = data.dropna(subset=['User Rating Count'])
data1.isnull().sum()
def description(df):
summary = pd.DataFrame(df.dtypes,columns=['dtypes'])
summary = summary.reset_index()
summary['Name'] = summary['index']
summary = summary[['Name','dtypes']]
summary['Missing'] = df.isnull().sum().values
summary['Uniques'] = df.nunique().values
summary['First Value'] = df.iloc[0].values
summary['Second Value'] = df.iloc[1].values
summary['Third Value'] = df.iloc[2].values
return summary
###Output
_____no_output_____
###Markdown
DISPLAYING CLEANED DATA
###Code
description(data)
###Output
_____no_output_____
###Markdown
To expose the best genre combinations for strategy games available in the App Store in order to get a good user rating (4.0/5.0 and above)
###Code
dat = data.groupby(['Genres']).median()
print (dat['Average User Rating'])
###Output
Genres
Books, Games, Board, Strategy 4.00
Books, Games, Role Playing, Strategy 4.50
Books, Games, Strategy, Word 4.50
Books, Role Playing, Games, Strategy 4.00
Books, Role Playing, Strategy, Games NaN
Books, Strategy, Games, Word 4.50
Books, Strategy, Word, Games NaN
Business, Family, Strategy, Games NaN
Business, Games, Puzzle, Strategy NaN
Business, Games, Simulation, Strategy NaN
Business, Games, Strategy 4.00
Business, Games, Strategy, Music NaN
Business, Games, Word, Strategy NaN
Business, Role Playing, Games, Strategy NaN
Business, Strategy, Action, Games NaN
Business, Strategy, Games 2.00
Business, Strategy, Games, Simulation NaN
Business, Strategy, Simulation, Games NaN
Education, Action, Strategy, Games NaN
Education, Board, Games, Strategy 3.75
Education, Board, Strategy, Games 4.00
Education, Card, Games, Strategy 4.00
Education, Card, Strategy, Games NaN
Education, Casual, Games, Strategy 4.50
Education, Casual, Strategy, Games NaN
Education, Family, Games, Strategy NaN
Education, Family, Strategy, Games NaN
Education, Games, Action, Strategy 4.50
Education, Games, Board, Strategy 5.00
Education, Games, Card, Strategy NaN
...
Utilities, Games, Action, Strategy 4.50
Utilities, Games, Adventure, Strategy 4.50
Utilities, Games, Board, Strategy 5.00
Utilities, Games, Card, Strategy 3.50
Utilities, Games, Music, Strategy NaN
Utilities, Games, Role Playing, Strategy 4.00
Utilities, Games, Simulation, Strategy NaN
Utilities, Games, Strategy 4.00
Utilities, Games, Strategy, Action 4.75
Utilities, Games, Strategy, Board 4.50
Utilities, Games, Strategy, Card NaN
Utilities, Games, Strategy, Casual 3.00
Utilities, Games, Strategy, Role Playing NaN
Utilities, Games, Strategy, Simulation 4.00
Utilities, Games, Strategy, Trivia 2.00
Utilities, Puzzle, Games, Strategy NaN
Utilities, Role Playing, Games, Strategy 4.50
Utilities, Role Playing, Strategy, Games 3.75
Utilities, Simulation, Games, Strategy NaN
Utilities, Strategy, Board, Games 4.50
Utilities, Strategy, Card, Games 5.00
Utilities, Strategy, Family, Games 4.00
Utilities, Strategy, Games 3.50
Utilities, Strategy, Games, Action NaN
Utilities, Strategy, Games, Adventure 4.00
Utilities, Strategy, Games, Board 3.00
Utilities, Strategy, Games, Card 4.50
Utilities, Strategy, Games, Simulation 5.00
Utilities, Strategy, Simulation, Games 4.00
Utilities, Strategy, Word, Games NaN
Name: Average User Rating, Length: 1004, dtype: float64
###Markdown
this shows which genre combinations can be targeted for a good user rating, since a median of 4.0 or above means that most ratings for that combination are 4 or higher SIMPLE ANALYSIS
###Code
plt.rcParams['figure.figsize'] = (18, 10)
ax = sns.countplot(data = data, x ='Average User Rating', palette = 'gray', alpha = 0.7, linewidth=4, edgecolor= 'black')
ax.set_ylabel('Count', fontsize = 20)
ax.set_xlabel('Average User Rating', fontsize = 20)
plt.show()
###Output
_____no_output_____
###Markdown
INFERENCE : the most common rating given by users is 4.5 stars
###Code
plt.rcParams['figure.figsize'] = (18, 10)
ax = sns.kdeplot(data['User Rating Count'], shade = True, linewidth = 5, color = 'k')
ax.set_ylabel('Count', fontsize = 20)
ax.set_xlabel('User Rating Count', fontsize = 20)
plt.show()
###Output
C:\Users\SKS\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:448: RuntimeWarning: invalid value encountered in greater
X = X[np.logical_and(X > clip[0], X < clip[1])] # won't work for two columns.
C:\Users\SKS\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:448: RuntimeWarning: invalid value encountered in less
X = X[np.logical_and(X > clip[0], X < clip[1])] # won't work for two columns.
###Markdown
INFERENCE - the distribution of User Rating Count is heavily skewed: very few games have large rating counts, with the density concentrated near zero and a long tail extending toward 500000 Identify the genres which are more significant and do a simple analysis of the distribution according to genres.
###Code
plt.rcParams['figure.figsize'] = (25, 18)
data['Primary Genre'].value_counts().plot(kind='bar',color = 'gray', alpha = 0.7, linewidth=4, edgecolor= 'black')
plt.xlabel("primary genre", fontsize=20)
plt.ylabel("Count", fontsize=20)
plt.title("Common genres ", fontsize=22)
plt.xticks(rotation=90, fontsize = 15)
plt.show()
###Output
_____no_output_____
###Markdown
INFERENCE - the most prominent primary genre is "Games", with by far the highest count, which shows it is highly significant among the other genres. Identify which genres have higher user ratings.
###Code
from matplotlib.ticker import NullFormatter
plt.rcParams['figure.figsize'] = (25,15)
ax = sns.lineplot(data = data, x='Primary Genre', y='Average User Rating', color = 'teal')
plt.xlabel('Primary Genre', fontsize = 18)
plt.ylabel('Average User Rating', fontsize = 18)
plt.show()
###Output
_____no_output_____
###Markdown
INFERENCE - this shows that Food & Drink has the highest average rating and Business and Medical the lowest, according to the users. The remaining genres sit in the mid range, with broadly similar ratings that depend on their variety, nature, and multiple applications. Identify the trend of user rating based on pricing.
###Code
plt.figure(figsize=(18,10), dpi= 100)
ax = sns.regplot(data=data, x='Price', y='Average User Rating', color = 'teal')
ax.set_ylabel('Average User Rating', fontsize = 20)
ax.set_xlabel('Price', fontsize = 20)
plt.show()
###Output
_____no_output_____
|
DAY 101 ~ 200/DAY154_[leetCode] House Robber (Python).ipynb
|
###Markdown
Thursday, July 9, 2020 House Robber (Python) Problem : https://leetcode.com/problems/house-robber/ Blog : https://somjang.tistory.com/entry/leetCode-198-House-Robber-Python First attempt
###Code
class Solution():
def rob(self, nums):
now = 0
last = 0
for i in nums:
last, now = now, max(i+last, now)
return now
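# --- Added notes (not part of the original submission) ---
# At each step, `now` becomes the best total so far: either rob the current
# house (i + last, where `last` excludes the previous house) or skip it and
# keep `now`, so two adjacent houses are never both robbed.
# Example usage with the expected answers from the problem statement:
#   Solution().rob([1, 2, 3, 1])    -> 4   (rob houses with 1 and 3)
#   Solution().rob([2, 7, 9, 3, 1]) -> 12  (rob 2, 9 and 1)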
###Output
_____no_output_____
|
code/chap20.ipynb
|
###Markdown
Modeling and Simulation in PythonChapter 20Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Dropping penniesI'll start by getting the units we need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
###Output
_____no_output_____
###Markdown
And defining the initial state.
###Code
init = State(y=381 * m,
v=0 * m/s)
###Output
_____no_output_____
###Markdown
Acceleration due to gravity is about 9.8 m / s$^2$.
###Code
g = 9.8 * m/s**2
###Output
_____no_output_____
###Markdown
When we call `odeint`, we need an array of timestamps where we want to compute the solution.I'll start with a duration of 10 seconds.
###Code
t_end = 10 * s
###Output
_____no_output_____
###Markdown
Now we make a `System` object.
###Code
system = System(init=init, g=g, t_end=t_end)
###Output
_____no_output_____
###Markdown
And define the slope function.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
unpack(system)
dydt = v
dvdt = -g
return dydt, dvdt
###Output
_____no_output_____
###Markdown
It's always a good idea to test the slope function with the initial conditions.
###Code
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
###Output
0.0 meter / second
-9.8 meter / second ** 2
###Markdown
Now we're ready to call `run_ode_solver`
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
And here's position as a function of time:
###Code
def plot_position(results):
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
###Output
Saving figure to file figs/chap09-fig01.pdf
###Markdown
Onto the sidewalkTo figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
###Code
t_crossings = crossings(results.y, 0)
###Output
_____no_output_____
###Markdown
For this example there should be just one crossing, the time when the penny hits the sidewalk.
###Code
t_sidewalk = t_crossings[0] * s
###Output
_____no_output_____
###Markdown
We can compare that to the exact result. Without air resistance, we have$v = -g t$and$y = 381 - g t^2 / 2$Setting $y=0$ and solving for $t$ yields$t = \sqrt{\frac{2 y_{init}}{g}}$
###Code
sqrt(2 * init.y / g)
###Output
_____no_output_____
###Markdown
The estimate is accurate to about 10 decimal places. EventsInstead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**.Here's an event function that returns the height of the penny above the sidewalk:
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func)
details
###Output
_____no_output_____
###Markdown
The message from the solver indicates the solver stopped because the event we wanted to detect happened.Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred:
###Code
t_sidewalk = get_last_label(results) * s
###Output
_____no_output_____
###Markdown
Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end.We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array:
###Code
details.t_events[0][0] * s
###Output
_____no_output_____
###Markdown
The result is accurate to about 15 decimal places.We can also check the velocity of the penny when it hits the sidewalk:
###Code
v_sidewalk = get_last_value(results.v) * m / s
###Output
_____no_output_____
###Markdown
And convert to kilometers per hour.
###Code
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
###Output
_____no_output_____
###Markdown
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.So it's a good thing there is air resistance. Under the hoodHere is the source code for `crossings` so you can see what's happening under the hood:
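Before looking at it, here is a rough sketch of the idea (ours, not the actual modsim source): fit an interpolating spline to the series, shifted by `value`, and return the roots of that spline.
```python
from scipy.interpolate import InterpolatedUnivariateSpline

def crossings_sketch(series, value):
    """Roughly what crossings does: spline the shifted series and find its roots."""
    interp = InterpolatedUnivariateSpline(series.index, series.values - value)
    return interp.roots()
```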
###Code
%psource crossings
###Output
_____no_output_____
###Markdown
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to learn more about how `run_ode_solver` works. Exercises**Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate):"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."Use `run_ode_solver` to answer this question.Here are some suggestions about how to proceed:1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.3. Express your answer in days, and plot the results as millions of kilometers versus days.If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.You might also be interested to know that [it's actually not that easy to get to the Sun](https://www.theatlantic.com/science/archive/2018/08/parker-solar-probe-launch-nasa/567197/).
###Code
kg = UNITS.kilogram
init = State(r=1.496e11 * m,    # r = distance from the Earth to the Sun
             v=0 * m/s)         # v = starting velocity
G = 6.67408e-11 * m**3 / kg / s**2   # gravitational constant in SI units
t_end = 1e10 * s                # generous upper bound; the event function stops the run earlier
m_sun = 1.989 * 10**30 * kg     # mass of the Sun
m_earth = 5.972 * 10**24 * kg   # mass of the Earth
r_sun = 6.96e8 * m              # radius of the Sun
system = System(init=init, G=G, m_sun=m_sun, m_earth=m_earth, r_sun=r_sun, t_end=t_end)
def slope_func(state, t, system):
    """Compute derivatives of the state.
    state: distance to the Sun, velocity
    t: time
    system: System object containing `G` and `m_sun`
    returns: derivatives of r and v
    """
    r, v = state
    unpack(system)
    drdt = v
    dvdt = -G * m_sun / r**2    # Newton's law of universal gravitation
    return drdt, dvdt
def event_func(state, t, system):
    """Return the distance between the Earth and the surface of the Sun.
    """
    r, v = state
    return r - system.r_sun
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 20Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
import numpy as np
###Output
_____no_output_____
###Markdown
Dropping penniesI'll start by getting the units we need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
###Output
_____no_output_____
###Markdown
And defining the initial state.
###Code
init = State(y=381 * m,
v=0 * m/s)
###Output
_____no_output_____
###Markdown
Acceleration due to gravity is about 9.8 m / s$^2$.
###Code
g = 9.8 * m/s**2
###Output
_____no_output_____
###Markdown
When we call `odeint`, we need an array of timestamps where we want to compute the solution.I'll start with a duration of 10 seconds.
###Code
t_end = 10 * s
###Output
_____no_output_____
###Markdown
Now we make a `System` object.
###Code
system = System(init=init, g=g, t_end=t_end)
###Output
_____no_output_____
###Markdown
And define the slope function.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
unpack(system)
dydt = v
dvdt = -g
return dydt, dvdt
###Output
_____no_output_____
###Markdown
It's always a good idea to test the slope function with the initial conditions.
###Code
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
###Output
0.0 meter / second
-9.8 meter / second ** 2
###Markdown
Now we're ready to call `run_ode_solver`
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
And here's position as a function of time:
###Code
def plot_position(results):
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
###Output
Saving figure to file figs/chap09-fig01.pdf
###Markdown
Onto the sidewalkTo figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
###Code
t_crossings = crossings(results.y, 0)
###Output
_____no_output_____
###Markdown
For this example there should be just one crossing, the time when the penny hits the sidewalk.
###Code
t_sidewalk = t_crossings[0] * s
###Output
_____no_output_____
###Markdown
We can compare that to the exact result. Without air resistance, we have$v = -g t$and$y = 381 - g t^2 / 2$Setting $y=0$ and solving for $t$ yields$t = \sqrt{\frac{2 y_{init}}{g}}$
###Code
sqrt(2 * init.y / g)
###Output
_____no_output_____
###Markdown
The estimate is accurate to about 10 decimal places. EventsInstead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**.Here's an event function that returns the height of the penny above the sidewalk:
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func)
details
###Output
_____no_output_____
###Markdown
The message from the solver indicates the solver stopped because the event we wanted to detect happened.Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred:
###Code
t_sidewalk = get_last_label(results) * s
###Output
_____no_output_____
###Markdown
Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end.We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array:
###Code
details.t_events[0][0] * s
###Output
_____no_output_____
###Markdown
The result is accurate to about 15 decimal places.We can also check the velocity of the penny when it hits the sidewalk:
###Code
v_sidewalk = get_last_value(results.v) * m / s
###Output
_____no_output_____
###Markdown
And convert to kilometers per hour.
###Code
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
###Output
_____no_output_____
###Markdown
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.So it's a good thing there is air resistance. Under the hoodHere is the source code for `crossings` so you can see what's happening under the hood:
###Code
%psource crossings
###Output
_____no_output_____
###Markdown
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to learn more about how `run_ode_solver` works. Exercises**Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate):"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."Use `run_ode_solver` to answer this question.Here are some suggestions about how to proceed:1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.3. Express your answer in days, and plot the results as millions of kilometers versus days.If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.You might also be interested to know that [it's actually not that easy to get to the Sun](https://www.theatlantic.com/science/archive/2018/08/parker-solar-probe-launch-nasa/567197/).
###Code
init = State(r=1.5e11 * m,
v=0 * m/s)
system = System(g = -9.81 * m/s**2,
G = 6.67408 * 10**-11 * m**3*kg**(-1)*s**(-2),
m_earth = 5.972 * 10**24 * kg,
m_sun = 1.989 * 10**30 * kg,
r_sun = 695508 * km,
init = init,
t_end = np.inf)
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
r, v = state
unpack(system)
dvdt = -G * m_sun / r**2
drdt = v
return drdt, dvdt
def event_func(state, t, system):
"""Return the height of the earth from the surface of the sun.
"""
r, v = state
dist = r - system.r_sun
return dist
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=1000*s)  # stop when the Earth reaches the Sun's surface
details.message
results
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
def plot_position(results):
plot(results.r, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 20Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Dropping penniesI'll start by getting the units we need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
kg =UNITS.kilogram
###Output
_____no_output_____
###Markdown
And defining the initial state.
###Code
init = State(y=381 * m,
v=0 * m/s)
###Output
_____no_output_____
###Markdown
Acceleration due to gravity is about 9.8 m / s$^2$.
###Code
g = 9.8 * m/s**2
###Output
_____no_output_____
###Markdown
When we call `odeint`, we need an array of timestamps where we want to compute the solution.I'll start with a duration of 10 seconds.
###Code
t_end = 10 * s
###Output
_____no_output_____
###Markdown
Now we make a `System` object.
###Code
system = System(init=init, g=g, t_end=t_end)
###Output
_____no_output_____
###Markdown
And define the slope function.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
unpack(system)
dydt = v
dvdt = -g
return dydt, dvdt
###Output
_____no_output_____
###Markdown
It's always a good idea to test the slope function with the initial conditions.
###Code
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
###Output
0.0 meter / second
-9.8 meter / second ** 2
###Markdown
Now we're ready to call `run_ode_solver`
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
And here's position as a function of time:
###Code
def plot_position(results):
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
###Output
Saving figure to file figs/chap09-fig01.pdf
###Markdown
Onto the sidewalkTo figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
###Code
t_crossings = crossings(results.y, 0)
###Output
_____no_output_____
###Markdown
For this example there should be just one crossing, the time when the penny hits the sidewalk.
###Code
t_sidewalk = t_crossings[0] * s
###Output
_____no_output_____
###Markdown
We can compare that to the exact result. Without air resistance, we have$v = -g t$and$y = 381 - g t^2 / 2$Setting $y=0$ and solving for $t$ yields$t = \sqrt{\frac{2 y_{init}}{g}}$
###Code
sqrt(2 * init.y / g)
###Output
_____no_output_____
###Markdown
The estimate is accurate to about 10 decimal places. EventsInstead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**.Here's an event function that returns the height of the penny above the sidewalk:
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func)
details
###Output
_____no_output_____
###Markdown
The message from the solver indicates the solver stopped because the event we wanted to detect happened.Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred:
###Code
t_sidewalk = get_last_label(results) * s
###Output
_____no_output_____
###Markdown
Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end.We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array:
###Code
details.t_events[0][0] * s
###Output
_____no_output_____
###Markdown
The result is accurate to about 15 decimal places.We can also check the velocity of the penny when it hits the sidewalk:
###Code
v_sidewalk = get_last_value(results.v) * m / s
###Output
_____no_output_____
###Markdown
And convert to kilometers per hour.
###Code
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
###Output
_____no_output_____
###Markdown
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.So it's a good thing there is air resistance. Under the hoodHere is the source code for `crossings` so you can see what's happening under the hood:
###Code
%psource crossings
###Output
_____no_output_____
###Markdown
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to learn more about how `run_ode_solver` works. Exercises**Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate):"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."Use `run_ode_solver` to answer this question.Here are some suggestions about how to proceed:1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.3. Express your answer in days, and plot the results as millions of kilometers versus days.If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.You might also be interested to know that [it's actually not that easy to get to the Sun](https://www.theatlantic.com/science/archive/2018/08/parker-solar-probe-launch-nasa/567197/).
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
    system: System object containing `G`, `m1`, and `m2`
    returns: derivatives of y and v
    """
    y, v = state
unpack(system)
dydt = v
dvdt = -(G/y**2)*(m1+m2)
return dydt, dvdt
init = State(y=149.6e9*m,
v=0 * m/s)
t_end=1e9 * s
system = System(init=init, m1=1.989e30*kg, m2=5.972e24*kg, G=6.67e-11*m*m*m/kg/s/s, t_end=t_end)
def event_func(state, t, system):
    """Return the distance from the Earth to the center of the Sun.
"""
y, v = state
return y
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=1e4)
details
results.plot()
results
-system.G * system.m1 * system.m2 / (149.6e9*m)**2
log(149.6e9) * system.G * (system.m1 + system.m2)
system.G * (system.m1 + system.m2)
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 20Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Dropping penniesI'll start by getting the units we need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
###Output
_____no_output_____
###Markdown
And defining the initial state.
###Code
init = State(y=381 * m,
v=0 * m/s)
###Output
_____no_output_____
###Markdown
Acceleration due to gravity is about 9.8 m / s$^2$.
###Code
g = 9.8 * m/s**2
###Output
_____no_output_____
###Markdown
When we call `odeint`, we need an array of timestamps where we want to compute the solution.I'll start with a duration of 10 seconds.
###Code
t_end = 10 * s
###Output
_____no_output_____
###Markdown
Now we make a `System` object.
###Code
system = System(init=init, g=g, t_end=t_end)
###Output
_____no_output_____
###Markdown
And define the slope function.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
unpack(system)
dydt = v
dvdt = -g
return dydt, dvdt
###Output
_____no_output_____
###Markdown
It's always a good idea to test the slope function with the initial conditions.
###Code
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
###Output
_____no_output_____
###Markdown
Now we're ready to call `run_ode_solver`
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
And here's position as a function of time:
###Code
def plot_position(results):
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
###Output
_____no_output_____
###Markdown
Onto the sidewalkTo figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
###Code
t_crossings = crossings(results.y, 0)
###Output
_____no_output_____
###Markdown
For this example there should be just one crossing, the time when the penny hits the sidewalk.
###Code
t_sidewalk = t_crossings[0] * s
###Output
_____no_output_____
###Markdown
We can compare that to the exact result. Without air resistance, we have$v = -g t$and$y = 381 - g t^2 / 2$Setting $y=0$ and solving for $t$ yields$t = \sqrt{\frac{2 y_{init}}{g}}$
###Code
sqrt(2 * init.y / g)
###Output
_____no_output_____
###Markdown
The estimate is accurate to about 10 decimal places. EventsInstead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**.Here's an event function that returns the height of the penny above the sidewalk:
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func)
details
###Output
_____no_output_____
###Markdown
The message from the solver indicates the solver stopped because the event we wanted to detect happened.Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred:
###Code
t_sidewalk = get_last_label(results) * s
###Output
_____no_output_____
###Markdown
Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end.We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array:
###Code
details.t_events[0][0] * s
###Output
_____no_output_____
###Markdown
The result is accurate to about 15 decimal places.We can also check the velocity of the penny when it hits the sidewalk:
###Code
v_sidewalk = get_last_value(results.v) * m / s
###Output
_____no_output_____
###Markdown
And convert to kilometers per hour.
###Code
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
###Output
_____no_output_____
###Markdown
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.So it's a good thing there is air resistance. Under the hoodHere is the source code for `crossings` so you can see what's happening under the hood:
###Code
%psource crossings
###Output
_____no_output_____
###Markdown
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to learn more about how `run_ode_solver` works. Exercises**Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate):"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."Use `run_ode_solver` to answer this question.Here are some suggestions about how to proceed:1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.3. Express your answer in days, and plot the results as millions of kilometers versus days.If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.You might also be interested to know that [it's actually not that easy to get to the Sun](https://www.theatlantic.com/science/archive/2018/08/parker-solar-probe-launch-nasa/567197/).
###Code
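# Not the author's solution -- a minimal sketch of one way to set the problem up
# with the modsim tools used in this chapter. The constants (G, the Sun's mass,
# the two radii, the starting distance) are standard published values and are
# assumptions of this sketch, not data given in the exercise.
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
day = UNITS.day

init = State(r=1.496e11 * m,      # mean Earth-Sun distance
             v=0 * m / s)

system = System(init=init,
                G=6.674e-11 * m**3 / kg / s**2,
                m_sun=1.989e30 * kg,
                r_min=6.96e8 * m + 6.37e6 * m,   # Sun radius + Earth radius
                t_end=1e7 * s)

def fall_slope_func(state, t, system):
    """Derivatives of r (distance to the Sun) and v (radial velocity)."""
    r, v = state
    drdt = v
    dvdt = -system.G * system.m_sun / r**2    # acceleration toward the Sun
    return drdt, dvdt

def surface_event_func(state, t, system):
    """Zero when the Earth's surface reaches the Sun's surface."""
    r, v = state
    return r - system.r_min

results, details = run_ode_solver(system, fall_slope_func,
                                  events=surface_event_func)

# run_ode_solver drops the units, so put them back and express the answer in days
# (it should come out in the neighborhood of 65 days).
t_fall = get_last_label(results) * s
t_fall.to(day)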
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 20Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Dropping penniesI'll start by getting the units we need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
###Output
_____no_output_____
###Markdown
And defining the initial state.
###Code
init = State(y=381 * m,
v=0 * m/s)
###Output
_____no_output_____
###Markdown
Acceleration due to gravity is about 9.8 m / s$^2$.
###Code
g = 9.8 * m/s**2
###Output
_____no_output_____
###Markdown
When we call `odeint`, we need an array of timestamps where we want to compute the solution.I'll start with a duration of 10 seconds.
###Code
t_end = 10 * s
###Output
_____no_output_____
###Markdown
Now we make a `System` object.
###Code
system = System(init=init, g=g, t_end=t_end)
###Output
_____no_output_____
###Markdown
And define the slope function.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
unpack(system)
dydt = v
dvdt = -g
return dydt, dvdt
###Output
_____no_output_____
###Markdown
It's always a good idea to test the slope function with the initial conditions.
###Code
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
###Output
0.0 meter / second
-9.8 meter / second ** 2
###Markdown
Now we're ready to call `run_ode_solver`
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
And here's position as a function of time:
###Code
def plot_position(results):
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
###Output
Saving figure to file figs/chap09-fig01.pdf
###Markdown
Onto the sidewalkTo figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
###Code
t_crossings = crossings(results.y, 0)
###Output
_____no_output_____
###Markdown
For this example there should be just one crossing, the time when the penny hits the sidewalk.
###Code
t_sidewalk = t_crossings[0] * s
###Output
_____no_output_____
###Markdown
We can compare that to the exact result. Without air resistance, we have$v = -g t$and$y = 381 - g t^2 / 2$Setting $y=0$ and solving for $t$ yields$t = \sqrt{\frac{2 y_{init}}{g}}$
###Code
sqrt(2 * init.y / g)
###Output
_____no_output_____
###Markdown
The estimate is accurate to about 10 decimal places. EventsInstead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**.Here's an event function that returns the height of the penny above the sidewalk:
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func)
details
###Output
_____no_output_____
###Markdown
The message from the solver indicates the solver stopped because the event we wanted to detect happened.Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred:
###Code
t_sidewalk = get_last_label(results) * s
###Output
_____no_output_____
###Markdown
Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end.We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array:
###Code
details.t_events[0][0] * s
###Output
_____no_output_____
###Markdown
The result is accurate to about 15 decimal places.We can also check the velocity of the penny when it hits the sidewalk:
###Code
v_sidewalk = get_last_value(results.v) * m / s
###Output
_____no_output_____
###Markdown
And convert to kilometers per hour.
###Code
v_sidewalk = get_last_value(results.v) * m / s
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
###Output
_____no_output_____
###Markdown
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.So it's a good thing there is air resistance. Under the hoodHere is the source code for `crossings` so you can see what's happening under the hood:
###Code
%psource crossings
###Output
_____no_output_____
###Markdown
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to learn more about how `run_ode_solver` works. Exercises**Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate):"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."Use `run_ode_solver` to answer this question.Here are some suggestions about how to proceed:1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.3. Express your answer in days, and plot the results as millions of kilometers versus days.If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.You might also be interested to know that [it's actually not that easy to get to the Sun](https://www.theatlantic.com/science/archive/2018/08/parker-solar-probe-launch-nasa/567197/).
###Code
m = UNITS.meter
km = UNITS.kilometer
mkm = km * 1e6
s = UNITS.second
day = UNITS.day
kg = UNITS.kilogram;
init = State(r = 1.481e11 * m, v = 0 * m / s)
m_earth = 5.9721986e24 * kg
m_sun = 1.988435e30 * kg
G = 6.67408e-11 * m**3 * kg**(-1) * s**(-2)
t_end = 200000000000 * day
r_earth = 6.371008e6 * m
r_sun = 6.957e8 * m
unpack(init)
F = m_earth * m_sun * G / r**2
system = System(init=init, m_earth=m_earth, m_sun=m_sun, G=G, t_end=t_end, r_earth=r_earth, r_sun=r_sun)
def slope_func(state, t, system):
    """Compute derivatives of r and v; here v is the speed toward the Sun."""
    r, v = state
    unpack(system)
    F = m_earth * m_sun * G / r**2   # gravitational force on the Earth
    drdt = -v                        # r shrinks as the Earth falls inward
    dvdt = F / m_earth
    return drdt, dvdt
drdt, dvdt = slope_func(init, 0, system)
print(drdt)
print(dvdt)
print(get_first_value(results.r))
print(get_last_value(results.r))
print(get_first_value(results.r)-get_last_value(results.r))
r_init = get_first_value(results.r) * m
r_final = get_last_value(results.r) * m
delta_r = r_final - r_init
delta_U = G * m_earth * m_sun * (1/r_final - 1/r_init)
delta_v = get_last_value(results.v) * m / s
delta_K = .5 * m_earth * delta_v**2
(delta_U - delta_K)
def event_func(state, t, system):
r, v = state
min_r = r_earth + r_sun
return r - min_r
results, details = run_ode_solver(system, slope_func, events=event_func, max_step = 100000 * s)
details
results;
results.index /= 60 * 60 * 24
distance = results.r / 1e9
distance;
t_impact = get_last_label(results.r) * day
plt.plot(distance)
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 20Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Dropping penniesI'll start by getting the units we need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
###Output
_____no_output_____
###Markdown
And defining the initial state.
###Code
init = State(y=381 * m,
v=0 * m/s)
###Output
_____no_output_____
###Markdown
Acceleration due to gravity is about 9.8 m / s$^2$.
###Code
g = 9.8 * m/s**2
###Output
_____no_output_____
###Markdown
When we call `odeint`, we need an array of timestamps where we want to compute the solution.I'll start with a duration of 10 seconds.
###Code
t_end = 10 * s
###Output
_____no_output_____
###Markdown
Now we make a `System` object.
###Code
system = System(init=init, g=g, t_end=t_end)
###Output
_____no_output_____
###Markdown
And define the slope function.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
unpack(system)
dydt = v
dvdt = -g
return dydt, dvdt
###Output
_____no_output_____
###Markdown
It's always a good idea to test the slope function with the initial conditions.
###Code
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
###Output
0.0 meter / second
-9.8 meter / second ** 2
###Markdown
Now we're ready to call `run_ode_solver`
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
And here's position as a function of time:
###Code
def plot_position(results):
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
###Output
Saving figure to file figs/chap09-fig01.pdf
###Markdown
Onto the sidewalkTo figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
###Code
t_crossings = crossings(results.y, 0)
###Output
_____no_output_____
###Markdown
For this example there should be just one crossing, the time when the penny hits the sidewalk.
###Code
t_sidewalk = t_crossings[0] * s
###Output
_____no_output_____
###Markdown
We can compare that to the exact result. Without air resistance, we have$v = -g t$and$y = 381 - g t^2 / 2$Setting $y=0$ and solving for $t$ yields$t = \sqrt{\frac{2 y_{init}}{g}}$
###Code
sqrt(2 * init.y / g)
###Output
_____no_output_____
###Markdown
The estimate is accurate to about 10 decimal places. EventsInstead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**.Here's an event function that returns the height of the penny above the sidewalk:
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func)
details
###Output
_____no_output_____
###Markdown
The message from the solver indicates the solver stopped because the event we wanted to detect happened.Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred:
###Code
t_sidewalk = get_last_label(results) * s
###Output
_____no_output_____
###Markdown
Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end.We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array:
###Code
details.t_events[0][0] * s
###Output
_____no_output_____
###Markdown
The result is accurate to about 15 decimal places.We can also check the velocity of the penny when it hits the sidewalk:
###Code
v_sidewalk = get_last_value(results.v) * m / s
###Output
_____no_output_____
###Markdown
And convert to kilometers per hour.
###Code
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
###Output
_____no_output_____
###Markdown
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.So it's a good thing there is air resistance. Under the hoodHere is the source code for `crossings` so you can see what's happening under the hood:
###Code
%psource crossings
###Output
_____no_output_____
###Markdown
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to learn more about how `run_ode_solver` works. Exercises**Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate):"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."Use `run_ode_solver` to answer this question.Here are some suggestions about how to proceed:1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.3. Express your answer in days, and plot the results as millions of kilometers versus days.If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.You might also be interested to know that [it's actually not that easy to get to the Sun](https://www.theatlantic.com/science/archive/2018/08/parker-solar-probe-launch-nasa/567197/).
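A quick analytic cross-check (an addition here, not part of the original exercise text): for free fall from rest at distance $R$ toward a point mass $M$, the fall time is $t = \frac{\pi}{2}\sqrt{\frac{R^3}{2 G M}}$. With $R = 1.496\times10^{11}$ m and $M = 1.989\times10^{30}$ kg that is about $5.6\times10^{6}$ s, roughly 65 days, so a simulation that stops when the Earth reaches the Sun's surface should report a time a little under that.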
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 20Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Dropping penniesI'll start by getting the units we need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
###Output
_____no_output_____
###Markdown
And defining the initial state.
###Code
init = State(y=381 * m,
v=0 * m/s)
###Output
_____no_output_____
###Markdown
Acceleration due to gravity is about 9.8 m / s$^2$.
###Code
g = 9.8 * m/s**2
###Output
_____no_output_____
###Markdown
When we call `odeint`, we need an array of timestamps where we want to compute the solution.I'll start with a duration of 10 seconds.
###Code
t_end = 10 * s
###Output
_____no_output_____
###Markdown
Now we make a `System` object.
###Code
system = System(init=init, g=g, t_end=t_end)
###Output
_____no_output_____
###Markdown
And define the slope function.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
unpack(system)
dydt = v
dvdt = -g
return dydt, dvdt
###Output
_____no_output_____
###Markdown
It's always a good idea to test the slope function with the initial conditions.
###Code
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
###Output
0.0 meter / second
-9.8 meter / second ** 2
###Markdown
Now we're ready to call `run_ode_solver`
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
And here's position as a function of time:
###Code
def plot_position(results):
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
###Output
Saving figure to file figs/chap09-fig01.pdf
###Markdown
Onto the sidewalkTo figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
###Code
t_crossings = crossings(results.y, 0)
###Output
_____no_output_____
###Markdown
For this example there should be just one crossing, the time when the penny hits the sidewalk.
###Code
t_sidewalk = t_crossings[0] * s
###Output
_____no_output_____
###Markdown
We can compare that to the exact result. Without air resistance, we have$v = -g t$and$y = 381 - g t^2 / 2$Setting $y=0$ and solving for $t$ yields$t = \sqrt{\frac{2 y_{init}}{g}}$
###Code
sqrt(2 * init.y / g)
###Output
_____no_output_____
###Markdown
The estimate is accurate to about 10 decimal places. EventsInstead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**.Here's an event function that returns the height of the penny above the sidewalk:
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func)
details
###Output
_____no_output_____
###Markdown
The message from the solver indicates the solver stopped because the event we wanted to detect happened.Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred:
###Code
t_sidewalk = get_last_label(results) * s
###Output
_____no_output_____
###Markdown
Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end.We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array:
###Code
details.t_events[0][0] * s
###Output
_____no_output_____
###Markdown
The result is accurate to about 15 decimal places.We can also check the velocity of the penny when it hits the sidewalk:
###Code
v_sidewalk = get_last_value(results.v) * m / s
###Output
_____no_output_____
###Markdown
And convert to kilometers per hour.
###Code
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
###Output
_____no_output_____
###Markdown
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.So it's a good thing there is air resistance. Under the hoodHere is the source code for `crossings` so you can see what's happening under the hood:
###Code
%psource crossings
###Output
_____no_output_____
###Markdown
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to learn more about how `run_ode_solver` works. Exercises**Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate):"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."Use `run_ode_solver` to answer this question.Here are some suggestions about how to proceed:1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.3. Express your answer in days, and plot the results as millions of kilometers versus days.If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.You might also be interested to know that [it's actually not that easy to get to the Sun](https://www.theatlantic.com/science/archive/2018/08/parker-solar-probe-launch-nasa/567197/).
###Code
N = UNITS.newton
M = 1.99 * 10**30 #kg
R = 1.5 * 10**11 #m
G = 6.674e-11 #N / kg**2 * m**2
init = State(D = R, S = 0)
a_earth = G*M/R**2 #* 7464960000
system = System(init=init, g=a_earth, t_end=6000000)
def slope_func2(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
    returns: derivatives of D and S
"""
D, S = state
unpack(system)
dDdt = S
dSdt = -g
return dDdt, dSdt
dDdt, dSdt = slope_func2(init, 0, system)
print(dDdt)
print(dSdt)
results, details = run_ode_solver(system, slope_func2, max_step=0.5*s)
details.message
results
def plot_position(results):
    plot(results.D, label='D')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
def event_func2(state, t, system):
    """Return the distance from the Earth to the center of the Sun.
"""
D, S = state
return D
results, details = run_ode_solver(system, slope_func2, events=event_func2)
details
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 20Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Dropping penniesI'll start by getting the units we need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
###Output
_____no_output_____
###Markdown
And defining the initial state.
###Code
init = State(y=381 * m,
v=0 * m/s)
###Output
_____no_output_____
###Markdown
Acceleration due to gravity is about 9.8 m / s$^2$.
###Code
g = 9.8 * m/s**2
###Output
_____no_output_____
###Markdown
When we call `odeint`, we need an array of timestamps where we want to compute the solution.I'll start with a duration of 10 seconds.
###Code
t_end = 10 * s
###Output
_____no_output_____
###Markdown
Now we make a `System` object.
###Code
system = System(init=init, g=g, t_end=t_end)
###Output
_____no_output_____
###Markdown
And define the slope function.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
unpack(system)
dydt = v
dvdt = -g
return dydt, dvdt
###Output
_____no_output_____
###Markdown
It's always a good idea to test the slope function with the initial conditions.
###Code
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
###Output
0.0 meter / second
-9.8 meter / second ** 2
###Markdown
Now we're ready to call `run_ode_solver`
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
And here's position as a function of time:
###Code
def plot_position(results):
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
###Output
Saving figure to file figs/chap09-fig01.pdf
###Markdown
Onto the sidewalkTo figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
###Code
t_crossings = crossings(results.y, 0)
###Output
_____no_output_____
###Markdown
For this example there should be just one crossing, the time when the penny hits the sidewalk.
###Code
t_sidewalk = t_crossings[0] * s
###Output
_____no_output_____
###Markdown
We can compare that to the exact result. Without air resistance, we have$v = -g t$and$y = 381 - g t^2 / 2$Setting $y=0$ and solving for $t$ yields$t = \sqrt{\frac{2 y_{init}}{g}}$
###Code
sqrt(2 * init.y / g)
###Output
_____no_output_____
###Markdown
The estimate is accurate to about 10 decimal places. EventsInstead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**.Here's an event function that returns the height of the penny above the sidewalk:
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func)
details
###Output
_____no_output_____
###Markdown
The message from the solver indicates the solver stopped because the event we wanted to detect happened.Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred:
###Code
t_sidewalk = get_last_label(results) * s
###Output
_____no_output_____
###Markdown
Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end.We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array:
###Code
details.t_events[0][0] * s
###Output
_____no_output_____
###Markdown
The result is accurate to about 15 decimal places.We can also check the velocity of the penny when it hits the sidewalk:
###Code
v_sidewalk = get_last_value(results.v) * m / s
###Output
_____no_output_____
###Markdown
And convert to kilometers per hour.
###Code
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
###Output
_____no_output_____
###Markdown
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.So it's a good thing there is air resistance. Under the hoodHere is the source code for `crossings` so you can see what's happening under the hood:
###Code
%psource crossings
###Output
_____no_output_____
###Markdown
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to learn more about how `run_ode_solver` works. Exercises**Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate):"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."Use `run_ode_solver` to answer this question.Here are some suggestions about how to proceed:1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.3. Express your answer in days, and plot the results as millions of kilometers versus days.If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.You might also be interested to know that [it's actually not that easy to get to the Sun](https://www.theatlantic.com/science/archive/2018/08/parker-solar-probe-launch-nasa/567197/).
###Code
AU = UNITS.astronomical_unit
m = UNITS.meter
s = UNITS.second
g = UNITS.gram
kg = 1000 * g
km = 1000 * m
init = State(r=149.6e9 * m,    # mean Earth-Sun distance, in meters
             v=0 * m/s)
G = 6.67e-11 * (m*m*m) / (kg * s*s)
Me = 5.972e24 * kg
Ms = 1.99e30 * kg
t_end = 1e7 * s
system = System(init=init, G=G, Me=Me, Ms=Ms, t_end=t_end)
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing 'G', mass of sun and earth
returns: derivatives of r and v
"""
r, v = state
unpack(system)
force = G*((Ms * Me)/(r*r))
drdt = v
dvdt = -force/Me
return drdt, dvdt
slope_func(init, 0, system)
results, details = run_ode_solver(system, slope_func)
details
# convert seconds to days
#results.index /= 60 * 60 * 24
# convert meters to millions of km
#results.r /= 1e9
plot(results.r, label='r')
decorate(xlabel='Time (day)', ylabel='Distance from sun (million km)')
r_earth = 6.371e6 * m
r_sun = 695.508e6 * m
system = System(init=init,
G = G,
Ms = Ms,
r_final=r_sun + r_earth,
Me = Me,
t_end=t_end)
def event_func(state, t, system):
"""Return the distance of the Earth from the sun.
"""
r, v = state
return r - system.r_final
event_func(init, 0, system)
results, details = run_ode_solver(system, slope_func, events=event_func)
details
# convert seconds to days
results.index /= 60 * 60 * 24
# convert meters to millions of km
results.r /= 1e9
plot(results.r, label='r')
decorate(xlabel='Time (day)', ylabel='Distance from sun (million km)')
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 20Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Dropping penniesI'll start by getting the units we need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
###Output
_____no_output_____
###Markdown
And defining the initial state.
###Code
init = State(y=381 * m,
v=0 * m/s)
###Output
_____no_output_____
###Markdown
Acceleration due to gravity is about 9.8 m / s$^2$.
###Code
g = 9.8 * m/s**2
###Output
_____no_output_____
###Markdown
When we call `odeint`, we need an array of timestamps where we want to compute the solution.I'll start with a duration of 10 seconds.
###Code
t_end = 10 * s
###Output
_____no_output_____
###Markdown
Now we make a `System` object.
###Code
system = System(init=init, g=g, t_end=t_end)
###Output
_____no_output_____
###Markdown
And define the slope function.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
unpack(system)
dydt = v
dvdt = -g
return dydt, dvdt
###Output
_____no_output_____
###Markdown
It's always a good idea to test the slope function with the initial conditions.
###Code
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
###Output
_____no_output_____
###Markdown
Now we're ready to call `run_ode_solver`
###Code
results, details = run_ode_solver(system, slope_func)
details.message
###Output
_____no_output_____
###Markdown
Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
And here's position as a function of time:
###Code
def plot_position(results):
plot(results.y)
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
###Output
_____no_output_____
###Markdown
**Exercise:** By default, `run_ode_solver` returns equally-spaced points in time, but that's not what it really computes. The ODE solver it uses is adaptive, choosing small time steps when errors are large, and large time steps when errors are small.Run `run_ode_solver` again with the option `t_eval=None`. It should return the actual time steps it computed. What can you infer about how `run_ode_solver` works?
###Code
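# A sketch of the suggested experiment, not the author's solution. It reuses the
# `system` and `slope_func` defined above and assumes `t_eval` is passed through
# to the underlying solver, as the exercise text indicates.
results2, details2 = run_ode_solver(system, slope_func, t_eval=None)
results2.index   # the actual, unequally spaced time steps the solver computed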
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Onto the sidewalkTo figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
###Code
t_crossings = crossings(results.y, 0)
###Output
_____no_output_____
###Markdown
For this example there should be just one crossing, the time when the penny hits the sidewalk.
###Code
t_sidewalk = t_crossings[0] * s
###Output
_____no_output_____
###Markdown
We can compare that to the exact result. Without air resistance, we have$v = -g t$and$y = 381 - g t^2 / 2$Setting $y=0$ and solving for $t$ yields$t = \sqrt{\frac{2 y_{init}}{g}}$
###Code
sqrt(2 * init.y / g)
###Output
_____no_output_____
###Markdown
The estimate is accurate to about 10 decimal places. EventsInstead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**.Here's an event function that returns the height of the penny above the sidewalk:
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func)
details
###Output
_____no_output_____
###Markdown
The message from the solver indicates the solver stopped because the event we wanted to detect happened.Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred:
###Code
t_sidewalk = get_last_label(results) * s
###Output
_____no_output_____
###Markdown
Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end.We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array:
###Code
details.t_events[0][0] * s
###Output
_____no_output_____
###Markdown
The result is accurate to about 15 decimal places.We can also check the velocity of the penny when it hits the sidewalk:
###Code
v_sidewalk = get_last_value(results.v) * m / s
###Output
_____no_output_____
###Markdown
And convert to kilometers per hour.
###Code
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
###Output
_____no_output_____
###Markdown
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.So it's a good thing there is air resistance. Under the hoodHere is the source code for `crossings` so you can see what's happening under the hood:
###Code
%psource crossings
###Output
_____no_output_____
###Markdown
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to learn more about how `run_ode_solver` works. Exercises**Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate):"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."Use `run_ode_solver` to answer this question.Here are some suggestions about how to proceed:1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.3. Express your answer in days, and plot the results as millions of kilometers versus days.If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.
###Code
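# Not a full solution -- a sketch of suggestion 1 only: collect the constants in
# SI units with Pint and check the Earth's initial acceleration. The numerical
# values are standard published constants and are assumptions of this sketch.
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram

G = 6.674e-11 * m**3 / kg / s**2
m_sun = 1.989e30 * kg
r_0 = 1.496e11 * m          # mean Earth-Sun distance

a_0 = G * m_sun / r_0**2
a_0    # about 0.006 meter / second**2 -- the fall really does start slowly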
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 20Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Dropping penniesI'll start by getting the units we need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
###Output
_____no_output_____
###Markdown
And defining the initial state.
###Code
init = State(y=381 * m,
v=0 * m/s)
###Output
_____no_output_____
###Markdown
Acceleration due to gravity is about 9.8 m / s$^2$.
###Code
g = 9.8 * m/s**2
###Output
_____no_output_____
###Markdown
When we call `odeint`, we need an array of timestamps where we want to compute the solution.I'll start with a duration of 10 seconds.
###Code
t_end = 10 * s
###Output
_____no_output_____
###Markdown
Now we make a `System` object.
###Code
system = System(init=init, g=g, t_end=t_end)
###Output
_____no_output_____
###Markdown
And define the slope function.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
unpack(system)
dydt = v
dvdt = -g
return dydt, dvdt
###Output
_____no_output_____
###Markdown
It's always a good idea to test the slope function with the initial conditions.
###Code
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
###Output
0.0 meter / second
-9.8 meter / second ** 2
###Markdown
Now we're ready to call `run_ode_solver`
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
And here's position as a function of time:
###Code
def plot_position(results):
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
###Output
Saving figure to file figs/chap09-fig01.pdf
###Markdown
Onto the sidewalkTo figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
###Code
t_crossings = crossings(results.y, 0)
###Output
_____no_output_____
###Markdown
For this example there should be just one crossing, the time when the penny hits the sidewalk.
###Code
t_sidewalk = t_crossings[0] * s
###Output
_____no_output_____
###Markdown
We can compare that to the exact result. Without air resistance, we have$v = -g t$and$y = 381 - g t^2 / 2$Setting $y=0$ and solving for $t$ yields$t = \sqrt{\frac{2 y_{init}}{g}}$
###Code
sqrt(2 * init.y / g)
###Output
_____no_output_____
###Markdown
The estimate is accurate to about 10 decimal places. EventsInstead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**.Here's an event function that returns the height of the penny above the sidewalk:
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func)
details
###Output
_____no_output_____
###Markdown
The message from the solver indicates the solver stopped because the event we wanted to detect happened.Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred:
###Code
t_sidewalk = get_last_label(results) * s
###Output
_____no_output_____
###Markdown
Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end.We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array:
###Code
details.t_events[0][0] * s
###Output
_____no_output_____
###Markdown
The result is accurate to about 15 decimal places.We can also check the velocity of the penny when it hits the sidewalk:
###Code
v_sidewalk = get_last_value(results.v) * m / s
###Output
_____no_output_____
###Markdown
And convert to kilometers per hour.
###Code
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
###Output
_____no_output_____
###Markdown
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.So it's a good thing there is air resistance. Under the hoodHere is the source code for `crossings` so you can see what's happening under the hood:
###Code
%psource crossings
###Output
_____no_output_____
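###Markdown
Since `%psource` only displays the source interactively, here is a sketch of what `crossings` is doing, assuming it wraps `scipy.interpolate.InterpolatedUnivariateSpline` as suggested by the documentation link below; the exact modsim implementation may differ in details, and `crossings_sketch` is a name made up for this illustration.
###Code
from scipy.interpolate import InterpolatedUnivariateSpline

def crossings_sketch(series, value):
    """Approximate the locations where `series` passes through `value`.

    series: pandas Series with time stamps as its index
    value: level whose crossings we want
    """
    # Fit an interpolating spline through (t, y - value);
    # the roots of that spline are the crossing times.
    spline = InterpolatedUnivariateSpline(series.index, series.values - value)
    return spline.roots()
###Output
_____no_output_____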
###Markdown
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to learn more about how `run_ode_solver` works. Exercises**Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate):"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."Use `run_ode_solver` to answer this question.Here are some suggestions about how to proceed:1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.3. Express your answer in days, and plot the results as millions of kilometers versus days.If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.You might also be interested to know that [it's actually not that easy to get to the Sun](https://www.theatlantic.com/science/archive/2018/08/parker-solar-probe-launch-nasa/567197/).
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 20Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Dropping penniesI'll start by getting the units we need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
km = UNITS.kilometer
###Output
_____no_output_____
###Markdown
And defining the initial state.
###Code
init = State(y=381 * m,
v=0 * m/s)
###Output
_____no_output_____
###Markdown
Acceleration due to gravity is about 9.8 m / s$^2$.
###Code
g = 9.8 * m/s**2
###Output
_____no_output_____
###Markdown
When we call `odeint`, we need an array of timestamps where we want to compute the solution.I'll start with a duration of 10 seconds.
###Code
t_end = 10 * s
###Output
_____no_output_____
###Markdown
Now we make a `System` object.
###Code
system = System(init=init, g=g, t_end=t_end)
###Output
_____no_output_____
###Markdown
And define the slope function.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
unpack(system)
dydt = v
dvdt = -g
return dydt, dvdt
###Output
_____no_output_____
###Markdown
It's always a good idea to test the slope function with the initial conditions.
###Code
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
###Output
0.0 meter / second
-9.8 meter / second ** 2
###Markdown
Now we're ready to call `run_ode_solver`.
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
And here's position as a function of time:
###Code
def plot_position(results):
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
###Output
Saving figure to file figs/chap09-fig01.pdf
###Markdown
Onto the sidewalkTo figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
###Code
t_crossings = crossings(results.y, 0)
###Output
_____no_output_____
###Markdown
For this example there should be just one crossing, the time when the penny hits the sidewalk.
###Code
t_sidewalk = t_crossings[0] * s
###Output
_____no_output_____
###Markdown
We can compare that to the exact result. Without air resistance, we have$v = -g t$and$y = 381 - g t^2 / 2$Setting $y=0$ and solving for $t$ yields$t = \sqrt{\frac{2 y_{init}}{g}}$
###Code
sqrt(2 * init.y / g)
###Output
_____no_output_____
###Markdown
The estimate is accurate to about 10 decimal places. EventsInstead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**.Here's an event function that returns the height of the penny above the sidewalk:
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func)
details
###Output
_____no_output_____
###Markdown
The message from the solver indicates the solver stopped because the event we wanted to detect happened.Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred:
###Code
t_sidewalk = get_last_label(results) * s
###Output
_____no_output_____
###Markdown
Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end.We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array:
###Code
details.t_events[0][0] * s
###Output
_____no_output_____
###Markdown
The result is accurate to about 15 decimal places.We can also check the velocity of the penny when it hits the sidewalk:
###Code
v_sidewalk = get_last_value(results.v) * m / s
###Output
_____no_output_____
###Markdown
And convert to kilometers per hour.
###Code
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
###Output
_____no_output_____
###Markdown
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.So it's a good thing there is air resistance. Under the hoodHere is the source code for `crossings` so you can see what's happening under the hood:
###Code
%psource crossings
###Output
_____no_output_____
###Markdown
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to learn more about how `run_ode_solver` works. Exercises**Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate):"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."Use `run_ode_solver` to answer this question.Here are some suggestions about how to proceed:1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.3. Express your answer in days, and plot the results as millions of kilometers versus days.If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.You might also be interested to know that [it's actually not that easy to get to the Sun](https://www.theatlantic.com/science/archive/2018/08/parker-solar-probe-launch-nasa/567197/).
###Code
N = UNITS.newton

mass2 = 1.989 * 10**30 * kg      # mass of the Sun
mass1 = 5.9722 * 10**24 * kg     # mass of the Earth
G = 6.67 * 10**-11 * N * m**2 / kg**2

init = State(r=1.48e11 * m,      # initial Earth-Sun distance (about 148 million km)
             v=0 * m/s)

system = System(init=init, mass1=mass1, mass2=mass2, G=G, t_end=1e10*s)

def slope_func(state, t, system):
    """Compute derivatives of the state.
    state: distance from the Sun, velocity
    t: time
    system: System object containing the masses and `G`
    returns: derivatives of r and v
    """
    r, v = state
    unpack(system)
    force = G * mass1 * mass2 / r**2    # Law of Universal Gravitation
    drdt = v
    dvdt = -force / mass1               # acceleration of the Earth toward the Sun
    return drdt, dvdt

def event_func(state, t, system):
    """Return the distance between the Earth and the Sun,
    so the solver stops when it reaches 0.
    """
    r, v = state
    return r

results, details = run_ode_solver(system, slope_func, events=event_func)
details.message
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 20Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Dropping penniesI'll start by getting the units we need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
###Output
_____no_output_____
###Markdown
And defining the initial state.
###Code
init = State(y=381 * m,
v=0 * m/s)
###Output
_____no_output_____
###Markdown
Acceleration due to gravity is about 9.8 m / s$^2$.
###Code
g = 9.8 * m/s**2
###Output
_____no_output_____
###Markdown
When we call `odeint`, we need an array of timestamps where we want to compute the solution.I'll start with a duration of 10 seconds.
###Code
t_end = 10 * s
###Output
_____no_output_____
###Markdown
Now we make a `System` object.
###Code
system = System(init=init, g=g, t_end=t_end)
###Output
_____no_output_____
###Markdown
And define the slope function.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
unpack(system)
dydt = v
dvdt = -g
return dydt, dvdt
###Output
_____no_output_____
###Markdown
It's always a good idea to test the slope function with the initial conditions.
###Code
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
###Output
_____no_output_____
###Markdown
Now we're ready to call `run_ode_solver`.
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
And here's position as a function of time:
###Code
def plot_position(results):
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
###Output
_____no_output_____
###Markdown
Onto the sidewalkTo figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
###Code
t_crossings = crossings(results.y, 0)
###Output
_____no_output_____
###Markdown
For this example there should be just one crossing, the time when the penny hits the sidewalk.
###Code
t_sidewalk = t_crossings[0] * s
###Output
_____no_output_____
###Markdown
We can compare that to the exact result. Without air resistance, we have$v = -g t$and$y = 381 - g t^2 / 2$Setting $y=0$ and solving for $t$ yields$t = \sqrt{\frac{2 y_{init}}{g}}$
###Code
sqrt(2 * init.y / g)
###Output
_____no_output_____
###Markdown
The estimate is accurate to about 10 decimal places. EventsInstead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**.Here's an event function that returns the height of the penny above the sidewalk:
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func)
details
###Output
_____no_output_____
###Markdown
The message from the solver indicates the solver stopped because the event we wanted to detect happened.Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred:
###Code
t_sidewalk = get_last_label(results) * s
###Output
_____no_output_____
###Markdown
Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end.We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array:
###Code
details.t_events[0][0] * s
###Output
_____no_output_____
###Markdown
The result is accurate to about 15 decimal places.We can also check the velocity of the penny when it hits the sidewalk:
###Code
v_sidewalk = get_last_value(results.v) * m / s
###Output
_____no_output_____
###Markdown
And convert to kilometers per hour.
###Code
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
###Output
_____no_output_____
###Markdown
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.So it's a good thing there is air resistance. Under the hoodHere is the source code for `crossings` so you can see what's happening under the hood:
###Code
%psource crossings
###Output
_____no_output_____
###Markdown
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to learn more about how `run_ode_solver` works. Exercises**Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate):"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."Use `run_ode_solver` to answer this question.Here are some suggestions about how to proceed:1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.3. Express your answer in days, and plot the results as millions of kilometers versus days.If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.You might also be interested to know that [it's actually not that easy to get to the Sun](https://www.theatlantic.com/science/archive/2018/08/parker-solar-probe-launch-nasa/567197/).
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 20Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Dropping penniesI'll start by getting the units we need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
###Output
_____no_output_____
###Markdown
And defining the initial state.
###Code
init = State(y=381 * m,
v=0 * m/s)
###Output
_____no_output_____
###Markdown
Acceleration due to gravity is about 9.8 m / s$^2$.
###Code
g = 9.8 * m/s**2
###Output
_____no_output_____
###Markdown
When we call `odeint`, we need an array of timestamps where we want to compute the solution.I'll start with a duration of 10 seconds.
###Code
t_end = 10 * s
###Output
_____no_output_____
###Markdown
Now we make a `System` object.
###Code
system = System(init=init, g=g, t_end=t_end)
###Output
_____no_output_____
###Markdown
And define the slope function.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
unpack(system)
dydt = v
dvdt = -g
return dydt, dvdt
###Output
_____no_output_____
###Markdown
It's always a good idea to test the slope function with the initial conditions.
###Code
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
###Output
0.0 meter / second
-9.8 meter / second ** 2
###Markdown
Now we're ready to call `run_ode_solver`.
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
And here's position as a function of time:
###Code
def plot_position(results):
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
###Output
Saving figure to file figs/chap09-fig01.pdf
###Markdown
Onto the sidewalkTo figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
###Code
t_crossings = crossings(results.y, 0)
###Output
_____no_output_____
###Markdown
For this example there should be just one crossing, the time when the penny hits the sidewalk.
###Code
t_sidewalk = t_crossings[0] * s
###Output
_____no_output_____
###Markdown
We can compare that to the exact result. Without air resistance, we have$v = -g t$and$y = 381 - g t^2 / 2$Setting $y=0$ and solving for $t$ yields$t = \sqrt{\frac{2 y_{init}}{g}}$
###Code
sqrt(2 * init.y / g)
###Output
_____no_output_____
###Markdown
The estimate is accurate to about 10 decimal places. EventsInstead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**.Here's an event function that returns the height of the penny above the sidewalk:
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func)
details
###Output
_____no_output_____
###Markdown
The message from the solver indicates the solver stopped because the event we wanted to detect happened.Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred:
###Code
t_sidewalk = get_last_label(results) * s
###Output
_____no_output_____
###Markdown
Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end.We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array:
###Code
details.t_events[0][0] * s
###Output
_____no_output_____
###Markdown
The result is accurate to about 15 decimal places.We can also check the velocity of the penny when it hits the sidewalk:
###Code
v_sidewalk = get_last_value(results.v) * m / s
###Output
_____no_output_____
###Markdown
And convert to kilometers per hour.
###Code
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
###Output
_____no_output_____
###Markdown
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.So it's a good thing there is air resistance. Under the hoodHere is the source code for `crossings` so you can see what's happening under the hood:
###Code
%psource crossings
###Output
_____no_output_____
###Markdown
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to learn more about how `run_ode_solver` works. Exercises**Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate):"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."Use `run_ode_solver` to answer this question.Here are some suggestions about how to proceed:1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.3. Express your answer in days, and plot the results as millions of kilometers versus days.If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.You might also be interested to know that [it's actually not that easy to get to the Sun](https://www.theatlantic.com/science/archive/2018/08/parker-solar-probe-launch-nasa/567197/).
###Code
N = UNITS.newton
kg = UNITS.kilogram
m = UNITS.meter
AU = UNITS.astronomical_unit
day = UNITS.day
mEarth = 5.972e24 * kg
mSun = 1.989e30 *kg
G = 6.674e-11 * (N*m**2)/(kg**2)
death = 6.371e6 * m + 695.508e6 * m
def make_system(mEarth, mSun, G,death):
init = State(r=149597870000 * m,
v=0 * m/s);
t_0 = 0 *s
t_end = 1e10 * s
return System(init=init,
mEarth = mEarth,
mSun = mSun,
G = G,
death = death,
t_0 = t_0,
t_end=t_end)
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
r, v = state
unpack(system)
force = universal_grav(state,system)
drdt = v
dvdt = -force/mEarth
return drdt, dvdt
def universal_grav (state,system):
r,v = state
unpack (system)
force = G * mEarth * mSun / r**2
print(force)
return force
system = make_system(mEarth, mSun, G,death)
drdt, dvdt = slope_func(init, 0, system)
print(drdt)
print(dvdt)
def event_func(state, t, system):
"""Return the distance between the earth and the sun surfaces
"""
r, v = state
return r - death
results, details = run_ode_solver(system, slope_func, events=event_func)
details
t_death = get_last_label(results) * s
t_death.to(UNITS.day)
ts = linspace(system.t_0, t_death, 201)
results, details = run_ode_solver(system, slope_func, events=event_func, t_eval=ts)
results.index /= 60*60*24
results.r /= 1e9 #converts to millions of km
def plot_position(results):
plot(results.r, label='r')
decorate(xlabel='Time (day)',
ylabel='Distance from sun (million km)')
plot_position(results)
penergy_start = -G*mEarth*mSun/(149597870000*m)
penergy_end = -G*mEarth*mSun/death
kenergy_start = 0.5*mEarth*((get_first_value(results.v)*m/s)**2)
kenergy_end = 0.5*mEarth*((get_last_value(results.v)*m/s)**2)
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
|
examples_ja/tutorial018_boolean_reduction.ipynb
|
###Markdown
Decomposing a three-body problem into a two-body problem. In QUBO we can normally only compute products of up to two qubits, the so-called two-body problem. However, when formulating a problem, products of three or more qubits sometimes appear. In that case we use a mathematical technique to decompose the three-body problem into a two-body one. Solving the two-body problem. For example, when we have a cost function like the one below, its coefficients can be entered directly into the QUBO. This is done by expressing the bias of q0 and the coupling coefficient between q0 and q1 as an upper-triangular matrix. See the other tutorials for details.
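(The cost function referred to above was an image in the original notebook and is not preserved here; judging from the QUBO matrix entered below, it is presumably $E = q_0 - q_0 q_1$, i.e. a bias of $+1$ on $q_0$ and a coupling of $-1$ between $q_0$ and $q_1$.)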
###Code
import wildqat as wq
a = wq.opt()
a.qubo = [[1,-1],[0,0]]
a.sa()
###Output
1.5159461498260498
###Markdown
Solving the three-body problem. What about a three-body term, in which several qubits are multiplied together? Since coupling coefficients exist only between pairs of qubits, such a term cannot be set directly. So here we mathematically decompose the three-body problem into a two-body one. Decomposition method. We introduce a new qubit for the decomposition, and rewrite the term using it as shown below. This is not just a substitution: a new constraint term that enforces the condition above is appended to the end of the expression. Its coefficient gamma is a hyperparameter that has to be tuned case by case. Setting gamma to 0.5 here gives the following, and running the calculation yields:
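(The substitution and penalty formulas above were images in the original and are not preserved; the following reconstruction is an assumption, checked against the QUBO matrix entered below. Taking the three-body cost function to be $E = q_0 - q_0 q_1 q_2$, we replace the product $q_1 q_2$ with a new qubit $q_3$ and add the standard penalty $\gamma \left( q_1 q_2 - 2 q_1 q_3 - 2 q_2 q_3 + 3 q_3 \right)$, which is zero exactly when $q_3 = q_1 q_2$ and positive otherwise. The cost function becomes $E = q_0 - q_0 q_3 + \gamma \left( q_1 q_2 - 2 q_1 q_3 - 2 q_2 q_3 + 3 q_3 \right)$, and with $\gamma = 0.5$ this reproduces the upper-triangular QUBO matrix in the next cell.)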
###Code
b = wq.opt()
b.qubo = [[1,0,0,-1],[0,0,0.5,-1],[0,0,0,-1],[0,0,0,1.5]]
b.sa()
###Output
1.5440006256103516
###Markdown
Indeed, this satisfies the constraint. The transition of the total energy during the computation can be checked with matplotlib, as shown below.
###Code
a.plot()
###Output
_____no_output_____
|
notebooks/02.04-Flow_Constrictions_on_Upper_Rainy_River.ipynb
|
###Markdown
*This notebook contains material from [Controlling Natural Watersheds](https://jckantor.github.io/Controlling-Natural-Watersheds); content is available [on Github](https://github.com/jckantor/Controlling-Natural-Watersheds.git).* Flow Constrictions on Upper Rainy River
###Code
# Display graphics inline with the notebook
%matplotlib notebook
# Standard Python modules
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
import datetime
# Modules to display images and data tables
from IPython.display import Image
from IPython.core.display import display
dir = '../data/'
img = '../figures/'
hydat = pd.HDFStore('../data/hydat.h5')
DLY_LEVELS = hydat['DLY_LEVELS']
DLY_FLOWS = hydat['DLY_FLOWS']
STATIONS = hydat['STATIONS']
def getFlowsWSC(s):
data = DLY_FLOWS[DLY_FLOWS['STATION_NUMBER'] == s]
ts = {}
for k in data.index:
mo = str(data.ix[k,'MONTH'])
yr = str(data.ix[k,'YEAR'])
for n in range(1,data.ix[k,'NO_DAYS']+1):
ts[pd.to_datetime(mo+'/'+str(n)+'/'+yr)] = data.ix[k,'FLOW'+str(n)]
ts = pd.Series(ts)
ts.name = STATIONS.ix[s,'STATION_NAME'] + ' (' + s + ')'
# drop initial and terminal null entries
j = 0
while pd.isnull(ts.ix[j]):
j += 1
k = len(ts.index) - 1
while pd.isnull(ts.ix[k]):
k += -1
return ts[j:k]
def getLevelsWSC(s):
global DLY_LEVELS
data = DLY_LEVELS[DLY_LEVELS['STATION_NUMBER'] == s]
ts = {}
for k in data.index:
mo = str(data.ix[k,'MONTH'])
yr = str(data.ix[k,'YEAR'])
for n in range(1,data.ix[k,'NO_DAYS']+1):
ts[pd.to_datetime(mo+'/'+str(n)+'/'+yr)] = data.ix[k,'LEVEL'+str(n)]
ts = pd.Series(ts)
ts.name = STATIONS.ix[s,'STATION_NAME'] + ' (' + s + ')'
# drop initial and terminal null entries
j = 0
while pd.isnull(ts.ix[j]):
j += 1
k = len(ts.index) - 1
while pd.isnull(ts.ix[k]):
k += -1
return ts[j:k]
def mapWSC(stationList):
S = STATIONS.ix[stationList,['STATION_NAME','LATITUDE','LONGITUDE']]
locs = ["{0},{1}".format(S.ix[s,'LATITUDE'], S.ix[s,'LONGITUDE']) \
for s in S.index]
google_maps_url = \
"https://maps.googleapis.com/maps/api/staticmap?" + \
"size=640x320" + \
"&maptype=terrain" + \
"&markers=color:red%7Csize:mid%7C" + "|".join(locs)
img = Image(url = google_maps_url)
display(img)
return S
from IPython.display import Image
from IPython.core.display import display
ranierLevelStations = ['05PB007','05PC024','05PC025']
ranierFlowStations = ['05PC019']
levelLocs = ["{0},{1}".format(STATIONS.ix[s,'LATITUDE'], \
STATIONS.ix[s,'LONGITUDE']) for s in ranierLevelStations]
levelMarkers = "&markers=color:red%7Csize:mid%7C" + "|".join(levelLocs)
flowLocs = ["{0},{1}".format(STATIONS.ix[s,'LATITUDE'], \
STATIONS.ix[s,'LONGITUDE']) for s in ranierFlowStations]
flowMarkers = "&markers=color:green%7Csize:mid%7C" + "|".join(flowLocs)
google_maps_url = "https://maps.googleapis.com/maps/api/staticmap?" + \
"size=640x300&maptype=terrain"
Image(url = google_maps_url + levelMarkers + flowMarkers)
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import urllib
urllib.urlretrieve(google_maps_url + levelMarkers + flowMarkers,'../images/RRMap.png')
img=mpimg.imread('../images/RRMap.png')
plt.figure(figsize=(10,8))
import matplotlib.gridspec as gridspec
gs = gridspec.GridSpec(2, 3, height_ratios=[1,1])
ax1 = plt.subplot(gs[0,:])
fig = ax1.imshow(img)
fig.set_cmap('hot')
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
def ranier(y,ax):
plt.hold(True)
for r in ranierLevelStations:
lvl = getLevelsWSC(r)
lvl[lvl.index.year==y].plot()
#ax.legend(ranierLevelStations,loc='upper left')
plt.ylim(336.0,339.0)
plt.hold(False)
ranier(2012,plt.subplot(gs[1,0]))
ranier(2013,plt.subplot(gs[1,1]))
ranier(2014,plt.subplot(gs[1,2]))
plt.tight_layout()
fname = '../images/RainyRiverConstrictions.png'
plt.savefig(fname)
!convert $fname -trim $fname
###Output
_____no_output_____
|
_posts/python-v3/3d/3d-clusters/3d-clusters.ipynb
|
###Markdown
New to Plotly?Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).You can set up Plotly to work in [online](https://plot.ly/python/getting-started/initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/start-plotting-online).We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! 3D Clustering with Alpha Shapes
###Code
import plotly.plotly as py
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/alpha_shape.csv')
df.head()
scatter = dict(
mode = "markers",
name = "y",
type = "scatter3d",
x = df['x'], y = df['y'], z = df['z'],
marker = dict( size=2, color="rgb(23, 190, 207)" )
)
clusters = dict(
alphahull = 7,
name = "y",
opacity = 0.1,
type = "mesh3d",
x = df['x'], y = df['y'], z = df['z']
)
layout = dict(
title = '3d point clustering',
scene = dict(
xaxis = dict( zeroline=False ),
yaxis = dict( zeroline=False ),
zaxis = dict( zeroline=False ),
)
)
fig = dict( data=[scatter, clusters], layout=layout )
# Use py.iplot() for IPython notebook
py.iplot(fig, filename='3d point clustering')
###Output
_____no_output_____
###Markdown
Reference See https://plot.ly/python/reference/mesh3d for more information regarding subplots!
###Code
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'3d-clusters.ipynb', 'python/3d-point-clustering/', 'Python 3D Clustering | plotly',
'How to cluster points in 3d with alpha shapes in plotly and Python',
title= '3D Point Clustering in Python | plotly',
name = '3d Clustering',
has_thumbnail='true', thumbnail='thumbnail/3d-clusters.jpg',
language='python',
display_as='3d_charts', order=14,
ipynb= '~notebook_demo/74')
###Output
_____no_output_____
###Markdown
New to Plotly?Plotly's Python library is free and open source! [Get started](https://plotly.com/python/getting-started/) by downloading the client and [reading the primer](https://plotly.com/python/getting-started/).You can set up Plotly to work in [online](https://plotly.com/python/getting-started/initialization-for-online-plotting) or [offline](https://plotly.com/python/getting-started/initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plotly.com/python/getting-started/start-plotting-online).We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! 3D Clustering with Alpha Shapes
###Code
import plotly.plotly as py
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/alpha_shape.csv')
df.head()
scatter = dict(
mode = "markers",
name = "y",
type = "scatter3d",
x = df['x'], y = df['y'], z = df['z'],
marker = dict( size=2, color="rgb(23, 190, 207)" )
)
clusters = dict(
alphahull = 7,
name = "y",
opacity = 0.1,
type = "mesh3d",
x = df['x'], y = df['y'], z = df['z']
)
layout = dict(
title = '3d point clustering',
scene = dict(
xaxis = dict( zeroline=False ),
yaxis = dict( zeroline=False ),
zaxis = dict( zeroline=False ),
)
)
fig = dict( data=[scatter, clusters], layout=layout )
# Use py.iplot() for IPython notebook
py.iplot(fig, filename='3d point clustering')
###Output
_____no_output_____
###Markdown
Reference See https://plotly.com/python/reference/mesh3d for more information regarding subplots!
###Code
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'3d-clusters.ipynb', 'python/3d-point-clustering/', 'Python 3D Clustering | plotly',
'How to cluster points in 3d with alpha shapes in plotly and Python',
title= '3D Point Clustering in Python | plotly',
name = '3d Clustering',
has_thumbnail='true', thumbnail='thumbnail/3d-clusters.jpg',
language='python',
display_as='3d_charts', order=14,
ipynb= '~notebook_demo/74')
###Output
_____no_output_____
|
ArcGIS Python API - List all web maps apps and layers.ipynb
|
###Markdown
ArcGIS Python API - List all web maps apps and layers
###Code
from arcgis import *
from IPython.display import display
import pandas as pd
import getpass
username = getpass.getpass()
password = getpass.getpass()
gis = GIS("https://mpi.maps.arcgis.com", username, password)
###Output
_____no_output_____
###Markdown

###Code
import datetime
MPI_AGOL_items = gis.content.search("", item_type = "", max_items=10000, outside_org = False)
print(MPI_AGOL_items[0].folder)
items_arr = []
for i in MPI_AGOL_items:
items_arr.append([i.id,i.title,i.owner,datetime.datetime.fromtimestamp(i.created / 1e3),datetime.datetime.fromtimestamp(i.modified / 1e3),i.access,i.description,i.snippet,i.spatialReference,i.accessInformation,i.licenseInfo,i.type,i.url,i.tags,i.numViews])
#print(items_arr[0])
items_df = pd.DataFrame(items_arr,columns=["id","title","owner","created","modified","access","description","snippet","spatialReference","accessInformation","licenseInfo","type","url","tags","numViews"])
items_df.info()
items_df.head()
thisweek_items_df = items_df[items_df['modified'] > datetime.datetime.today() - datetime.timedelta(days=7)]
public_items_df = items_df[items_df['access'] == "public"]
public_items_df.count()
print(MPI_AGOL_items[0].folder)
###Output
_____no_output_____
|
Covid_eda.ipynb
|
###Markdown
Step 1 Use Pandas to load COVID-19 State Data Set as the dataframe. **[Pandas](https://pandas.pydata.org/)** **[COVID-19 State Data Set](https://www.kaggle.com/nightranger77/covid19-state-data/data)**
###Code
import pandas as pd
df=pd.read_csv("COVID19_state.csv")
df.head()
###Output
_____no_output_____
###Markdown
Step 2 Randomly sample 20 data items and show them.
###Code
df.sample(n=20)
###Output
_____no_output_____
###Markdown
Step 3 Randomly sample and show 10 data items where the Deaths are more than 100.
###Code
res=df[df['Deaths']>100]
res1=res.sample(n=10)
print("The states whose deaths are above 100 are :\n",res1)
###Output
The states whose deaths are above 100 are :
State Tested Infected Deaths Population Pop Density Gini \
18 Louisiana 58498 12496 409.0 4645184 107.5175 0.4990
4 California 126700 12026 276.0 39937489 256.3727 0.4899
38 Pennsylvania 70030 10017 136.0 12820878 286.5449 0.4689
44 Utah 63751 6110 105.0 3282115 39.9430 0.4063
10 Georgia 26294 6383 208.0 10736059 186.6719 0.4813
19 Massachusetts 68800 11736 216.0 6976597 894.4355 0.4786
14 Illinois 53581 10357 243.0 12659682 228.0243 0.4810
5 Colorado 23900 4565 126.0 5845526 56.4011 0.4586
48 Wisconsin 87918 7591 310.0 5851754 108.0497 0.4498
31 New Jersey 75356 34124 846.0 8936574 1215.1991 0.4813
ICU Beds Income GDP ... Hospitals Health Spending Pollution \
18 1289 45542 53589 ... 158 7815 7.9
4 7338 62586 74205 ... 359 7549 12.8
38 3169 55349 61594 ... 199 9258 9.2
44 565 45340 55550 ... 54 5982 8.4
10 2508 45745 55832 ... 145 6587 8.3
19 1326 70073 82480 ... 75 10559 6.3
14 3144 56933 67268 ... 187 8262 9.3
5 1597 56846 63882 ... 89 6804 6.7
48 1159 50756 57720 ... 133 8702 6.8
31 1822 67609 69378 ... 82 8859 8.1
Med-Large Airports Temperature Urban Age 0-25 Age 26-54 Age 55+ \
18 1.0 66.4 73.2 0.34 0.37 0.28
4 9.0 59.4 95.0 0.33 0.40 0.26
38 2.0 48.8 78.7 0.30 0.37 0.32
44 1.0 48.6 90.6 0.42 0.37 0.21
10 1.0 63.5 75.1 0.35 0.39 0.26
19 1.0 47.9 92.0 0.30 0.39 0.31
14 2.0 51.8 88.5 0.33 0.38 0.28
5 1.0 45.1 86.2 0.33 0.40 0.27
48 1.0 43.1 70.2 0.32 0.37 0.31
31 1.0 52.7 94.7 0.31 0.38 0.30
School Closure Date
18 03/16/20
4 03/19/20
38 03/16/20
44 03/16/20
10 03/18/20
19 03/17/20
14 03/17/20
5 03/23/20
48 03/18/20
31 03/18/20
[10 rows x 26 columns]
###Markdown
Step 4 Sort the data by GDP and present the top 20 data items.
###Code
sort_by_gdp=df.sort_values('GDP', ascending=False)
sort_by=sort_by_gdp.head(20)
print("Sorted data :\n",sort_by)
###Output
Sorted data :
State Tested Infected Deaths Population Pop Density \
7 District of Columbia 6438 902 21.0 720687 11814.5410
34 New York 283621 113704 3565.0 19440469 412.5211
19 Massachusetts 68800 11736 216.0 6976597 894.4355
8 Delaware 4289 593 14.0 982895 504.3073
6 Connecticut 22029 5276 165.0 3563077 735.8689
4 California 126700 12026 276.0 39937489 256.3727
47 Washington 5844 461 20.0 7797095 117.3272
0 Alaska 5022 171 5.0 734002 1.2863
28 North Dakota 6207 186 3.0 761723 11.0393
50 Wyoming 8838 324 3.0 567025 5.8400
31 New Jersey 75356 34124 846.0 8936574 1215.1991
20 Maryland 28337 3609 67.0 6083116 626.6731
14 Illinois 53581 10357 243.0 12659682 228.0243
23 Minnesota 25423 865 24.0 5700671 71.5922
11 Hawaii 10462 351 3.0 1412687 219.9419
29 Nebraska 5472 323 8.0 1952570 25.4161
5 Colorado 23900 4565 126.0 5845526 56.4011
30 New Hampshire 8032 621 9.0 1371246 153.1605
45 Virginia 28043 1428 8.0 8626207 218.4403
38 Pennsylvania 70030 10017 136.0 12820878 286.5449
Gini ICU Beds Income GDP ... Hospitals Health Spending \
7 0.5420 314 47285 200277 ... 10 11944
34 0.5229 3952 68667 85746 ... 166 9778
19 0.4786 1326 70073 82480 ... 75 10559
8 0.4522 186 51449 77253 ... 7 10254
6 0.4945 674 74561 76342 ... 32 9859
4 0.4899 7338 62586 74205 ... 359 7549
47 0.4591 1265 60781 74182 ... 92 7913
0 0.4081 119 59687 73205 ... 21 11064
28 0.4533 238 54306 72597 ... 39 9851
50 0.4360 102 60095 69900 ... 29 8320
31 0.4813 1822 67609 69378 ... 82 8859
20 0.4499 1134 62914 68573 ... 50 8602
14 0.4810 3144 56933 67268 ... 187 8262
23 0.4496 1171 56374 64675 ... 127 8871
11 0.4420 201 54565 64096 ... 22 7299
29 0.4477 440 52110 63942 ... 93 8412
5 0.4586 1597 56846 63882 ... 89 6804
30 0.4304 242 61405 63067 ... 28 9589
45 0.4705 1654 56952 62563 ... 96 7556
38 0.4689 3169 55349 61594 ... 199 9258
Pollution Med-Large Airports Temperature Urban Age 0-25 Age 26-54 \
7 9.8 0.0 54.65 100.0 0.30 0.48
34 6.6 3.0 45.40 87.9 0.31 0.39
19 6.3 1.0 47.90 92.0 0.30 0.39
8 8.3 0.0 55.30 83.3 0.30 0.37
6 7.2 1.0 49.00 88.0 0.30 0.38
4 12.8 9.0 59.40 95.0 0.33 0.40
47 8.0 1.0 48.30 84.1 0.31 0.40
0 6.4 1.0 26.60 66.0 0.36 0.39
28 4.6 0.0 40.40 59.9 0.35 0.37
50 5.0 0.0 42.00 64.8 0.32 0.36
31 8.1 1.0 52.70 94.7 0.31 0.38
20 7.7 1.0 54.20 87.2 0.31 0.39
14 9.3 2.0 51.80 88.5 0.33 0.38
23 6.6 1.0 41.20 73.3 0.32 0.38
11 5.4 2.0 70.00 91.9 0.30 0.37
29 7.1 1.0 48.80 73.1 0.35 0.37
5 6.7 1.0 45.10 86.2 0.33 0.40
30 4.4 0.0 43.80 60.3 0.28 0.37
45 6.9 2.0 55.10 75.5 0.33 0.38
38 9.2 2.0 48.80 78.7 0.30 0.37
Age 55+ School Closure Date
7 0.22 03/16/20
34 0.30 03/18/20
19 0.31 03/17/20
8 0.33 03/16/20
6 0.32 03/17/20
4 0.26 03/19/20
47 0.29 03/17/20
0 0.25 03/19/20
28 0.28 03/16/20
50 0.31 03/20/20
31 0.30 03/18/20
20 0.29 03/16/20
14 0.28 03/17/20
23 0.30 03/18/20
11 0.32 03/23/20
29 0.29 NaN
5 0.27 03/23/20
30 0.34 03/16/20
45 0.29 03/16/20
38 0.32 03/16/20
[20 rows x 26 columns]
###Markdown
Step 5 Show the simple statistical information (mean, std, min, max, quartile1, quartile2, quartile3).
###Code
res1=df.min()
print("the minimum infected people of COVID-19 are :\n",res1)
res2=df.max()
print("the maximum infected people of COVID-19 are :\n",res2)
res3=df.std()
print("the std of infected people of COVID-19 are :\n",res3)
res4=df.mean()
print("the mean of infected people of COVID-19 are :\n",res4)
res5=df.quantile([0.25,0.5,0.75])
print("the quartile of infected people of COVID-19 are :\n",res5)
res6=df.describe()
print("the description of infected people of COVID-19 are :\n",res6)
###Output
the minimum infected people of COVID-19 are :
State Alabama
Tested 2523
Infected 171
Deaths 2
Population 567025
Pop Density 1.2863
Gini 0.4063
ICU Beds 94
Income 37994
GDP 37948
Unemployment 2.2
Sex Ratio 0.88857
Smoking Rate 8.9
Flu Deaths 9.6
Respiratory Deaths 19.6
Physicians 1172
Hospitals 7
Health Spending 5982
Pollution 4.4
Med-Large Airports 0
Temperature 26.6
Urban 38.7
Age 0-25 0.26
Age 26-54 0.35
Age 55+ 0.21
dtype: object
the maximum infected people of COVID-19 are :
State Wyoming
Tested 283621
Infected 113704
Deaths 3565
Population 39937489
Pop Density 11814.5
Gini 0.542
ICU Beds 7338
Income 74561
GDP 200277
Unemployment 5.8
Sex Ratio 1.05469
Smoking Rate 26
Flu Deaths 26.1
Respiratory Deaths 64.3
Physicians 112906
Hospitals 523
Health Spending 11944
Pollution 12.8
Med-Large Airports 9
Temperature 70.7
Urban 100
Age 0-25 0.42
Age 26-54 0.48
Age 55+ 0.37
dtype: object
the std of infected people of COVID-19 are :
Tested 4.568136e+04
Infected 1.643309e+04
Deaths 5.097193e+02
Population 7.450657e+06
Pop Density 1.647226e+03
Gini 2.345488e-02
ICU Beds 1.562125e+03
Income 8.224387e+03
GDP 2.264827e+04
Unemployment 8.312334e-01
Sex Ratio 3.186821e-02
Smoking Rate 3.489429e+00
Flu Deaths 3.669887e+00
Respiratory Deaths 1.090842e+01
Physicians 2.253292e+04
Hospitals 8.888191e+01
Health Spending 1.256751e+03
Pollution 1.457535e+00
Med-Large Airports 1.758564e+00
Temperature 8.627992e+00
Urban 1.488548e+01
Age 0-25 2.711631e-02
Age 26-54 1.967979e-02
Age 55+ 3.093573e-02
dtype: float64
the mean of infected people of COVID-19 are :
Tested 3.258867e+04
Infected 6.071039e+03
Deaths 1.651961e+02
Population 6.496451e+06
Pop Density 4.315605e+02
Gini 4.661647e-01
ICU Beds 1.466412e+03
Income 5.159761e+04
GDP 6.149733e+04
Unemployment 3.515686e+00
Sex Ratio 9.637209e-01
Smoking Rate 1.727059e+01
Flu Deaths 1.524118e+01
Respiratory Deaths 4.233529e+01
Physicians 1.971167e+04
Hospitals 1.019216e+02
Health Spending 8.332157e+03
Pollution 7.413725e+00
Med-Large Airports 1.215686e+00
Temperature 5.199902e+01
Urban 7.410784e+01
Age 0-25 3.235294e-01
Age 26-54 3.764706e-01
Age 55+ 2.990196e-01
dtype: float64
the quartile of infected people of COVID-19 are :
Tested Infected Deaths Population Pop Density Gini ICU Beds \
0.25 7090.5 659.5 14.0 1802113.0 50.60485 0.45205 327.0
0.50 18925.0 1676.0 40.0 4499692.0 108.04970 0.46800 1134.0
0.75 33555.0 4920.5 126.5 7587794.5 223.98310 0.47950 1841.5
Income GDP Unemployment ... Physicians Hospitals \
0.25 45981.0 51156.0 2.85 ... 5656.0 44.5
0.50 49417.0 57492.0 3.40 ... 12205.0 89.0
0.75 56610.0 65971.5 3.80 ... 23991.5 129.5
Health Spending Pollution Med-Large Airports Temperature Urban \
0.25 7390.0 6.65 0.0 45.3 65.40
0.50 8107.0 7.40 1.0 51.7 74.20
0.75 9095.5 8.15 1.0 58.3 87.55
Age 0-25 Age 26-54 Age 55+
0.25 0.305 0.370 0.29
0.50 0.320 0.370 0.30
0.75 0.340 0.385 0.31
[3 rows x 24 columns]
the description of infected people of COVID-19 are :
Tested Infected Deaths Population Pop Density \
count 51.000000 51.000000 51.000000 5.100000e+01 51.000000
mean 32588.666667 6071.039216 165.196078 6.496451e+06 431.560508
std 45681.363900 16433.091907 509.719335 7.450657e+06 1647.225920
min 2523.000000 171.000000 2.000000 5.670250e+05 1.286300
25% 7090.500000 659.500000 14.000000 1.802113e+06 50.604850
50% 18925.000000 1676.000000 40.000000 4.499692e+06 108.049700
75% 33555.000000 4920.500000 126.500000 7.587794e+06 223.983100
max 283621.000000 113704.000000 3565.000000 3.993749e+07 11814.541000
Gini ICU Beds Income GDP Unemployment ... \
count 51.000000 51.000000 51.000000 51.000000 51.000000 ...
mean 0.466165 1466.411765 51597.607843 61497.333333 3.515686 ...
std 0.023455 1562.124594 8224.387459 22648.274324 0.831233 ...
min 0.406300 94.000000 37994.000000 37948.000000 2.200000 ...
25% 0.452050 327.000000 45981.000000 51156.000000 2.850000 ...
50% 0.468000 1134.000000 49417.000000 57492.000000 3.400000 ...
75% 0.479500 1841.500000 56610.000000 65971.500000 3.800000 ...
max 0.542000 7338.000000 74561.000000 200277.000000 5.800000 ...
Physicians Hospitals Health Spending Pollution \
count 51.000000 51.000000 51.000000 51.000000
mean 19711.666667 101.921569 8332.156863 7.413725
std 22532.917088 88.881909 1256.751246 1.457535
min 1172.000000 7.000000 5982.000000 4.400000
25% 5656.000000 44.500000 7390.000000 6.650000
50% 12205.000000 89.000000 8107.000000 7.400000
75% 23991.500000 129.500000 9095.500000 8.150000
max 112906.000000 523.000000 11944.000000 12.800000
Med-Large Airports Temperature Urban Age 0-25 Age 26-54 \
count 51.000000 51.000000 51.000000 51.000000 51.000000
mean 1.215686 51.999020 74.107843 0.323529 0.376471
std 1.758564 8.627992 14.885481 0.027116 0.019680
min 0.000000 26.600000 38.700000 0.260000 0.350000
25% 0.000000 45.300000 65.400000 0.305000 0.370000
50% 1.000000 51.700000 74.200000 0.320000 0.370000
75% 1.000000 58.300000 87.550000 0.340000 0.385000
max 9.000000 70.700000 100.000000 0.420000 0.480000
Age 55+
count 51.000000
mean 0.299020
std 0.030936
min 0.210000
25% 0.290000
50% 0.300000
75% 0.310000
max 0.370000
[8 rows x 24 columns]
###Markdown
Step 6 Use matplotlib to show 2D plots of the data.[Matplotlib](https://matplotlib.org/) Plot the distribution of the two classes (1. GDP < 58000, 2. GDP >= 58000) of the COVID-19 State Data using different colors and different markers, where the x-axis is the Pollution and the y-axis is the Mortality-rate.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
df['Mortality-rate']=df['Deaths']/df['Population']*1000
#print("mortality rate: \n",df['Mortality-rate'])
df1=pd.DataFrame()
df2=pd.DataFrame()
df1['gdp1']=df['GDP']
df2['gdp2']=df['GDP']
df1['pollution']=df['Pollution']
df1['mortality-rate']=df['Mortality-rate']
print(df1['pollution'])
print(df1['mortality-rate'])
df1['pollution1']=df1[df1['gdp1']<58000]['pollution']
print(df1['pollution1'])
df1['mortality-rate1']=df1[df1['gdp1']<58000]['mortality-rate']
print(df1['mortality-rate1'])
df2['pollution2']=df['Pollution']
df2['mortality-rate2']=df['Mortality-rate']
print(df2['pollution2'])
print(df2['mortality-rate2'])
df2['pollution3']=df2[df2['gdp2']>=58000]['pollution2']
print(df2['pollution3'])
df2['mortality-rate3']=df2[df2['gdp2']>=58000]['mortality-rate2']
print(df2['mortality-rate3'])
colorDict = {
'pollution1': 0,
'mortality-rate1': 1,
'pollution3': 2,
'mortality-rate3':3
}
plt.xlabel('POLLUTION')
plt.ylabel('MORTALITY RATE')
plt.scatter(df1['pollution1'], df1['mortality-rate1'], color='g', marker='*', alpha=0.6, s=100, label='GDP<58000')
plt.scatter(df2['pollution3'], df2['mortality-rate3'], color='b', marker='.', alpha=0.6, s=100, label='GDP>=58000')
plt.title('Scatter plot of distribution of GDP<58000 and GDP>=58000')
plt.legend()
plt.show()
import matplotlib.pyplot as plt
import numpy as np
bigGDP=df[df["GDP"]<58000]
smallGDP=df[df["GDP"]>=58000]
bigMR=bigGDP['Deaths']/bigGDP['Infected']
smallMR=smallGDP['Deaths']/smallGDP['Infected']
# x=[bigGDP['Population'],smallGDP['Population']]
# y=[bigMR,smallMR]
labelList=['GDP<58000','GDP>=58000']
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.set_title('Scatter Plot')
plt.xlabel('Pollution')
plt.ylabel('MR')
color=['r','b']
ax1.scatter(bigGDP['Pollution'],bigMR,c='r',marker='s',label='GDP >= 58000')
ax1.scatter(smallGDP['Pollution'],smallMR,c='b',marker='o', label='GDP < 58000')
plt.legend( loc='upper right')
plt.show()
###Output
_____no_output_____
###Markdown
Step 7 Show the proportion of three classes of the COVID-19 State Data using a pie chart. **About the classes:** 1. Mortality-rate < 0.02 2. Mortality-rate between 0.02 and 0.03 3. Mortality-rate > 0.03
###Code
%matplotlib inline
import matplotlib.pyplot as plt
df4=pd.DataFrame()
df4['mr1']=df['Mortality-rate']
#print(df3['mr1'])
df4['mortality-rate12']=df4[df4['mr1']<0.02]['mr1']
print(df4['mortality-rate12'])
df4['mortality-rate22']=df4[df4['mr1']>0.03]['mr1']
print(df4['mortality-rate22'])
df4['mortality-rate32']=df4[(df4['mr1']>0.02) & (df4['mr1']<0.03)]['mr1']
print(df4['mortality-rate32'])
df4=df4.drop(columns='mr1')
plt.pie(df4.count())
plt.show();
%matplotlib inline
import matplotlib.pyplot as plt
df['MR']=df['Deaths']/df['Infected']
MR1=len(df[df['MR']<0.02]) #<0.02
MR3=len(df[df['MR']>0.03])#>0.03
MR2=len(df["MR"])-(MR3+MR1)#0.02~0.03
labels = 'MR<0.2', '0.02<=MR<=0.03', 'MR>0.03'
sizes = [MR1,MR2,MR3]
print(sizes)
print(MR1/(MR1+MR2+MR3))
print(MR2/(MR1+MR2+MR3))
print(MR3/(MR1+MR2+MR3))
colors = ['gold', 'yellowgreen', 'lightcoral']
explode = (0.1, 0, 0) # explode 1st slice
# Plot
plt.pie(sizes, explode=explode, labels=labels, colors=colors,
autopct='%1.1f%%', shadow=True, startangle=140)
plt.axis('equal')
plt.show()
###Output
[18, 22, 11]
0.35294117647058826
0.43137254901960786
0.21568627450980393
|
Clustering-Dimensionality-Reduction/Affinity Propagation.ipynb
|
###Markdown
| Name | Description | Date| :- |-------------: | :-:|Reza Hashemi| Affinity Propagation. | On 15th of July 2019 Affinity Propagation Clustering TechniqueAffinity Propagation creates clusters by sending messages between pairs of samples until convergence. **A dataset is then described using a small number of exemplars**, which are identified as those most representative of other samples. The messages sent between pairs represent the suitability for one sample to be the exemplar of the other, which is updated in response to the values from other pairs. This updating happens iteratively until convergence, at which point the final exemplars are chosen, and hence the final clustering is given. The algorithmLet $x_1$ through $x_n$ be a set of data points, with no assumptions made about their internal structure, and let $s$ be a function that quantifies the similarity between any two points, such that $s(x_i, x_j) > s(x_i, x_k) \text{ iff } x_i$ is more similar to $x_j$ than to $x_k$. For this example, the negative squared distance of two data points was used, i.e. for points $x_i$ and $x_k$, ${\displaystyle s(i,k)=-\left\|x_{i}-x_{k}\right\|^{2}}$. The diagonal of s, i.e. ${s(i,i)}$, is particularly important, as it represents the input preference, meaning how likely a particular input is to become an exemplar. When it is set to the same value for all inputs, it controls how many classes the algorithm produces. A value close to the minimum possible similarity produces fewer classes, while a value close to or larger than the maximum possible similarity produces many classes. It is typically initialized to the median similarity of all pairs of inputs.The algorithm proceeds by alternating two message passing steps, to update two matrices:* The "responsibility" matrix *$R$* has values $r(i, k)$ that quantify how well-suited $x_k$ is to serve as the exemplar for $x_i$, relative to other candidate exemplars for $x_i$.* The "availability" matrix *$A$* contains values $a(i, k)$ that represent how "appropriate" it would be for $x_i$ to pick $x_k$ as its exemplar, taking into account other points' preference for $x_k$ as an exemplar.Both matrices are initialized to all zeroes, and can be viewed as log-probability tables. The algorithm then performs the following updates iteratively:First, responsibility updates are sent around: $$ {\displaystyle r(i,k)\leftarrow s(i,k)-\max _{k'\neq k}\left\{a(i,k')+s(i,k')\right\}} $$Then, availability is updated per$$ {\displaystyle a(i,k)\leftarrow \min \left(0,r(k,k)+\sum _{i'\not \in \{i,k\}}\max(0,r(i',k))\right)} $$ for ${\displaystyle i\neq k}$ and$$ {\displaystyle a(k,k)\leftarrow \sum _{i'\neq k}\max(0,r(i',k))} $$The iterations are performed until either the cluster boundaries remain unchanged over a number of iterations, or after some predetermined number of iterations. The exemplars are extracted from the final matrices as those whose 'responsibility + availability' for themselves is positive, i.e. $${\displaystyle (r(i,i)+a(i,i))>0}$$ Pros and consAffinity Propagation can be interesting as **it automatically chooses the number of clusters based on the data provided**. For this purpose, the two important parameters are the preference, which controls how many exemplars are used, and the damping factor which damps the responsibility and availability messages to avoid numerical oscillations when updating these messages.**The main drawback of Affinity Propagation is its complexity**. 
The algorithm has a time complexity of the order $O(N^2 T)$, where $N$ is the number of samples and $T$ is the number of iterations until convergence. Further, the memory complexity is of the order $O(N^2)$ if a dense similarity matrix is used, but reducible if a sparse similarity matrix is used. This makes Affinity Propagation most appropriate for small to medium sized datasets. Make some synthetic data
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.cluster import AffinityPropagation
from sklearn import metrics
from sklearn.datasets.samples_generator import make_blobs
# Generate sample data
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(n_samples=300, centers=centers, cluster_std=0.5,random_state=0)
X.shape
plt.figure(figsize=(8,5))
plt.scatter(X[:,0],X[:,1],edgecolors='k',c='orange',s=75)
plt.grid(True)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.show()
###Output
_____no_output_____
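###Markdown
To connect the data to the formulation above: with the default Euclidean affinity, the similarity is $s(i,k)=-\left\|x_i-x_k\right\|^2$ and the preference defaults to the median similarity. The cell below computes these explicitly, purely for illustration; the estimator used next does the equivalent internally, and this notebook overrides the preference with an explicit value.
###Code
# s(i,k) = -||x_i - x_k||^2; the median of S is the usual default preference
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
S = -sq_dists
print("Median similarity (default preference):", np.median(S))
###Output
_____no_output_____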
###Markdown
Clustering
###Code
# Compute Affinity Propagation
af_model = AffinityPropagation(preference=-50).fit(X)
cluster_centers_indices = af_model.cluster_centers_indices_
labels = af_model.labels_
n_clusters_ = len(cluster_centers_indices)
###Output
_____no_output_____
###Markdown
Number of detected clusters and their centers
###Code
print("Number of clusters detected by the algorithm:", n_clusters_)
print("Cluster centers detected at:\n\n", X[cluster_centers_indices])
plt.figure(figsize=(8,5))
plt.scatter(X[:,0],X[:,1],edgecolors='k',c=af_model.labels_,s=75)
plt.grid(True)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
Homogeneity

Homogeneity metric of a cluster labeling given a ground truth. A clustering result satisfies homogeneity if all of its clusters contain only data points which are members of a single class. This metric is independent of the absolute values of the labels: a permutation of the class or cluster label values won’t change the score value in any way.
###Code
print ("Homogeneity score:", metrics.homogeneity_score(labels_true,labels))
###Output
Homogeneity score: 0.871559529839
###Markdown
Completeness

Completeness metric of a cluster labeling given a ground truth. A clustering result satisfies completeness if all the data points that are members of a given class are elements of the same cluster. This metric is independent of the absolute values of the labels: a permutation of the class or cluster label values won’t change the score value in any way.
###Code
print("Completeness score:",metrics.completeness_score(labels_true,labels))
###Output
Completeness score: 0.871585975337
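###Markdown
For reference, homogeneity and completeness can also be summarized jointly: their harmonic mean is exposed by scikit-learn as the V-measure.
###Code
print("V-measure:", metrics.v_measure_score(labels_true, labels))
###Output
_____no_output_____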
###Markdown
Prediction
###Code
x_new = [0.5,0.4]
x_pred = af_model.predict([x_new])[0]
print("New point ({},{}) will belong to cluster {}".format(x_new[0],x_new[1],x_pred))
x_new = [-0.5,0.4]
x_pred = af_model.predict([x_new])[0]
print("New point ({},{}) will belong to cluster {}".format(x_new[0],x_new[1],x_pred))
###Output
New point (-0.5,0.4) will belong to cluster 2
###Markdown
Time complexity and model quality as the data size grows
###Code
import time
from tqdm import tqdm
n_samples = [10,20,50,100,200,500,1000,2000,3000,5000,7500,10000]
centers = [[1, 1], [-1, -1], [1, -1]]
t_aff = []
homo_aff=[]
complete_aff=[]
for i in tqdm(n_samples):
X,labels_true = make_blobs(n_samples=i, centers=centers, cluster_std=0.5,random_state=0)
t1 = time.time()
af_model = AffinityPropagation(preference=-50,max_iter=50).fit(X)
t2=time.time()
t_aff.append(t2-t1)
homo_aff.append(metrics.homogeneity_score(labels_true,af_model.labels_))
complete_aff.append(metrics.completeness_score(labels_true,af_model.labels_))
plt.figure(figsize=(8,5))
plt.title("Time complexity of Affinity Propagation\n",fontsize=20)
plt.scatter(n_samples,t_aff,edgecolors='k',c='green',s=100)
plt.plot(n_samples,t_aff,'k--',lw=3)
plt.grid(True)
plt.xticks(fontsize=15)
plt.xlabel("Number of samples",fontsize=15)
plt.yticks(fontsize=15)
plt.ylabel("Time taken for model (sec)",fontsize=15)
plt.show()
plt.figure(figsize=(8,5))
plt.title("Homogeneity score with data set size\n",fontsize=20)
plt.scatter(n_samples,homo_aff,edgecolors='k',c='green',s=100)
plt.plot(n_samples,homo_aff,'k--',lw=3)
plt.grid(True)
plt.xticks(fontsize=15)
plt.xlabel("Number of samples",fontsize=15)
plt.yticks(fontsize=15)
plt.ylabel("Homogeneity score",fontsize=15)
plt.show()
plt.figure(figsize=(8,5))
plt.title("Completeness score with data set size\n",fontsize=20)
plt.scatter(n_samples,complete_aff,edgecolors='k',c='green',s=100)
plt.plot(n_samples,complete_aff,'k--',lw=3)
plt.grid(True)
plt.xticks(fontsize=15)
plt.xlabel("Number of samples",fontsize=15)
plt.yticks(fontsize=15)
plt.ylabel("Completeness score",fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
How well does cluster detection work in the presence of noise? Can damping help? Create data sets with varying degrees of noise (std. dev.) and run the model to detect clusters. Also, play with the damping parameter to see its effect.
###Code
noise = [0.01,0.05,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0,1.25,1.5,1.75,2.0]
n_clusters = []
for i in noise:
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(n_samples=200, centers=centers, cluster_std=i,random_state=101)
af_model=AffinityPropagation(preference=-50,max_iter=500,convergence_iter=15,damping=0.5).fit(X)
n_clusters.append(len(af_model.cluster_centers_indices_))
print("Detected number of clusters:",n_clusters)
plt.figure(figsize=(8,5))
plt.title("Cluster detection with noisy data for low damping=0.5\n",fontsize=16)
plt.scatter(noise,n_clusters,edgecolors='k',c='green',s=100)
plt.grid(True)
plt.xticks(fontsize=15)
plt.xlabel("Noise std.dev",fontsize=15)
plt.yticks(fontsize=15)
plt.ylabel("Number of clusters detected",fontsize=15)
plt.show()
noise = [0.01,0.05,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0,1.25,1.5,1.75,2.0]
n_clusters = []
for i in noise:
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(n_samples=200, centers=centers, cluster_std=i,random_state=101)
af_model=AffinityPropagation(preference=-50,max_iter=500,convergence_iter=15,damping=0.9).fit(X)
n_clusters.append(len(af_model.cluster_centers_indices_))
print("Detected number of clusters:",n_clusters)
plt.figure(figsize=(8,5))
plt.title("Cluster detection with noisy data for high damping=0.9\n",fontsize=16)
plt.scatter(noise,n_clusters,edgecolors='k',c='green',s=100)
plt.grid(True)
plt.xticks(fontsize=15)
plt.xlabel("Noise std.dev",fontsize=15)
plt.yticks([i for i in range(2,10)],fontsize=15)
plt.ylabel("Number of clusters detected",fontsize=15)
plt.show()
###Output
Detected number of clusters: [3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 5, 5, 5, 6, 6, 7]
|
Anita Mburu-WT-21-022/19.ipynb
|
###Markdown
Transforming and Combining Data

In the previous module you worked on a dataset that combined two different World Health Organization datasets: population and the number of deaths due to tuberculosis. They could be combined because they share a common attribute: the countries. This week you will learn the techniques behind the creation of such a combined dataset.
###Code
import warnings
warnings.simplefilter('ignore', FutureWarning)
import pandas as pd
table = [
['UK', 2678454886796.7], # 1st row
['USA', 16768100000000.0], # 2nd row
['China', 9240270452047.0], # and so on...
['Brazil', 2245673032353.8],
['South Africa', 366057913367.1]
]
headings = ['Country', 'GDP (US$)']
gdp = pd.DataFrame(columns=headings, data=table)
gdp
headings = ['Country name', 'Life expectancy (years)']
table = [
['China', 75],
['Russia', 71],
['United States', 79],
['India', 66],
['United Kingdom', 81]
]
life = pd.DataFrame(columns=headings, data=table)
life
###Output
_____no_output_____
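###Markdown
As a small preview of the combining step (a sketch only; the `preview` name is just illustrative), the two tables above can be merged on their country columns with `pd.merge()`. With the names exactly as given, only 'China' is spelled identically in both tables, which shows why reconciling country names matters before a full merge.
###Code
preview = pd.merge(gdp, life, left_on='Country', right_on='Country name', how='inner')
preview
###Output
_____no_output_____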
###Markdown
Defining functions

To make the GDP values easier to read, I wish to convert US dollars to millions of US dollars. I have to be precise about what I mean. For example, if the GDP is 4,567,890.1 (using commas to separate the thousands, millions, etc.), what do I want to obtain? Do I want always to round down to the nearest million, making it 4 million; round to the nearest million, making it 5; or round to one decimal place, making it 4.6 million? Since the aim is to simplify the numbers and not introduce a false sense of precision, let’s round to the nearest million. The following function, written in two different ways, rounds a number to the nearest million. It calls the Python function `round()`, which rounds a decimal number to the nearest integer. If two integers are equally near, it rounds to the even integer.
###Code
def roundToMillions (value):
result = round(value / 1000000)
return result
###Output
_____no_output_____
###Markdown
A function definition always starts with `def`, which is a reserved word in Python. After it comes the function’s name and arguments, surrounded by parentheses, and finally a colon (:). This function just takes one argument. If there’s more than one argument, use commas to separate them. Next comes the function’s body, where the calculations are done, using the arguments like any other variables. The body must be indented, conventionally by four spaces. For this function, the calculation is simple. I take the value, divide it by one million, and call the built-in Python function `round()` to convert that number to the nearest integer. If the number is exactly mid-way between two integers, `round()` will pick the even integer, i.e. `round(2.5)` is 2 but `round(3.5)` is 4. Finally, I write a `return` statement to pass the result back to the code that called the function. The `return` word is also reserved in Python. The `result` variable just stores the rounded value temporarily and has no other purpose. It‘s better to write the body as a single line of code:
###Code
def roundToMillions (value):
return round(value / 1000000)
###Output
_____no_output_____
###Markdown
To test a function, write expressions that check for various argument values whether the function returns the expected value in each case.
###Code
roundToMillions(4567890.1) == 5
###Output
_____no_output_____
###Markdown
The art of testing is to find as few test cases as possible that cover all bases. And I mean all. Prepare for the worst and hope for the best. So here are some more tests, even for the unlikely cases of the GDP being zero or negative, and you can probably think of others.
###Code
roundToMillions(0) == 0 # always test with zero...
roundToMillions(-1) == 0 # ...and negative numbers
roundToMillions(1499999) == 1 # test rounding to the nearest
###Output
_____no_output_____
###Markdown
Now for the next conversion, from US dollars to a local currency, for example British pounds. I searched the internet for ‘average yearly USD to GBP rate’, chose a conversion service and took the value for 2013. Here’s the code and some tests. The next function converts US dollars to British pounds.
###Code
def usdToGBP (usd):
return usd / 1.564768 # average rate during 2013
usdToGBP(0) == 0
usdToGBP(1.564768) == 1
usdToGBP(-1) < 0
###Output
_____no_output_____
###Markdown
Tasks
1. Define a few more test cases for both functions.
- Why can't you use `roundToMillions()` to round the population to millions of inhabitants? Write a new function and test it. **You need to write this function in preparation for the next task.**
- Write a function to convert US dollars to your local currency. If your local currency is USD or GBP, convert to Euros. Look up online what was the average exchange rate in 2013.
###Code
# Why can't you use roundToMillions() to round the population to millions of inhabitants? Write a new function and test it. You need to write this function in preparation for next task.
def roundToMillions (value):
return round(value / 1000000)
# Write a function to convert US dollars to your local currency. If your local currency is USD or GBP, convert to Euros. Look up online what was the average exchange rate in 2013.
def usdToEUR (usd):
    return usd * 0.7268  # 1 USD = 0.7268 EUR on 31-12-2013
###Output
_____no_output_____
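###Markdown
A few illustrative test cases for the conversion function defined in the answer cell above, following the same pattern used earlier for `usdToGBP()`:
###Code
usdToEUR(0) == 0
usdToEUR(1) == 0.7268
usdToEUR(-1) < 0
###Output
_____no_output_____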
|
Chapter 1/ch1_oner_application.ipynb
|
###Markdown
The OneR algorithm is quite simple but can be quite effective, showing the power of using even basic statistics in many applications. The algorithm is:
* For each variable
    * For each value of the variable
        * The prediction based on this variable is the most frequent class
        * Compute the error of this prediction
    * Sum the prediction errors for all values of the variable
* Use the variable with the lowest error
###Code
# Load our dataset
import numpy as np
from sklearn.datasets import load_iris
#X, y = np.loadtxt("X_classification.txt"), np.loadtxt("y_classification.txt")
dataset = load_iris()
X = dataset.data
y = dataset.target
print(dataset.DESCR)
n_samples, n_features = X.shape
###Output
.. _iris_dataset:
Iris plants dataset
--------------------
**Data Set Characteristics:**
:Number of Instances: 150 (50 in each of three classes)
:Number of Attributes: 4 numeric, predictive attributes and the class
:Attribute Information:
- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm
- class:
- Iris-Setosa
- Iris-Versicolour
- Iris-Virginica
:Summary Statistics:
============== ==== ==== ======= ===== ====================
Min Max Mean SD Class Correlation
============== ==== ==== ======= ===== ====================
sepal length: 4.3 7.9 5.84 0.83 0.7826
sepal width: 2.0 4.4 3.05 0.43 -0.4194
petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)
petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)
============== ==== ==== ======= ===== ====================
:Missing Attribute Values: None
:Class Distribution: 33.3% for each of 3 classes.
:Creator: R.A. Fisher
:Donor: Michael Marshall (MARSHALL%[email protected])
:Date: July, 1988
The famous Iris database, first used by Sir R.A. Fisher. The dataset is taken
from Fisher's paper. Note that it's the same as in R, but not as in the UCI
Machine Learning Repository, which has two wrong data points.
This is perhaps the best known database to be found in the
pattern recognition literature. Fisher's paper is a classic in the field and
is referenced frequently to this day. (See Duda & Hart, for example.) The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant. One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.
.. topic:: References
- Fisher, R.A. "The use of multiple measurements in taxonomic problems"
Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
Mathematical Statistics" (John Wiley, NY, 1950).
- Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.
(Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.
- Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
Structure and Classification Rule for Recognition in Partially Exposed
Environments". IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. PAMI-2, No. 1, 67-71.
- Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE Transactions
on Information Theory, May 1972, 431-433.
- See also: 1988 MLC Proceedings, 54-64. Cheeseman et al"s AUTOCLASS II
conceptual clustering system finds 3 classes in the data.
- Many, many more ...
###Markdown
Our attributes are continuous, while we want categorical features to use OneR. We will perform a *preprocessing* step called discretisation. At this stage, we will perform a simple procedure: compute the mean and determine whether a value is above or below the mean.
###Code
# Compute the mean for each attribute
attribute_means = X.mean(axis=0)
assert attribute_means.shape == (n_features,)
X_d = np.array(X >= attribute_means, dtype='int')
# Now, we split into a training and test set
from sklearn.model_selection import train_test_split
# Set the random state to the same number to get the same results as in the book
random_state = 14
X_train, X_test, y_train, y_test = train_test_split(X_d, y, random_state=random_state)
print("There are {} training samples".format(y_train.shape))
print("There are {} testing samples".format(y_test.shape))
from collections import defaultdict
from operator import itemgetter
def train(X, y_true, feature):
"""Computes the predictors and error for a given feature using the OneR algorithm
Parameters
----------
X: array [n_samples, n_features]
The two dimensional array that holds the dataset. Each row is a sample, each column
is a feature.
y_true: array [n_samples,]
The one dimensional array that holds the class values. Corresponds to X, such that
y_true[i] is the class value for sample X[i].
feature: int
An integer corresponding to the index of the variable we wish to test.
0 <= variable < n_features
Returns
-------
predictors: dictionary of tuples: (value, prediction)
For each item in the array, if the variable has a given value, make the given prediction.
error: float
The ratio of training data that this rule incorrectly predicts.
"""
# Check that variable is a valid number
n_samples, n_features = X.shape
assert 0 <= feature < n_features
# Get all of the unique values that this variable can take
values = set(X[:,feature])
# Stores the predictors array that is returned
predictors = dict()
errors = []
for current_value in values:
# error = number of misclassified samples when predicting most_frequent_class for this value
most_frequent_class, error = train_feature_value(X, y_true, feature, current_value)
# when the feature equals current_value, predict most_frequent_class
predictors[current_value] = most_frequent_class
errors.append(error)
# Compute the total error of using this feature to classify on
total_error = sum(errors)
# predictors maps each feature value to its predicted class; total_error sums the errors over all values
return predictors, total_error
# Compute what our predictors say each sample is based on its value
#y_predicted = np.array([predictors[sample[feature]] for sample in X])
def train_feature_value(X, y_true, feature, value):
# Create a simple dictionary to count how frequently they give certain predictions
class_counts = defaultdict(int)
# Iterate through each sample and count the frequency of each class/value pair
for sample, y in zip(X, y_true):
# sample is one row of the dataset
# feature is the column index being tested
if sample[feature] == value:
class_counts[y] += 1
# Now get the best one by sorting (highest first) and choosing the first item
sorted_class_counts = sorted(class_counts.items(), key=itemgetter(1), reverse=True)
# sorted_class_counts looks like [(0, 33), (1, 15), (2, 4)]
most_frequent_class = sorted_class_counts[0][0]
# The error is the number of samples that do not classify as the most frequent class
# *and* have the feature value.
error = sum([class_count for class_value, class_count in class_counts.items()
if class_value != most_frequent_class])
# error count when predicting most_frequent_class for this feature value
return most_frequent_class, error
# Compute all of the predictors, one per feature (variable)
# {0: ({0: 0, 1: 2}, 41), 1: ({0: 1, 1: 0}, 58), 2: ({0: 0, 1: 2}, 37), 3: ({0: 0, 1: 2}, 37)}
all_predictors = {variable: train(X_train, y_train, variable) for variable in range(X_train.shape[1])}
errors = {variable: error for variable, (mapping, error) in all_predictors.items()}
print(errors)
# Now choose the best and save that as "model"
# Sort by error
best_variable, best_error = sorted(errors.items(), key=itemgetter(1))[0]
print("The best model is based on variable {0} and has error {1:.2f}".format(best_variable, best_error))
# Choose the best model
model = {'variable': best_variable,
'predictor': all_predictors[best_variable][0]}
print(model)
all_predictors
def predict(X_test, model):
variable = model['variable']
predictor = model['predictor']
y_predicted = np.array([predictor[int(sample[variable])] for sample in X_test])
return y_predicted
y_predicted = predict(X_test, model)
print(y_predicted)
# Compute the accuracy by taking the mean of the amounts that y_predicted is equal to y_test
%time accuracy = np.mean(y_predicted == y_test) * 100
%time accuracy2 = np.sum(y_predicted == y_test) / len(y_predicted) * 100
print("The test accuracy is {:.1f}%".format(accuracy))
print("The test accuracy2 is {:.1f}%".format(accuracy2))
from sklearn.metrics import classification_report
print(classification_report(y_test, y_predicted))
###Output
precision recall f1-score support
0 0.94 1.00 0.97 17
1 0.00 0.00 0.00 13
2 0.40 1.00 0.57 8
micro avg 0.66 0.66 0.66 38
macro avg 0.45 0.67 0.51 38
weighted avg 0.51 0.66 0.55 38
###Markdown
The OneR algorithm is quite simple but can be quite effective, showing the power of using even basic statistics in many applications. The algorithm is:
* For each variable
    * For each value of the variable
        * The prediction based on this variable is the most frequent class
        * Compute the error of this prediction
    * Sum the prediction errors for all values of the variable
* Use the variable with the lowest error
###Code
# Load our dataset
import numpy as np
from sklearn.datasets import load_iris
#X, y = np.loadtxt("X_classification.txt"), np.loadtxt("y_classification.txt")
dataset = load_iris()
X = dataset.data
y = dataset.target
print(dataset.DESCR)
n_samples, n_features = X.shape
###Output
Iris Plants Database
Notes
-----
Data Set Characteristics:
:Number of Instances: 150 (50 in each of three classes)
:Number of Attributes: 4 numeric, predictive attributes and the class
:Attribute Information:
- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm
- class:
- Iris-Setosa
- Iris-Versicolour
- Iris-Virginica
:Summary Statistics:
============== ==== ==== ======= ===== ====================
Min Max Mean SD Class Correlation
============== ==== ==== ======= ===== ====================
sepal length: 4.3 7.9 5.84 0.83 0.7826
sepal width: 2.0 4.4 3.05 0.43 -0.4194
petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)
petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)
============== ==== ==== ======= ===== ====================
:Missing Attribute Values: None
:Class Distribution: 33.3% for each of 3 classes.
:Creator: R.A. Fisher
:Donor: Michael Marshall (MARSHALL%[email protected])
:Date: July, 1988
This is a copy of UCI ML iris datasets.
http://archive.ics.uci.edu/ml/datasets/Iris
The famous Iris database, first used by Sir R.A Fisher
This is perhaps the best known database to be found in the
pattern recognition literature. Fisher's paper is a classic in the field and
is referenced frequently to this day. (See Duda & Hart, for example.) The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant. One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.
References
----------
- Fisher,R.A. "The use of multiple measurements in taxonomic problems"
Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
Mathematical Statistics" (John Wiley, NY, 1950).
- Duda,R.O., & Hart,P.E. (1973) Pattern Classification and Scene Analysis.
(Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.
- Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
Structure and Classification Rule for Recognition in Partially Exposed
Environments". IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. PAMI-2, No. 1, 67-71.
- Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE Transactions
on Information Theory, May 1972, 431-433.
- See also: 1988 MLC Proceedings, 54-64. Cheeseman et al"s AUTOCLASS II
conceptual clustering system finds 3 classes in the data.
- Many, many more ...
###Markdown
Our attributes are continuous, while we want categorical features to use OneR. We will perform a *preprocessing* step called discretisation. At this stage, we will perform a simple procedure: compute the mean and determine whether a value is above or below the mean.
###Code
# Compute the mean for each attribute
attribute_means = X.mean(axis=0)
assert attribute_means.shape == (n_features,)
X_d = np.array(X >= attribute_means, dtype='int')
# Now, we split into a training and test set
from sklearn.model_selection import train_test_split
# Set the random state to the same number to get the same results as in the book
random_state = 14
X_train, X_test, y_train, y_test = train_test_split(X_d, y, random_state=random_state)
print("There are {} training samples".format(y_train.shape))
print("There are {} testing samples".format(y_test.shape))
from collections import defaultdict
from operator import itemgetter
def train(X, y_true, feature):
"""Computes the predictors and error for a given feature using the OneR algorithm
Parameters
----------
X: array [n_samples, n_features]
The two dimensional array that holds the dataset. Each row is a sample, each column
is a feature.
y_true: array [n_samples,]
The one dimensional array that holds the class values. Corresponds to X, such that
y_true[i] is the class value for sample X[i].
feature: int
An integer corresponding to the index of the variable we wish to test.
0 <= variable < n_features
Returns
-------
predictors: dictionary of tuples: (value, prediction)
For each item in the array, if the variable has a given value, make the given prediction.
error: float
The ratio of training data that this rule incorrectly predicts.
"""
# Check that variable is a valid number
n_samples, n_features = X.shape
assert 0 <= feature < n_features
# Get all of the unique values that this variable has
values = set(X[:,feature])
# Stores the predictors array that is returned
predictors = dict()
errors = []
for current_value in values:
most_frequent_class, error = train_feature_value(X, y_true, feature, current_value)
predictors[current_value] = most_frequent_class
errors.append(error)
# Compute the total error of using this feature to classify on
total_error = sum(errors)
return predictors, total_error
# Compute what our predictors say each sample is based on its value
#y_predicted = np.array([predictors[sample[feature]] for sample in X])
def train_feature_value(X, y_true, feature, value):
# Create a simple dictionary to count how frequently they give certain predictions
class_counts = defaultdict(int)
# Iterate through each sample and count the frequency of each class/value pair
for sample, y in zip(X, y_true):
if sample[feature] == value:
class_counts[y] += 1
# Now get the best one by sorting (highest first) and choosing the first item
sorted_class_counts = sorted(class_counts.items(), key=itemgetter(1), reverse=True)
most_frequent_class = sorted_class_counts[0][0]
# The error is the number of samples that do not classify as the most frequent class
# *and* have the feature value.
n_samples = X.shape[1]
error = sum([class_count for class_value, class_count in class_counts.items()
if class_value != most_frequent_class])
return most_frequent_class, error
# Compute all of the predictors
all_predictors = {variable: train(X_train, y_train, variable) for variable in range(X_train.shape[1])}
errors = {variable: error for variable, (mapping, error) in all_predictors.items()}
# Now choose the best and save that as "model"
# Sort by error
best_variable, best_error = sorted(errors.items(), key=itemgetter(1))[0]
print("The best model is based on variable {0} and has error {1:.2f}".format(best_variable, best_error))
# Choose the best model
model = {'variable': best_variable,
'predictor': all_predictors[best_variable][0]}
print(model)
def predict(X_test, model):
variable = model['variable']
predictor = model['predictor']
y_predicted = np.array([predictor[int(sample[variable])] for sample in X_test])
return y_predicted
y_predicted = predict(X_test, model)
print(y_predicted)
# Compute the accuracy by taking the mean of the amounts that y_predicted is equal to y_test
accuracy = np.mean(y_predicted == y_test) * 100
print("The test accuracy is {:.1f}%".format(accuracy))
from sklearn.metrics import classification_report
print(classification_report(y_test, y_predicted))
###Output
precision recall f1-score support
0 0.94 1.00 0.97 17
1 0.00 0.00 0.00 13
2 0.40 1.00 0.57 8
avg / total 0.51 0.66 0.55 38
###Markdown
The OneR algorithm is quite simple but can be quite effective, showing the power of using even basic statistics in many applications. The algorithm is:
* For each variable
    * For each value of the variable
        * The prediction based on this variable is the most frequent class
        * Compute the error of this prediction
    * Sum the prediction errors for all values of the variable
* Use the variable with the lowest error
###Code
# Load our dataset
import numpy as np
from sklearn.datasets import load_iris
#X, y = np.loadtxt("X_classification.txt"), np.loadtxt("y_classification.txt")
dataset = load_iris()
X = dataset.data
y = dataset.target
print(dataset.DESCR)
n_samples, n_features = X.shape
###Output
Iris Plants Database
Notes
-----
Data Set Characteristics:
:Number of Instances: 150 (50 in each of three classes)
:Number of Attributes: 4 numeric, predictive attributes and the class
:Attribute Information:
- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm
- class:
- Iris-Setosa
- Iris-Versicolour
- Iris-Virginica
:Summary Statistics:
============== ==== ==== ======= ===== ====================
Min Max Mean SD Class Correlation
============== ==== ==== ======= ===== ====================
sepal length: 4.3 7.9 5.84 0.83 0.7826
sepal width: 2.0 4.4 3.05 0.43 -0.4194
petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)
petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)
============== ==== ==== ======= ===== ====================
:Missing Attribute Values: None
:Class Distribution: 33.3% for each of 3 classes.
:Creator: R.A. Fisher
:Donor: Michael Marshall (MARSHALL%[email protected])
:Date: July, 1988
This is a copy of UCI ML iris datasets.
http://archive.ics.uci.edu/ml/datasets/Iris
The famous Iris database, first used by Sir R.A Fisher
This is perhaps the best known database to be found in the
pattern recognition literature. Fisher's paper is a classic in the field and
is referenced frequently to this day. (See Duda & Hart, for example.) The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant. One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.
References
----------
- Fisher,R.A. "The use of multiple measurements in taxonomic problems"
Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
Mathematical Statistics" (John Wiley, NY, 1950).
- Duda,R.O., & Hart,P.E. (1973) Pattern Classification and Scene Analysis.
(Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.
- Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
Structure and Classification Rule for Recognition in Partially Exposed
Environments". IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. PAMI-2, No. 1, 67-71.
- Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE Transactions
on Information Theory, May 1972, 431-433.
- See also: 1988 MLC Proceedings, 54-64. Cheeseman et al"s AUTOCLASS II
conceptual clustering system finds 3 classes in the data.
- Many, many more ...
###Markdown
Our attributes are continuous, while we want categorical features to use OneR. We will perform a *preprocessing* step called discretisation. At this stage, we will perform a simple procedure: compute the mean and determine whether a value is above or below the mean.
###Code
# Compute the mean for each attribute
attribute_means = X.mean(axis=0)
assert attribute_means.shape == (n_features,)
X_d = np.array(X >= attribute_means, dtype='int')
# Now, we split into a training and test set
from sklearn.model_selection import train_test_split
# Set the random state to the same number to get the same results as in the book
random_state = 14
X_train, X_test, y_train, y_test = train_test_split(X_d, y, random_state=random_state)
print("There are {} training samples".format(y_train.shape))
print("There are {} testing samples".format(y_test.shape))
from collections import defaultdict
from operator import itemgetter
def train(X, y_true, feature):
"""Computes the predictors and error for a given feature using the OneR algorithm
Parameters
----------
X: array [n_samples, n_features]
The two dimensional array that holds the dataset. Each row is a sample, each column
is a feature.
y_true: array [n_samples,]
The one dimensional array that holds the class values. Corresponds to X, such that
y_true[i] is the class value for sample X[i].
feature: int
An integer corresponding to the index of the variable we wish to test.
0 <= variable < n_features
Returns
-------
predictors: dictionary of tuples: (value, prediction)
For each item in the array, if the variable has a given value, make the given prediction.
error: float
The ratio of training data that this rule incorrectly predicts.
"""
# Check that variable is a valid number
n_samples, n_features = X.shape
assert 0 <= feature < n_features
# Get all of the unique values that this variable has
values = set(X[:,feature])
# Stores the predictors array that is returned
predictors = dict()
errors = []
for current_value in values:
most_frequent_class, error = train_feature_value(X, y_true, feature, current_value)
predictors[current_value] = most_frequent_class
errors.append(error)
# Compute the total error of using this feature to classify on
total_error = sum(errors)
return predictors, total_error
# Compute what our predictors say each sample is based on its value
#y_predicted = np.array([predictors[sample[feature]] for sample in X])
def train_feature_value(X, y_true, feature, value):
# Create a simple dictionary to count how frequently they give certain predictions
class_counts = defaultdict(int)
# Iterate through each sample and count the frequency of each class/value pair
for sample, y in zip(X, y_true):
if sample[feature] == value:
class_counts[y] += 1
# Now get the best one by sorting (highest first) and choosing the first item
sorted_class_counts = sorted(class_counts.items(), key=itemgetter(1), reverse=True)
most_frequent_class = sorted_class_counts[0][0]
# The error is the number of samples that do not classify as the most frequent class
# *and* have the feature value.
n_samples = X.shape[1]
error = sum([class_count for class_value, class_count in class_counts.items()
if class_value != most_frequent_class])
return most_frequent_class, error
# Compute all of the predictors
all_predictors = {variable: train(X_train, y_train, variable) for variable in range(X_train.shape[1])}
errors = {variable: error for variable, (mapping, error) in all_predictors.items()}
# Now choose the best and save that as "model"
# Sort by error
best_variable, best_error = sorted(errors.items(), key=itemgetter(1))[0]
print("The best model is based on variable {0} and has error {1:.2f}".format(best_variable, best_error))
# Choose the best model
model = {'variable': best_variable,
'predictor': all_predictors[best_variable][0]}
print(model)
def predict(X_test, model):
variable = model['variable']
predictor = model['predictor']
y_predicted = np.array([predictor[int(sample[variable])] for sample in X_test])
return y_predicted
y_predicted = predict(X_test, model)
print(y_predicted)
# Compute the accuracy by taking the mean of the amounts that y_predicted is equal to y_test
accuracy = np.mean(y_predicted == y_test) * 100
print("The test accuracy is {:.1f}%".format(accuracy))
from sklearn.metrics import classification_report
print(classification_report(y_test, y_predicted))
###Output
precision recall f1-score support
0 0.94 1.00 0.97 17
1 0.00 0.00 0.00 13
2 0.40 1.00 0.57 8
avg / total 0.51 0.66 0.55 38
###Markdown
The OneR algorithm is quite simple but can be quite effective, showing the power of using even basic statistics in many applications. The algorithm is:
* For each variable
    * For each value of the variable
        * The prediction based on this variable is the most frequent class
        * Compute the error of this prediction
    * Sum the prediction errors for all values of the variable
* Use the variable with the lowest error
###Code
# Load our dataset
import numpy as np
from sklearn.datasets import load_iris
#X, y = np.loadtxt("X_classification.txt"), np.loadtxt("y_classification.txt")
dataset = load_iris()
X = dataset.data
y = dataset.target
print(dataset.DESCR)
n_samples, n_features = X.shape
###Output
.. _iris_dataset:
Iris plants dataset
--------------------
**Data Set Characteristics:**
:Number of Instances: 150 (50 in each of three classes)
:Number of Attributes: 4 numeric, predictive attributes and the class
:Attribute Information:
- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm
- class:
- Iris-Setosa
- Iris-Versicolour
- Iris-Virginica
:Summary Statistics:
============== ==== ==== ======= ===== ====================
Min Max Mean SD Class Correlation
============== ==== ==== ======= ===== ====================
sepal length: 4.3 7.9 5.84 0.83 0.7826
sepal width: 2.0 4.4 3.05 0.43 -0.4194
petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)
petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)
============== ==== ==== ======= ===== ====================
:Missing Attribute Values: None
:Class Distribution: 33.3% for each of 3 classes.
:Creator: R.A. Fisher
:Donor: Michael Marshall (MARSHALL%[email protected])
:Date: July, 1988
The famous Iris database, first used by Sir R.A. Fisher. The dataset is taken
from Fisher's paper. Note that it's the same as in R, but not as in the UCI
Machine Learning Repository, which has two wrong data points.
This is perhaps the best known database to be found in the
pattern recognition literature. Fisher's paper is a classic in the field and
is referenced frequently to this day. (See Duda & Hart, for example.) The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant. One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.
.. topic:: References
- Fisher, R.A. "The use of multiple measurements in taxonomic problems"
Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
Mathematical Statistics" (John Wiley, NY, 1950).
- Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.
(Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.
- Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
Structure and Classification Rule for Recognition in Partially Exposed
Environments". IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. PAMI-2, No. 1, 67-71.
- Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE Transactions
on Information Theory, May 1972, 431-433.
- See also: 1988 MLC Proceedings, 54-64. Cheeseman et al"s AUTOCLASS II
conceptual clustering system finds 3 classes in the data.
- Many, many more ...
###Markdown
Our attributes are continuous, while we want categorical features to use OneR. We will perform a *preprocessing* step called discretisation. At this stage, we will perform a simple procedure: compute the mean and determine whether a value is above or below the mean.
###Code
# Compute the mean for each attribute
attribute_means = X.mean(axis=0)
assert attribute_means.shape == (n_features,)
X_d = np.array(X >= attribute_means, dtype='int')
# Now, we split into a training and test set
from sklearn.model_selection import train_test_split
# Set the random state to the same number to get the same results as in the book
random_state = 14
X_train, X_test, y_train, y_test = train_test_split(X_d, y, random_state=random_state)
print("There are {} training samples".format(y_train.shape))
print("There are {} testing samples".format(y_test.shape))
from collections import defaultdict
from operator import itemgetter
def train(X, y_true, feature):
"""Computes the predictors and error for a given feature using the OneR algorithm
Parameters
----------
X: array [n_samples, n_features]
The two dimensional array that holds the dataset. Each row is a sample, each column
is a feature.
y_true: array [n_samples,]
The one dimensional array that holds the class values. Corresponds to X, such that
y_true[i] is the class value for sample X[i].
feature: int
An integer corresponding to the index of the variable we wish to test.
0 <= variable < n_features
Returns
-------
predictors: dictionary of tuples: (value, prediction)
For each item in the array, if the variable has a given value, make the given prediction.
error: float
The ratio of training data that this rule incorrectly predicts.
"""
# Check that variable is a valid number
n_samples, n_features = X.shape
assert 0 <= feature < n_features
# Get all of the unique values that this variable has
values = set(X[:,feature])
# Stores the predictors array that is returned
predictors = dict()
errors = []
for current_value in values:
most_frequent_class, error = train_feature_value(X, y_true, feature, current_value)
predictors[current_value] = most_frequent_class
errors.append(error)
# Compute the total error of using this feature to classify on
total_error = sum(errors)
return predictors, total_error
# Compute what our predictors say each sample is based on its value
#y_predicted = np.array([predictors[sample[feature]] for sample in X])
def train_feature_value(X, y_true, feature, value):
# Create a simple dictionary to count how frequently they give certain predictions
class_counts = defaultdict(int)
# Iterate through each sample and count the frequency of each class/value pair
for sample, y in zip(X, y_true):
if sample[feature] == value:
class_counts[y] += 1
# Now get the best one by sorting (highest first) and choosing the first item
sorted_class_counts = sorted(class_counts.items(), key=itemgetter(1), reverse=True)
most_frequent_class = sorted_class_counts[0][0]
# The error is the number of samples that do not classify as the most frequent class
# *and* have the feature value.
n_samples = X.shape[1]
error = sum([class_count for class_value, class_count in class_counts.items()
if class_value != most_frequent_class])
return most_frequent_class, error
# Compute all of the predictors
all_predictors = {variable: train(X_train, y_train, variable) for variable in range(X_train.shape[1])}
errors = {variable: error for variable, (mapping, error) in all_predictors.items()}
# Now choose the best and save that as "model"
# Sort by error
best_variable, best_error = sorted(errors.items(), key=itemgetter(1))[0]
print("The best model is based on variable {0} and has error {1:.2f}".format(best_variable, best_error))
# Choose the best model
model = {'variable': best_variable,
'predictor': all_predictors[best_variable][0]}
print(model)
def predict(X_test, model):
variable = model['variable']
predictor = model['predictor']
y_predicted = np.array([predictor[int(sample[variable])] for sample in X_test])
return y_predicted
y_predicted = predict(X_test, model)
print(y_predicted)
# Compute the accuracy by taking the mean of the amounts that y_predicted is equal to y_test
accuracy = np.mean(y_predicted == y_test) * 100
print("The test accuracy is {:.1f}%".format(accuracy))
from sklearn.metrics import classification_report
print(classification_report(y_test, y_predicted))
###Output
precision recall f1-score support
0 0.94 1.00 0.97 17
1 0.00 0.00 0.00 13
2 0.40 1.00 0.57 8
accuracy 0.66 38
macro avg 0.45 0.67 0.51 38
weighted avg 0.51 0.66 0.55 38
|
Sesion_08_jSon.ipynb
|
###Markdown
Introduction to Python Course, Universidad EAFIT - Bancolombia, MEDELLÍN - COLOMBIA. Session 08 - Working with JSON files. Introduction: `JSON` (*JavaScript Object Notation*) is a lightweight data-interchange format that humans can read and write easily. It is also easy for computers to parse and generate. `JSON` is based on the [JavaScript](https://www.javascript.com/ 'JavaScript') programming language. It is a text format that is language independent and can be used from `Python`, `Perl`, and other languages. It is mainly used to transmit data between a server and web applications. `JSON` is built on two structures:
- A collection of name/value pairs. This is realized as an object, record, dictionary, hash table, keyed list, or associative array.
- An ordered list of values. This is realized as an array, vector, list, or sequence.

JSON in Python: there are a number of packages that support `JSON` in `Python`, such as [metamagic.json](https://pypi.org/project/metamagic.json/ 'metamagic.json'), [jyson](http://opensource.xhaus.com/projects/jyson/wiki 'jyson'), [simplejson](https://simplejson.readthedocs.io/en/latest/ 'simplejson'), [Yajl-Py](http://pykler.github.io/yajl-py/ 'Yajl-Py'), [ultrajson](https://github.com/esnme/ultrajson 'ultrajson'), and [json](https://docs.python.org/3.6/library/json.html 'json'). In this course we will use [json](https://docs.python.org/3.6/library/json.html 'json'), which is natively supported by `Python`. We can use [this site](https://jsonlint.com/ 'jsonlint'), which provides a `JSON` validation interface, to check our `JSON` data. Below is an example of `JSON` data.
###Code
{
"nombre": "Jaime",
"apellido": "Perez",
"aficiones": ["correr", "ciclismo", "caminar"],
"edad": 35,
"hijos": [
{
"nombre": "Pedro",
"edad": 6
},
{
"nombre": "Alicia",
"edad": 8
}
]
}
###Output
_____no_output_____
###Markdown
As can be seen, `JSON` supports primitive types (character strings and numbers) as well as nested lists and objects. Note that the data representation is very similar to `Python` dictionaries.
###Code
{
"articulo": [
{
"id":"01",
"lenguaje": "JSON",
"edicion": "primera",
"autor": "Derrick Mwiti"
},
{
"id":"02",
"lenguaje": "Python",
"edicion": "segunda",
"autor": "Derrick Mwiti"
}
],
"blog":[
{
"nombre": "Datacamp",
"URL":"datacamp.com"
}
]
}
###Output
_____no_output_____
###Markdown
Let's rewrite it in a more familiar form.
###Code
{"articulo":[{"id":"01","lenguaje": "JSON","edicion": "primera","author": "Derrick Mwiti"},
{"id":"02","lenguaje": "Python","edicion": "segunda","autor": "Derrick Mwiti"}],
"blog":[{"nombre": "Datacamp","URL":"datacamp.com"}]}
###Output
_____no_output_____
###Markdown
Native `JSON` in `Python`: `Python` ships with a built-in package called `json` for encoding and decoding `JSON` data.
###Code
import json
###Output
_____no_output_____
###Markdown
A bit of vocabulary. The process of encoding `JSON` is usually called serialization. The term refers to transforming data into a series of bytes (hence, serial) to be stored or transmitted across a network. You may also hear the term marshalling, but that is another discussion. Naturally, deserialization is the reciprocal process of decoding data that has been stored or delivered in the `JSON` standard. What we are talking about here is reading and writing. Think of it this way: encoding is for writing data to disk, while decoding is for reading data into memory. Serialization to `JSON`: what happens after a computer processes lots of information? It needs to take a data dump. Accordingly, the `json` library exposes the `dump()` method for writing data to files. There is also a `dumps()` method (pronounced "*dump-s*") for writing to a `Python` string. Simple `Python` objects are translated to `JSON` according to a fairly intuitive conversion. Let's compare the data types in `Python` and `JSON`.

|**Python** | **JSON** |
|:---------:|:----------------:|
|dict |object |
|list|array |
|tuple| array|
|str| string|
|int| number|
|float| number|
|True| true|
|False| false|
|None| null|

Serialization, example: we have a `Python` object in memory that looks something like this:
###Code
data = {
"president": {
"name": "Zaphod Beeblebrox",
"species": "Betelgeusian"
}
}
###Output
_____no_output_____
###Markdown
It is vital that this information be saved to disk, so the task is to write it to a file. Using `Python`'s context manager, you can create a file called `data_file.json` and open it in write mode. (`JSON` files conveniently end in a `.json` extension.)
###Code
with open("data_file.json", "w") as write_file:
json.dump(data, write_file)
###Output
_____no_output_____
###Markdown
Note that `dump()` takes two positional arguments: 1. the data object to be serialized, and 2. the file-like object that the bytes will be written to. Or, if you were so inclined to keep using this serialized `JSON` data in your program, you could write it to a native `Python` `str` object.
###Code
json_string = json.dumps(data)
###Output
_____no_output_____
###Markdown
Note that the file-like object is absent, since you are not writing to disk. Other than that, `dumps()` is just like `dump()`. A `JSON` object has been created and is ready to be worked with. Some useful keyword arguments: remember, `JSON` is meant to be easily readable by humans, but readable syntax isn't enough if it is all squished together. Besides, you probably have a programming style different from the one shown here, and you may find it easier to read code when it is formatted to your taste. ***NOTE:*** the `dump()` and `dumps()` methods use the same keyword arguments. The first option most people want to change is whitespace. You can use the `indent` keyword argument to specify the indentation size for nested structures. Check out the difference for yourself by using the data we defined above and running the following commands in a console:
###Code
json.dumps(data)
json.dumps(data, indent=4)
###Output
_____no_output_____
###Markdown
Another formatting option is the `separators` keyword argument. By default, this is a 2-tuple of the separator strings (`","`, `": "`), but a common alternative for compact `JSON` is (`","`, `":"`). Look at the `JSON` example again to see where these separators come into play. There are others, such as `sort_keys`. You can find a complete list in the official [documentation](https://docs.python.org/3/library/json.html#basic-usage). Deserializing JSON: we have worked with some very basic `JSON`; now it is time to whip it into shape. In the `json` library you will find `load()` and `loads()` for turning `JSON`-encoded data into `Python` objects. Just like serialization, there is a simple conversion table for deserialization, though you can probably guess what it looks like already.

|**JSON** | **Python** |
|:---------:|:----------------:|
|object |dict |
|array |list|
|array|tuple |
|string|str |
|number|int |
|number|float |
|true|True |
|false|False |
|null|None |

Technically, this conversion is not a perfect inverse of the serialization table. Basically, that means that if you encode an object now and then decode it again later, you may not get exactly the same object back. I imagine it is a bit like teleportation: break my molecules down over here and put them back together over there. Am I still the same person? In reality, it is probably more like getting one friend to translate something into Japanese and another friend to translate it back into English. Anyway, the simplest example would be encoding a tuple and getting back a list after decoding, like so:
###Code
blackjack_hand = (8, "Q")
encoded_hand = json.dumps(blackjack_hand)
decoded_hand = json.loads(encoded_hand)
blackjack_hand == decoded_hand
type(blackjack_hand)
type(decoded_hand)
blackjack_hand == tuple(decoded_hand)
###Output
_____no_output_____
###Markdown
Deserialization, example: this time, imagine you have some data stored on disk that you would like to manipulate in memory. You will still use the context manager, but this time you will open the existing data file `data_file.json` in read mode.
###Code
with open("data_file.json", "r") as read_file:
data = json.load(read_file)
###Output
_____no_output_____
###Markdown
So far things are pretty straightforward, but keep in mind that the result of this method could be any of the allowed data types from the conversion table. This only matters if you are loading data you have not seen before. In most cases the root object will be a dictionary or a list. If you have pulled `JSON` data from another program, or otherwise obtained a string of `JSON`-formatted data in `Python`, you can easily deserialize it with `loads()`, which naturally loads from a string:
###Code
my_json_string = """{
"article": [
{
"id":"01",
"language": "JSON",
"edition": "first",
"author": "Derrick Mwiti"
},
{
"id":"02",
"language": "Python",
"edition": "second",
"author": "Derrick Mwiti"
}
],
"blog":[
{
"name": "Datacamp",
"URL":"datacamp.com"
}
]
}
"""
to_python = json.loads(my_json_string)
###Output
_____no_output_____
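###Markdown
Once deserialized, `to_python` is an ordinary nested structure of Python dicts and lists, so individual fields can be reached with normal indexing, for example:
###Code
to_python["article"][0]["author"]
###Output
_____no_output_____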
###Markdown
Now we are working with pure `JSON`. What you do from here on is up to you, so pay close attention to what you want to do, what you actually do, and the result you get. A real-world example: for this introductory example we will use [JSONPlaceholder](https://jsonplaceholder.typicode.com/ "JSONPlaceholder"), a great source of fake `JSON` data for practice purposes. First create a script file called `scratch.py`, or whatever you want to call it. You will need to make an `API` request to the `JSONPlaceholder` service, so just use the `requests` package to do the heavy lifting. Add these imports at the top of your file:
###Code
import json
import requests
###Output
_____no_output_____
###Markdown
Now we will make a request to the `JSONPlaceholder` `API`. If you are not familiar with `requests`, there is a handy `json()` method that will do all the work, but you can practice using the `json` library to deserialize the `text` attribute of the response object. It should look roughly like this:
###Code
response = requests.get("https://jsonplaceholder.typicode.com/todos")
todos = json.loads(response.text)
###Output
_____no_output_____
###Markdown
To check that the above worked (at least it raised no errors), verify the type of `todos` and then query the first 10 elements of the list.
###Code
todos == response.json()
type(todos)
todos[:10]
###Output
_____no_output_____
###Markdown
You can see the structure of the data by viewing the file in a browser, but here is a sample of part of it:
###Code
# part of the JSON data: a single TODO item
{
"userId": 1,
"id": 1,
"title": "delectus aut autem",
"completed": false
}
###Output
_____no_output_____
###Markdown
There are multiple users, each with a unique userId, and each task has a Boolean completed property. Can you determine which users have completed the most tasks?
###Code
# Map of userId to the number of completed TODOs for each user
todos_by_user = {}
# Increment the completed TODO count for each user.
for todo in todos:
if todo["completed"]:
try:
# Increment the existing user's count.
todos_by_user[todo["userId"]] += 1
except KeyError:
# This user has not been seen yet; start their count at 1.
todos_by_user[todo["userId"]] = 1
# Create a sorted list of (userId, num_complete) pairs.
top_users = sorted(todos_by_user.items(),
key=lambda x: x[1], reverse=True)
# Get the maximum number of completed TODOs
max_complete = top_users[0][1]
# Create a list of all users who have completed the maximum number of TODOs
users = []
for user, num_complete in top_users:
if num_complete < max_complete:
break
users.append(str(user))
max_users = " and ".join(users)
###Output
_____no_output_____
###Markdown
Now the `JSON` data can be manipulated like a normal `Python` object. Running the script gives the following results:
###Code
s = "s" if len(users) > 1 else ""
print(f"user{s} {max_users} completed {max_complete} TODOs")
###Output
_____no_output_____
###Markdown
Continuing on, we will create a `JSON` file that contains the completed *TODOs* for each of the users who completed the maximum number of *TODOs*. All you need to do is filter `todos` and write the resulting list to a file. We will call the output file `filtered_data_file.json`. There are many ways to do this, but here is one:
###Code
# Define a function to keep only completed TODOs from the users with the maximum number of completed TODOs.
def keep(todo):
is_complete = todo["completed"]
has_max_count = str(todo["userId"]) in users
return is_complete and has_max_count
# Write the filtered TODOs to a file.
with open("filtered_data_file.json", "w") as data_file:
filtered_todos = list(filter(keep, todos))
json.dump(filtered_todos, data_file, indent=2)
###Output
_____no_output_____
###Markdown
All the data you don't need has been filtered out, and what you do need has been saved to a new file! Re-run the script and check `filtered_data_file.json` to verify that everything worked. It will be in the same directory as `scratch.py` when you run it.
###Code
s = "s" if len(users) > 1 else ""
print(f"user{s} {max_users} completed {max_complete} TODOs")
###Output
_____no_output_____
###Markdown
So far we have covered the basics of manipulating `JSON` data. Now let's try to go a bit deeper. Encoding and decoding custom `Python` objects: let's look at an example class from a very famous game (Dungeons & Dragons). What happens when we try to serialize the `Elf` class from that application?
###Code
class Elf:
def __init__(self, level, ability_scores=None):
self.level = level
self.ability_scores = {
"str": 11, "dex": 12, "con": 10,
"int": 16, "wis": 14, "cha": 13
} if ability_scores is None else ability_scores
self.hp = 10 + self.ability_scores["con"]
elf = Elf(level=4)
json.dumps(elf)
###Output
_____no_output_____
###Markdown
`Python` tells us that `Elf` is not serializable. Although the `json` module can handle most built-in `Python` types, it does not understand how to encode custom data types by default. It's like trying to fit a square peg into a round hole: you need a buzzsaw and parental supervision. Simplifying data structures: how do we deal with more complex data structures? You could try to encode and decode the `JSON` "*by hand*", but there is a slightly cleverer solution that will save you some work. Instead of going straight from the custom data type to `JSON`, you can introduce an intermediate step. All you need to do is represent the data in terms of the built-in types that `json` already understands. Essentially, you translate the more complex object into a simpler representation, which the `json` module then translates into `JSON`. It's like the transitive property in mathematics: if `A = B` and `B = C`, then `A = C`. To get the hang of this, you'll need a complex object to play with. You could use any custom class you like, but `Python` has a built-in type called `complex` for representing complex numbers, and it is not serializable by default.
###Code
z = 3 + 8j
type(z)
json.dumps(z)
###Output
_____no_output_____
###Markdown
A good question to ask yourself when working with custom types is: what is the minimum amount of information necessary to recreate this object? In the case of complex numbers, you only need to know the real and imaginary parts, which you can access as attributes on the `complex` object:
###Code
z.real
z.imag
###Output
_____no_output_____
###Markdown
Passing those same numbers to a `complex` constructor is enough to satisfy the `__eq__` comparison operator:
###Code
complex(3, 8) == z
###Output
_____no_output_____
###Markdown
Breaking custom data types down into their essential components is critical to both the serialization and deserialization processes. Encoding custom types. To translate a custom object into `JSON`, all you need to do is provide an encoding function to the `default` parameter of the `dump()` method. The `json` module will call this function on any object that isn't natively serializable. Here is a simple encoding function you can use for practice (you can find information about the `isinstance` function [here](https://www.programiz.com/python-programming/methods/built-in/isinstance "isinstance")):
###Code
def encode_complex(z):
if isinstance(z, complex):
return (z.real, z.imag)
else:
type_name = z.__class__.__name__
raise TypeError(f"Object of type '{type_name}' is not JSON serializable")
###Output
_____no_output_____
###Markdown
Keep in mind that it is expected to raise a `TypeError` if it doesn't get the kind of object it was expecting. This way, you avoid accidentally serializing any Elves. Now we can try encoding complex objects.
###Code
json.dumps(9 + 5j, default=encode_complex)
json.dumps(elf, default=encode_complex)
###Output
_____no_output_____
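###Markdown
As an aside (a sketch that is not part of the original tutorial), the same `default`-function idea extends to the `Elf` class from earlier: its level and ability scores are enough to rebuild it, because `hp` is derived in `__init__`:
###Code
# Hypothetical helper, shown only for illustration
def encode_elf(obj):
    if isinstance(obj, Elf):
        return {"level": obj.level, "ability_scores": obj.ability_scores}
    type_name = obj.__class__.__name__
    raise TypeError(f"Object of type '{type_name}' is not JSON serializable")

print(json.dumps(elf, default=encode_elf))
###Output
_____no_output_____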
###Markdown
Why did we encode the complex number as a tuple? Is it the only option? Is it the best option? What would happen if we needed to decode the object later? The other common approach is to subclass the standard `JSONEncoder` and override its `default()` method:
###Code
class ComplexEncoder(json.JSONEncoder):
def default(self, z):
if isinstance(z, complex):
return (z.real, z.imag)
else:
return super().default(z)
###Output
_____no_output_____
###Markdown
Instead of raising the `TypeError` yourself, you can simply let the base class handle it. You can use this directly in the `dump()` method via the `cls` parameter, or by creating an instance of the encoder and calling its `encode()` method:
###Code
json.dumps(2 + 5j, cls=ComplexEncoder)
encoder = ComplexEncoder()
encoder.encode(3 + 6j)
###Output
_____no_output_____
###Markdown
Decoding custom types. While the real and imaginary parts of a complex number are absolutely necessary, they are actually not quite enough to recreate the object. This is what happens when you encode a complex number with `ComplexEncoder` and then decode the result:
###Code
complex_json = json.dumps(4 + 17j, cls=ComplexEncoder)
json.loads(complex_json)
###Output
_____no_output_____
###Markdown
All you get back is a list, and you would have to pass the values into a `complex` constructor if you wanted that complex object again. Recall the earlier comment about *teleportation*. What's missing is metadata, or information about the type of data you're encoding. The question you really should be asking is: what is the minimum amount of information that is both necessary and sufficient to recreate this object? The `json` module expects all custom types to be expressed as objects in the `JSON` standard. For variety, you can create a `JSON` file, this time called `complex_data.json`, and add the following object representing a complex number:
###Code
# JSON
{
"__complex__": true,
"real": 42,
"imag": 36
}
###Output
_____no_output_____
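###Markdown
If you prefer, here is a minimal sketch for creating that file from Python rather than editing it by hand; the cells that follow only need `complex_data.json` to exist with this content:
###Code
# Write the metadata-tagged object to complex_data.json
complex_record = {"__complex__": True, "real": 42, "imag": 36}

with open("complex_data.json", "w") as data_file:
    json.dump(complex_record, data_file, indent=4)
###Output
_____no_output_____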
###Markdown
Do you see the clever part? That "`__complex__`" key is the metadata we just talked about. It doesn't really matter what the associated value is. To get this little hack to work, all you need to do is verify that the key exists:
###Code
def decode_complex(dct):
if "__complex__" in dct:
return complex(dct["real"], dct["imag"])
return dct
###Output
_____no_output_____
###Markdown
Si "`__complex__`" no está en el diccionario, puede devolver el objeto y dejar que el decodificador predeterminado se encargue de él.Cada vez que el método `load()` intenta analizar un objeto, se le da la oportunidad de interceder antes de que el decodificador predeterminado se adapte a los datos. Puede hacerlo pasando su función de decodificación al parámetro `object_hook`.Ahora regresemos a lo de antes
###Code
with open("complex_data.json") as complex_data:
data = complex_data.read()
z = json.loads(data, object_hook=decode_complex)
type(z)
###Output
_____no_output_____
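###Markdown
One detail worth noting (a quick sketch using the `decode_complex` function defined above): the decoder only calls `object_hook` when it parses a JSON object, i.e. something wrapped in braces; arrays, strings, and numbers pass straight through unchanged:
###Code
# The hook fires for every JSON object (dict) the decoder encounters ...
print(json.loads('{"__complex__": true, "real": 1, "imag": 2}', object_hook=decode_complex))

# ... but it is never invoked for bare arrays or scalars.
print(json.loads('[1, 2, 3]', object_hook=decode_complex))
###Output
_____no_output_____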
###Markdown
While `object_hook` may look like the counterpart of the `dump()` method's `default` parameter, the analogy really begins and ends there.
###Code
# JSON
[
{
"__complex__":true,
"real":42,
"imag":36
},
{
"__complex__":true,
"real":64,
"imag":11
}
]
###Output
_____no_output_____
###Markdown
This doesn't just work with a single object, either. Try putting this list of complex numbers in `complex_data.json` and running the script again:
###Code
with open("complex_data.json") as complex_data:
data = complex_data.read()
numbers = json.loads(data, object_hook=decode_complex)
###Output
_____no_output_____
###Markdown
If all goes well, you'll get a list of `complex` objects:
###Code
type(z)
numbers
###Output
_____no_output_____
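###Markdown
To close the loop, here is a minimal sketch of a full round trip. The helper name `encode_complex_as_dict` is hypothetical (it is not defined anywhere above); it simply emits the metadata-tagged object that `decode_complex` expects:
###Code
# Hypothetical encoder that produces the {"__complex__": ...} form used in complex_data.json
def encode_complex_as_dict(z):
    if isinstance(z, complex):
        return {"__complex__": True, "real": z.real, "imag": z.imag}
    type_name = z.__class__.__name__
    raise TypeError(f"Object of type '{type_name}' is not JSON serializable")

payload = json.dumps([3 + 8j, 64 + 11j], default=encode_complex_as_dict)
restored = json.loads(payload, object_hook=decode_complex)
print(restored)  # [(3+8j), (64+11j)]
###Output
_____no_output_____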
|
10B - Monitoring Data Drift.ipynb
|
###Markdown
Monitoring Data DriftOver time, models can become less effective at predicting accurately due to changing trends in feature data. This phenomenon is known as *data drift*, and it's important to monitor your machine learning solution to detect it so you can retrain your models if necessary.In this lab, you'll configure data drift monitoring for datasets. Install the DataDriftDetector moduleTo define a data drift monitor, you'll need to ensure that you have the latest version of the Azure ML SDK installed, and install the **datadrift** module; so run the following cell to do that:
###Code
!pip install --upgrade azureml-sdk[notebooks,automl,explain]
!pip install --upgrade azureml-datadrift
# Restart the kernel after installation is complete!
###Output
_____no_output_____
###Markdown
> **Important**: Now you'll need to restart the kernel. In Jupyter, on the **Kernel** menu, select **Restart and Clear Output**. Then, when the output from the cell above has been removed and the kernel is restarted, continue the steps below. Connect to Your WorkspaceNow you're ready to connect to your workspace using the Azure ML SDK.> **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate.
###Code
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to work with', ws.name)
###Output
_____no_output_____
###Markdown
Create a Baseline DatasetTo monitor a dataset for data drift, you must register a *baseline* dataset (usually the dataset used to train your model) to use as a point of comparison with data collected in the future.
###Code
from azureml.core import Datastore, Dataset
# Upload the baseline data
default_ds = ws.get_default_datastore()
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'],
target_path='diabetes-baseline',
overwrite=True,
show_progress=True)
# Create and register the baseline dataset
print('Registering baseline dataset...')
baseline_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-baseline/*.csv'))
baseline_data_set = baseline_data_set.register(workspace=ws,
name='diabetes baseline',
description='diabetes baseline data',
tags = {'format':'CSV'},
create_new_version=True)
print('Baseline dataset registered!')
###Output
_____no_output_____
###Markdown
Create a Target DatasetOver time, you can collect new data with the same features as your baseline training data. To compare this new data to the baseline data, you must define a target dataset that includes the features you want to analyze for data drift as well as a timestamp field that indicates the point in time when the new data was current -this enables you to measure data drift over temporal intervals. The timestamp can either be a field in the dataset itself, or derived from the folder and filename pattern used to store the data. For example, you might store new data in a folder hierarchy that consists of a folder for the year, containing a folder for the month, which in turn contains a folder for the day; or you might just encode the year, month, and day in the file name like this: *data_2020-01-29.csv*; which is the approach taken in the following code:
###Code
import datetime as dt
import pandas as pd
print('Generating simulated data...')
# Load the smaller of the two data files
data = pd.read_csv('data/diabetes2.csv')
# We'll generate data for the past 6 weeks
weeknos = reversed(range(6))
file_paths = []
for weekno in weeknos:
# Get the date X weeks ago
data_date = dt.date.today() - dt.timedelta(weeks=weekno)
    # Modify data to create some drift
data['Pregnancies'] = data['Pregnancies'] + 1
data['Age'] = round(data['Age'] * 1.2).astype(int)
data['BMI'] = data['BMI'] * 1.1
# Save the file with the date encoded in the filename
file_path = 'data/diabetes_{}.csv'.format(data_date.strftime("%Y-%m-%d"))
data.to_csv(file_path)
file_paths.append(file_path)
# Upload the files
path_on_datastore = 'diabetes-target'
default_ds.upload_files(files=file_paths,
target_path=path_on_datastore,
overwrite=True,
show_progress=True)
# Use the folder partition format to define a dataset with a 'date' timestamp column
partition_format = path_on_datastore + '/diabetes_{date:yyyy-MM-dd}.csv'
target_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, path_on_datastore + '/*.csv'),
partition_format=partition_format)
# Register the target dataset
print('Registering target dataset...')
target_data_set = target_data_set.with_timestamp_columns('date').register(workspace=ws,
name='diabetes target',
description='diabetes target data',
tags = {'format':'CSV'},
create_new_version=True)
print('Target dataset registered!')
###Output
_____no_output_____
###Markdown
Create a Data Drift MonitorNow you're ready to create a data drift monitor for the diabetes data. The data drift monitor will run periodically or on demand to compare the baseline dataset with the target dataset, to which new data will be added over time. Create a Compute TargetTo run the data drift monitor, you'll need a compute target. In this lab, you'll use the compute cluster you created previously (if it doesn't exist, it will be created).> **Important**: Change *your-compute-cluster* to the name of your compute cluster in the code below before running it! Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
###Markdown
Define the Data Drift MonitorNow you're ready to use a **DataDriftDetector** class to define the data drift monitor for your data. You can specify the features you want to monitor for data drift, the name of the compute target to be used to run the monitoring process, the frequency at which the data should be compared, the data drift threshold above which an alert should be triggered, and the latency (in hours) to allow for data collection.
###Code
from azureml.datadrift import DataDriftDetector
# set up feature list
features = ['Pregnancies', 'Age', 'BMI']
# set up data drift detector
monitor = DataDriftDetector.create_from_datasets(ws, 'diabetes-drift-detector', baseline_data_set, target_data_set,
compute_target=cluster_name,
frequency='Week',
feature_list=features,
drift_threshold=.3,
latency=24)
monitor
###Output
_____no_output_____
###Markdown
Backfill the MonitorYou have a baseline dataset and a target dataset that includes simulated weekly data collection for six weeks. You can use this to backfill the monitor so that it can analyze data drift between the original baseline and the target data.> **Note** This may take some time to run, as the compute target must be started to run the backfill analysis. The widget may not always update to show the status, so click the link to observe the experiment status in Azure Machine Learning studio!
###Code
from azureml.widgets import RunDetails
backfill = monitor.backfill( dt.datetime.now() - dt.timedelta(weeks=6), dt.datetime.now())
RunDetails(backfill).show()
backfill.wait_for_completion()
###Output
_____no_output_____
###Markdown
Analyze Data DriftYou can use the following code to examine data drift for the points in time collected in the backfill run.
###Code
drift_metrics = backfill.get_metrics()
for metric in drift_metrics:
print(metric, drift_metrics[metric])
###Output
_____no_output_____
###Markdown
Monitoring Data DriftOver time, models can become less effective at predicting accurately due to changing trends in feature data. This phenomenon is known as *data drift*, and it's important to monitor your machine learning solution to detect it so you can retrain your models if necessary.In this lab, you'll configure data drift monitoring for datasets. Install the DataDriftDetector moduleTo define a data drift monitor, you'll need the **datadrift** module, so let's install that:
###Code
!pip install --upgrade azureml-datadrift
###Output
_____no_output_____
###Markdown
Now you'll need to restart the kernel. In Jupyter, on the **Kernel** menu, select **Restart and Clear Output**. Then, when the output from the cell above has been removed and the kernel is restarted, continue the steps below. Connect to Your WorkspaceNow you're ready to connect to your workspace using the Azure ML SDK.> **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate.
###Code
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to work with', ws.name)
###Output
_____no_output_____
###Markdown
Create a Baseline DatasetTo monitor a dataset for data drift, you must register a *baseline* dataset (usually the dataset used to train your model) to use as a point of comparison with data collected in the future.
###Code
from azureml.core import Datastore, Dataset
# Upload the baseline data
default_ds = ws.get_default_datastore()
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'],
target_path='diabetes-baseline',
overwrite=True,
show_progress=True)
# Create and register the baseline dataset
print('Registering baseline dataset...')
baseline_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-baseline/*.csv'))
baseline_data_set = baseline_data_set.register(workspace=ws,
name='diabetes baseline',
description='diabetes baseline data',
tags = {'format':'CSV'},
create_new_version=True)
print('Baseline dataset registered!')
###Output
_____no_output_____
###Markdown
Create a Target DatasetOver time, you can collect new data with the same features as your baseline training data. To compare this new data to the baseline data, you must define a target dataset that includes the features you want to analyze for data drift as well as a timestamp field that indicates the point in time when the new data was current -this enables you to measure data drift over temporal intervals. The timestamp can either be a field in the dataset itself, or derived from the folder and filename pattern used to store the data. For example, you might store new data in a folder hierarchy that consists of a folder for the year, containing a folder for the month, which in turn contains a folder for the day; or you might just encode the year, month, and day in the file name like this: *data_2020-01-29.csv*; which is the approach taken in the following code:
###Code
import datetime as dt
import pandas as pd
print('Generating simulated data...')
# Load the smaller of the two data files
data = pd.read_csv('data/diabetes2.csv')
# We'll generate data for the past 6 weeks
weeknos = reversed(range(6))
file_paths = []
for weekno in weeknos:
# Get the date X weeks ago
data_date = dt.date.today() - dt.timedelta(weeks=weekno)
    # Modify data to create some drift
data['Pregnancies'] = data['Pregnancies'] + 1
data['Age'] = round(data['Age'] * 1.2).astype(int)
data['BMI'] = data['BMI'] * 1.1
# Save the file with the date encoded in the filename
file_path = 'data/diabetes_{}.csv'.format(data_date.strftime("%Y-%m-%d"))
data.to_csv(file_path)
file_paths.append(file_path)
# Upload the files
path_on_datastore = 'diabetes-target'
default_ds.upload_files(files=file_paths,
target_path=path_on_datastore,
overwrite=True,
show_progress=True)
# Use the folder partition format to define a dataset with a 'date' timestamp column
partition_format = path_on_datastore + '/diabetes_{date:yyyy-MM-dd}.csv'
target_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, path_on_datastore + '/*.csv'),
partition_format=partition_format)
# Register the target dataset
print('Registering target dataset...')
target_data_set = target_data_set.with_timestamp_columns('date').register(workspace=ws,
name='diabetes target',
description='diabetes target data',
tags = {'format':'CSV'},
create_new_version=True)
print('Target dataset registered!')
###Output
_____no_output_____
###Markdown
Create a Data Drift MonitorNow you're ready to create a data drift monitor for the diabetes data. The data drift monitor will run periodically or on demand to compare the baseline dataset with the target dataset, to which new data will be added over time. Create a Compute TargetTo run the data drift monitor, you'll need a compute target.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "aml-cluster"
try:
# Get the cluster if it exists
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If not, create it
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS2_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Define the Data Drift MonitorNow you're ready to use a **DataDriftDetector** class to define the data drift monitor for your data. You can specify the features you want to monitor for data drift, the name of the compute target to be used to run the monitoring process, the frequency at which the data should be compared, the data drift threshold above which an alert should be triggered, and the latency (in hours) to allow for data collection.
###Code
from azureml.datadrift import DataDriftDetector
# set up feature list
features = ['Pregnancies', 'Age', 'BMI']
# set up data drift detector
monitor = DataDriftDetector.create_from_datasets(ws, 'diabetes-drift-detector', baseline_data_set, target_data_set,
compute_target='aml-cluster',
frequency='Week',
feature_list=features,
drift_threshold=.3,
latency=24)
monitor
###Output
_____no_output_____
###Markdown
Backfill the MonitorYou have a baseline dataset and a target dataset that includes simulated weekly data collection for six weeks. You can use this to backfill the monitor so that it can analyze data drift between the original baseline and the target data.> **Note** This may take some time to run, as the compute target must be started to run the backfill analysis. The widget may not always update to show the status, so click the link to observe the experiment status in Azure Machine Learning studio!
###Code
from azureml.widgets import RunDetails
backfill = monitor.backfill( dt.datetime.now() - dt.timedelta(weeks=6), dt.datetime.now())
RunDetails(backfill).show()
backfill.wait_for_completion()
###Output
_____no_output_____
###Markdown
Analyze Data DriftYou can use the following code to examine data drift for the points in time collected in the backfill run.
###Code
drift_metrics = backfill.get_metrics()
for metric in drift_metrics:
print(metric, drift_metrics[metric])
###Output
_____no_output_____
###Markdown
Monitoring Data DriftOver time, models can become less effective at predicting accurately due to changing trends in feature data. This phenomenon is known as *data drift*, and it's important to monitor your machine learning solution to detect it so you can retrain your models if necessary.In this lab, you'll configure data drift monitoring for datasets. Install the DataDriftDetector moduleTo define a data drift monitor, you'll need to ensure that you have the latest version of the Azure ML SDK installed, and install the **datadrift** module; so run the following cell to do that:
###Code
!pip install --upgrade azureml-sdk[notebooks,automl,explain]
!pip install --upgrade azureml-datadrift
# Restart the kernel after installation is complete!
###Output
_____no_output_____
###Markdown
> **Important**: Now you'll need to restart the kernel. In Jupyter, on the **Kernel** menu, select **Restart and Clear Output**. Then, when the output from the cell above has been removed and the kernel is restarted, continue the steps below. Connect to Your WorkspaceNow you're ready to connect to your workspace using the Azure ML SDK.> **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate.
###Code
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to work with', ws.name)
###Output
Ready to work with Lab01A
###Markdown
Create a Baseline DatasetTo monitor a dataset for data drift, you must register a *baseline* dataset (usually the dataset used to train your model) to use as a point of comparison with data collected in the future.
###Code
from azureml.core import Datastore, Dataset
# Upload the baseline data
default_ds = ws.get_default_datastore()
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'],
target_path='diabetes-baseline',
overwrite=True,
show_progress=True)
# Create and register the baseline dataset
print('Registering baseline dataset...')
baseline_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-baseline/*.csv'))
baseline_data_set = baseline_data_set.register(workspace=ws,
name='diabetes baseline',
description='diabetes baseline data',
tags = {'format':'CSV'},
create_new_version=True)
print('Baseline dataset registered!')
###Output
Uploading an estimated of 2 files
Uploading ./data/diabetes.csv
Uploading ./data/diabetes2.csv
Uploaded ./data/diabetes2.csv, 1 files out of an estimated total of 2
Uploaded ./data/diabetes.csv, 2 files out of an estimated total of 2
Uploaded 2 files
Registering baseline dataset...
Baseline dataset registered!
###Markdown
Create a Target DatasetOver time, you can collect new data with the same features as your baseline training data. To compare this new data to the baseline data, you must define a target dataset that includes the features you want to analyze for data drift as well as a timestamp field that indicates the point in time when the new data was current -this enables you to measure data drift over temporal intervals. The timestamp can either be a field in the dataset itself, or derived from the folder and filename pattern used to store the data. For example, you might store new data in a folder hierarchy that consists of a folder for the year, containing a folder for the month, which in turn contains a folder for the day; or you might just encode the year, month, and day in the file name like this: *data_2020-01-29.csv*; which is the approach taken in the following code:
###Code
import datetime as dt
import pandas as pd
print('Generating simulated data...')
# Load the smaller of the two data files
data = pd.read_csv('data/diabetes2.csv')
# We'll generate data for the past 6 weeks
weeknos = reversed(range(6))
file_paths = []
for weekno in weeknos:
# Get the date X weeks ago
data_date = dt.date.today() - dt.timedelta(weeks=weekno)
    # Modify data to create some drift
data['Pregnancies'] = data['Pregnancies'] + 1
data['Age'] = round(data['Age'] * 1.2).astype(int)
data['BMI'] = data['BMI'] * 1.1
# Save the file with the date encoded in the filename
file_path = 'data/diabetes_{}.csv'.format(data_date.strftime("%Y-%m-%d"))
data.to_csv(file_path)
file_paths.append(file_path)
# Upload the files
path_on_datastore = 'diabetes-target'
default_ds.upload_files(files=file_paths,
target_path=path_on_datastore,
overwrite=True,
show_progress=True)
# Use the folder partition format to define a dataset with a 'date' timestamp column
partition_format = path_on_datastore + '/diabetes_{date:yyyy-MM-dd}.csv'
target_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, path_on_datastore + '/*.csv'),
partition_format=partition_format)
# Register the target dataset
print('Registering target dataset...')
# properties: ["Tabular", "Time series"]
# properties include "Time series" because of `.with_timestamp_columns('date')`
target_data_set = target_data_set.with_timestamp_columns('date').register(workspace=ws,
name='diabetes target',
description='diabetes target data',
tags = {'format':'CSV'},
create_new_version=True)
print('Target dataset registered!')
###Output
Generating simulated data...
Uploading an estimated of 6 files
Uploading data/diabetes_2020-07-05.csv
Uploading data/diabetes_2020-07-12.csv
Uploading data/diabetes_2020-07-19.csv
Uploading data/diabetes_2020-07-26.csv
Uploading data/diabetes_2020-08-02.csv
Uploading data/diabetes_2020-08-09.csv
Uploaded data/diabetes_2020-07-12.csv, 1 files out of an estimated total of 6
Uploaded data/diabetes_2020-07-05.csv, 2 files out of an estimated total of 6
Uploaded data/diabetes_2020-07-19.csv, 3 files out of an estimated total of 6
Uploaded data/diabetes_2020-07-26.csv, 4 files out of an estimated total of 6
Uploaded data/diabetes_2020-08-02.csv, 5 files out of an estimated total of 6
Uploaded data/diabetes_2020-08-09.csv, 6 files out of an estimated total of 6
Uploaded 6 files
Registering target dataset...
Target dataset registered!
###Markdown
Create a Data Drift MonitorNow you're ready to create a data drift monitor for the diabetes data. The data drift monitor will run periodically or on demand to compare the baseline dataset with the target dataset, to which new data will be added over time. Create a Compute TargetTo run the data drift monitor, you'll need a compute target. In this lab, you'll use the compute cluster you created previously (if it doesn't exist, it will be created).> **Important**: Change *your-compute-cluster* to the name of your compute cluster in the code below before running it!
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "LAB05B-Cluster"
try:
# Get the cluster if it exists
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If not, create it
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS2_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
###Output
Found existing cluster, use it.
Succeeded
AmlCompute wait for completion finished
Minimum number of nodes requested have been provisioned
###Markdown
Define the Data Drift MonitorNow you're ready to use a **DataDriftDetector** class to define the data drift monitor for your data. You can specify the features you want to monitor for data drift, the name of the compute target to be used to run the monitoring process, the frequency at which the data should be compared, the data drift threshold above which an alert should be triggered, and the latency (in hours) to allow for data collection.
###Code
from azureml.datadrift import DataDriftDetector
# set up feature list
features = ['Pregnancies', 'Age', 'BMI']
# set up data drift detector
monitor = DataDriftDetector.create_from_datasets(ws, 'diabetes-drift-detector', baseline_data_set, target_data_set,
compute_target=cluster_name,
frequency='Week',
feature_list=features,
drift_threshold=.3,
latency=24)
monitor
###Output
_____no_output_____
###Markdown
Backfill the MonitorYou have a baseline dataset and a target dataset that includes simulated weekly data collection for six weeks. You can use this to backfill the monitor so that it can analyze data drift between the original baseline and the target data.> **Note** This may take some time to run, as the compute target must be started to run the backfill analysis. The widget may not always update to show the status, so click the link to observe the experiment status in Azure Machine Learning studio!
###Code
from azureml.widgets import RunDetails
backfill = monitor.backfill( dt.datetime.now() - dt.timedelta(weeks=6), dt.datetime.now())
RunDetails(backfill).show()
backfill.wait_for_completion()
# [Click here to see the run in Azure Machine Learning studio]:
# https://ml.azure.com/experiments/diabetes-drift-detector-Monitor-Runs/runs/diabetes-drift-detector-Monitor-Runs_1596959577440?wsid=/subscriptions/35241d74-3b9e-4778-bb92-4bb15e7b0410/resourcegroups/DP-100/workspaces/Lab01A&tid=19e45a7b-505a-4c49-89e4-08ed55a529ea
###Output
_____no_output_____
###Markdown
Analyze Data DriftYou can use the following code to examine data drift for the points in time collected in the backfill run.
###Code
drift_metrics = backfill.get_metrics()
for metric in drift_metrics:
print(metric, drift_metrics[metric])
# [check the dataset drift monitor in]:
# https://ml.azure.com/data/monitor/diabetes-drift-detector?wsid=/subscriptions/35241d74-3b9e-4778-bb92-4bb15e7b0410/resourcegroups/DP-100/workspaces/Lab01A&tid=19e45a7b-505a-4c49-89e4-08ed55a529ea&startDate=2020-06-28&endDate=2020-08-10
# [Name]: diabetes-drift-detector
###Output
start_date 2020-06-28
end_date 2020-08-16
frequency Week
Datadrift percentage {'days_from_start': [0, 7, 14, 21, 28, 35, 42], 'drift_percentage': [74.19152901127207, 79.4213426130036, 89.33065283229664, 93.48161383816839, 96.11668317822499, 98.35454199065752, 99.23199438682525]}
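###Markdown
The drift percentage series printed above lends itself to a quick visual check. A minimal sketch (it assumes matplotlib is available in the environment and that the backfill run has completed; it reads the same 'Datadrift percentage' entry shown in the output above):
###Code
import matplotlib.pyplot as plt

# Plot drift percentage over the simulated six-week backfill period
drift = backfill.get_metrics()['Datadrift percentage']
plt.plot(drift['days_from_start'], drift['drift_percentage'], marker='o')
plt.xlabel('Days from start')
plt.ylabel('Drift (%)')
plt.title('Data drift over the backfill period')
plt.show()
###Output
_____no_output_____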
###Markdown
Monitoring Data Drift. Over time, models can become less effective at predicting accurately due to changing trends in feature data. This phenomenon is known as *data drift*, and it's important to monitor your machine learning solution to detect it so you can retrain your models if necessary. In this lab, you'll configure data drift monitoring for datasets. Install the DataDriftDetector module. To define a data drift monitor, you'll need to ensure that you have the latest version of the Azure ML SDK installed, and install the **datadrift** module; run the following cell to do that:
###Code
!pip install --upgrade azureml-sdk[notebooks,automl,explain]
!pip install --upgrade azureml-datadrift
# Restart the kernel after installation is complete!
###Output
_____no_output_____
###Markdown
> **Important**: Now you'll need to restart the kernel. In Jupyter, on the **Kernel** menu, select **Restart and Clear Output**. Then, when the output from the cell above has been removed and the kernel is restarted, continue with the steps below. Connect to Your Workspace. You can connect to your workspace using the Azure ML SDK. > **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate.
###Code
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to work with', ws.name)
###Output
_____no_output_____
###Markdown
Create a Baseline Dataset. To monitor a dataset for data drift, you must register a *baseline* dataset (usually the dataset used to train your model) to use as a point of comparison with data collected in the future.
###Code
from azureml.core import Datastore, Dataset
# Upload the baseline data
default_ds = ws.get_default_datastore()
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'],
target_path='diabetes-baseline',
overwrite=True,
show_progress=True)
# Create and register the baseline dataset
print('Registering baseline dataset...')
baseline_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-baseline/*.csv'))
baseline_data_set = baseline_data_set.register(workspace=ws,
name='diabetes baseline',
description='diabetes baseline data',
tags = {'format':'CSV'},
create_new_version=True)
print('Baseline dataset registered!')
###Output
_____no_output_____
###Markdown
Create a Target Dataset. Over time, you can collect new data with the same features as your baseline training data. To compare this new data to the baseline data, you must define a target dataset that includes the features you want to analyze for data drift, as well as a timestamp field that indicates the point in time when the new data was current - this enables you to measure data drift over temporal intervals. The timestamp can either be a field in the dataset itself, or derived from the folder and filename pattern used to store the data. For example, you might store new data in a folder hierarchy that consists of a folder for the year, containing a folder for the month, which in turn contains a folder for the day; or you might just encode the year, month, and day in the file name like this: *data_2020-01-29.csv*; which is the approach taken in the following code:
###Code
import datetime as dt
import pandas as pd
print('Generating simulated data...')
# Load the smaller of the two data files
data = pd.read_csv('data/diabetes2.csv')
# We'll generate data for the past 6 weeks
weeknos = reversed(range(6))
file_paths = []
for weekno in weeknos:
    # Get the date X weeks ago
data_date = dt.date.today() - dt.timedelta(weeks=weekno)
    # Modify data to create some drift
data['Pregnancies'] = data['Pregnancies'] + 1
data['Age'] = round(data['Age'] * 1.2).astype(int)
data['BMI'] = data['BMI'] * 1.1
    # Save the file with the date encoded in the filename
file_path = 'data/diabetes_{}.csv'.format(data_date.strftime("%Y-%m-%d"))
data.to_csv(file_path)
file_paths.append(file_path)
# Upload the files
path_on_datastore = 'diabetes-target'
default_ds.upload_files(files=file_paths,
target_path=path_on_datastore,
overwrite=True,
show_progress=True)
# Use the folder partition format to define a dataset with a 'date' timestamp column
partition_format = path_on_datastore + '/diabetes_{date:yyyy-MM-dd}.csv'
target_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, path_on_datastore + '/*.csv'),
partition_format=partition_format)
# Register the target dataset
print('Registering target dataset...')
target_data_set = target_data_set.with_timestamp_columns('date').register(workspace=ws,
name='diabetes target',
description='diabetes target data',
tags = {'format':'CSV'},
create_new_version=True)
print('Target dataset registered!')
###Output
_____no_output_____
###Markdown
Create a Data Drift Monitor. Now you can create a data drift monitor for the diabetes data. The data drift monitor will run periodically or on demand to compare the baseline dataset with the target dataset, to which new data will be added over time. Create a Compute Target. To run the data drift monitor, you'll need a compute target. In this lab, you'll use the compute cluster you created previously (if it doesn't exist, it will be created). > **Important**: Change *your-compute-cluster* to the name of your compute cluster in the code below before running it! Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
    # Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
    # If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
###Markdown
Define the Data Drift Monitor. Now you can use the **DataDriftDetector** class to define the data drift monitor for your data. You can specify the features you want to monitor for data drift, the name of the compute target to be used to run the monitoring process, the frequency at which the data should be compared, the data drift threshold above which an alert should be triggered, and the latency (in hours) to allow for data collection.
###Code
from azureml.datadrift import DataDriftDetector
# set up feature list
features = ['Pregnancies', 'Age', 'BMI']
# set up data drift detector
monitor = DataDriftDetector.create_from_datasets(ws, 'diabetes-drift-detector', baseline_data_set, target_data_set,
compute_target=cluster_name,
frequency='Week',
feature_list=features,
drift_threshold=.3,
latency=24)
monitor
###Output
_____no_output_____
###Markdown
Backfill the Monitor. You have a baseline dataset and a target dataset that includes simulated weekly data collection for six weeks. You can use this to backfill the monitor so that it can analyze data drift between the original baseline and the target data. > **Note** This may take some time to run, as the compute target must be started to run the backfill analysis. The widget may not always update to show the status, so click the link to observe the experiment status in Azure Machine Learning studio!
###Code
from azureml.widgets import RunDetails
backfill = monitor.backfill( dt.datetime.now() - dt.timedelta(weeks=6), dt.datetime.now())
RunDetails(backfill).show()
backfill.wait_for_completion()
###Output
_____no_output_____
###Markdown
Analyze Data Drift. You can use the following code to examine data drift for the points in time collected in the backfill run.
###Code
drift_metrics = backfill.get_metrics()
for metric in drift_metrics:
print(metric, drift_metrics[metric])
###Output
_____no_output_____
###Markdown
Monitoring Data Drift. Over time, models can become less effective at predicting accurately due to changing trends in feature data. This phenomenon is known as *data drift*, and it's important to monitor your machine learning solution to detect it so you can retrain your models if necessary. In this lab, you'll configure data drift monitoring for datasets. Install the DataDriftDetector module. To define a data drift monitor, make sure you have the latest version of the Azure ML SDK installed and install the **datadrift** module by running the following cell:
###Code
!pip install --upgrade azureml-sdk[notebooks,automl,explain]
!pip install --upgrade azureml-datadrift
# Restart the kernel after installation is complete!
###Output
_____no_output_____
###Markdown
> **Important**: Now you'll need to restart the kernel. In Jupyter, on the **Kernel** menu, select **Restart and Clear Output**. Then, when the output from the cell above has been removed and the kernel is restarted, continue with the steps below. Connect to Your Workspace. You're now ready to connect to your workspace using the Azure ML SDK. > **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate.
###Code
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to work with', ws.name)
###Output
_____no_output_____
###Markdown
Create a Baseline Dataset. To monitor a dataset for data drift, you must register a *baseline* dataset (usually the dataset used to train your model) to use as a point of comparison with data collected in the future.
###Code
from azureml.core import Datastore, Dataset
# Upload the baseline data
default_ds = ws.get_default_datastore()
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'],
target_path='diabetes-baseline',
overwrite=True,
show_progress=True)
# Create and register the baseline dataset
print('Registering baseline dataset...')
baseline_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-baseline/*.csv'))
baseline_data_set = baseline_data_set.register(workspace=ws,
name='diabetes baseline',
description='diabetes baseline data',
tags = {'format':'CSV'},
create_new_version=True)
print('Baseline dataset registered!')
###Output
_____no_output_____
###Markdown
Create a Target Dataset. Over time, you can collect new data with the same features as your baseline training data. To compare this new data to the baseline data, you must define a target dataset that includes the features you want to analyze for data drift, as well as a timestamp field that indicates the point in time when the new data was current - this enables you to measure data drift over temporal intervals. The timestamp can either be a field in the dataset itself, or derived from the folder and filename pattern used to store the data. For example, you might store new data in a folder hierarchy that consists of a folder for the year, containing a folder for the month, which in turn contains a folder for the day; or you might just encode the year, month, and day in the file name like this: *data_2020-01-29.csv*; which is the approach taken in the following code:
###Code
import datetime as dt
import pandas as pd
print('Generating simulated data...')
# Load the smaller of the two data files
data = pd.read_csv('data/diabetes2.csv')
# We'll generate data for the past 6 weeks
weeknos = reversed(range(6))
file_paths = []
for weekno in weeknos:
    # Get the date X weeks ago
data_date = dt.date.today() - dt.timedelta(weeks=weekno)
    # Modify data to create some drift
data['Pregnancies'] = data['Pregnancies'] + 1
data['Age'] = round(data['Age'] * 1.2).astype(int)
data['BMI'] = data['BMI'] * 1.1
    # Save the file with the date encoded in the filename
file_path = 'data/diabetes_{}.csv'.format(data_date.strftime("%Y-%m-%d"))
data.to_csv(file_path)
file_paths.append(file_path)
# Upload the files
path_on_datastore = 'diabetes-target'
default_ds.upload_files(files=file_paths,
target_path=path_on_datastore,
overwrite=True,
show_progress=True)
# Use the folder partition format to define a dataset with a 'date' timestamp column
partition_format = path_on_datastore + '/diabetes_{date:yyyy-MM-dd}.csv'
target_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, path_on_datastore + '/*.csv'),
partition_format=partition_format)
# Register the target dataset
print('Registering target dataset...')
target_data_set = target_data_set.with_timestamp_columns('date').register(workspace=ws,
name='diabetes target',
description='diabetes target data',
tags = {'format':'CSV'},
create_new_version=True)
print('Target dataset registered!')
###Output
_____no_output_____
###Markdown
Create a Data Drift Monitor. You're now ready to create a data drift monitor for the diabetes data. The data drift monitor will run periodically or on demand to compare the baseline dataset with the target dataset, to which new data will be added over time. Create a Compute Target. To run the data drift monitor, you'll need a compute target. In this lab, you'll use the compute cluster you created previously (if it doesn't exist, it will be created). > **Important**: Change *your-compute-cluster* to the name of your compute cluster in the code below before running it! Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
    # Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
    # If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
###Markdown
Define the Data Drift Monitor. You're now ready to use the **DataDriftDetector** class to define the data drift monitor for your data. You can specify the features you want to monitor for data drift, the name of the compute target to be used to run the monitoring process, the frequency at which the data should be compared, the data drift threshold above which an alert should be triggered, and the latency (in hours) to allow for data collection.
###Code
from azureml.datadrift import DataDriftDetector
# set up feature list
features = ['Pregnancies', 'Age', 'BMI']
# set up data drift detector
monitor = DataDriftDetector.create_from_datasets(ws, 'diabetes-drift-detector', baseline_data_set, target_data_set,
compute_target=cluster_name,
frequency='Week',
feature_list=features,
drift_threshold=.3,
latency=24)
monitor
###Output
_____no_output_____
###Markdown
Backfill the Monitor. You have a baseline dataset and a target dataset that includes simulated weekly data collection for six weeks. You can use this to backfill the monitor so that it can analyze data drift between the original baseline and the target data. > **Note** This may take some time to run, as the compute target must be started to run the backfill analysis. The widget may not always update to show the status, so click the link to observe the experiment status in Azure Machine Learning studio!
###Code
from azureml.widgets import RunDetails
backfill = monitor.backfill( dt.datetime.now() - dt.timedelta(weeks=6), dt.datetime.now())
RunDetails(backfill).show()
backfill.wait_for_completion()
###Output
_____no_output_____
###Markdown
Analyze Data Drift. You can use the following code to examine data drift for the points in time collected in the backfill run.
###Code
drift_metrics = backfill.get_metrics()
for metric in drift_metrics:
print(metric, drift_metrics[metric])
###Output
_____no_output_____
###Markdown
Monitoring Data DriftOver time, models can become less effective at predicting accurately due to changing trends in feature data. This phenomenon is known as *data drift*, and it's important to monitor your machine learning solution to detect it so you can retrain your models if necessary.In this lab, you'll configure data drift monitoring for datasets. Install the DataDriftDetector moduleTo define a data drift monitor, you'll need to ensure that you have the latest version of the Azure ML SDK installed, and install the **datadrift** module; so run the following cell to do that:
###Code
!pip install --upgrade azureml-sdk[notebooks,automl,explain]
!pip install --upgrade azureml-datadrift
# Restart the kernel after installation is complete!
###Output
_____no_output_____
###Markdown
> **Important**: Now you'll need to restart the kernel. In Jupyter, on the **Kernel** menu, select **Restart and Clear Output**. Then, when the output from the cell above has been removed and the kernel is restarted, continue the steps below. Connect to Your WorkspaceNow you're ready to connect to your workspace using the Azure ML SDK.> **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate.
###Code
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to work with', ws.name)
###Output
_____no_output_____
###Markdown
Create a Baseline DatasetTo monitor a dataset for data drift, you must register a *baseline* dataset (usually the dataset used to train your model) to use as a point of comparison with data collected in the future.
###Code
from azureml.core import Datastore, Dataset
# Upload the baseline data
default_ds = ws.get_default_datastore()
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'],
target_path='diabetes-baseline',
overwrite=True,
show_progress=True)
# Create and register the baseline dataset
print('Registering baseline dataset...')
baseline_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-baseline/*.csv'))
baseline_data_set = baseline_data_set.register(workspace=ws,
name='diabetes baseline',
description='diabetes baseline data',
tags = {'format':'CSV'},
create_new_version=True)
print('Baseline dataset registered!')
###Output
_____no_output_____
###Markdown
Create a Target DatasetOver time, you can collect new data with the same features as your baseline training data. To compare this new data to the baseline data, you must define a target dataset that includes the features you want to analyze for data drift as well as a timestamp field that indicates the point in time when the new data was current -this enables you to measure data drift over temporal intervals. The timestamp can either be a field in the dataset itself, or derived from the folder and filename pattern used to store the data. For example, you might store new data in a folder hierarchy that consists of a folder for the year, containing a folder for the month, which in turn contains a folder for the day; or you might just encode the year, month, and day in the file name like this: *data_2020-01-29.csv*; which is the approach taken in the following code:
###Code
import datetime as dt
import pandas as pd
print('Generating simulated data...')
# Load the smaller of the two data files
data = pd.read_csv('data/diabetes2.csv')
# We'll generate data for the past 6 weeks
weeknos = reversed(range(6))
file_paths = []
for weekno in weeknos:
# Get the date X weeks ago
data_date = dt.date.today() - dt.timedelta(weeks=weekno)
    # Modify data to create some drift
data['Pregnancies'] = data['Pregnancies'] + 1
data['Age'] = round(data['Age'] * 1.2).astype(int)
data['BMI'] = data['BMI'] * 1.1
# Save the file with the date encoded in the filename
file_path = 'data/diabetes_{}.csv'.format(data_date.strftime("%Y-%m-%d"))
data.to_csv(file_path)
file_paths.append(file_path)
# Upload the files
path_on_datastore = 'diabetes-target'
default_ds.upload_files(files=file_paths,
target_path=path_on_datastore,
overwrite=True,
show_progress=True)
# Use the folder partition format to define a dataset with a 'date' timestamp column
partition_format = path_on_datastore + '/diabetes_{date:yyyy-MM-dd}.csv'
target_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, path_on_datastore + '/*.csv'),
partition_format=partition_format)
# Register the target dataset
print('Registering target dataset...')
target_data_set = target_data_set.with_timestamp_columns('date').register(workspace=ws,
name='diabetes target',
description='diabetes target data',
tags = {'format':'CSV'},
create_new_version=True)
print('Target dataset registered!')
###Output
_____no_output_____
###Markdown
Create a Data Drift MonitorNow you're ready to create a data drift monitor for the diabetes data. The data drift monitor will run periodically or on demand to compare the baseline dataset with the target dataset, to which new data will be added over time. Create a Compute TargetTo run the data drift monitor, you'll need a compute target. In this lab, you'll use the compute cluster you created previously (if it doesn't exist, it will be created).> **Important**: Change *compute-cluster* to the name of your compute cluster in the code below before running it! Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
###Markdown
Define the Data Drift MonitorNow you're ready to use a **DataDriftDetector** class to define the data drift monitor for your data. You can specify the features you want to monitor for data drift, the name of the compute target to be used to run the monitoring process, the frequency at which the data should be compared, the data drift threshold above which an alert should be triggered, and the latency (in hours) to allow for data collection.
###Code
from azureml.datadrift import DataDriftDetector
# set up feature list
features = ['Pregnancies', 'Age', 'BMI']
# set up data drift detector
monitor = DataDriftDetector.create_from_datasets(ws, 'diabetes-drift-detector', baseline_data_set, target_data_set,
compute_target=cluster_name,
frequency='Week',
feature_list=features,
drift_threshold=.3,
latency=24)
monitor
###Output
_____no_output_____
###Markdown
Backfill the MonitorYou have a baseline dataset and a target dataset that includes simulated weekly data collection for six weeks. You can use this to backfill the monitor so that it can analyze data drift between the original baseline and the target data.> **Note** This may take some time to run, as the compute target must be started to run the backfill analysis. The widget may not always update to show the status, so click the link to observe the experiment status in Azure Machine Learning studio!
###Code
from azureml.widgets import RunDetails
backfill = monitor.backfill( dt.datetime.now() - dt.timedelta(weeks=6), dt.datetime.now())
RunDetails(backfill).show()
backfill.wait_for_completion()
###Output
_____no_output_____
###Markdown
Analyze Data DriftYou can use the following code to examine data drift for the points in time collected in the backfill run.
###Code
drift_metrics = backfill.get_metrics()
for metric in drift_metrics:
print(metric, drift_metrics[metric])
###Output
_____no_output_____
###Markdown
Monitoring Data DriftOver time, models can become less effective at predicting accurately due to changing trends in feature data. This phenomenon is known as *data drift*, and it's important to monitor your machine learning solution to detect it so you can retrain your models if necessary.In this lab, you'll configure data drift monitoring for datasets. Install the DataDriftDetector moduleTo define a data drift monitor, you'll need to ensure that you have the latest version of the Azure ML SDK installed, and install the **datadrift** module; so run the following cell to do that:
###Code
!pip install --upgrade azureml-sdk[notebooks,automl,explain]
!pip install --upgrade azureml-datadrift
# Restart the kernel after installation is complete!
###Output
_____no_output_____
###Markdown
> **Important**: Now you'll need to restart the kernel. In Jupyter, on the **Kernel** menu, select **Restart and Clear Output**. Then, when the output from the cell above has been removed and the kernel is restarted, continue the steps below. Connect to Your WorkspaceNow you're ready to connect to your workspace using the Azure ML SDK.> **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate.
###Code
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to work with', ws.name)
###Output
_____no_output_____
###Markdown
Create a Baseline DatasetTo monitor a dataset for data drift, you must register a *baseline* dataset (usually the dataset used to train your model) to use as a point of comparison with data collected in the future.
###Code
from azureml.core import Datastore, Dataset
# Upload the baseline data
default_ds = ws.get_default_datastore()
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'],
target_path='diabetes-baseline',
overwrite=True,
show_progress=True)
# Create and register the baseline dataset
print('Registering baseline dataset...')
baseline_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-baseline/*.csv'))
baseline_data_set = baseline_data_set.register(workspace=ws,
name='diabetes baseline',
description='diabetes baseline data',
tags = {'format':'CSV'},
create_new_version=True)
print('Baseline dataset registered!')
###Output
_____no_output_____
###Markdown
Create a Target DatasetOver time, you can collect new data with the same features as your baseline training data. To compare this new data to the baseline data, you must define a target dataset that includes the features you want to analyze for data drift as well as a timestamp field that indicates the point in time when the new data was current -this enables you to measure data drift over temporal intervals. The timestamp can either be a field in the dataset itself, or derived from the folder and filename pattern used to store the data. For example, you might store new data in a folder hierarchy that consists of a folder for the year, containing a folder for the month, which in turn contains a folder for the day; or you might just encode the year, month, and day in the file name like this: *data_2020-01-29.csv*; which is the approach taken in the following code:
###Code
import datetime as dt
import pandas as pd
print('Generating simulated data...')
# Load the smaller of the two data files
data = pd.read_csv('data/diabetes2.csv')
# We'll generate data for the past 6 weeks
weeknos = reversed(range(6))
file_paths = []
for weekno in weeknos:
# Get the date X weeks ago
data_date = dt.date.today() - dt.timedelta(weeks=weekno)
    # Modify data to create some drift
data['Pregnancies'] = data['Pregnancies'] + 1
data['Age'] = round(data['Age'] * 1.2).astype(int)
data['BMI'] = data['BMI'] * 1.1
# Save the file with the date encoded in the filename
file_path = 'data/diabetes_{}.csv'.format(data_date.strftime("%Y-%m-%d"))
data.to_csv(file_path)
file_paths.append(file_path)
# Upload the files
path_on_datastore = 'diabetes-target'
default_ds.upload_files(files=file_paths,
target_path=path_on_datastore,
overwrite=True,
show_progress=True)
# Use the folder partition format to define a dataset with a 'date' timestamp column
partition_format = path_on_datastore + '/diabetes_{date:yyyy-MM-dd}.csv'
target_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, path_on_datastore + '/*.csv'),
partition_format=partition_format)
# Register the target dataset
print('Registering target dataset...')
target_data_set = target_data_set.with_timestamp_columns('date').register(workspace=ws,
name='diabetes target',
description='diabetes target data',
tags = {'format':'CSV'},
create_new_version=True)
print('Target dataset registered!')
###Output
_____no_output_____
###Markdown
Create a Data Drift MonitorNow you're ready to create a data drift monitor for the diabetes data. The data drift monitor will run periodically or on demand to compare the baseline dataset with the target dataset, to which new data will be added over time. Create a Compute TargetTo run the data drift monitor, you'll need a compute target. In this lab, you'll use the compute cluster you created previously (if it doesn't exist, it will be created).> **Important**: Change *your-compute-cluster* to the name of your compute cluster in the code below before running it!
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Get the cluster if it exists
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If not, create it
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS2_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Define the Data Drift MonitorNow you're ready to use a **DataDriftDetector** class to define the data drift monitor for your data. You can specify the features you want to monitor for data drift, the name of the compute target to be used to run the monitoring process, the frequency at which the data should be compared, the data drift threshold above which an alert should be triggered, and the latency (in hours) to allow for data collection.
###Code
from azureml.datadrift import DataDriftDetector
# set up feature list
features = ['Pregnancies', 'Age', 'BMI']
# set up data drift detector
monitor = DataDriftDetector.create_from_datasets(ws, 'diabetes-drift-detector', baseline_data_set, target_data_set,
compute_target=cluster_name,
frequency='Week',
feature_list=features,
drift_threshold=.3,
latency=24)
monitor
###Output
_____no_output_____
###Markdown
Backfill the MonitorYou have a baseline dataset and a target dataset that includes simulated weekly data collection for six weeks. You can use this to backfill the monitor so that it can analyze data drift between the original baseline and the target data.> **Note** This may take some time to run, as the compute target must be started to run the backfill analysis. The widget may not always update to show the status, so click the link to observe the experiment status in Azure Machine Learning studio!
###Code
from azureml.widgets import RunDetails
backfill = monitor.backfill( dt.datetime.now() - dt.timedelta(weeks=6), dt.datetime.now())
RunDetails(backfill).show()
backfill.wait_for_completion()
###Output
_____no_output_____
###Markdown
Analyze Data DriftYou can use the following code to examine data drift for the points in time collected in the backfill run.
###Code
drift_metrics = backfill.get_metrics()
for metric in drift_metrics:
print(metric, drift_metrics[metric])
###Output
_____no_output_____
notebooks/Transformation3D.ipynb
###Markdown
Rigid-body transformations in three dimensions> Marcos Duarte, Renato Naville Watanabe > Laboratory of Biomechanics and Motor Control ([http://pesquisa.ufabc.edu.br/bmclab](http://pesquisa.ufabc.edu.br/bmclab)) > Federal University of ABC, Brazil The kinematics of a rigid body is completely described by its pose, i.e., its position and orientation in space (and the corresponding changes, translation and rotation). In a three-dimensional space, at least three coordinates and three angles are necessary to describe the pose of the rigid body, totaling six degrees of freedom for a rigid body.In motion analysis, to describe a translation and rotation of a rigid body with respect to a coordinate system, typically we attach another coordinate system to the rigid body and determine a transformation between these two coordinate systems.A transformation is any function mapping a set to another set. For the description of the kinematics of rigid bodies, we are interested only in what is called rigid or Euclidean transformations (denoted as SE(3) for the three-dimensional space) because they preserve the distance between every pair of points of the body (which is considered rigid by definition). Translations and rotations are examples of rigid transformations (a reflection is also an example of a rigid transformation, but it changes the right-hand axis convention to a left-hand one, which usually is not of interest). In turn, rigid transformations are examples of [affine transformations](https://en.wikipedia.org/wiki/Affine_transformation). Examples of other affine transformations are shear and scaling transformations (which preserve collinearity and ratios of distances along a line, but not necessarily lengths or angles). We will follow the same rationale as in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb) and we will skip the fundamental concepts already covered there. So, if you haven't done so yet, you should read that notebook before continuing here. TranslationA pure three-dimensional translation of a rigid body (or a coordinate system attached to it) in relation to another rigid body (with another coordinate system) is illustrated in the figure below. Figure. A point in three-dimensional space represented in two coordinate systems, with one coordinate system translated. The position of point $\mathbf{P}$ originally described in the $xyz$ (local) coordinate system but now described in the $\mathbf{XYZ}$ (Global) coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{P_l} $$Or in terms of its components:\begin{equation}\begin{array}{}\mathbf{P_X} =& \mathbf{L_X} + \mathbf{P}_x \\\mathbf{P_Y} =& \mathbf{L_Y} + \mathbf{P}_y \\\mathbf{P_Z} =& \mathbf{L_Z} + \mathbf{P}_z \end{array}\end{equation}And in matrix form:\begin{equation}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \end{bmatrix} =\begin{bmatrix}\mathbf{L_X} \\\mathbf{L_Y} \\\mathbf{L_Z} \end{bmatrix} +\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \end{bmatrix}\end{equation}From classical mechanics, this is an example of a [Galilean transformation](http://en.wikipedia.org/wiki/Galilean_transformation). Let's use Python to compute some numeric examples:
###Code
# Import the necessary libraries
import numpy as np
# suppress scientific notation for small numbers:
np.set_printoptions(precision=4, suppress=True)
###Output
_____no_output_____
###Markdown
For example, if the local coordinate system is translated by $\mathbf{L_G}=[1, 2, 3]$ in relation to the Global coordinate system, a point with coordinates $\mathbf{P_l}=[4, 5, 6]$ at the local coordinate system will have the position $\mathbf{P_G}=[5, 7, 9]$ at the Global coordinate system:
###Code
LG = np.array([1, 2, 3]) # Numpy array
Pl = np.array([4, 5, 6])
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
This operation also works if we have more than one point (NumPy broadcasts the arrays to compatible shapes):
###Code
Pl = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) # 2D array with 3 rows and 3 columns
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
RotationA pure three-dimensional rotation of a $xyz$ (local) coordinate system in relation to other $\mathbf{XYZ}$ (Global) coordinate system and the position of a point in these two coordinate systems are illustrated in the next figure (remember that this is equivalent to describing a rotation between two rigid bodies). A point in three-dimensional space represented in two coordinate systems, with one system rotated. In analogy to the rotation in two dimensions, we can calculate the rotation matrix that describes the rotation of the $xyz$ (local) coordinate system in relation to the $\mathbf{XYZ}$ (Global) coordinate system using the direction cosines between the axes of the two coordinate systems:\begin{equation}\mathbf{R_{Gl}} = \begin{bmatrix}\cos\mathbf{X}x & \cos\mathbf{X}y & \cos\mathbf{X}z \\\cos\mathbf{Y}x & \cos\mathbf{Y}y & \cos\mathbf{Y}z \\\cos\mathbf{Z}x & \cos\mathbf{Z}y & \cos\mathbf{Z}z\end{bmatrix}\end{equation}Note however that for rotations around more than one axis, these angles will not lie in the main planes ($\mathbf{XY, YZ, ZX}$) of the $\mathbf{XYZ}$ coordinate system, as illustrated in the figure below for the direction angles of the $y$ axis only. Thus, the determination of these angles by simple inspection, as we have done for the two-dimensional case, would not be simple. Figure. Definition of direction angles for the $y$ axis of the local coordinate system in relation to the $\mathbf{XYZ}$ Global coordinate system.Note that the nine angles shown in the matrix above for the direction cosines are obviously redundant since only three angles are necessary to describe the orientation of a rigid body in the three-dimensional space. An important characteristic of angles in the three-dimensional space is that angles cannot be treated as vectors: the result of a sequence of rotations of a rigid body around different axes depends on the order of the rotations, as illustrated in the next figure. Figure. The result of a sequence of rotations around different axes of a coordinate system depends on the order of the rotations. In the first example (first row), the rotations are around a Global (fixed) coordinate system. In the second example (second row), the rotations are around a local (rotating) coordinate system.Let's focus now on how to understand rotations in the three-dimensional space, looking at the rotations between coordinate systems (or between rigid bodies). Later we will apply what we have learned to describe the position of a point in these different coordinate systems. Euler anglesThere are different ways to describe a three-dimensional rotation of a rigid body (or of a coordinate system). The most straightforward solution would probably be to use a [spherical coordinate system](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ReferenceFrame.ipynbSpherical-coordinate-system), but spherical coordinates would be difficult to give an anatomical or clinical interpretation. A solution that has been often employed in biomechanics to handle rotations in the three-dimensional space is to use Euler angles. Under certain conditions, Euler angles can have an anatomical interpretation, but this representation also has some caveats. 
Let's see the Euler angles now.[Leonhard Euler](https://en.wikipedia.org/wiki/Leonhard_Euler) in the XVIII century showed that two three-dimensional coordinate systems with a common origin can be related by a sequence of up to three elemental rotations about the axes of the local coordinate system, where no two successive rotations may be about the same axis, which now are known as [Euler (or Eulerian) angles](http://en.wikipedia.org/wiki/Euler_angles). Elemental rotationsFirst, let's see rotations around a fixed Global coordinate system as we did for the two-dimensional case. The next figure illustrates elemental rotations of the local coordinate system around each axis of the fixed Global coordinate system. Figure. Elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. Note that for better clarity, the axis around where the rotation occurs is shown perpendicular to this page for each elemental rotation. Rotations around the fixed coordinate systemThe rotation matrices for the elemental rotations around each axis of the fixed $\mathbf{XYZ}$ coordinate system (rotations of the local coordinate system in relation to the Global coordinate system) are shown next.Around $\mathbf{X}$ axis: \begin{equation}\mathbf{R_{Gl,\,X}} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix}\end{equation}Around $\mathbf{Y}$ axis: \begin{equation}\mathbf{R_{Gl,\,Y}} = \begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix}\end{equation}Around $\mathbf{Z}$ axis: \begin{equation}\mathbf{R_{Gl,\,Z}} = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\end{equation}These matrices are the rotation matrices for the case of two-dimensional coordinate systems plus the corresponding terms for the third axes of the local and Global coordinate systems, which are parallel. To understand why the terms for the third axes are 1's or 0's, for instance, remember they represent the cosine directors. The cosines between $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ for the elemental rotations around respectively the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes are all 1 because $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ are parallel ($\cos 0^o$). The cosines of the other elements are zero because the axis around where each rotation occurs is perpendicular to the other axes of the coordinate systems ($\cos 90^o$). 
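As a quick numeric sanity check of these matrices (a minimal sketch, not part of the original material; the name `RX_num` below is used only for this illustration), a point on the local $y$ axis, $[0, 1, 0]$, rotated by $90^o$ around $\mathbf{X}$ should end up along the Global $\mathbf{Z}$ axis, $[0, 0, 1]$:
###Code
import numpy as np
ang = np.pi/2
# numeric version of the elemental rotation matrix around X for a 90 degree rotation (angle in radians):
RX_num = np.array([[1, 0, 0],
                   [0, np.cos(ang), -np.sin(ang)],
                   [0, np.sin(ang), np.cos(ang)]])
# a point on the local y axis expressed in the Global coordinate system after the rotation:
print(np.round(RX_num @ np.array([0, 1, 0]), 4))
###Output
_____no_output_____
###Markdown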
Rotations around the local coordinate systemThe rotation matrices for the elemental rotations this time around each axis of the $xyz$ coordinate system (rotations of the Global coordinate system in relation to the local coordinate system), similarly to the two-dimensional case, are simply the transpose of the above matrices as shown next.Around $x$ axis: \begin{equation}\mathbf{R}_{\mathbf{lG},\,x} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix}\end{equation}Around $y$ axis: \begin{equation}\mathbf{R}_{\mathbf{lG},\,y} = \begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix}\end{equation}Around $z$ axis: \begin{equation}\mathbf{R}_{\mathbf{lG},\,z} = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\end{equation}Notice this is equivalent to instead of rotating the local coordinate system by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system; remember that $\cos(-\:\cdot)=\cos(\cdot)$ and $\sin(-\:\cdot)=-\sin(\cdot)$. Sequence of elemental rotationsConsider now a sequence of elemental rotations around the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes of the fixed $\mathbf{XYZ}$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. This sequence of elemental rotations (each one of the local coordinate system with respect to the fixed Global coordinate system) is mathematically represented by a multiplication between the rotation matrices:\begin{equation}\begin{array}{l l}\mathbf{R_{Gl,\;XYZ}} & = \mathbf{R_{Z}} \mathbf{R_{Y}} \mathbf{R_{X}} \\\\ & = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix} \\\\ & =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma-\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:cos\gamma+\sin\alpha\:\sin\gamma \;\;\; \\\cos\beta\:\sin\gamma \;&\;\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma-\sin\alpha\:\cos\gamma \;\;\; \\-\sin\beta \;&\; \sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array}\end{equation}Note the order of the matrices. We can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
#import the necessary libraries
from IPython.core.display import Math, display
import sympy as sym
cos, sin = sym.cos, sym.sin
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz in relation to XYZ:
RX = sym.Matrix([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]])
RY = sym.Matrix([[cos(b), 0, sin(b)], [0, 1, 0], [-sin(b), 0, cos(b)]])
RZ = sym.Matrix([[cos(g), -sin(g), 0], [sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz in relation to XYZ:
RXYZ = RZ*RY*RX
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ}}=') + sym.latex(RXYZ, mat_str='matrix')))
###Output
_____no_output_____
###Markdown
For instance, we can calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $\mathbf{X,Y,Z}$:
###Code
R = sym.lambdify((a, b, g), RXYZ, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R_{Gl,\,XYZ\,}}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Below you can test any sequence of rotations around the global coordinate axes. Just change the matrix R and the angles $\alpha$, $\beta$ and $\gamma$. The example below shows the rotation around the global basis, in the sequence x,y,z, with the angles $\alpha=\pi/3$ rad, $\beta=\pi/4$ rad and $\gamma=\pi/6$ rad.
###Code
import sys
sys.path.insert(1, r'./../functions') # add to pythonpath
%matplotlib notebook
from CCSbasis import CCSbasis
R = RZ*RY*RX
R = sym.lambdify((a, b, g), R, 'numpy')
alpha = np.pi/3
beta = np.pi/4
gamma = np.pi/6
R = R(alpha, beta, gamma)
e1 = np.array([[1,0,0]])
e2 = np.array([[0,1,0]])
e3 = np.array([[0,0,1]])
basis = np.vstack((e1,e2,e3))
basisRot = R@basis
CCSbasis(Oijk=np.array([0,0,0]), Oxyz=np.array([0,0,0]), ijk=basis.T, xyz=basisRot.T, vector=False)
###Output
_____no_output_____
###Markdown
Examining the matrix above and the correspondent previous figure, one can see they agree: the rotated $x$ axis (first column of the above matrix) has value -1 in the $\mathbf{Z}$ direction $[0,0,-1]$, the rotated $y$ axis (second column) is at the $\mathbf{Y}$ direction $[0,1,0]$, and the rotated $z$ axis (third column) is at the $\mathbf{X}$ direction $[1,0,0]$.We also can calculate the sequence of elemental rotations around the $x$, $y$, $z$ axes of the rotating $xyz$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of a second $xyz$ local coordinate system around each axis, $x$, $y$, $z$, of the rotating $xyz$ coordinate system.Likewise, this sequence of elemental rotations (each one of the local coordinate system with respect to the rotating local coordinate system) is mathematically represented by a multiplication between the rotation matrices (which are the inverse of the matrices for the rotations around $\mathbf{X,Y,Z}$ as we saw earlier):\begin{equation}\begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz} & = \mathbf{R_{z}} \mathbf{R_{y}} \mathbf{R_{x}} \\\\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix} \\\\& =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma+\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:\cos\gamma-\sin\alpha\:\sin\gamma \;\;\; \\-\cos\beta\:\sin\gamma \;&\;-\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma+\sin\alpha\:\cos\gamma \;\;\; \\\sin\beta \;&\; -\sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array}\end{equation}As before, the order of the matrices is from right to left. Once again, we can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rx = sym.Matrix([[1, 0, 0], [0, cos(a), sin(a)], [0, -sin(a), cos(a)]])
Ry = sym.Matrix([[cos(b), 0, -sin(b)], [0, 1, 0], [sin(b), 0, cos(b)]])
Rz = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz' in relation to xyz:
Rxyz = Rz*Ry*Rx
Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz}=') + sym.latex(Rxyz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
For instance, let's calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $x,y,z$:
###Code
R = sym.lambdify((a, b, g), Rxyz, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Once again, let's compare the above matrix and the corresponding previous figure to see if it makes sense. But remember that this matrix is the Global-to-local rotation matrix, $\mathbf{R}_{\mathbf{lG},\,xyz}$, where the coordinates of the local basis' versors are rows, not columns, in this matrix. With this detail in mind, one can see that the previous figure and matrix also agree: the rotated $x$ axis (first row of the above matrix) is at the $\mathbf{Z}$ direction $[0,0,1]$, the rotated $y$ axis (second row) is at the $\mathbf{-Y}$ direction $[0,-1,0]$, and the rotated $z$ axis (third row) is at the $\mathbf{X}$ direction $[1,0,0]$.In fact, this example didn't serve to distinguish versors as rows or columns because the $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrix above is symmetric! Let's look at the resultant matrix for the example above after only the first two rotations, $\mathbf{R}_{\mathbf{lG},\,xy}$, to understand this difference:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xy\,}(90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Comparing this matrix with the third plot in the figure, we see that the coordinates of versor $x$ in the Global coordinate system are $[0,1,0]$, i.e., local axis $x$ is aligned with Global axis $Y$, and this versor is indeed the first row, not the first column, of the matrix above. Confer the other two rows. What, then, is in the columns of this Global-to-local rotation matrix? The columns are the coordinates of the Global basis' versors in the local coordinate system! For example, the first column of the matrix above is the coordinates of $X$, which is aligned with $z$: $[0,0,1]$. Below you can test any sequence of rotations around the local coordinates. Just change the matrix R and the angles $\alpha$, $\beta$ and $\gamma$. The example below shows the rotation around the local basis, in the sequence x,y,z, with the angles $\alpha=\pi/3$ rad, $\beta=\pi/4$ rad and $\gamma=\pi/6$ rad.
###Code
sys.path.insert(1, r'./../functions') # add to pythonpath
%matplotlib notebook
from CCSbasis import CCSbasis
R = Rz*Ry*Rx
R = sym.lambdify((a, b, g), R, 'numpy')
alpha = np.pi/3
beta = np.pi/4
gamma = np.pi/6
R = R(alpha, beta, gamma)
e1 = np.array([[1,0,0]])
e2 = np.array([[0,1,0]])
e3 = np.array([[0,0,1]])
basis = np.vstack((e1,e2,e3))
basisRot = R@basis
CCSbasis(Oijk=np.array([0,0,0]), Oxyz=np.array([0,0,0]), ijk=basisRot.T, xyz=basis.T, vector=False)
###Output
_____no_output_____
###Markdown
Rotations in a coordinate system is equivalent to minus rotations in the other coordinate systemRemember that we saw for the elemental rotations that it's equivalent to instead of rotating the local coordinate system, $xyz$, by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system, $\mathbf{XYZ}$, by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system. The same property applies to a sequence of rotations: rotations of $xyz$ in relation to $\mathbf{XYZ}$ by $\alpha, \beta, \gamma$ result in the same matrix as rotations of $\mathbf{XYZ}$ in relation to $xyz$ by $-\alpha, -\beta, -\gamma$: $$ \begin{array}{l l}\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) & = \mathbf{R_{Gl,\,Z}}(\gamma)\, \mathbf{R_{Gl,\,Y}}(\beta)\, \mathbf{R_{Gl,\,X}}(\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,z\,}(-\gamma)\, \mathbf{R}_{\mathbf{lG},\,y\,}(-\beta)\, \mathbf{R}_{\mathbf{lG},\,x\,}(-\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)\end{array}$$Confer that by examining the $\mathbf{R_{Gl,\,XYZ}}$ and $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrices above.Let's verify this property with Sympy:
###Code
RXYZ = RZ*RY*RX
# Rotation matrix of xyz in relation to XYZ:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) =')))
display(Math(sym.latex(RXYZ, mat_str='matrix')))
# Elemental rotation matrices of XYZ in relation to xyz and negate all angles:
Rx_neg = sym.Matrix([[1, 0, 0], [0, cos(-a), -sin(-a)], [0, sin(-a), cos(-a)]]).T
Ry_neg = sym.Matrix([[cos(-b), 0, sin(-b)], [0, 1, 0], [-sin(-b), 0, cos(-b)]]).T
Rz_neg = sym.Matrix([[cos(-g), -sin(-g), 0], [sin(-g), cos(-g), 0], [0, 0, 1]]).T
# Rotation matrix of XYZ in relation to xyz:
Rxyz_neg = Rz_neg*Ry_neg*Rx_neg
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma) =')))
display(Math(sym.latex(Rxyz_neg, mat_str='matrix')))
# Check that the two matrices are equal:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) \;==\;' + \
r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)')))
RXYZ == Rxyz_neg
###Output
_____no_output_____
###Markdown
Rotations in a coordinate system is the transpose of inverse order of rotations in the other coordinate systemThere is another property of the rotation matrices for the different coordinate systems: the rotation matrix, for example from the Global to the local coordinate system for the $xyz$ sequence, is just the transpose of the rotation matrix for the inverse operation (from the local to the Global coordinate system) of the inverse sequence ($\mathbf{ZYX}$) and vice-versa:$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz}(\alpha,\beta,\gamma) & = \mathbf{R}_{\mathbf{lG},\,z\,} \mathbf{R}_{\mathbf{lG},\,y\,} \mathbf{R}_{\mathbf{lG},\,x} \\& = \mathbf{R_{Gl,\,Z\,}^{-1}} \mathbf{R_{Gl,\,Y\,}^{-1}} \mathbf{R_{Gl,\,X\,}^{-1}} \\& = \mathbf{R_{Gl,\,Z\,}^{T}} \mathbf{R_{Gl,\,Y\,}^{T}} \mathbf{R_{Gl,\,X\,}^{T}} \\& = (\mathbf{R_{Gl,\,X\,}} \mathbf{R_{Gl,\,Y\,}} \mathbf{R_{Gl,\,Z}})^\mathbf{T} \\& = \mathbf{R_{Gl,\,ZYX\,}^{T}}(\gamma,\beta,\alpha)\end{array}$$Where we used the properties that the inverse of the rotation matrix (which is orthonormal) is its transpose and that the transpose of a product of matrices is equal to the product of their transposes in reverse order.Let's verify this property with Sympy:
###Code
RZYX = RX*RY*RZ
Rxyz = Rz*Ry*Rx
display(Math(sym.latex(r'\mathbf{R_{Gl,\,ZYX\,}^T}=') + sym.latex(RZYX.T, mat_str='matrix')))
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(\alpha,\beta,\gamma) \,==\,' + \
r'\mathbf{R_{Gl,\,ZYX\,}^T}(\gamma,\beta,\alpha)')))
Rxyz == RZYX.T
###Output
_____no_output_____
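###Markdown
Since a rotation matrix is orthonormal, its inverse equals its transpose and its determinant is $+1$. A quick numeric check of these properties (a minimal sketch, not part of the original material; the names ending in `_num` are used only for this illustration):
###Code
import numpy as np
ax, ay, az = 1.1, -0.3, 0.5  # arbitrary angles in radians
RX_num = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
RY_num = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
RZ_num = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
# local-to-Global rotation matrix for the XYZ sequence of rotations around the fixed frame:
R_num = RZ_num @ RY_num @ RX_num
print('R.T @ R equals the identity?', np.allclose(R_num.T @ R_num, np.eye(3)))
print('Inverse equals transpose?', np.allclose(np.linalg.inv(R_num), R_num.T))
print('Determinant:', np.round(np.linalg.det(R_num), 6))
###Output
_____no_output_____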
###Markdown
Sequence of rotations of a VectorWe saw in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynbRotation-of-a-Vector) that the rotation matrix can also be used to rotate a vector (in fact, a point, image, solid, etc.) by a given angle around an axis of the coordinate system. Let's investigate that for the 3D case using the example earlier where a book was rotated in different orders and around the Global and local coordinate systems. Before any rotation, the point shown in that figure as a round black dot on the spine of the book has coordinates $\mathbf{P}=[0, 1, 2]$ (the book has thickness 0, width 1, and height 2). After the first sequence of rotations shown in the figure (rotated around $X$ and $Y$ by $90^0$ each time), $\mathbf{P}$ has coordinates $\mathbf{P}=[1, -2, 0]$ in the global coordinate system. Let's verify that:
###Code
P = np.array([[0, 1, 2]]).T
RXY = RY*RX
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. -2. 0.]]
###Markdown
As expected. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, but still around the Global coordinate system.Although we are performing vector rotation, where we don't need the concept of transformation between coordinate systems, in the example above we used the local-to-Global rotation matrix, $\mathbf{R_{Gl}}$. As we saw in the notebook for the 2D transformation, when we use this matrix, it performs a counter-clockwise (positive) rotation. If we want to rotate the vector in the clockwise (negative) direction, we can use the very same rotation matrix entering a negative angle or we can use the inverse rotation matrix, the Global-to-local rotation matrix, $\mathbf{R_{lG}}$ and a positive (negative of negative) angle, because $\mathbf{R_{Gl}}(\alpha) = \mathbf{R_{lG}}(-\alpha)$, but bear in mind that even in this latter case we are rotating around the Global coordinate system! Consider now that we want to deduce algebraically the position of the point $\mathbf{P}$ after the rotations around the local coordinate system as shown in the second set of examples in the figure with the sequence of book rotations. The point has the same initial position, $\mathbf{P}=[0, 1, 2]$, and after the rotations around $x$ and $y$ by $90^0$ each time, what is the position of this point? It's implicit in this question that the new desired position is in the Global coordinate system because the local coordinate system rotates with the book and the point never changes its position in the local coordinate system. So, by inspection of the figure, the new position of the point is $\mathbf{P1}=[2, 0, 1]$. Let's naively try to deduce this position by repeating the steps as before:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. 2. -0.]]
###Markdown
The wrong answer. The problem is that we defined the rotation of a vector using the local-to-Global rotation matrix. One correct solution for this problem is to continue using the multiplication of the Global-to-local rotation matrices, $\mathbf{R}_{xy} = \mathbf{R}_y\,\mathbf{R}_x$, transpose $\mathbf{R}_{xy}$ to get the local-to-Global rotation matrix, $\mathbf{R_{XY}}=\mathbf{R^T}_{xy}$, and then rotate the vector using this matrix:
###Code
Rxy = Ry*Rx
RXY = Rxy.T
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. Another solution is to understand that when using the Global-to-local rotation matrix, counter-clockwise rotations (as performed with the book in the figure) are negative, not positive, and that when dealing with rotations using the Global-to-local rotation matrix the order of matrix multiplication is inverted, for example, it should be $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$ (an added underscore to remind us this is not the convention adopted here).
###Code
R_xy = Rx*Ry
R = sym.lambdify((a, b), R_xy, 'numpy')
R = R(-np.pi/2, -np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, around the local coordinate system.In fact, you will find elsewhere texts about rotations in 3D adopting this latter convention as the standard, i.e., they introduce the Global-to-local rotation matrix and describe sequence of rotations algebraically as matrix multiplication in the direct order, $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$, the inverse we have done in this text. It's all a matter of convention, just that. The 12 different sequences of Euler anglesThe Euler angles are defined in terms of rotations around a rotating local coordinate system. As we saw for the sequence of rotations around $x, y, z$, the axes of the local rotated coordinate system are not fixed in space because after the first elemental rotation, the other two axes rotate. Other sequences of rotations could be produced without combining axes of the two different coordinate systems (Global and local) for the definition of the rotation axes. There is a total of 12 different sequences of three elemental rotations that are valid and may be used for describing the rotation of a coordinate system with respect to another coordinate system:$$ xyz \quad xzy \quad yzx \quad yxz \quad zxy \quad zyx $$$$ xyx \quad xzx \quad yzy \quad yxy \quad zxz \quad zyz $$The first six sequences (first row) are all around different axes, they are usually referred as Cardan or Tait–Bryan angles. The other six sequences (second row) have the first and third rotations around the same axis, but keep in mind that the axis for the third rotation is not at the same place anymore because it changed its orientation after the second rotation. The sequences with repeated axes are known as proper or classic Euler angles.Which order to use it is a matter of convention, but because the order affects the results, it's fundamental to follow a convention and report it. In Engineering Mechanics (including Biomechanics), the $xyz$ order is more common; in Physics the $zxz$ order is more common (but the letters chosen to refer to the axes are arbitrary, what matters is the directions they represent). In Biomechanics, the order for the Cardan angles is most often based on the angle of most interest or of most reliable measurement. Accordingly, the axis of flexion/extension is typically selected as the first axis, the axis for abduction/adduction is the second, and the axis for internal/external rotation is the last one. We will see about this order later. The $zyx$ order is commonly used to describe the orientation of a ship or aircraft and the rotations are known as the nautical angles: yaw, pitch and roll, respectively (see next figure). Figure. The principal axes of an aircraft and the names for the rotations around these axes (image from Wikipedia). If instead of rotations around the rotating local coordinate system we perform rotations around the fixed Global coordinate system, we will have other 12 different sequences of three elemental rotations, these are called simply rotation angles. 
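Because the order of the elemental rotations affects the result, it is fundamental to adopt and report a convention. As a quick numeric illustration (a minimal sketch, not part of the original material; the helper `R_elem` is used only here), the same three angles applied in the $x,y,z$ order and in the $z,y,x$ order produce different rotation matrices:
###Code
import numpy as np
def R_elem(axis, ang):
    # elemental rotation matrix around one axis of the fixed frame (local-to-Global convention used above)
    c, s = np.cos(ang), np.sin(ang)
    if axis == 'x':
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    elif axis == 'y':
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    else:
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
ang_x, ang_y, ang_z = np.deg2rad([20, 30, 40])
# sequence x, then y, then z (matrix product from right to left):
R_seq_xyz = R_elem('z', ang_z) @ R_elem('y', ang_y) @ R_elem('x', ang_x)
# sequence z, then y, then x, with the same angles:
R_seq_zyx = R_elem('x', ang_x) @ R_elem('y', ang_y) @ R_elem('z', ang_z)
print('Same rotation matrix?', np.allclose(R_seq_xyz, R_seq_zyx))
###Output
_____no_output_____
###Markdown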
So, in total there are 24 possible different sequences of three elemental rotations, but the 24 orders are not independent; with the 12 different sequences of Euler angles at the local coordinate system we can obtain the other 12 sequences at the Global coordinate system.The Python function `euler_rotmat.py` (code at the end of this text) determines the rotation matrix in algebraic form for any of the 24 different sequences (and sequences with only one or two axes can be inputed). This function also determines the rotation matrix in numeric form if a list of up to three angles are inputed.For instance, the rotation matrix in algebraic form for the $zxz$ order of Euler angles at the local coordinate system and the correspondent rotation matrix in numeric form after three elemental rotations by $90^o$ each are:
###Code
from euler_rotmat import euler_rotmat
Ra, Rn = euler_rotmat(order='zxz', frame='local', angles=[90, 90, 90])
###Output
_____no_output_____
###Markdown
Line of nodesThe second axis of rotation in the rotating coordinate system is also referred as the nodal axis or line of nodes; this axis coincides with the intersection of two perpendicular planes, one from each Global (fixed) and local (rotating) coordinate systems. The figure below shows an example of rotations and the nodal axis for the $xyz$ sequence of the Cardan angles. Figure. First row: example of rotations for the $xyz$ sequence of the Cardan angles. The Global (fixed) $XYZ$ coordinate system is shown in green, the local (rotating) $xyz$ coordinate system is shown in blue. The nodal axis (N, shown in red) is defined by the intersection of the $YZ$ and $xy$ planes and all rotations can be described in relation to this nodal axis or to a perpendicular axis to it. Second row: starting from no rotation, the local coordinate system is rotated by $\alpha$ around the $x$ axis, then by $\beta$ around the rotated $y$ axis, and finally by $\gamma$ around the twice rotated $z$ axis. Note that the line of nodes coincides with the $y$ axis for the second rotation. Determination of the Euler anglesOnce a convention is adopted, the corresponding three Euler angles of rotation can be found. For example, for the $\mathbf{R}_{xyz}$ rotation matrix:
###Code
R = euler_rotmat(order='xyz', frame='local')
###Output
_____no_output_____
###Markdown
The corresponding Cardan angles for the `xyz` sequence can be given by:\begin{equation}\begin{array}{}\alpha = \arctan\left(\dfrac{\sin(\alpha)}{\cos(\alpha)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{21}}{\;\;\;\mathbf{R}_{22}}\right) \\\\\beta = \arctan\left(\dfrac{\sin(\beta)}{\cos(\beta)}\right) = \arctan\left(\dfrac{\mathbf{R}_{20}}{\sqrt{\mathbf{R}_{00}^2+\mathbf{R}_{10}^2}}\right) \\ \\\gamma = \arctan\left(\dfrac{\sin(\gamma)}{\cos(\gamma)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{10}}{\;\;\;\mathbf{R}_{00}}\right)\end{array}\end{equation}Note that we prefer to use the mathematical function `arctan2` rather than simply `arcsin`, `arccos` or `arctan` because the latter cannot for example distinguish $45^o$ from $135^o$ and also for better numerical accuracy. See the text [Angular kinematics in a plane (2D)](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/KinematicsAngular2D.ipynb) for more on these issues.And here is a Python function to compute the Euler angles of rotations from the Global to the local coordinate system for the $xyz$ Cardan sequence:
###Code
def euler_angles_from_rot_xyz(rot_matrix, unit='deg'):
""" Compute Euler angles from rotation matrix in the xyz sequence."""
import numpy as np
R = np.array(rot_matrix, copy=False).astype(np.float64)[:3, :3]
angles = np.zeros(3)
angles[0] = np.arctan2(-R[2, 1], R[2, 2])
angles[1] = np.arctan2( R[2, 0], np.sqrt(R[0, 0]**2 + R[1, 0]**2))
angles[2] = np.arctan2(-R[1, 0], R[0, 0])
if unit[:3].lower() == 'deg': # convert from rad to degree
angles = np.rad2deg(angles)
return angles
###Output
_____no_output_____
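###Markdown
The function above relies on `arctan2`. As a quick illustration of the ambiguity mentioned earlier (a minimal sketch, not part of the original material), `arcsin` alone cannot distinguish $45^o$ from $135^o$, whereas `arctan2` can, because it uses the signs of both the sine and the cosine:
###Code
import numpy as np
ang = np.deg2rad(135)
# arcsin alone cannot distinguish 135 degrees from 45 degrees (both angles have the same sine):
print('arcsin :', np.rad2deg(np.arcsin(np.sin(ang))))
# arctan2 uses the signs of both the sine and the cosine and recovers the correct angle:
print('arctan2:', np.rad2deg(np.arctan2(np.sin(ang), np.cos(ang))))
###Output
_____no_output_____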
###Markdown
For instance, consider sequential rotations of 45$^o$ around $x,y,z$. The resultant rotation matrix is:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[45, 45, 45], showA=False)
###Output
_____no_output_____
###Markdown
Let's check that calculating back the Cardan angles from this rotation matrix using the `euler_angles_from_rot_xyz()` function:
###Code
euler_angles_from_rot_xyz(Rn, unit='deg')
###Output
_____no_output_____
###Markdown
We could implement a function to calculate the Euler angles for any of the 12 sequences (in fact, plus another 12 sequences if we consider all the rotations from and to the two coordinate systems), but this is tedious. There is a smarter solution using the concept of [quaternion](http://en.wikipedia.org/wiki/Quaternion), but we will not see that now. Let's see a problem with using Euler angles known as gimbal lock. Gimbal lock[Gimbal lock](http://en.wikipedia.org/wiki/Gimbal_lock) is the loss of one degree of freedom in a three-dimensional coordinate system that occurs when an axis of rotation is placed parallel with another previous axis of rotation and two of the three rotations will be around the same direction given a certain convention of the Euler angles. This "locks" the system into rotations in a degenerate two-dimensional space. The system is not really locked in the sense it can't be moved or reach the other degree of freedom, but it will need an extra rotation for that. For instance, let's look at the $zxz$ sequence of rotations by the angles $\alpha, \beta, \gamma$:\begin{equation}\begin{array}{l l}\mathbf{R}_{zxz} & = \mathbf{R_{z}} \mathbf{R_{x}} \mathbf{R_{z}} \\ \\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\beta & \sin\beta \\0 & -\sin\beta & \cos\beta\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array}\end{equation}Which results in:
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rz = sym.Matrix([[cos(a), sin(a), 0], [-sin(a), cos(a), 0], [0, 0, 1]])
Rx = sym.Matrix([[1, 0, 0], [0, cos(b), sin(b)], [0, -sin(b), cos(b)]])
Rz2 = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix for the zxz sequence:
Rzxz = Rz2*Rx*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}=') + sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Let's examine what happens with this rotation matrix when the rotation around the second axis ($x$) by $\beta$ is zero:\begin{equation}\begin{array}{l l}\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma) = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array}\end{equation}The second matrix is the identity matrix and has no effect on the product of the matrices, which will be:
###Code
Rzxz = Rz2*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Which simplifies to:
###Code
Rzxz = sym.simplify(Rzxz)
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
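###Markdown
A quick numeric check of this degeneracy (a minimal sketch, not part of the original material; the helper names below are used only here): with $\beta=0$, two different pairs of $\alpha$ and $\gamma$ with the same sum produce exactly the same rotation matrix for the $zxz$ sequence:
###Code
import numpy as np
def Rz_local(ang):
    # elemental rotation around z in the local (rotating) frame convention used above
    c, s = np.cos(ang), np.sin(ang)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])
def Rx_local(ang):
    # elemental rotation around x in the local (rotating) frame convention used above
    c, s = np.cos(ang), np.sin(ang)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])
def Rzxz_num(alpha, beta, gamma):
    # zxz sequence, matrix product from right to left as in the text
    return Rz_local(gamma) @ Rx_local(beta) @ Rz_local(alpha)
R1 = Rzxz_num(np.deg2rad(30), 0, np.deg2rad(40))  # alpha + gamma = 70 degrees
R2 = Rzxz_num(np.deg2rad(50), 0, np.deg2rad(20))  # alpha + gamma = 70 degrees
print('Same rotation matrix?', np.allclose(R1, R2))
###Output
_____no_output_____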
###Markdown
Despite different values of $\alpha$ and $\gamma$ the result is a single rotation around the $z$ axis given by the sum $\alpha+\gamma$. In this case, of the three degrees of freedom one was lost (the other degree of freedom was set by $\beta=0$). For movement analysis, this means for example that one angle will be undetermined because everything we know is the sum of the two angles obtained from the rotation matrix. We can set the unknown angle to zero but this is arbitrary.In fact, we already dealt with another example of gimbal lock when we looked at the $xyz$ sequence with rotations by $90^o$. See the figure representing these rotations again and perceive that the first and third rotations were around the same axis because the second rotation was by $90^o$. Let's do the matrix multiplication replacing only the second angle by $90^o$ (and let's use the `euler_rotmat.py`:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[None, 90., None], showA=False)
###Output
_____no_output_____
###Markdown
Once again, one degree of freedom was lost and we will not be able to uniquely determine the three angles for the given rotation matrix and sequence.Possible solutions to avoid the gimbal lock are: choose a different sequence; do not rotate the system by the angle that puts the system in gimbal lock (in the examples above, avoid $\beta=90^o$); or add an extra fourth parameter in the description of the rotation angles. But if we have a physical system where we measure or specify exactly three Euler angles in a fixed sequence to describe or control it, and we can't avoid the system to assume certain angles, then we might have to say "Houston, we have a problem". A famous situation where such a problem occurred was during the Apollo 13 mission. This is an actual conversation between crew and mission control during the Apollo 13 mission (Corke, 2011):>`Mission clock: 02 08 12 47` **Flight**: *Go, Guidance.* **Guido**: *He’s getting close to gimbal lock there.* **Flight**: *Roger. CapCom, recommend he bring up C3, C4, B3, B4, C1 and C2 thrusters, and advise he’s getting close to gimbal lock.* **CapCom**: *Roger.* *Of note, it was not a gimbal lock that caused the accident with the the Apollo 13 mission, the problem was an oxygen tank explosion.* Determination of the rotation matrixA typical way to determine the rotation matrix for a rigid body in biomechanics is to use motion analysis to measure the position of at least three non-collinear markers placed on the rigid body, and then calculate a basis with these positions, analogue to what we have described in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb). BasisIf we have the position of three markers: **m1**, **m2**, **m3**, a basis (formed by three orthogonal versors) can be found as: - First axis, **v1**, the vector **m2-m1**; - Second axis, **v2**, the cross product between the vectors **v1** and **m3-m1**; - Third axis, **v3**, the cross product between the vectors **v1** and **v2**. Then, each of these vectors are normalized resulting in three orthogonal versors. For example, given the positions m1 = [1,0,0], m2 = [0,1,0], m3 = [0,0,1], a basis can be found:
###Code
m1 = np.array([1, 0, 0])
m2 = np.array([0, 1, 0])
m3 = np.array([0, 0, 1])
v1 = m2 - m1
v2 = np.cross(v1, m3 - m1)
v3 = np.cross(v1, v2)
print('Versors:')
v1 = v1/np.linalg.norm(v1)
print('v1 =', v1)
v2 = v2/np.linalg.norm(v2)
print('v2 =', v2)
v3 = v3/np.linalg.norm(v3)
print('v3 =', v3)
print('\nNorm of each versor:\n',
np.linalg.norm(np.cross(v1, v2)),
np.linalg.norm(np.cross(v1, v3)),
np.linalg.norm(np.cross(v2, v3)))
###Output
Versors:
v1 = [-0.7071 0.7071 0. ]
v2 = [0.5774 0.5774 0.5774]
v3 = [ 0.4082 0.4082 -0.8165]
Norm of each versor:
1.0 1.0 1.0000000000000002
###Markdown
Remember from the text [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb) that the versors of this basis are the columns of the $\mathbf{R_{Gl}}$ and the rows of the $\mathbf{R_{lG}}$ rotation matrices, for instance:
###Code
RlG = np.array([v1, v2, v3])
print('Rotation matrix from Global to local coordinate system:\n', RlG)
###Output
Rotation matrix from Global to local coordinate system:
[[-0.7071 0.7071 0. ]
[ 0.5774 0.5774 0.5774]
[ 0.4082 0.4082 -0.8165]]
###Markdown
And the corresponding angles of rotation using the $xyz$ sequence are:
###Code
euler_angles_from_rot_xyz(RlG)
###Output
_____no_output_____
###Markdown
These angles don't mean anything now because they are angles of the axes of the arbitrary basis we computed. In biomechanics, if we want an anatomical interpretation of the coordinate system orientation, we define the versors of the basis oriented with anatomical axes (e.g., for the shoulder, one versor would be aligned with the long axis of the upper arm) as seen [in this notebook about reference frames](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb). Determination of the rotation matrix between two local coordinate systemsSimilarly to the [bidimensional case](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation2D.ipynb), to compute the rotation matrix between two local coordinate systems we can use the rotation matrices of both coordinate systems:\begin{equation} R_{l_1l_2} = R_{Gl_1}^TR_{Gl_2}\end{equation}After this, the Euler angles between both coordinate systems can be found using the `arctan2` function as shown previously. Translation and RotationConsider the case where the local coordinate system is translated and rotated in relation to the Global coordinate system as illustrated in the next figure. Figure. A point in three-dimensional space represented in two coordinate systems, with one system translated and rotated. The position of point $\mathbf{P}$ originally described in the local coordinate system, but now described in the Global coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} $$This means that we first *disrotate* the local coordinate system and then correct for the translation between the two coordinate systems. Note that we can't invert this order: the point position is expressed in the local coordinate system and we can't add this vector to another vector expressed in the Global coordinate system, first we have to convert the vectors to the same coordinate system.If now we want to find the position of a point at the local coordinate system given its position in the Global coordinate system, the rotation matrix and the translation vector, we have to invert the expression above:$$ \begin{array}{l l}\mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{R_{Gl}^{-1}}\cdot\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l}\right) \implies \\\\\mathbf{R_{Gl}^{-1}}\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\mathbf{L_G} + \mathbf{R_{Gl}^{-1}}\mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{P_l} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{P_G}-\mathbf{L_G}\right) = \mathbf{R_{Gl}^T}\left(\mathbf{P_G}-\mathbf{L_G}\right) \;\;\;\;\; \text{or} \;\;\;\;\; \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) \end{array} $$The expression above indicates that to perform the inverse operation, to go from the Global to the local coordinate system, we first translate and then rotate the coordinate system. Transformation matrixIt is possible to combine the translation and rotation operations in only one matrix, called the transformation matrix:\begin{equation}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} =\begin{bmatrix}. & . & . & \mathbf{L_{X}} \\. & \mathbf{R_{Gl}} & . & \mathbf{L_{Y}} \\. & . & . 
& \mathbf{L_{Z}} \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \\1\end{bmatrix}\end{equation}Or simply:$$ \mathbf{P_G} = \mathbf{T_{Gl}}\mathbf{P_l} $$Remember that in general the transformation matrix is not orthonormal, i.e., its inverse is not equal to its transpose.The inverse operation, to express the position at the local coordinate system in terms of the Global reference system, is:$$ \mathbf{P_l} = \mathbf{T_{Gl}^{-1}}\mathbf{P_G} $$And in matrix form:\begin{equation}\begin{bmatrix}\mathbf{P_x} \\\mathbf{P_y} \\\mathbf{P_z} \\1\end{bmatrix} =\begin{bmatrix}\cdot & \cdot & \cdot & \cdot \\\cdot & \mathbf{R^{-1}_{Gl}} & \cdot & -\mathbf{R^{-1}_{Gl}}\:\mathbf{L_G} \\\cdot & \cdot & \cdot & \cdot \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix}\end{equation} Example with actual motion analysis data *The data for this example is taken from page 183 of David Winter's book.* Consider the following marker positions placed on a leg (described in the laboratory coordinate system with coordinates $x, y, z$ in cm, the $x$ axis points forward and the $y$ axes points upward): lateral malleolus (**lm** = [2.92, 10.10, 18.85]), medial malleolus (**mm** = [2.71, 10.22, 26.52]), fibular head (**fh** = [5.05, 41.90, 15.41]), and medial condyle (**mc** = [8.29, 41.88, 26.52]). Define the ankle joint center as the centroid between the **lm** and **mm** markers and the knee joint center as the centroid between the **fh** and **mc** markers. An anatomical coordinate system for the leg can be defined as: the quasi-vertical axis ($y$) passes through the ankle and knee joint centers; a temporary medio-lateral axis ($z$) passes through the two markers on the malleolus, an anterior-posterior as the cross product between the two former calculated orthogonal axes, and the origin at the ankle joint center. a) Calculate the anatomical coordinate system for the leg as described above. b) Calculate the rotation matrix and the translation vector for the transformation from the anatomical to the laboratory coordinate system. c) Calculate the position of each marker and of each joint center at the anatomical coordinate system. d) Calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent).
###Code
# calculation of the joint centers
mm = np.array([2.71, 10.22, 26.52])
lm = np.array([2.92, 10.10, 18.85])
fh = np.array([5.05, 41.90, 15.41])
mc = np.array([8.29, 41.88, 26.52])
ajc = (mm + lm)/2
kjc = (fh + mc)/2
print('Position of the ankle joint center:', ajc)
print('Position of the knee joint center:', kjc)
# calculation of the anatomical coordinate system axes (basis)
y = kjc - ajc
x = np.cross(y, mm - lm)
z = np.cross(x, y)
print('Versors:')
x = x/np.linalg.norm(x)
y = y/np.linalg.norm(y)
z = z/np.linalg.norm(z)
print('x =', x)
print('y =', y)
print('z =', z)
Oleg = ajc
print('\nOrigin =', Oleg)
# Rotation matrices
RGl = np.array([x, y , z]).T
print('Rotation matrix from the anatomical to the laboratory coordinate system:\n', RGl)
RlG = RGl.T
print('\nRotation matrix from the laboratory to the anatomical coordinate system:\n', RlG)
# Translational vector
OG = np.array([0, 0, 0]) # Laboratory coordinate system origin
LG = Oleg - OG
print('Translational vector from the anatomical to the laboratory coordinate system:\n', LG)
###Output
Translational vector from the anatomical to the laboratory coordinate system:
[ 2.815 10.16 22.685]
###Markdown
To get the coordinates from the laboratory (global) coordinate system to the anatomical (local) coordinate system:$$ \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) $$
###Code
# position of each marker and of each joint center at the anatomical coordinate system
mml = np.dot(RlG, (mm - LG)) # equivalent to the algebraic expression RlG*(mm - LG).T
lml = np.dot(RlG, (lm - LG))
fhl = np.dot(RlG, (fh - LG))
mcl = np.dot(RlG, (mc - LG))
ajcl = np.dot(RlG, (ajc - LG))
kjcl = np.dot(RlG, (kjc - LG))
print('Coordinates of mm in the anatomical system:\n', mml)
print('Coordinates of lm in the anatomical system:\n', lml)
print('Coordinates of fh in the anatomical system:\n', fhl)
print('Coordinates of mc in the anatomical system:\n', mcl)
print('Coordinates of kjc in the anatomical system:\n', kjcl)
print('Coordinates of ajc in the anatomical system (origin):\n', ajcl)
###Output
_____no_output_____
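###Markdown
We can also pack the rotation matrix and the translation vector of this example into a single $4\times4$ transformation matrix, as described above. This is a minimal sketch (it assumes the variables `RGl`, `RlG`, `LG`, and `mm` from the previous cells are defined; the names `TGl` and `mm_h` are introduced only for this illustration):
###Code
import numpy as np
# 4x4 transformation matrix from the anatomical (local) to the laboratory (Global) coordinate system:
TGl = np.eye(4)
TGl[:3, :3] = RGl  # rotation part
TGl[:3, 3] = LG    # translation part
print('Transformation matrix TGl:\n', np.round(TGl, 4))
# homogeneous coordinates of the medial malleolus marker in the anatomical coordinate system:
mm_h = np.hstack((np.dot(RlG, mm - LG), 1))
# applying TGl should recover the marker position in the laboratory coordinate system:
print('\nTGl @ mm (local):', np.round(np.dot(TGl, mm_h)[:3], 4))
print('original mm (Global):', mm)
###Output
_____no_output_____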
###Markdown
Problems1. For the example about how the order of rotations of a rigid body affects the orientation shown in a figure above, deduce the rotation matrices for each of the 4 cases shown in the figure. For the first two cases, deduce the rotation matrices from the global to the local coordinate system and for the other two examples, deduce the rotation matrices from the local to the global coordinate system. 2. Consider the data from problem 7 in the notebook [Frame of reference](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb) where the following anatomical landmark positions are given (units in meters): RASIS=[0.5,0.8,0.4], LASIS=[0.55,0.78,0.1], RPSIS=[0.3,0.85,0.2], and LPSIS=[0.29,0.78,0.3]. Deduce the rotation matrices for the global to anatomical coordinate system and for the anatomical to global coordinate system. 3. For the data from the last example, calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent). References- Corke P (2011) [Robotics, Vision and Control: Fundamental Algorithms in MATLAB](http://www.petercorke.com/RVC/). Springer-Verlag Berlin. - Robertson G, Caldwell G, Hamill J, Kamen G (2013) [Research Methods in Biomechanics](http://books.google.com.br/books?id=gRn8AAAAQBAJ). 2nd Edition. Human Kinetics. - [Maths - Euler Angles](http://www.euclideanspace.com/maths/geometry/rotations/euler/). - Murray RM, Li Z, Sastry SS (1994) [A Mathematical Introduction to Robotic Manipulation](http://www.cds.caltech.edu/~murray/mlswiki/index.php/Main_Page). Boca Raton, CRC Press. - Ruina A, Rudra P (2013) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press. - Siciliano B, Sciavicco L, Villani L, Oriolo G (2009) [Robotics - Modelling, Planning and Control](http://books.google.com.br/books/about/Robotics.html?hl=pt-BR&id=jPCAFmE-logC). Springer-Verlag London.- Winter DA (2009) [Biomechanics and motor control of human movement](http://books.google.com.br/books?id=_bFHL08IWfwC). 4 ed. Hoboken, USA: Wiley. - Zatsiorsky VM (1997) [Kinematics of Human Motion](http://books.google.com.br/books/about/Kinematics_of_Human_Motion.html?id=Pql_xXdbrMcC&redir_esc=y). Champaign, Human Kinetics. Function `euler_rotmatrix.py`
###Code
# %load ./../functions/euler_rotmat.py
#!/usr/bin/env python
"""Euler rotation matrix given sequence, frame, and angles."""
from __future__ import division, print_function
__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'
__version__ = 'euler_rotmat.py v.1 2014/03/10'
def euler_rotmat(order='xyz', frame='local', angles=None, unit='deg',
str_symbols=None, showA=True, showN=True):
"""Euler rotation matrix given sequence, frame, and angles.
This function calculates the algebraic rotation matrix (3x3) for a given
sequence ('order' argument) of up to three elemental rotations of a given
coordinate system ('frame' argument) around another coordinate system, the
Euler (or Eulerian) angles [1]_.
This function also calculates the numerical values of the rotation matrix
when numerical values for the angles are inputed for each rotation axis.
Use None as value if the rotation angle for the particular axis is unknown.
The symbols for the angles are: alpha, beta, and gamma for the first,
second, and third rotations, respectively.
    The matrix product is calculated from right to left and in the specified
sequence for the Euler angles. The first letter will be the first rotation.
The function will print and return the algebraic rotation matrix and the
numerical rotation matrix if angles were inputed.
Parameters
----------
order : string, optional (default = 'xyz')
Sequence for the Euler angles, any combination of the letters
x, y, and z with 1 to 3 letters is accepted to denote the
elemental rotations. The first letter will be the first rotation.
frame : string, optional (default = 'local')
Coordinate system for which the rotations are calculated.
Valid values are 'local' or 'global'.
angles : list, array, or bool, optional (default = None)
Numeric values of the rotation angles ordered as the 'order'
    parameter. Enter None for a rotation with unknown value.
unit : str, optional (default = 'deg')
Unit of the input angles.
str_symbols : list of strings, optional (default = None)
New symbols for the angles, for instance, ['theta', 'phi', 'psi']
showA : bool, optional (default = True)
True (1) displays the Algebraic rotation matrix in rich format.
False (0) to not display.
showN : bool, optional (default = True)
True (1) displays the Numeric rotation matrix in rich format.
False (0) to not display.
Returns
-------
R : Matrix Sympy object
Rotation matrix (3x3) in algebraic format.
Rn : Numpy array or Matrix Sympy object (only if angles are inputed)
Numeric rotation matrix (if values for all angles were inputed) or
an algebraic matrix with some of the algebraic angles substituted
by the corresponding inputed numeric values.
Notes
-----
This code uses Sympy, the Python library for symbolic mathematics, to
calculate the algebraic rotation matrix and shows this matrix in latex form
possibly for using with the IPython Notebook, see [1]_.
References
----------
.. [1] http://nbviewer.ipython.org/github/duartexyz/BMC/blob/master/Transformation3D.ipynb
Examples
--------
>>> # import function
>>> from euler_rotmat import euler_rotmat
>>> # Default options: xyz sequence, local frame and show matrix
>>> R = euler_rotmat()
>>> # XYZ sequence (around global (fixed) coordinate system)
>>> R = euler_rotmat(frame='global')
>>> # Enter numeric values for all angles and show both matrices
>>> R, Rn = euler_rotmat(angles=[90, 90, 90])
>>> # show what is returned
>>> euler_rotmat(angles=[90, 90, 90])
>>> # show only the rotation matrix for the elemental rotation at x axis
>>> R = euler_rotmat(order='x')
>>> # zxz sequence and numeric value for only one angle
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, 0, None])
>>> # input values in radians:
>>> import numpy as np
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, np.pi, None], unit='rad')
>>> # shows only the numeric matrix
>>> R, Rn = euler_rotmat(order='zxz', angles=[90, 0, None], showA=False)
>>> # Change the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['theta', 'phi', 'psi'])
>>> # Negate the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['-theta', '-phi', '-psi'])
>>> # all algebraic matrices for all possible sequences for the local frame
>>> s=['xyz','xzy','yzx','yxz','zxy','zyx','xyx','xzx','yzy','yxy','zxz','zyz']
>>> for seq in s: R = euler_rotmat(order=seq)
>>> # all algebraic matrices for all possible sequences for the global frame
>>> for seq in s: R = euler_rotmat(order=seq, frame='global')
"""
import numpy as np
import sympy as sym
try:
from IPython.core.display import Math, display
ipython = True
except:
ipython = False
angles = np.asarray(np.atleast_1d(angles), dtype=np.float64)
if ~np.isnan(angles).all():
if len(order) != angles.size:
raise ValueError("Parameters 'order' and 'angles' (when " +
"different from None) must have the same size.")
x, y, z = sym.symbols('x, y, z')
sig = [1, 1, 1]
if str_symbols is None:
a, b, g = sym.symbols('alpha, beta, gamma')
else:
s = str_symbols
if s[0][0] == '-': s[0] = s[0][1:]; sig[0] = -1
if s[1][0] == '-': s[1] = s[1][1:]; sig[1] = -1
if s[2][0] == '-': s[2] = s[2][1:]; sig[2] = -1
a, b, g = sym.symbols(s)
var = {'x': x, 'y': y, 'z': z, 0: a, 1: b, 2: g}
# Elemental rotation matrices for xyz (local)
cos, sin = sym.cos, sym.sin
Rx = sym.Matrix([[1, 0, 0], [0, cos(x), sin(x)], [0, -sin(x), cos(x)]])
Ry = sym.Matrix([[cos(y), 0, -sin(y)], [0, 1, 0], [sin(y), 0, cos(y)]])
Rz = sym.Matrix([[cos(z), sin(z), 0], [-sin(z), cos(z), 0], [0, 0, 1]])
if frame.lower() == 'global':
Rs = {'x': Rx.T, 'y': Ry.T, 'z': Rz.T}
order = order.upper()
else:
Rs = {'x': Rx, 'y': Ry, 'z': Rz}
order = order.lower()
R = Rn = sym.Matrix(sym.Identity(3))
str1 = r'\mathbf{R}_{%s}( ' %frame # last space needed for order=''
#str2 = [r'\%s'%var[0], r'\%s'%var[1], r'\%s'%var[2]]
str2 = [1, 1, 1]
for i in range(len(order)):
Ri = Rs[order[i].lower()].subs(var[order[i].lower()], sig[i] * var[i])
R = Ri * R
if sig[i] > 0:
str2[i] = '%s:%s' %(order[i], sym.latex(var[i]))
else:
str2[i] = '%s:-%s' %(order[i], sym.latex(var[i]))
str1 = str1 + str2[i] + ','
if ~np.isnan(angles).all() and ~np.isnan(angles[i]):
if unit[:3].lower() == 'deg':
angles[i] = np.deg2rad(angles[i])
Rn = Ri.subs(var[i], angles[i]) * Rn
#Rn = sym.lambdify(var[i], Ri, 'numpy')(angles[i]) * Rn
str2[i] = str2[i] + '=%.0f^o' %np.around(np.rad2deg(angles[i]), 0)
else:
Rn = Ri * Rn
Rn = sym.simplify(Rn) # for trigonometric relations
try:
# nsimplify only works if there are symbols
Rn2 = sym.latex(sym.nsimplify(Rn, tolerance=1e-8).n(chop=True, prec=4))
except:
Rn2 = sym.latex(Rn.n(chop=True, prec=4))
# there are no symbols, pass it as Numpy array
Rn = np.asarray(Rn)
if showA and ipython:
display(Math(str1[:-1] + ') =' + sym.latex(R, mat_str='matrix')))
if showN and ~np.isnan(angles).all() and ipython:
str2 = ',\;'.join(str2[:angles.size])
display(Math(r'\mathbf{R}_{%s}(%s)=%s' %(frame, str2, Rn2)))
if np.isnan(angles).all():
return R
else:
return R, Rn
###Output
_____no_output_____
###Markdown
Rigid-body transformations in three-dimensionsMarcos Duarte The kinematics of a rigid body is completely described by its pose, i.e., its position and orientation in space (and the corresponding changes, translation and rotation). In a three-dimensional space, at least three coordinates and three angles are necessary to describe the pose of the rigid body, totalizing six degrees of freedom for a rigid body.In motion analysis, to describe a translation and rotation of a rigid body with respect to a coordinate system, typically we attach another coordinate system to the rigid body and determine a transformation between these two coordinate systems.A transformation is any function mapping a set to another set. For the description of the kinematics of rigid bodies, we are interested only in what is called rigid or Euclidean transformations (denoted as SE(3) for the three-dimensional space) because they preserve the distance between every pair of points of the body (which is considered rigid by definition). Translations and rotations are examples of rigid transformations (a reflection is also an example of rigid transformation but this changes the right-hand axis convention to a left hand, which usually is not of interest). In turn, rigid transformations are examples of [affine transformations](https://en.wikipedia.org/wiki/Affine_transformation). Examples of other affine transformations are shear and scaling transformations (which preserves angles but not lengths). We will follow the same rationale as in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/Transformation2D.ipynb) and we will skip the fundamental concepts already covered there. So, you if haven't done yet, you should read that notebook before continuing here. TranslationA pure three-dimensional translation of a rigid body (or a coordinate system attached to it) in relation to other rigid body (with other coordinate system) is illustrated in the figure below. Figure. A point in three-dimensional space represented in two coordinate systems, with one coordinate system translated. The position of point $\mathbf{P}$ originally described in the $xyz$ (local) coordinate system but now described in the $\mathbf{XYZ}$ (Global) coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{P_l} $$Or in terms of its components:$$ \begin{array}{}\mathbf{P_X} =& \mathbf{L_X} + \mathbf{P}_x \\\mathbf{P_Y} =& \mathbf{L_Y} + \mathbf{P}_y \\\mathbf{P_Z} =& \mathbf{L_Z} + \mathbf{P}_z \end{array} $$And in matrix form:$$ \begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \end{bmatrix} =\begin{bmatrix}\mathbf{L_X} \\\mathbf{L_Y} \\\mathbf{L_Z} \end{bmatrix} +\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \end{bmatrix} $$From classical mechanics, this is an example of [Galilean transformation](http://en.wikipedia.org/wiki/Galilean_transformation). Let's use Python to compute some numeric examples:
###Code
# Import the necessary libraries
import numpy as np
# suppress scientific notation for small numbers:
np.set_printoptions(precision=4, suppress=True)
###Output
_____no_output_____
###Markdown
For example, if the local coordinate system is translated by $ \mathbf{L_G}=[1, 2, 3] $ in relation to the Global coordinate system, a point with coordinates $ \mathbf{P_l}=[4, 5, 6] $ at the local coordinate system will have the position $ \mathbf{P_G}=[5, 7, 9] $ at the Global coordinate system:
###Code
LG = np.array([1, 2, 3]) # Numpy array
Pl = np.array([4, 5, 6])
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
This operation also works if we have more than one point (NumPy broadcasts the translation vector over each row of the array of points):
###Code
Pl = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) # 2D array with 3 rows and 3 columns
PG = LG + Pl
PG
###Output
_____no_output_____
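###Markdown
And to go back from the Global to the local coordinate system for a pure translation, we just subtract the translation vector (a quick check using the `PG` and `LG` arrays defined above):
###Code
# inverse operation: subtract the translation vector to recover the local coordinates
Pl_back = PG - LG
Pl_back
###Output
_____no_output_____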
###Markdown
RotationA pure three-dimensional rotation of a $xyz$ (local) coordinate system in relation to other $\mathbf{XYZ}$ (Global) coordinate system and the position of a point in these two coordinate systems are illustrated in the next figure (remember that this is equivalent to describing a rotation between two rigid bodies). A point in three-dimensional space represented in two coordinate systems, with one system rotated. In analogy to the rotation in two dimensions, we can calculate the rotation matrix that describes the rotation of the $xyz$ (local) coordinate system in relation to the $\mathbf{XYZ}$ (Global) coordinate system using the direction cosines between the axes of the two coordinate systems:$$ \mathbf{R_{Gl}} = \begin{bmatrix}cos\mathbf{X}x & cos\mathbf{X}y & cos\mathbf{X}z \\cos\mathbf{Y}x & cos\mathbf{Y}y & cos\mathbf{Y}z \\cos\mathbf{Z}x & cos\mathbf{Z}y & cos\mathbf{Z}z\end{bmatrix} $$Note however that for rotations around more than one axis, these angles will not lie in the main planes ($\mathbf{XY, YZ, ZX}$) of the $\mathbf{XYZ}$ coordinate system, as illustrated in the figure below for the direction angles of the $y$ axis only. Thus, the determination of these angles by simple inspection, as we have done for the two-dimensional case, would not be simple. Figure. Definition of direction angles for the $y$ axis of the local coordinate system in relation to the $\mathbf{XYZ}$ Global coordinate system.Note that the nine angles shown in the matrix above for the direction cosines are obviously redundant since only three angles are necessary to describe the orientation of a rigid body in the three-dimensional space. An important characteristic of angles in the three-dimensional space is that angles cannot be treated as vectors: the result of a sequence of rotations of a rigid body around different axes depends on the order of the rotations, as illustrated in the next figure. Figure. The result of a sequence of rotations around different axes of a coordinate system depends on the order of the rotations. In the first example (first row), the rotations are around a Global (fixed) coordinate system. In the second example (second row), the rotations are around a local (rotating) coordinate system.Let's focus now on how to understand rotations in the three-dimensional space, looking at the rotations between coordinate systems (or between rigid bodies). Later we will apply what we have learned to describe the position of a point in these different coordinate systems. Euler anglesThere are different ways to describe a three-dimensional rotation of a rigid body (or of a coordinate system). Probably, the most straightforward solution would be to use a [spherical coordinate system](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ReferenceFrame.ipynbSpherical-coordinate-system), but spherical coordinates would be difficult to give an anatomical or clinical interpretation. A solution that has been often employed in biomechanics to handle rotations in the three-dimensional space is to use Euler angles. Under certain conditions, Euler angles can have an anatomical interpretation, but this representation also has some caveats. 
Let's see the Euler angles now.[Leonhard Euler](https://en.wikipedia.org/wiki/Leonhard_Euler) in the XVIII century showed that two three-dimensional coordinate systems with a common origin can be related by a sequence of up to three elemental rotations about the axes of the local coordinate system, where no two successive rotations may be about the same axis, which now are known as [Euler (or Eulerian) angles](http://en.wikipedia.org/wiki/Euler_angles). Elemental rotationsFirst, let's see rotations around a fixed Global coordinate system as we did for the two-dimensional case. The next figure illustrates elemental rotations of the local coordinate system around each axis of the fixed Global coordinate system. Figure. Elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. Note that for better clarity, the axis around where the rotation occurs is shown perpendicular to this page for each elemental rotation.The rotation matrices for the elemental rotations around each axis of the fixed $\mathbf{XYZ}$ coordinate system (rotations of the local coordinate system in relation to the Global coordinate system) are shown next.Around $\mathbf{X}$ axis: $$ \mathbf{R_{Gl,\:X}} = \begin{bmatrix}1 & 0 & 0 \\0 & cos\alpha & -sin\alpha \\0 & sin\alpha & cos\alpha\end{bmatrix} $$Around $\mathbf{Y}$ axis: $$ \mathbf{R_{Gl,\:Y}} = \begin{bmatrix}cos\beta & 0 & sin\beta \\0 & 1 & 0 \\-sin\beta & 0 & cos\beta\end{bmatrix} $$Around $\mathbf{Z}$ axis: $$ \mathbf{R_{Gl,\:Z}} = \begin{bmatrix}cos\gamma & -sin\gamma & 0\\sin\gamma & cos\gamma & 0 \\0 & 0 & 1\end{bmatrix} $$These matrices are the rotation matrices for the case of two-dimensional coordinate systems plus the corresponding terms for the third axes of the local and Global coordinate systems, which are parallel. To understand why the terms for the third axes are 1's or 0's, for instance, remember they represent the cosine directors. The cosines between $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ for the elemental rotations around respectively the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes are all 1 because $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ are parallel ($cos 0^o$). 
The cosines of the other elements are zero because the axis around where each rotation occurs is perpendicular to the other axes of the coordinate systems ($cos 90^o$).The rotation matrices for the elemental rotations this time around each axis of the $xyz$ coordinate system (rotations of the Global coordinate system in relation to the local coordinate system), similarly to the two-dimensional case, are simplily the transpose of the above matrices as shown next.Around $x$ axis: $$ \mathbf{R}_{\mathbf{lG},\;x} = \begin{bmatrix}1 & 0 & 0 \\0 & cos\alpha & sin\alpha \\0 & -sin\alpha & cos\alpha\end{bmatrix} $$Around $y$ axis: $$ \mathbf{R}_{\mathbf{lG},\;y} = \begin{bmatrix}cos\beta & 0 & -sin\beta \\0 & 1 & 0 \\sin\beta & 0 & cos\beta\end{bmatrix} $$Around $z$ axis: $$ \mathbf{R}_{\mathbf{lG},\;z} = \begin{bmatrix}cos\gamma & sin\gamma & 0\\-sin\gamma & cos\gamma & 0 \\0 & 0 & 1\end{bmatrix} $$Notice this is equivalent to instead of rotating the local coordinate system by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system; remember that $cos(-\:\cdot)=cos(\cdot)$ and $sin(-\:\cdot)=-sin(\cdot)$.The fact that we chose to rotate the local coordinate system by a counterclockwise (positive) angle in relation to the Global coordinate system is just a matter of convention. Sequence of rotationsConsider now a sequence of elemental rotations around the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes of the fixed $\mathbf{XYZ}$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. This sequence of elemental rotations (each one of the local coordinate system with respect to the fixed Global coordinate system) is mathematically represented by a multiplication between the rotation matrices:$$ \begin{array}{l l}\mathbf{R_{Gl,\;XYZ}} & = \mathbf{R_{Z}} \mathbf{R_{Y}} \mathbf{R_{X}} \\\\ & = \begin{bmatrix}cos\gamma & -sin\gamma & 0\\sin\gamma & cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}cos\beta & 0 & sin\beta \\0 & 1 & 0 \\-sin\beta & 0 & cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & cos\alpha & -sin\alpha \\0 & sin\alpha & cos\alpha\end{bmatrix} \\\\ & =\begin{bmatrix}cos\beta\:cos\gamma \;&\;sin\alpha\:sin\beta\:cos\gamma-cos\alpha\:sin\gamma \;&\;cos\alpha\:sin\beta\:cos\gamma+sin\alpha\:sin\gamma \;\;\; \\cos\beta\:sin\gamma \;&\;sin\alpha\:sin\beta\:sin\gamma+cos\alpha\:cos\gamma \;&\;cos\alpha\:sin\beta\:sin\gamma-sin\alpha\:cos\gamma \;\;\; \\-sin\beta \;&\; sin\alpha\:cos\beta \;&\; cos\alpha\:cos\beta \;\;\;\end{bmatrix} \end{array} $$Note that the order of the multiplication of the matrices is from right to left (first the second rightmost matrix times the rightmost matrix, then the leftmost matrix times this result). We can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
#import the necessary libraries
from IPython.core.display import Math, display
import sympy as sym
cos, sin = sym.cos, sym.sin
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz in relation to XYZ:
RX = sym.Matrix([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]])
RY = sym.Matrix([[cos(b), 0, sin(b)], [0, 1, 0], [-sin(b), 0, cos(b)]])
RZ = sym.Matrix([[cos(g), -sin(g), 0], [sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz in relation to XYZ:
RXYZ = RZ*RY*RX
display(Math(sym.latex(r'\mathbf{R_{Gl,\;XYZ}}=') + sym.latex(RXYZ, mat_str='matrix')))
###Output
_____no_output_____
###Markdown
For instance, we can calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $\mathbf{X,Y,Z}$:
###Code
R = sym.lambdify((a, b, g), RXYZ, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R_{Gl,\;XYZ}}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Examining the matrix above and the correspondent previous figure, one can see they agree: the rotated $x$ axis (first column of the above matrix) has value -1 in the $\mathbf{Z}$ direction $[0,0,-1]$, the rotated $y$ axis (second column) is at the $\mathbf{Y}$ direction $[0,1,0]$, and the rotated $z$ axis (third column) is at the $\mathbf{X}$ direction $[1,0,0]$.We also can calculate the sequence of elemental rotations around the $x$, $y$, and $z$ axes of the rotating $xyz$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of a second $xyz$ local coordinate system around each axis, $x$, $y$, and $z$, of the rotating $xyz$ coordinate system.Likewise, this sequence of elemental rotations (each one of the local coordinate system with respect to the rotating local coordinate system) is mathematically represented by a multiplication between the rotation matrices (which are the inverse of the matrices for the rotations around $\mathbf{X,Y,Z}$ as we saw earlier):$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\;xyz} & = \mathbf{R_{z}} \mathbf{R_{y}} \mathbf{R_{x}} \\\\& = \begin{bmatrix}cos\gamma & sin\gamma & 0\\-sin\gamma & cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}cos\beta & 0 & -sin\beta \\0 & 1 & 0 \\sin\beta & 0 & cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & cos\alpha & sin\alpha \\0 & -sin\alpha & cos\alpha\end{bmatrix} \\\\& =\begin{bmatrix}cos\beta\:cos\gamma \;&\;sin\alpha\:sin\beta\:cos\gamma+cos\alpha\:sin\gamma \;&\;cos\alpha\:sin\beta\:cos\gamma-sin\alpha\:sin\gamma \;\;\; \\-cos\beta\:sin\gamma \;&\;-sin\alpha\:sin\beta\:sin\gamma+cos\alpha\:cos\gamma \;&\;cos\alpha\:sin\beta\:sin\gamma+sin\alpha\:cos\gamma \;\;\; \\sin\beta \;&\; -sin\alpha\:cos\beta \;&\; cos\alpha\:cos\beta \;\;\;\end{bmatrix} \end{array} $$As before, the order of the multiplication of the matrices is from right to left (first the second rightmost matrix times the rightmost matrix, then the leftmost matrix times this result). Once again, we can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rx = sym.Matrix([[1, 0, 0], [0, cos(a), sin(a)], [0, -sin(a), cos(a)]])
Ry = sym.Matrix([[cos(b), 0, -sin(b)], [0, 1, 0], [sin(b), 0, cos(b)]])
Rz = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz' in relation to xyz:
Rxyz = Rz*Ry*Rx
Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\;xyz}=') + sym.latex(Rxyz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
For instance, let's calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $x,y,z$:
###Code
R = sym.lambdify((a, b, g), Rxyz, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\;xyz}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Examining the above matrix and the correspondent previous figure, one can see they also agree: the rotated $x$ axis (first column of the above matrix) is at the $\mathbf{Z}$ direction $[0,0,1]$, the rotated $y$ axis (second column) is at the $\mathbf{-Y}$ direction $[0,-1,0]$, and the rotated $z$ axis (third column) is at the $\mathbf{X}$ direction $[1,0,0]$.Examining the $\mathbf{R_{Gl,\;XYZ}}$ and $\mathbf{R}_{lG,\;xyz}$ matrices one can see that negating the angles from one of the matrices results in the other matrix. That is, the rotations of $xyz$ in relation to $\mathbf{XYZ}$ by $\alpha, \beta, \gamma$ result in the same matrix as the rotations of $\mathbf{XYZ}$ in relation to $xyz$ by $-\alpha, -\beta, -\gamma$, as we saw for the elemental rotations. Let's check that: There is another property of the rotation matrices for the different coordinate systems: the rotation matrix, for example from the Global to the local coordinate system for the $xyz$ sequence, is just the transpose of the rotation matrix for the inverse operation (from the local to the Global coordinate system) of the inverse sequence ($\mathbf{ZYX}$) and vice-versa:
###Code
# Rotation matrix of xyz in relation to XYZ:
display(Math(sym.latex(r'\mathbf{R_{GL,\;XYZ}}(\alpha,\beta,\gamma) \quad =') + \
sym.latex(RXYZ, mat_str='matrix')))
# Elemental rotation matrices of XYZ in relation to xyz and negate all the angles:
Rx_n = sym.Matrix([[1, 0, 0], [0, cos(-a), -sin(-a)], [0, sin(-a), cos(-a)]]).T
Ry_n = sym.Matrix([[cos(-b), 0, sin(-b)], [0, 1, 0], [-sin(-b), 0, cos(-b)]]).T
Rz_n = sym.Matrix([[cos(-g), -sin(-g), 0], [sin(-g), cos(-g), 0], [0, 0, 1]]).T
# Rotation matrix of XYZ in relation to xyz:
Rxyz_n = Rz_n*Ry_n*Rx_n
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\;xyz}(-\alpha,-\beta,-\gamma)=') + \
sym.latex(Rxyz_n, mat_str='matrix')))
# Check that the two matrices are equal:
print('\n')
display(Math(sym.latex(r'\mathbf{R_{GL,\;XYZ}}(\alpha,\beta,\gamma) \;==\;' + \
r'\mathbf{R}_{\mathbf{lG},\;xyz}(-\alpha,-\beta,-\gamma)')))
RXYZ == Rxyz_n
RZYX = RX*RY*RZ
display(Math(sym.latex(r'\mathbf{R_{Gl,\;ZYX}^T}=') + sym.latex(RZYX.T, mat_str='matrix')))
print('\n')
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\;xyz}(\alpha,\beta,\gamma) \;==\;' + \
r'\mathbf{R_{Gl,\;ZYX}^T}(\gamma,\beta,\alpha)')))
Rxyz == RZYX.T
###Output
_____no_output_____
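###Markdown
These algebraic relations can also be verified numerically. The cell below is a minimal sketch (plain NumPy, arbitrarily chosen angle values, and helper functions named `Rx_l`, `Ry_l`, `Rz_l` just for this check) that builds the rotation matrix for the local $xyz$ sequence and for the Global $\mathbf{ZYX}$ sequence from the elemental rotation matrices shown above, and checks that one is the transpose of the other and that the result is orthonormal:
###Code
import numpy as np
# elemental rotation matrices in the local convention used above
def Rx_l(a):
    return np.array([[1, 0, 0], [0, np.cos(a), np.sin(a)], [0, -np.sin(a), np.cos(a)]])
def Ry_l(b):
    return np.array([[np.cos(b), 0, -np.sin(b)], [0, 1, 0], [np.sin(b), 0, np.cos(b)]])
def Rz_l(g):
    return np.array([[np.cos(g), np.sin(g), 0], [-np.sin(g), np.cos(g), 0], [0, 0, 1]])
aa, bb, gg = np.deg2rad([20, 30, 40])  # arbitrary angles
Rxyz_num = Rz_l(gg).dot(Ry_l(bb)).dot(Rx_l(aa))        # local xyz sequence
RZYX_num = Rx_l(aa).T.dot(Ry_l(bb).T).dot(Rz_l(gg).T)  # Global ZYX sequence (transposed elemental matrices)
print('R_lG,xyz equals the transpose of R_Gl,ZYX:', np.allclose(Rxyz_num, RZYX_num.T))
print('Orthonormal (R*R.T = I):', np.allclose(Rxyz_num.dot(Rxyz_num.T), np.eye(3)))
print('Determinant:', np.round(np.linalg.det(Rxyz_num), 4))
###Output
_____no_output_____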
###Markdown
The 12 different sequences of Euler anglesThe Euler angles are defined in terms of rotations around a rotating local coordinate system. As we saw for the sequence of rotations around $x, y, z$, the axes of the local rotated coordinate system are not fixed in space because after the first elemental rotation, the other two axes rotate. Other sequences of rotations could be produced without combining axes of the two different coordinate systems (Global and local) for the definition of the rotation axes. There is a total of 12 different sequences of three elemental rotations that are valid and may be used for describing the rotation of a coordinate system with respect to another coordinate system:$$ xyz \quad xzy \quad yzx \quad yxz \quad zxy \quad zyx $$$$ xyx \quad xzx \quad yzy \quad yxy \quad zxz \quad zyz $$The first six sequences (first row) are all around different axes, they are usually referred as Cardan or Tait–Bryan angles. The other six sequences (second row) have the first and third rotations around the same axis, but keep in mind that the axis for the third rotation is not at the same place anymore because it changed its orientation after the second rotation. The sequences with repeated axes are known as proper or classic Euler angles.Which order to use it is a matter of convention, but because the order affects the results, it's fundamental to follow a convention and report it. In Engineering Mechanics (including Biomechanics), the $xyz$ order is more common; in Physics the $zxz$ order is more common (but the letters chosen to refer to the axes are arbitrary, what matters is the directions they represent). In Biomechanics, the order for the Cardan angles is most often based on the angle of most interest or of most reliable measurement. Accordingly, the axis of flexion/extension is typically selected as the first axis, the axis for abduction/adduction is the second, and the axis for internal/external rotation is the last one. We will see about this order later. The $zyx$ order is commonly used to describe the orientation of a ship or aircraft and the rotations are known as the nautical angles: yaw, pitch and roll, respectively (see next figure). Figure. The principal axes of an aircraft and the names for the rotations around these axes (image from Wikipedia). If instead of rotations around the rotating local coordinate system we perform rotations around the fixed Global coordinate system, we will have other 12 different sequences of three elemental rotations, these are called simply rotation angles. So, in total there are 24 possible different sequences of three elemental rotations, but the 24 orders are not independent; with the 12 different sequences of Euler angles at the local coordinate system we can obtain the other 12 sequences at the Global coordinate system.The Python function `euler_rotmat.py` (code at the end of this text) determines the rotation matrix in algebraic form for any of the 24 different sequences (and sequences with only one or two axes can be inputed). This function also determines the rotation matrix in numeric form if a list of up to three angles are inputed.For instance, the rotation matrix in algebraic form for the $zxz$ order of Euler angles at the local coordinate system and the correspondent rotation matrix in numeric form after three elemental rotations by $90^o$ each are:
###Code
import sys
sys.path.insert(1, r'./../functions')
from euler_rotmat import euler_rotmat
Ra, Rn = euler_rotmat(order='zxz', frame='local', angles=[90, 90, 90])
###Output
_____no_output_____
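###Markdown
For example, the rotation matrix for the $zyx$ (yaw, pitch and roll) sequence mentioned above can be obtained with the same function (the angle values below are arbitrary, used only for illustration):
###Code
# zyx sequence at the local coordinate system with arbitrary angle values
Ra, Rn = euler_rotmat(order='zyx', frame='local', angles=[10, 20, 30])
###Output
_____no_output_____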
###Markdown
Line of nodesThe second axis of rotation in the rotating coordinate system is also referred as the nodal axis or line of nodes; this axis coincides with the intersection of two perpendicular planes, one from each Global (fixed) and local (rotating) coordinate systems. The figure below shows an example of rotations and the nodal axis for the $xyz$ sequence of the Cardan angles. Figure. First row: example of rotations for the $xyz$ sequence of the Cardan angles. The Global (fixed) $XYZ$ coordinate system is shown in green, the local (rotating) $xyz$ coordinate system is shown in blue. The nodal axis (N, shown in red) is defined by the intersection of the $YZ$ and $xy$ planes and all rotations can be described in relation to this nodal axis or to a perpendicaular axis to it. Second row: starting from no rotation, the local coordinate system is rotated by $\alpha$ around the $x$ axis, then by $\beta$ around the rotated $y$ axis, and finally by $\gamma$ around the twice rotated $z$ axis. Note that the line of nodes coincides with the $y$ axis for the second rotation. Determination of the Euler anglesOnce a convention is adopted, the correspoding three Euler angles of rotation can be found. For example, for the $\mathbf{R}_{xyz}$ rotation matrix:
###Code
R = euler_rotmat(order='xyz', frame='local')
###Output
_____no_output_____
###Markdown
The correspoding Cardan angles for the `xyz` sequence can be given by:$$ \begin{array}{}\alpha = arctan\left(\frac{sin(\alpha)}{cos(\alpha)}\right) = arctan\left(\frac{-\mathbf{R}_{21}}{\;\;\;\mathbf{R}_{22}}\right) \\\\\beta = arctan\left(\frac{sin(\beta)}{cos(\beta)}\right) = arctan\left(\frac{\mathbf{R}_{20}}{\sqrt{\mathbf{R}_{00}^2+\mathbf{R}_{10}^2}}\right) \\ \\\gamma = arctan\left(\frac{sin(\gamma)}{cos(\gamma)}\right) = arctan\left(\frac{-\mathbf{R}_{10}}{\;\;\;\mathbf{R}_{00}}\right)\end{array} $$Note that we prefer to use the mathematical function `arctan` rather than simply `arcsin` because the latter cannot for example distinguish $45^o$ from $135^o$ and also for better numerical accuracy. See the text [Angular kinematics in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/AngularKinematics2D.ipynb) for more on these issues.And here is a Python function to compute the Euler angles of rotations from the Global to the local coordinate system for the $xyz$ Cardan sequence:
###Code
def euler_angles_from_rot_xyz(rot_matrix, unit='deg'):
""" Compute Euler angles from rotation matrix in the xyz sequence."""
import numpy as np
R = np.array(rot_matrix, dtype=np.float64, copy=False)[:3, :3]
angles = np.zeros(3)
angles[0] = np.arctan2(-R[2, 1], R[2, 2])
angles[1] = np.arctan2( R[2, 0], np.sqrt(R[0, 0]**2 + R[1, 0]**2))
angles[2] = np.arctan2(-R[1, 0], R[0, 0])
if unit[:3].lower() == 'deg': # convert from rad to degree
angles = np.rad2deg(angles)
return angles
###Output
_____no_output_____
###Markdown
For instance, consider sequential rotations of 45$^o$ around $x,y,z$. The resultant rotation matrix is:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[45, 45, 45], showA=False)
###Output
_____no_output_____
###Markdown
Let's check that by calculating the Cardan angles back from this rotation matrix using the `euler_angles_from_rot_xyz()` function:
###Code
euler_angles_from_rot_xyz(Rn, unit='deg')
###Output
_____no_output_____
###Markdown
We could implement a function to calculate the Euler angles for any of the 12 sequences (in fact, plus another 12 sequences if we consider all the rotations from and to the two coordinate systems), but this is tedious. There is a smarter solution using the concept of [quaternion](http://en.wikipedia.org/wiki/Quaternion), but we wont see that now. Let's see a problem with using Euler angles known as gimbal lock. Gimbal lock[Gimbal lock](http://en.wikipedia.org/wiki/Gimbal_lock) is the loss of one degree of freedom in a three-dimensional coordinate system that occurs when an axis of rotation is placed parallel with another previous axis of rotation and two of the three rotations will be around the same direction given a certain convention of the Euler angles. This "locks" the system into rotations in a degenerate two-dimensional space. The system is not really locked in the sense it can't be moved or reach the other degree of freedom, but it will need an extra rotation for that. For instance, let's look at the $zxz$ sequence of rotations by the angles $\alpha, \beta, \gamma$:$$ \begin{array}{l l}\mathbf{R}_{zxz} & = \mathbf{R_{z}} \mathbf{R_{x}} \mathbf{R_{z}} \\ \\& = \begin{bmatrix}cos\gamma & sin\gamma & 0\\-sin\gamma & cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & cos\beta & sin\beta \\0 & -sin\beta & cos\beta\end{bmatrix}\begin{bmatrix}cos\alpha & sin\alpha & 0\\-sin\alpha & cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array} $$Which results in:
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rz = sym.Matrix([[cos(a), sin(a), 0], [-sin(a), cos(a), 0], [0, 0, 1]])
Rx = sym.Matrix([[1, 0, 0], [0, cos(b), sin(b)], [0, -sin(b), cos(b)]])
Rz2 = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix for the zxz sequence:
Rzxz = Rz2*Rx*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}=') + sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Let's examine what happens with this rotation matrix when the rotation around the second axis ($x$) by $\beta$ is zero:$$ \begin{array}{l l}\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma) = \begin{bmatrix}cos\gamma & sin\gamma & 0\\-sin\gamma & cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}cos\alpha & sin\alpha & 0\\-sin\alpha & cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array} $$The second matrix is the identity matrix and has no effect on the product of the matrices, which will be:
###Code
Rzxz = Rz2*Rz
Math(sym.latex(r'\mathbf{R}_{xyz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Which simplifies to:
###Code
Rzxz = sym.simplify(Rzxz)
Math(sym.latex(r'\mathbf{R}_{xyz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Despite different values of $\alpha$ and $\gamma$ the result is a single rotation around the $z$ axis given by the sum $\alpha+\gamma$. In this case, of the three degrees of freedom one was lost (the other degree of freedom was set by $\beta=0$). For movement analysis, this means for example that one angle will be undetermined because everything we know is the sum of the two angles obtained from the rotation matrix. We can set the unknown angle to zero but this is arbitrary.In fact, we already dealt with another example of gimbal lock when we looked at the $xyz$ sequence with rotations by $90^o$. See the figure representing these rotations again and perceive that the first and third rotations were around the same axis because the second rotation was by $90^o$. Let's do the matrix multiplication replacing only the second angle by $90^o$ (and let's use the `euler_rotmat.py`:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[None, 90., None], showA=False)
###Output
_____no_output_____
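###Markdown
We can also verify this loss of one degree of freedom numerically. The cell below is a minimal check (using the `euler_rotmat` function imported earlier and arbitrarily chosen angles): with the second rotation equal to $90^o$, two different combinations of the first and third angles with the same sum produce exactly the same rotation matrix for the local $xyz$ sequence:
###Code
# two different sets of angles, both with the second rotation equal to 90 degrees
# and with the same sum of the first and third angles (30 + 20 = 10 + 40):
Ra1, Rn1 = euler_rotmat(order='xyz', frame='local', angles=[30, 90, 20], showA=False)
Ra2, Rn2 = euler_rotmat(order='xyz', frame='local', angles=[10, 90, 40], showA=False)
print('Same rotation matrix:', np.allclose(np.array(Rn1, dtype=float), np.array(Rn2, dtype=float)))
###Output
_____no_output_____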
###Markdown
Once again, one degree of freedom was lost and we will not be able to uniquely determine the three angles for the given rotation matrix and sequence.Possible solutions to avoid the gimbal lock are: choose a different sequence; do not rotate the system by the angle that puts the system in gimbal lock (in the examples above, avoid $\beta=90^o$); or add an extra fourth parameter in the description of the rotation angles. But if we have a physical system where we measure or specify exactly three Euler angles in a fixed sequence to describe or control it, and we can't avoid the system to assume certain angles, then we might have to say "Houston, we have a problem". A famous situation where the problem occurred was during the Apollo 13 mission. This is an actual conversation between crew and mission control during the Apollo 13 mission (Corke, 2011):>Mission clock: 02 08 12 47 Flight: *Go, Guidance.* Guido: *He’s getting close to gimbal lock there.* Flight: *Roger. CapCom, recommend he bring up C3, C4, B3, B4, C1 and C2 thrusters, and advise he’s getting close to gimbal lock.* CapCom: *Roger.* *Of note, it was not a gimbal lock that caused the accident with the the Apollo 13 mission, the problem was an oxygen tank explosion.* Determination of the rotation matrixA typical way to determine the rotation matrix for a rigid body in biomechanics is to use motion analysis to measure the position of at least three non-colinear markers placed on the rigid body, and then calculate a basis with these positions, analogue to what we have described in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/Transformation2D.ipynb). BasisIf we have the position of three markers: **m1**, **m2**, **m3**, a basis (formed by three orthogonal versors) can be found as: - First axis, **v1**, the vector **m2-m1**; - Second axis, **v2**, the cross product between the vectors **v1** and **m3-m1**; - Third axis, **v3**, the cross product between the vectors **v1** and **v2**. Then, each of these vectors are normalized resulting in three othogonal versors. For example, given the positions m1 = [1,0,0], m2 = [0,1,0], m3 = [0,0,1], a basis can be found:
###Code
m1 = np.array([1, 0, 0])
m2 = np.array([0, 1, 0])
m3 = np.array([0, 0, 1])
v1 = m2 - m1
v2 = np.cross(v1, m3 - m1)
v3 = np.cross(v1, v2)
print('Versors:')
v1 = v1/np.linalg.norm(v1)
print('v1 =', v1)
v2 = v2/np.linalg.norm(v2)
print('v2 =', v2)
v3 = v3/np.linalg.norm(v3)
print('v3 =', v3)
print('\nNorm of the cross product between each pair of versors:\n',
np.linalg.norm(np.cross(v1, v2)),
np.linalg.norm(np.cross(v1, v3)),
np.linalg.norm(np.cross(v2, v3)))
###Output
Versors:
v1 = [-0.7071 0.7071 0. ]
v2 = [ 0.5774 0.5774 0.5774]
v3 = [ 0.4082 0.4082 -0.8165]
Norm of the cross product between each pair of versors:
1.0 1.0 1.0
###Markdown
Remember from the text [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/Transformation2D.ipynb) that the versors of this basis are the columns of the $\mathbf{R_{Gl}}$ and the rows of the $\mathbf{R_{lG}}$ rotation matrices, for instance:
###Code
RlG = np.array([v1, v2, v3])
print('Rotation matrix from Global to local coordinate system:\n', RlG)
###Output
Rotation matrix from Global to local coordinate system:
[[-0.7071 0.7071 0. ]
[ 0.5774 0.5774 0.5774]
[ 0.4082 0.4082 -0.8165]]
###Markdown
And the corresponding angles of rotation using the $xyz$ sequence are:
###Code
euler_angles_from_rot_xyz(RlG)
###Output
_____no_output_____
###Markdown
These angles don't mean anything now because they are angles of the axes of the arbitrary basis we computed. In biomechanics, if we want an anatomical interpretation of the coordinate system orientation, we define the versors of the basis oriented with anatomical axes (e.g., for the shoulder, one versor would be aligned with the long axis of the upperarm). We will see how to perform this computation later. Now we will combine translation and rotation in a single transformation. Translation and RotationConsider the case where the local coordinate system is translated and rotated in relation to the Global coordinate system as illustrated in the next figure. Figure. A point in three-dimensional space represented in two coordinate systems, with one system translated and rotated. The position of point $\mathbf{P}$ originally described in the local coordinate system, but now described in the Global coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} $$This means that we first *disrotate* the local coordinate system and then correct for the translation between the two coordinate systems. Note that we can't invert this order: the point position is expressed in the local coordinate system and we can't add this vector to another vector expressed in the Global coordinate system, first we have to convert the vectors to the same coordinate system.If now we want to find the position of a point at the local coordinate system given its position in the Global coordinate system, the rotation matrix and the translation vector, we have to invert the expression above:$$ \begin{array}{l l}\mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{R_{Gl}^{-1}}\cdot\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l}\right) \implies \\\\\mathbf{R_{Gl}^{-1}}\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\mathbf{L_G} + \mathbf{R_{Gl}^{-1}}\mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{P_l} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{P_G}-\mathbf{L_G}\right) = \mathbf{R_{Gl}^T}\left(\mathbf{P_G}-\mathbf{L_G}\right) \;\;\;\;\; \text{or} \;\;\;\;\; \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) \end{array} $$The expression above indicates that to perform the inverse operation, to go from the Global to the local coordinate system, we first translate and then rotate the coordinate system. Transformation matrixIt is possible to combine the translation and rotation operations in only one matrix, called the transformation matrix:$$ \begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} =\begin{bmatrix}. & . & . & \mathbf{L_{X}} \\. & \mathbf{R_{Gl}} & . & \mathbf{L_{Y}} \\. & . & . 
& \mathbf{L_{Z}} \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \\1\end{bmatrix} $$Or simply:$$ \mathbf{P_G} = \mathbf{T_{Gl}}\mathbf{P_l} $$Remember that in general the transformation matrix is not orthonormal, i.e., its inverse is not equal to its transpose.The inverse operation, to express the position at the local coordinate system in terms of the Global reference system, is:$$ \mathbf{P_l} = \mathbf{T_{Gl}^{-1}}\mathbf{P_G} $$And in matrix form:$$ \begin{bmatrix}\mathbf{P_x} \\\mathbf{P_y} \\\mathbf{P_z} \\1\end{bmatrix} =\begin{bmatrix}\cdot & \cdot & \cdot & \cdot \\\cdot & \mathbf{R^{-1}_{Gl}} & \cdot & -\mathbf{R^{-1}_{Gl}}\:\mathbf{L_G} \\\cdot & \cdot & \cdot & \cdot \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} $$ Example with actual motion analysis data *The data for this example is taken from page 183 of David Winter's book.* Consider the following marker positions placed on a leg (described in the laboratory coordinate system with coordinates $x, y, z$ in cm, the $x$ axis points forward and the $y$ axes points upward): lateral malleolus (**lm** = [2.92, 10.10, 18.85]), medial malleolus (**mm** = [2.71, 10.22, 26.52]), fibular head (**fh** = [5.05, 41.90, 15.41]), and medial condyle (**mc** = [8.29, 41.88, 26.52]). Define the ankle joint center as the centroid between the **lm** and **mm** markers and the knee joint center as the centroid between the **fh** and **mc** markers. An anatomical coordinate system for the leg can be defined as: the quasi-vertical axis ($y$) passes through the ankle and knee joint centers; a temporary medio-lateral axis ($z$) passes through the two markers on the malleolus, an anterior-posterior as the cross product between the two former calculated orthogonal axes, and the origin at the ankle joint center. a) Calculate the anatomical coordinate system for the leg as described above. b) Calculate the rotation matrix and the translation vector for the transformation from the anatomical to the laboratory coordinate system. c) Calculate the position of each marker and of each joint center at the anatomical coordinate system. d) Calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent).
###Code
# calculation of the joint centers
mm = np.array([2.71, 10.22, 26.52])
lm = np.array([2.92, 10.10, 18.85])
fh = np.array([5.05, 41.90, 15.41])
mc = np.array([8.29, 41.88, 26.52])
ajc = (mm + lm)/2
kjc = (fh + mc)/2
print('Position of the ankle joint center:', ajc)
print('Position of the knee joint center:', kjc)
# calculation of the anatomical coordinate system axes (basis)
y = kjc - ajc
x = np.cross(y, mm - lm)
z = np.cross(x, y)
print('Versors:')
x = x/np.linalg.norm(x)
y = y/np.linalg.norm(y)
z = z/np.linalg.norm(z)
print('x =', x)
print('y =', y)
print('z =', z)
Oleg = ajc
print('\nOrigin =', Oleg)
# Rotation matrices
RGl = np.array([x, y , z]).T
print('Rotation matrix from the anatomical to the laboratory coordinate system:\n', RGl)
RlG = RGl.T
print('\nRotation matrix from the laboratory to the anatomical coordinate system:\n', RlG)
# Translational vector
OG = np.array([0, 0, 0]) # Laboratory coordinate system origin
LG = Oleg - OG
print('Translational vector from the anatomical to the laboratory coordinate system:\n', LG)
###Output
Translational vector from the anatomical to the laboratory coordinate system:
[ 2.815 10.16 22.685]
###Markdown
To get the coordinates from the laboratory (global) coordinate system to the anatomical (local) coordinate system:$$ \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) $$
###Code
# position of each marker and of each joint center at the anatomical coordinate system
mml = np.dot(RlG, mm - LG) # the algebraic expression RlG*(mm - LG)
lml = np.dot(RlG, lm - LG)
fhl = np.dot(RlG, fh - LG)
mcl = np.dot(RlG, mc - LG)
ajcl = np.dot(RlG, ajc - LG)
kjcl = np.dot(RlG, kjc - LG)
print('Coordinates of mm in the anatomical system:\n', mml)
print('Coordinates of lm in the anatomical system:\n', lml)
print('Coordinates of fh in the anatomical system:\n', fhl)
print('Coordinates of mc in the anatomical system:\n', mcl)
print('Coordinates of kjc in the anatomical system:\n', kjcl)
print('Coordinates of ajc in the anatomical system (origin):\n', ajcl)
###Output
Coordinates of mm in the anatomical system:
[-0.1828 0.2899 3.8216]
Coordinates of lm in the anatomical system:
[ 0.1828 -0.2899 -3.8216]
Coordinates of fh in the anatomical system:
[ 6.2036 30.7834 -8.902 ]
Coordinates of mc in the anatomical system:
[ 9.168 31.0093 2.2824]
Coordinates of kjc in the anatomical system:
[ 7.6858 30.8964 -3.3098]
Coordinates of ajc in the anatomical system (origin):
[ 0. 0. 0.]
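###Markdown
The same change of coordinates can also be performed with a single transformation matrix, as described in the transformation matrix section above. The cell below is a minimal sketch of this idea (the names `TGl`, `TlG` and `mm_h` are used only here) and assumes the variables `RGl`, `LG`, and `mm` computed in the previous cells are still defined:
###Code
# transformation matrix from the anatomical to the laboratory coordinate system
TGl = np.eye(4)
TGl[:3, :3] = RGl
TGl[:3, 3] = LG
# its inverse (in general, the inverse of a transformation matrix is not its transpose)
TlG = np.eye(4)
TlG[:3, :3] = RGl.T
TlG[:3, 3] = -np.dot(RGl.T, LG)
# map the medial malleolus marker to the anatomical system and back
mm_h = np.hstack((mm, 1))  # homogeneous coordinates [X, Y, Z, 1]
mm_local = np.dot(TlG, mm_h)
mm_back = np.dot(TGl, mm_local)
print('mm in the anatomical coordinate system:', mm_local[:3])
print('mm back in the laboratory coordinate system:', mm_back[:3])
###Output
_____no_output_____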
###Markdown
Problems1. For the example about how the order of rotations of a rigid body affects the orientation shown in a figure above, deduce the rotation matrices for each of the 4 cases shown in the figure. For the first two cases, deduce the rotation matrices from the global to the local coordinate system and for the other two examples, deduce the rotation matrices from the local to the global coordinate system. 2. Consider the data from problem 7 in the notebook [Frame of reference](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ReferenceFrame.ipynb) where the following anatomical landmark positions are given (units in meters): RASIS=[0.5,0.8,0.4], LASIS=[0.55,0.78,0.1], RPSIS=[0.3,0.85,0.2], and LPSIS=[0.29,0.78,0.3]. Deduce the rotation matrices for the global to anatomical coordinate system and for the anatomical to global coordinate system. 3. For the data from the last example, calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent). References- Corke P (2011) [Robotics, Vision and Control: Fundamental Algorithms in MATLAB](http://www.petercorke.com/RVC/). Springer-Verlag Berlin. - Robertson G, Caldwell G, Hamill J, Kamen G (2013) [Research Methods in Biomechanics](http://books.google.com.br/books?id=gRn8AAAAQBAJ). 2nd Edition. Human Kinetics. - [Maths - Euler Angles](http://www.euclideanspace.com/maths/geometry/rotations/euler/). - Murray RM, Li Z, Sastry SS (1994) [A Mathematical Introduction to Robotic Manipulation](http://www.cds.caltech.edu/~murray/mlswiki/index.php/Main_Page). Boca Raton, CRC Press. - Ruina A, Rudra P (2013) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press. - Siciliano B, Sciavicco L, Villani L, Oriolo G (2009) [Robotics - Modelling, Planning and Control](http://books.google.com.br/books/about/Robotics.html?hl=pt-BR&id=jPCAFmE-logC). Springer-Verlag London.- Winter DA (2009) [Biomechanics and motor control of human movement](http://books.google.com.br/books?id=_bFHL08IWfwC). 4 ed. Hoboken, USA: Wiley. - Zatsiorsky VM (1997) [Kinematics of Human Motion](http://books.google.com.br/books/about/Kinematics_of_Human_Motion.html?id=Pql_xXdbrMcC&redir_esc=y). Champaign, Human Kinetics. Function `euler_rotmatrix.py`
###Code
# %load ./../functions/euler_rotmat.py
#!/usr/bin/env python
"""Euler rotation matrix given sequence, frame, and angles."""
from __future__ import division, print_function
__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'
__version__ = 'euler_rotmat.py v.1 2014/03/10'
def euler_rotmat(order='xyz', frame='local', angles=None, unit='deg',
str_symbols=None, showA=True, showN=True):
"""Euler rotation matrix given sequence, frame, and angles.
This function calculates the algebraic rotation matrix (3x3) for a given
sequence ('order' argument) of up to three elemental rotations of a given
coordinate system ('frame' argument) around another coordinate system, the
Euler (or Eulerian) angles [1]_.
This function also calculates the numerical values of the rotation matrix
when numerical values for the angles are inputed for each rotation axis.
Use None as value if the rotation angle for the particular axis is unknown.
The symbols for the angles are: alpha, beta, and gamma for the first,
second, and third rotations, respectively.
The matrix product is calculated from right to left and in the specified
sequence for the Euler angles. The first letter will be the first rotation.
The function will print and return the algebraic rotation matrix and the
numerical rotation matrix if angles were inputed.
Parameters
----------
order : string, optional (default = 'xyz')
Sequence for the Euler angles, any combination of the letters
x, y, and z with 1 to 3 letters is accepted to denote the
elemental rotations. The first letter will be the first rotation.
frame : string, optional (default = 'local')
Coordinate system for which the rotations are calculated.
Valid values are 'local' or 'global'.
angles : list, array, or bool, optional (default = None)
Numeric values of the rotation angles ordered as the 'order'
parameter. Enter None for a rotation with unknown value.
unit : str, optional (default = 'deg')
Unit of the input angles.
str_symbols : list of strings, optional (default = None)
New symbols for the angles, for instance, ['theta', 'phi', 'psi']
showA : bool, optional (default = True)
True (1) displays the Algebraic rotation matrix in rich format.
False (0) to not display.
showN : bool, optional (default = True)
True (1) displays the Numeric rotation matrix in rich format.
False (0) to not display.
Returns
-------
R : Matrix Sympy object
Rotation matrix (3x3) in algebraic format.
Rn : Numpy array or Matrix Sympy object (only if angles are inputed)
Numeric rotation matrix (if values for all angles were inputed) or
an algebraic matrix with some of the algebraic angles substituted
by the corresponding inputed numeric values.
Notes
-----
This code uses Sympy, the Python library for symbolic mathematics, to
calculate the algebraic rotation matrix and shows this matrix in latex form
possibly for using with the IPython Notebook, see [1]_.
References
----------
.. [1] http://nbviewer.ipython.org/github/duartexyz/BMC/blob/master/Transformation3D.ipynb
Examples
--------
>>> # import function
>>> from euler_rotmat import euler_rotmat
>>> # Default options: xyz sequence, local frame and show matrix
>>> R = euler_rotmat()
>>> # XYZ sequence (around global (fixed) coordinate system)
>>> R = euler_rotmat(frame='global')
>>> # Enter numeric values for all angles and show both matrices
>>> R, Rn = euler_rotmat(angles=[90, 90, 90])
>>> # show what is returned
>>> euler_rotmat(angles=[90, 90, 90])
>>> # show only the rotation matrix for the elemental rotation at x axis
>>> R = euler_rotmat(order='x')
>>> # zxz sequence and numeric value for only one angle
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, 0, None])
>>> # input values in radians:
>>> import numpy as np
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, np.pi, None], unit='rad')
>>> # shows only the numeric matrix
>>> R, Rn = euler_rotmat(order='zxz', angles=[90, 0, None], showA=False)
>>> # Change the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['theta', 'phi', 'psi'])
>>> # Negate the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['-theta', '-phi', '-psi'])
>>> # all algebraic matrices for all possible sequences for the local frame
>>> s=['xyz','xzy','yzx','yxz','zxy','zyx','xyx','xzx','yzy','yxy','zxz','zyz']
>>> for seq in s: R = euler_rotmat(order=seq)
>>> # all algebraic matrices for all possible sequences for the global frame
>>> for seq in s: R = euler_rotmat(order=seq, frame='global')
"""
import numpy as np
import sympy as sym
try:
from IPython.core.display import Math, display
ipython = True
except:
ipython = False
angles = np.asarray(np.atleast_1d(angles), dtype=np.float64)
if ~np.isnan(angles).all():
if len(order) != angles.size:
raise ValueError("Parameters 'order' and 'angles' (when " +
"different from None) must have the same size.")
x, y, z = sym.symbols('x, y, z')
sig = [1, 1, 1]
if str_symbols is None:
a, b, g = sym.symbols('alpha, beta, gamma')
else:
s = str_symbols
if s[0][0] == '-': s[0] = s[0][1:]; sig[0] = -1
if s[1][0] == '-': s[1] = s[1][1:]; sig[1] = -1
if s[2][0] == '-': s[2] = s[2][1:]; sig[2] = -1
a, b, g = sym.symbols(s)
var = {'x': x, 'y': y, 'z': z, 0: a, 1: b, 2: g}
# Elemental rotation matrices for xyz (local)
cos, sin = sym.cos, sym.sin
Rx = sym.Matrix([[1, 0, 0], [0, cos(x), sin(x)], [0, -sin(x), cos(x)]])
Ry = sym.Matrix([[cos(y), 0, -sin(y)], [0, 1, 0], [sin(y), 0, cos(y)]])
Rz = sym.Matrix([[cos(z), sin(z), 0], [-sin(z), cos(z), 0], [0, 0, 1]])
if frame.lower() == 'global':
Rs = {'x': Rx.T, 'y': Ry.T, 'z': Rz.T}
order = order.upper()
else:
Rs = {'x': Rx, 'y': Ry, 'z': Rz}
order = order.lower()
R = Rn = sym.Matrix(sym.Identity(3))
str1 = r'\mathbf{R}_{%s}( ' %frame # last space needed for order=''
#str2 = [r'\%s'%var[0], r'\%s'%var[1], r'\%s'%var[2]]
str2 = [1, 1, 1]
for i in range(len(order)):
Ri = Rs[order[i].lower()].subs(var[order[i].lower()], sig[i] * var[i])
R = Ri * R
if sig[i] > 0:
str2[i] = '%s:%s' %(order[i], sym.latex(var[i]))
else:
str2[i] = '%s:-%s' %(order[i], sym.latex(var[i]))
str1 = str1 + str2[i] + ','
if ~np.isnan(angles).all() and ~np.isnan(angles[i]):
if unit[:3].lower() == 'deg':
angles[i] = np.deg2rad(angles[i])
Rn = Ri.subs(var[i], angles[i]) * Rn
#Rn = sym.lambdify(var[i], Ri, 'numpy')(angles[i]) * Rn
str2[i] = str2[i] + '=%.0f^o' %np.around(np.rad2deg(angles[i]), 0)
else:
Rn = Ri * Rn
Rn = sym.simplify(Rn) # for trigonometric relations
try:
# nsimplify only works if there are symbols
Rn2 = sym.latex(sym.nsimplify(Rn, tolerance=1e-8).n(chop=True, prec=4))
except:
Rn2 = sym.latex(Rn.n(chop=True, prec=4))
# there are no symbols, pass it as Numpy array
Rn = np.asarray(Rn)
if showA and ipython:
display(Math(str1[:-1] + ') =' + sym.latex(R, mat_str='matrix')))
if showN and ~np.isnan(angles).all() and ipython:
str2 = ',\;'.join(str2[:angles.size])
display(Math(r'\mathbf{R}_{%s}(%s)=%s' %(frame, str2, Rn2)))
if np.isnan(angles).all():
return R
else:
return R, Rn
###Output
_____no_output_____
###Markdown
Rigid-body transformations in three-dimensions> Marcos Duarte > Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/)) > Federal University of ABC, Brazil The kinematics of a rigid body is completely described by its pose, i.e., its position and orientation in space (and the corresponding changes, translation and rotation). In a three-dimensional space, at least three coordinates and three angles are necessary to describe the pose of the rigid body, totalizing six degrees of freedom for a rigid body.In motion analysis, to describe a translation and rotation of a rigid body with respect to a coordinate system, typically we attach another coordinate system to the rigid body and determine a transformation between these two coordinate systems.A transformation is any function mapping a set to another set. For the description of the kinematics of rigid bodies, we are interested only in what is called rigid or Euclidean transformations (denoted as SE(3) for the three-dimensional space) because they preserve the distance between every pair of points of the body (which is considered rigid by definition). Translations and rotations are examples of rigid transformations (a reflection is also an example of rigid transformation but this changes the right-hand axis convention to a left hand, which usually is not of interest). In turn, rigid transformations are examples of [affine transformations](https://en.wikipedia.org/wiki/Affine_transformation). Examples of other affine transformations are shear and scaling transformations (which preserves angles but not lengths). We will follow the same rationale as in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/Transformation2D.ipynb) and we will skip the fundamental concepts already covered there. So, you if haven't done yet, you should read that notebook before continuing here. TranslationA pure three-dimensional translation of a rigid body (or a coordinate system attached to it) in relation to other rigid body (with other coordinate system) is illustrated in the figure below. Figure. A point in three-dimensional space represented in two coordinate systems, with one coordinate system translated. The position of point $\mathbf{P}$ originally described in the $xyz$ (local) coordinate system but now described in the $\mathbf{XYZ}$ (Global) coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{P_l} $$Or in terms of its components:$$ \begin{array}{}\mathbf{P_X} =& \mathbf{L_X} + \mathbf{P}_x \\\mathbf{P_Y} =& \mathbf{L_Y} + \mathbf{P}_y \\\mathbf{P_Z} =& \mathbf{L_Z} + \mathbf{P}_z \end{array} $$And in matrix form:$$\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \end{bmatrix} =\begin{bmatrix}\mathbf{L_X} \\\mathbf{L_Y} \\\mathbf{L_Z} \end{bmatrix} +\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \end{bmatrix}$$From classical mechanics, this is an example of [Galilean transformation](http://en.wikipedia.org/wiki/Galilean_transformation). Let's use Python to compute some numeric examples:
###Code
# Import the necessary libraries
import numpy as np
# suppress scientific notation for small numbers:
np.set_printoptions(precision=4, suppress=True)
###Output
_____no_output_____
###Markdown
For example, if the local coordinate system is translated by $\mathbf{L_G}=[1, 2, 3]$ in relation to the Global coordinate system, a point with coordinates $\mathbf{P_l}=[4, 5, 6]$ at the local coordinate system will have the position $\mathbf{P_G}=[5, 7, 9]$ at the Global coordinate system:
###Code
LG = np.array([1, 2, 3]) # Numpy array
Pl = np.array([4, 5, 6])
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
This operation also works if we have more than one point (NumPy broadcasts the operands to handle arrays with different dimensions):
###Code
Pl = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) # 2D array with 3 rows and 3 columns
PG = LG + Pl
PG
###Output
_____no_output_____
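###Markdown
The inverse operation is simply the subtraction of the translation vector. As a minimal check (a sketch assuming the arrays `LG` and `PG` computed above are still in memory), we can recover the coordinates in the local coordinate system:
###Code
# Recover the positions in the local coordinate system by subtracting the
# translation vector from the positions in the Global coordinate system:
PG - LG
###Output
_____no_output_____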
###Markdown
RotationA pure three-dimensional rotation of a $xyz$ (local) coordinate system in relation to other $\mathbf{XYZ}$ (Global) coordinate system and the position of a point in these two coordinate systems are illustrated in the next figure (remember that this is equivalent to describing a rotation between two rigid bodies). A point in three-dimensional space represented in two coordinate systems, with one system rotated. In analogy to the rotation in two dimensions, we can calculate the rotation matrix that describes the rotation of the $xyz$ (local) coordinate system in relation to the $\mathbf{XYZ}$ (Global) coordinate system using the direction cosines between the axes of the two coordinate systems:$$ \mathbf{R_{Gl}} = \begin{bmatrix}\cos\mathbf{X}x & \cos\mathbf{X}y & \cos\mathbf{X}z \\\cos\mathbf{Y}x & \cos\mathbf{Y}y & \cos\mathbf{Y}z \\\cos\mathbf{Z}x & \cos\mathbf{Z}y & \cos\mathbf{Z}z\end{bmatrix} $$Note however that for rotations around more than one axis, these angles will not lie in the main planes ($\mathbf{XY, YZ, ZX}$) of the $\mathbf{XYZ}$ coordinate system, as illustrated in the figure below for the direction angles of the $y$ axis only. Thus, the determination of these angles by simple inspection, as we have done for the two-dimensional case, would not be simple. Figure. Definition of direction angles for the $y$ axis of the local coordinate system in relation to the $\mathbf{XYZ}$ Global coordinate system.Note that the nine angles shown in the matrix above for the direction cosines are obviously redundant since only three angles are necessary to describe the orientation of a rigid body in the three-dimensional space. An important characteristic of angles in the three-dimensional space is that angles cannot be treated as vectors: the result of a sequence of rotations of a rigid body around different axes depends on the order of the rotations, as illustrated in the next figure. Figure. The result of a sequence of rotations around different axes of a coordinate system depends on the order of the rotations. In the first example (first row), the rotations are around a Global (fixed) coordinate system. In the second example (second row), the rotations are around a local (rotating) coordinate system.Let's focus now on how to understand rotations in the three-dimensional space, looking at the rotations between coordinate systems (or between rigid bodies). Later we will apply what we have learned to describe the position of a point in these different coordinate systems. Euler anglesThere are different ways to describe a three-dimensional rotation of a rigid body (or of a coordinate system). The most straightforward solution would probably be to use a [spherical coordinate system](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ReferenceFrame.ipynbSpherical-coordinate-system), but spherical coordinates would be difficult to give an anatomical or clinical interpretation. A solution that has been often employed in biomechanics to handle rotations in the three-dimensional space is to use Euler angles. Under certain conditions, Euler angles can have an anatomical interpretation, but this representation also has some caveats. 
Let's see the Euler angles now.[Leonhard Euler](https://en.wikipedia.org/wiki/Leonhard_Euler) in the XVIII century showed that two three-dimensional coordinate systems with a common origin can be related by a sequence of up to three elemental rotations about the axes of the local coordinate system, where no two successive rotations may be about the same axis, which now are known as [Euler (or Eulerian) angles](http://en.wikipedia.org/wiki/Euler_angles). Elemental rotationsFirst, let's see rotations around a fixed Global coordinate system as we did for the two-dimensional case. The next figure illustrates elemental rotations of the local coordinate system around each axis of the fixed Global coordinate system. Figure. Elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. Note that for better clarity, the axis around where the rotation occurs is shown perpendicular to this page for each elemental rotation. Rotations around the fixed coordinate systemThe rotation matrices for the elemental rotations around each axis of the fixed $\mathbf{XYZ}$ coordinate system (rotations of the local coordinate system in relation to the Global coordinate system) are shown next.Around $\mathbf{X}$ axis: $$ \mathbf{R_{Gl,\,X}} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix} $$Around $\mathbf{Y}$ axis: $$ \mathbf{R_{Gl,\,Y}} = \begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix} $$Around $\mathbf{Z}$ axis: $$ \mathbf{R_{Gl,\,Z}} = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix} $$These matrices are the rotation matrices for the case of two-dimensional coordinate systems plus the corresponding terms for the third axes of the local and Global coordinate systems, which are parallel. To understand why the terms for the third axes are 1's or 0's, for instance, remember they represent the cosine directors. The cosines between $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ for the elemental rotations around respectively the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes are all 1 because $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ are parallel ($\cos 0^o$). The cosines of the other elements are zero because the axis around where each rotation occurs is perpendicular to the other axes of the coordinate systems ($\cos 90^o$). 
Rotations around the local coordinate systemThe rotation matrices for the elemental rotations this time around each axis of the $xyz$ coordinate system (rotations of the Global coordinate system in relation to the local coordinate system), similarly to the two-dimensional case, are simply the transpose of the above matrices as shown next.Around $x$ axis: $$ \mathbf{R}_{\mathbf{lG},\,x} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix} $$Around $y$ axis: $$ \mathbf{R}_{\mathbf{lG},\,y} = \begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix} $$Around $z$ axis: $$ \mathbf{R}_{\mathbf{lG},\,z} = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix} $$Notice this is equivalent to instead of rotating the local coordinate system by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system; remember that $\cos(-\:\cdot)=\cos(\cdot)$ and $\sin(-\:\cdot)=-\sin(\cdot)$.The fact that we chose to rotate the local coordinate system by a counterclockwise (positive) angle in relation to the Global coordinate system is just a matter of convention. Sequence of elemental rotationsConsider now a sequence of elemental rotations around the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes of the fixed $\mathbf{XYZ}$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. This sequence of elemental rotations (each one of the local coordinate system with respect to the fixed Global coordinate system) is mathematically represented by a multiplication between the rotation matrices:$$ \begin{array}{l l}\mathbf{R_{Gl,\;XYZ}} & = \mathbf{R_{Z}} \mathbf{R_{Y}} \mathbf{R_{X}} \\\\ & = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix} \\\\ & =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma-\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:\cos\gamma+\sin\alpha\:\sin\gamma \;\;\; \\\cos\beta\:\sin\gamma \;&\;\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma-\sin\alpha\:\cos\gamma \;\;\; \\-\sin\beta \;&\; \sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array} $$Note the order of the matrices: the product runs from right to left, so the first rotation ($\mathbf{R_X}$) is the rightmost matrix. We can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
#import the necessary libraries
from IPython.core.display import Math, display
import sympy as sym
cos, sin = sym.cos, sym.sin
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz in relation to XYZ:
RX = sym.Matrix([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]])
RY = sym.Matrix([[cos(b), 0, sin(b)], [0, 1, 0], [-sin(b), 0, cos(b)]])
RZ = sym.Matrix([[cos(g), -sin(g), 0], [sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz in relation to XYZ:
RXYZ = RZ*RY*RX
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ}}=') + sym.latex(RXYZ, mat_str='matrix')))
###Output
_____no_output_____
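###Markdown
Before evaluating this matrix numerically for a particular set of angles, here is a minimal numeric illustration (using only NumPy and elemental rotations of $90^o$, values chosen just for this sketch) of the earlier statement that the result of a sequence of rotations depends on the order of the rotations:
###Code
# Elemental rotation matrices of the Global coordinate system for 90 degrees:
RX90 = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])   # R_Gl,X(90 deg)
RY90 = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]])   # R_Gl,Y(90 deg)
RZ90 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])   # R_Gl,Z(90 deg)
# The two orders give different rotation matrices:
print('Sequence XYZ:\n', np.dot(RZ90, np.dot(RY90, RX90)))
print('Sequence ZYX:\n', np.dot(RX90, np.dot(RY90, RZ90)))
###Output
_____no_output_____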
###Markdown
For instance, we can calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $\mathbf{X,Y,Z}$:
###Code
R = sym.lambdify((a, b, g), RXYZ, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R_{Gl,\,XYZ\,}}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Examining the matrix above and the correspondent previous figure, one can see they agree: the rotated $x$ axis (first column of the above matrix) has value -1 in the $\mathbf{Z}$ direction $[0,0,-1]$, the rotated $y$ axis (second column) is at the $\mathbf{Y}$ direction $[0,1,0]$, and the rotated $z$ axis (third column) is at the $\mathbf{X}$ direction $[1,0,0]$.We also can calculate the sequence of elemental rotations around the $x$, $y$, $z$ axes of the rotating $xyz$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of a second $xyz$ local coordinate system around each axis, $x$, $y$, $z$, of the rotating $xyz$ coordinate system.Likewise, this sequence of elemental rotations (each one of the local coordinate system with respect to the rotating local coordinate system) is mathematically represented by a multiplication between the rotation matrices (which are the inverse of the matrices for the rotations around $\mathbf{X,Y,Z}$ as we saw earlier):$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz} & = \mathbf{R_{z}} \mathbf{R_{y}} \mathbf{R_{x}} \\\\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix} \\\\& =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma+\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:\cos\gamma-\sin\alpha\:\sin\gamma \;\;\; \\-\cos\beta\:\sin\gamma \;&\;-\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma+\sin\alpha\:\cos\gamma \;\;\; \\\sin\beta \;&\; -\sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array} $$As before, the order of the matrices is from right to left. Once again, we can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rx = sym.Matrix([[1, 0, 0], [0, cos(a), sin(a)], [0, -sin(a), cos(a)]])
Ry = sym.Matrix([[cos(b), 0, -sin(b)], [0, 1, 0], [sin(b), 0, cos(b)]])
Rz = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz' in relation to xyz:
Rxyz = Rz*Ry*Rx
Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz}=') + sym.latex(Rxyz, mat_str='matrix'))
###Output
_____no_output_____
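###Markdown
As a quick aside, the statement that each local elemental rotation matrix is the transpose of the corresponding Global one can also be checked numerically; a minimal sketch with NumPy, using an arbitrary angle of $30^o$ and the rotation around the $z$ (and $\mathbf{Z}$) axis:
###Code
# Numeric check that R_lG,z(gamma) is the transpose of R_Gl,Z(gamma):
gn = np.deg2rad(30)  # arbitrary angle, used only for this check
RGlZ = np.array([[np.cos(gn), -np.sin(gn), 0],
                 [np.sin(gn),  np.cos(gn), 0],
                 [0,           0,          1]])
RlGz = np.array([[ np.cos(gn), np.sin(gn), 0],
                 [-np.sin(gn), np.cos(gn), 0],
                 [0,           0,          1]])
print('R_lG,z equals the transpose of R_Gl,Z:', np.allclose(RlGz, RGlZ.T))
###Output
_____no_output_____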
###Markdown
For instance, let's calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $x,y,z$:
###Code
R = sym.lambdify((a, b, g), Rxyz, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Once again, let's compare the above matrix and the corresponding previous figure to see if it makes sense. But remember that this matrix is the Global-to-local rotation matrix, $\mathbf{R}_{\mathbf{lG},\,xyz}$, where the coordinates of the local basis' versors are rows, not columns, in this matrix. With this detail in mind, one can see that the previous figure and matrix also agree: the rotated $x$ axis (first row of the above matrix) is at the $\mathbf{Z}$ direction $[0,0,1]$, the rotated $y$ axis (second row) is at the $\mathbf{-Y}$ direction $[0,-1,0]$, and the rotated $z$ axis (third row) is at the $\mathbf{X}$ direction $[1,0,0]$. In fact, this example didn't serve to distinguish versors as rows or columns because the $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrix above is symmetric! Let's look at the resultant matrix for the example above after only the first two rotations, $\mathbf{R}_{\mathbf{lG},\,xy}$, to understand this difference:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xy\,}(90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Comparing this matrix with the third plot in the figure, we see that the coordinates of versor $x$ in the Global coordinate system are $[0,1,0]$, i.e., local axis $x$ is aligned with Global axis $Y$, and this versor is indeed the first row, not first column, of the matrix above. Confer the other two rows. What are then in the columns of the local-to-Global rotation matrix? The columns are the coordinates of Global basis' versors in the local coordinate system! For example, the first column of the matrix above is the coordinates of $X$, which is aligned with $z$: $[0,0,1]$. Rotations in a coordinate system is equivalent to minus rotations in the other coordinate systemRemember that we saw for the elemental rotations that it's equivalent to instead of rotating the local coordinate system, $xyz$, by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system, $\mathbf{XYZ}$, by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system. The same property applies to a sequence of rotations: rotations of $xyz$ in relation to $\mathbf{XYZ}$ by $\alpha, \beta, \gamma$ result in the same matrix as rotations of $\mathbf{XYZ}$ in relation to $xyz$ by $-\alpha, -\beta, -\gamma$: $$ \begin{array}{l l}\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) & = \mathbf{R_{Gl,\,Z}}(\gamma)\, \mathbf{R_{Gl,\,Y}}(\beta)\, \mathbf{R_{Gl,\,X}}(\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,z\,}(-\gamma)\, \mathbf{R}_{\mathbf{lG},\,y\,}(-\beta)\, \mathbf{R}_{\mathbf{lG},\,x\,}(-\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)\end{array}$$Confer that by examining the $\mathbf{R_{Gl,\,XYZ}}$ and $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrices above.Let's verify this property with Sympy:
###Code
RXYZ = RZ*RY*RX
# Rotation matrix of xyz in relation to XYZ:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) =')))
display(Math(sym.latex(RXYZ, mat_str='matrix')))
# Elemental rotation matrices of XYZ in relation to xyz and negate all angles:
Rx_neg = sym.Matrix([[1, 0, 0], [0, cos(-a), -sin(-a)], [0, sin(-a), cos(-a)]]).T
Ry_neg = sym.Matrix([[cos(-b), 0, sin(-b)], [0, 1, 0], [-sin(-b), 0, cos(-b)]]).T
Rz_neg = sym.Matrix([[cos(-g), -sin(-g), 0], [sin(-g), cos(-g), 0], [0, 0, 1]]).T
# Rotation matrix of XYZ in relation to xyz:
Rxyz_neg = Rz_neg*Ry_neg*Rx_neg
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma) =')))
display(Math(sym.latex(Rxyz_neg, mat_str='matrix')))
# Check that the two matrices are equal:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) \;==\;' + \
r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)')))
RXYZ == Rxyz_neg
###Output
_____no_output_____
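###Markdown
The same property can also be verified numerically for arbitrary angles; a minimal sketch using the symbolic matrices `RXYZ` and `Rxyz` defined above and angles of $30^o, 45^o, 60^o$ (values chosen only for this check):
###Code
# R_Gl,XYZ(alpha, beta, gamma) equals R_lG,xyz(-alpha, -beta, -gamma):
angn = np.deg2rad([30, 45, 60])
R1 = sym.lambdify((a, b, g), RXYZ, 'numpy')(angn[0], angn[1], angn[2])
R2 = sym.lambdify((a, b, g), Rxyz, 'numpy')(-angn[0], -angn[1], -angn[2])
print('Same rotation matrix:', np.allclose(R1, R2))
###Output
_____no_output_____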
###Markdown
Rotations in a coordinate system is the transpose of inverse order of rotations in the other coordinate systemThere is another property of the rotation matrices for the different coordinate systems: the rotation matrix, for example from the Global to the local coordinate system for the $xyz$ sequence, is just the transpose of the rotation matrix for the inverse operation (from the local to the Global coordinate system) of the inverse sequence ($\mathbf{ZYX}$) and vice-versa:$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz}(\alpha,\beta,\gamma) & = \mathbf{R}_{\mathbf{lG},\,z\,} \mathbf{R}_{\mathbf{lG},\,y\,} \mathbf{R}_{\mathbf{lG},\,x} \\& = \mathbf{R_{Gl,\,Z\,}^{-1}} \mathbf{R_{Gl,\,Y\,}^{-1}} \mathbf{R_{Gl,\,X\,}^{-1}} \\& = \mathbf{R_{Gl,\,Z\,}^{T}} \mathbf{R_{Gl,\,Y\,}^{T}} \mathbf{R_{Gl,\,X\,}^{T}} \\& = (\mathbf{R_{Gl,\,X\,}} \mathbf{R_{Gl,\,Y\,}} \mathbf{R_{Gl,\,Z}})^\mathbf{T} \\& = \mathbf{R_{Gl,\,ZYX\,}^{T}}(\gamma,\beta,\alpha)\end{array}$$Where we used the properties that the inverse of the rotation matrix (which is orthonormal) is its transpose and that the transpose of a product of matrices is equal to the product of their transposes in reverse order.Let's verify this property with Sympy:
###Code
RZYX = RX*RY*RZ
Rxyz = Rz*Ry*Rx
display(Math(sym.latex(r'\mathbf{R_{Gl,\,ZYX\,}^T}=') + sym.latex(RZYX.T, mat_str='matrix')))
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(\alpha,\beta,\gamma) \,==\,' + \
r'\mathbf{R_{Gl,\,ZYX\,}^T}(\gamma,\beta,\alpha)')))
Rxyz == RZYX.T
###Output
_____no_output_____
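###Markdown
The property used above, that the inverse of a rotation matrix is its transpose, follows from the fact that rotation matrices are orthonormal; a minimal numeric check with arbitrary angles (in radians, chosen only for this sketch):
###Code
# A rotation matrix is orthonormal: its inverse equals its transpose and its
# determinant is equal to one:
Rnum = sym.lambdify((a, b, g), RXYZ, 'numpy')(0.1, 0.7, -1.2)
print('R*R.T equals the identity matrix:', np.allclose(np.dot(Rnum, Rnum.T), np.eye(3)))
print('inv(R) equals R.T:', np.allclose(np.linalg.inv(Rnum), Rnum.T))
print('det(R) equals 1:', np.allclose(np.linalg.det(Rnum), 1))
###Output
_____no_output_____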
###Markdown
Sequence of rotations of a VectorWe saw in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.jupyter.org/github/demotu/BMC/blob/master/notebooks/Transformation2D.ipynbRotation-of-a-Vector) that the rotation matrix can also be used to rotate a vector (in fact, a point, image, solid, etc.) by a given angle around an axis of the coordinate system. Let's investigate that for the 3D case using the example earlier where a book was rotated in different orders and around the Global and local coordinate systems. Before any rotation, the point shown in that figure as a round black dot on the spine of the book has coordinates $\mathbf{P}=[0, 1, 2]$ (the book has thickness 0, width 1, and height 2). After the first sequence of rotations shown in the figure (rotated around $X$ and $Y$ by $90^0$ each time), $\mathbf{P}$ has coordinates $\mathbf{P}=[1, -2, 0]$ in the global coordinate system. Let's verify that:
###Code
P = np.array([[0, 1, 2]]).T
RXY = RY*RX
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. -2. 0.]]
###Markdown
As expected. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, but still around the Global coordinate system.Although we are performing vector rotation, where we don't need the concept of transformation between coordinate systems, in the example above we used the local-to-Global rotation matrix, $\mathbf{R_{Gl}}$. As we saw in the notebook for the 2D transformation, when we use this matrix, it performs a counter-clockwise (positive) rotation. If we want to rotate the vector in the clockwise (negative) direction, we can use the very same rotation matrix entering a negative angle or we can use the inverse rotation matrix, the Global-to-local rotation matrix, $\mathbf{R_{lG}}$ and a positive (negative of negative) angle, because $\mathbf{R_{Gl}}(\alpha) = \mathbf{R_{lG}}(-\alpha)$, but bear in mind that even in this latter case we are rotating around the Global coordinate system! Consider now that we want to deduce algebraically the position of the point $\mathbf{P}$ after the rotations around the local coordinate system as shown in the second set of examples in the figure with the sequence of book rotations. The point has the same initial position, $\mathbf{P}=[0, 1, 2]$, and after the rotations around $x$ and $y$ by $90^0$ each time, what is the position of this point? It's implicit in this question that the new desired position is in the Global coordinate system because the local coordinate system rotates with the book and the point never changes its position in the local coordinate system. So, by inspection of the figure, the new position of the point is $\mathbf{P1}=[2, 0, 1]$. Let's naively try to deduce this position by repeating the steps as before:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. 2. -0.]]
###Markdown
The wrong answer. The problem is that we defined the rotation of a vector using the local-to-Global rotation matrix. One correct solution for this problem is to continue using the multiplication of the Global-to-local rotation matrices, $\mathbf{R}_{xy} = \mathbf{R}_y\,\mathbf{R}_x$, then transpose $\mathbf{R}_{xy}$ to obtain the local-to-Global rotation matrix, $\mathbf{R_{XY}}=\mathbf{R^T}_{xy}$, and then rotate the vector using this matrix:
###Code
Rxy = Ry*Rx
RXY = Rxy.T
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. Another solution is to understand that when using the Global-to-local rotation matrix, counter-clockwise rotations (as performed with the book in the figure) are negative, not positive, and that in this case the order of the matrix multiplication is inverted, for example, it should be $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$ (an added underscore to remind us this is not the convention adopted here).
###Code
R_xy = Rx*Ry
R = sym.lambdify((a, b), R_xy, 'numpy')
R = R(-np.pi/2, -np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, around the local coordinate system.In fact, you will find elsewhere texts about rotations in 3D adopting this latter convention as the standard, i.e., they introduce the Global-to-local rotation matrix and describe sequence of rotations algebraically as matrix multiplication in the direct order, $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$, the inverse we have done in this text. It's all a matter of convention, just that. The 12 different sequences of Euler anglesThe Euler angles are defined in terms of rotations around a rotating local coordinate system. As we saw for the sequence of rotations around $x, y, z$, the axes of the local rotated coordinate system are not fixed in space because after the first elemental rotation, the other two axes rotate. Other sequences of rotations could be produced without combining axes of the two different coordinate systems (Global and local) for the definition of the rotation axes. There is a total of 12 different sequences of three elemental rotations that are valid and may be used for describing the rotation of a coordinate system with respect to another coordinate system:$$ xyz \quad xzy \quad yzx \quad yxz \quad zxy \quad zyx $$$$ xyx \quad xzx \quad yzy \quad yxy \quad zxz \quad zyz $$The first six sequences (first row) are all around different axes, they are usually referred as Cardan or Tait–Bryan angles. The other six sequences (second row) have the first and third rotations around the same axis, but keep in mind that the axis for the third rotation is not at the same place anymore because it changed its orientation after the second rotation. The sequences with repeated axes are known as proper or classic Euler angles.Which order to use it is a matter of convention, but because the order affects the results, it's fundamental to follow a convention and report it. In Engineering Mechanics (including Biomechanics), the $xyz$ order is more common; in Physics the $zxz$ order is more common (but the letters chosen to refer to the axes are arbitrary, what matters is the directions they represent). In Biomechanics, the order for the Cardan angles is most often based on the angle of most interest or of most reliable measurement. Accordingly, the axis of flexion/extension is typically selected as the first axis, the axis for abduction/adduction is the second, and the axis for internal/external rotation is the last one. We will see about this order later. The $zyx$ order is commonly used to describe the orientation of a ship or aircraft and the rotations are known as the nautical angles: yaw, pitch and roll, respectively (see next figure). Figure. The principal axes of an aircraft and the names for the rotations around these axes (image from Wikipedia). If instead of rotations around the rotating local coordinate system we perform rotations around the fixed Global coordinate system, we will have other 12 different sequences of three elemental rotations, these are called simply rotation angles. 
So, in total there are 24 possible different sequences of three elemental rotations, but the 24 orders are not independent; with the 12 different sequences of Euler angles at the local coordinate system we can obtain the other 12 sequences at the Global coordinate system. The Python function `euler_rotmat.py` (code at the end of this text) determines the rotation matrix in algebraic form for any of the 24 different sequences (and sequences with only one or two axes can be input). This function also determines the rotation matrix in numeric form if a list of up to three angles is input. For instance, the rotation matrix in algebraic form for the $zxz$ order of Euler angles at the local coordinate system and the corresponding rotation matrix in numeric form after three elemental rotations by $90^o$ each are:
###Code
import sys
sys.path.insert(1, r'./../functions')
from euler_rotmat import euler_rotmat
Ra, Rn = euler_rotmat(order='zxz', frame='local', angles=[90, 90, 90])
###Output
_____no_output_____
###Markdown
Line of nodesThe second axis of rotation in the rotating coordinate system is also referred as the nodal axis or line of nodes; this axis coincides with the intersection of two perpendicular planes, one from each Global (fixed) and local (rotating) coordinate systems. The figure below shows an example of rotations and the nodal axis for the $xyz$ sequence of the Cardan angles. Figure. First row: example of rotations for the $xyz$ sequence of the Cardan angles. The Global (fixed) $XYZ$ coordinate system is shown in green, the local (rotating) $xyz$ coordinate system is shown in blue. The nodal axis (N, shown in red) is defined by the intersection of the $YZ$ and $xy$ planes and all rotations can be described in relation to this nodal axis or to a perpendicular axis to it. Second row: starting from no rotation, the local coordinate system is rotated by $\alpha$ around the $x$ axis, then by $\beta$ around the rotated $y$ axis, and finally by $\gamma$ around the twice rotated $z$ axis. Note that the line of nodes coincides with the $y$ axis for the second rotation. Determination of the Euler anglesOnce a convention is adopted, the corresponding three Euler angles of rotation can be found. For example, for the $\mathbf{R}_{xyz}$ rotation matrix:
###Code
R = euler_rotmat(order='xyz', frame='local')
###Output
_____no_output_____
###Markdown
The corresponding Cardan angles for the `xyz` sequence can be given by:$$ \begin{array}{}\alpha = \arctan\left(\dfrac{\sin(\alpha)}{\cos(\alpha)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{21}}{\;\;\;\mathbf{R}_{22}}\right) \\\\\beta = \arctan\left(\dfrac{\sin(\beta)}{\cos(\beta)}\right) = \arctan\left(\dfrac{\mathbf{R}_{20}}{\sqrt{\mathbf{R}_{00}^2+\mathbf{R}_{10}^2}}\right) \\ \\\gamma = \arctan\left(\dfrac{\sin(\gamma)}{\cos(\gamma)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{10}}{\;\;\;\mathbf{R}_{00}}\right)\end{array} $$Note that we prefer to use the mathematical function `arctan` rather than simply `arcsin` because the latter cannot for example distinguish $45^o$ from $135^o$ and also for better numerical accuracy. See the text [Angular kinematics in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/AngularKinematics2D.ipynb) for more on these issues.And here is a Python function to compute the Euler angles of rotations from the Global to the local coordinate system for the $xyz$ Cardan sequence:
###Code
def euler_angles_from_rot_xyz(rot_matrix, unit='deg'):
""" Compute Euler angles from rotation matrix in the xyz sequence."""
import numpy as np
R = np.array(rot_matrix, copy=False).astype(np.float64)[:3, :3]
angles = np.zeros(3)
angles[0] = np.arctan2(-R[2, 1], R[2, 2])
angles[1] = np.arctan2( R[2, 0], np.sqrt(R[0, 0]**2 + R[1, 0]**2))
angles[2] = np.arctan2(-R[1, 0], R[0, 0])
if unit[:3].lower() == 'deg': # convert from rad to degree
angles = np.rad2deg(angles)
return angles
###Output
_____no_output_____
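###Markdown
To illustrate why `arctan2` is preferred over `arcsin` in the formulas above, here is a small sketch showing that `arcsin` returns the same value for $45^o$ and $135^o$, whereas `arctan2`, which also uses the sign of the cosine, does not:
###Code
# arcsin cannot distinguish 45 deg from 135 deg, but arctan2 can:
for ang in [45, 135]:
    s, c = np.sin(np.deg2rad(ang)), np.cos(np.deg2rad(ang))
    print('angle: %3d, arcsin: %5.1f, arctan2: %5.1f'
          %(ang, np.rad2deg(np.arcsin(s)), np.rad2deg(np.arctan2(s, c))))
###Output
_____no_output_____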
###Markdown
For instance, consider sequential rotations of 45$^o$ around $x,y,z$. The resultant rotation matrix is:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[45, 45, 45], showA=False)
###Output
_____no_output_____
###Markdown
Let's check that calculating back the Cardan angles from this rotation matrix using the `euler_angles_from_rot_xyz()` function:
###Code
euler_angles_from_rot_xyz(Rn, unit='deg')
###Output
_____no_output_____
###Markdown
We could implement a function to calculate the Euler angles for any of the 12 sequences (in fact, plus another 12 sequences if we consider all the rotations from and to the two coordinate systems), but this is tedious. There is a smarter solution using the concept of [quaternion](http://en.wikipedia.org/wiki/Quaternion), but we wont see that now. Let's see a problem with using Euler angles known as gimbal lock. Gimbal lock[Gimbal lock](http://en.wikipedia.org/wiki/Gimbal_lock) is the loss of one degree of freedom in a three-dimensional coordinate system that occurs when an axis of rotation is placed parallel with another previous axis of rotation and two of the three rotations will be around the same direction given a certain convention of the Euler angles. This "locks" the system into rotations in a degenerate two-dimensional space. The system is not really locked in the sense it can't be moved or reach the other degree of freedom, but it will need an extra rotation for that. For instance, let's look at the $zxz$ sequence of rotations by the angles $\alpha, \beta, \gamma$:$$ \begin{array}{l l}\mathbf{R}_{zxz} & = \mathbf{R_{z}} \mathbf{R_{x}} \mathbf{R_{z}} \\ \\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\beta & \sin\beta \\0 & -\sin\beta & \cos\beta\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array} $$Which results in:
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rz = sym.Matrix([[cos(a), sin(a), 0], [-sin(a), cos(a), 0], [0, 0, 1]])
Rx = sym.Matrix([[1, 0, 0], [0, cos(b), sin(b)], [0, -sin(b), cos(b)]])
Rz2 = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix for the zxz sequence:
Rzxz = Rz2*Rx*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}=') + sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Let's examine what happens with this rotation matrix when the rotation around the second axis ($x$) by $\beta$ is zero:$$ \begin{array}{l l}\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma) = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array} $$The second matrix is the identity matrix and has no effect on the product of the matrices, which will be:
###Code
Rzxz = Rz2*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Which simplifies to:
###Code
Rzxz = sym.simplify(Rzxz)
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Despite different values of $\alpha$ and $\gamma$ the result is a single rotation around the $z$ axis given by the sum $\alpha+\gamma$. In this case, of the three degrees of freedom one was lost (the other degree of freedom was set by $\beta=0$). For movement analysis, this means for example that one angle will be undetermined because everything we know is the sum of the two angles obtained from the rotation matrix. We can set the unknown angle to zero but this is arbitrary.In fact, we already dealt with another example of gimbal lock when we looked at the $xyz$ sequence with rotations by $90^o$. See the figure representing these rotations again and perceive that the first and third rotations were around the same axis because the second rotation was by $90^o$. Let's do the matrix multiplication replacing only the second angle by $90^o$ (and let's use the `euler_rotmat.py`:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[None, 90., None], showA=False)
###Output
_____no_output_____
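###Markdown
The loss of one degree of freedom can also be illustrated numerically: with $\beta=90^o$ in the local $xyz$ sequence, only the sum $\alpha+\gamma$ determines the rotation matrix. In the sketch below (a hypothetical helper function and arbitrary angle values, used only for this illustration), two different pairs of $\alpha$ and $\gamma$ with the same sum produce exactly the same rotation matrix, so the individual angles cannot be recovered from the matrix:
###Code
def R_lG_xyz(alpha, beta, gamma):
    """Rotation matrix of the local xyz sequence (angles in radians)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, sa], [0, -sa, ca]])
    Ry = np.array([[cb, 0, -sb], [0, 1, 0], [sb, 0, cb]])
    Rz = np.array([[cg, sg, 0], [-sg, cg, 0], [0, 0, 1]])
    return np.dot(Rz, np.dot(Ry, Rx))

R1 = R_lG_xyz(np.deg2rad(20), np.deg2rad(90), np.deg2rad(40))  # alpha + gamma = 60 deg
R2 = R_lG_xyz(np.deg2rad(50), np.deg2rad(90), np.deg2rad(10))  # alpha + gamma = 60 deg
print('Same rotation matrix for different angles:', np.allclose(R1, R2))
###Output
_____no_output_____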
###Markdown
Once again, one degree of freedom was lost and we will not be able to uniquely determine the three angles for the given rotation matrix and sequence.Possible solutions to avoid the gimbal lock are: choose a different sequence; do not rotate the system by the angle that puts the system in gimbal lock (in the examples above, avoid $\beta=90^o$); or add an extra fourth parameter in the description of the rotation angles. But if we have a physical system where we measure or specify exactly three Euler angles in a fixed sequence to describe or control it, and we can't avoid the system to assume certain angles, then we might have to say "Houston, we have a problem". A famous situation where such a problem occurred was during the Apollo 13 mission. This is an actual conversation between crew and mission control during the Apollo 13 mission (Corke, 2011):>`Mission clock: 02 08 12 47` **Flight**: *Go, Guidance.* **Guido**: *He’s getting close to gimbal lock there.* **Flight**: *Roger. CapCom, recommend he bring up C3, C4, B3, B4, C1 and C2 thrusters, and advise he’s getting close to gimbal lock.* **CapCom**: *Roger.* *Of note, it was not a gimbal lock that caused the accident with the the Apollo 13 mission, the problem was an oxygen tank explosion.* Determination of the rotation matrixA typical way to determine the rotation matrix for a rigid body in biomechanics is to use motion analysis to measure the position of at least three non-collinear markers placed on the rigid body, and then calculate a basis with these positions, analogue to what we have described in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/Transformation2D.ipynb). BasisIf we have the position of three markers: **m1**, **m2**, **m3**, a basis (formed by three orthogonal versors) can be found as: - First axis, **v1**, the vector **m2-m1**; - Second axis, **v2**, the cross product between the vectors **v1** and **m3-m1**; - Third axis, **v3**, the cross product between the vectors **v1** and **v2**. Then, each of these vectors are normalized resulting in three orthogonal versors. For example, given the positions m1 = [1,0,0], m2 = [0,1,0], m3 = [0,0,1], a basis can be found:
###Code
m1 = np.array([1, 0, 0])
m2 = np.array([0, 1, 0])
m3 = np.array([0, 0, 1])
v1 = m2 - m1
v2 = np.cross(v1, m3 - m1)
v3 = np.cross(v1, v2)
print('Versors:')
v1 = v1/np.linalg.norm(v1)
print('v1 =', v1)
v2 = v2/np.linalg.norm(v2)
print('v2 =', v2)
v3 = v3/np.linalg.norm(v3)
print('v3 =', v3)
print('\nNorm of each versor:\n',
np.linalg.norm(np.cross(v1, v2)),
np.linalg.norm(np.cross(v1, v3)),
np.linalg.norm(np.cross(v2, v3)))
###Output
Versors:
v1 = [-0.7071 0.7071 0. ]
v2 = [ 0.5774 0.5774 0.5774]
v3 = [ 0.4082 0.4082 -0.8165]
Norm of each versor:
1.0 1.0 1.0
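###Markdown
Another way to check that these versors form an orthogonal basis is to verify that their pairwise dot products are (numerically) zero:
###Code
# Pairwise dot products between the versors should be zero:
print('Dot products between versors:',
      np.dot(v1, v2), np.dot(v1, v3), np.dot(v2, v3))
###Output
_____no_output_____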
###Markdown
Remember from the text [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/Transformation2D.ipynb) that the versors of this basis are the columns of the $\mathbf{R_{Gl}}$ and the rows of the $\mathbf{R_{lG}}$ rotation matrices, for instance:
###Code
RlG = np.array([v1, v2, v3])
print('Rotation matrix from Global to local coordinate system:\n', RlG)
###Output
Rotation matrix from Global to local coordinate system:
[[-0.7071 0.7071 0. ]
[ 0.5774 0.5774 0.5774]
[ 0.4082 0.4082 -0.8165]]
###Markdown
And the corresponding angles of rotation using the $xyz$ sequence are:
###Code
euler_angles_from_rot_xyz(RlG)
###Output
_____no_output_____
###Markdown
These angles don't mean anything now because they are angles of the axes of the arbitrary basis we computed. In biomechanics, if we want an anatomical interpretation of the coordinate system orientation, we define the versors of the basis oriented with anatomical axes (e.g., for the shoulder, one versor would be aligned with the long axis of the upper arm). We will see how to perform this computation later. Now we will combine translation and rotation in a single transformation. Translation and RotationConsider the case where the local coordinate system is translated and rotated in relation to the Global coordinate system as illustrated in the next figure. Figure. A point in three-dimensional space represented in two coordinate systems, with one system translated and rotated. The position of point $\mathbf{P}$ originally described in the local coordinate system, but now described in the Global coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} $$This means that we first *disrotate* the local coordinate system and then correct for the translation between the two coordinate systems. Note that we can't invert this order: the point position is expressed in the local coordinate system and we can't add this vector to another vector expressed in the Global coordinate system, first we have to convert the vectors to the same coordinate system.If now we want to find the position of a point at the local coordinate system given its position in the Global coordinate system, the rotation matrix and the translation vector, we have to invert the expression above:$$ \begin{array}{l l}\mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{R_{Gl}^{-1}}\cdot\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l}\right) \implies \\\\\mathbf{R_{Gl}^{-1}}\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\mathbf{L_G} + \mathbf{R_{Gl}^{-1}}\mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{P_l} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{P_G}-\mathbf{L_G}\right) = \mathbf{R_{Gl}^T}\left(\mathbf{P_G}-\mathbf{L_G}\right) \;\;\;\;\; \text{or} \;\;\;\;\; \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) \end{array} $$The expression above indicates that to perform the inverse operation, to go from the Global to the local coordinate system, we first translate and then rotate the coordinate system. Transformation matrixIt is possible to combine the translation and rotation operations in only one matrix, called the transformation matrix:$$ \begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} =\begin{bmatrix}. & . & . & \mathbf{L_{X}} \\. & \mathbf{R_{Gl}} & . & \mathbf{L_{Y}} \\. & . & . 
& \mathbf{L_{Z}} \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \\1\end{bmatrix} $$Or simply:$$ \mathbf{P_G} = \mathbf{T_{Gl}}\mathbf{P_l} $$Remember that in general the transformation matrix is not orthonormal, i.e., its inverse is not equal to its transpose.The inverse operation, to express the position at the local coordinate system in terms of the Global reference system, is:$$ \mathbf{P_l} = \mathbf{T_{Gl}^{-1}}\mathbf{P_G} $$And in matrix form:$$ \begin{bmatrix}\mathbf{P_x} \\\mathbf{P_y} \\\mathbf{P_z} \\1\end{bmatrix} =\begin{bmatrix}\cdot & \cdot & \cdot & \cdot \\\cdot & \mathbf{R^{-1}_{Gl}} & \cdot & -\mathbf{R^{-1}_{Gl}}\:\mathbf{L_G} \\\cdot & \cdot & \cdot & \cdot \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} $$ Example with actual motion analysis data *The data for this example is taken from page 183 of David Winter's book.* Consider the following marker positions placed on a leg (described in the laboratory coordinate system with coordinates $x, y, z$ in cm, the $x$ axis points forward and the $y$ axes points upward): lateral malleolus (**lm** = [2.92, 10.10, 18.85]), medial malleolus (**mm** = [2.71, 10.22, 26.52]), fibular head (**fh** = [5.05, 41.90, 15.41]), and medial condyle (**mc** = [8.29, 41.88, 26.52]). Define the ankle joint center as the centroid between the **lm** and **mm** markers and the knee joint center as the centroid between the **fh** and **mc** markers. An anatomical coordinate system for the leg can be defined as: the quasi-vertical axis ($y$) passes through the ankle and knee joint centers; a temporary medio-lateral axis ($z$) passes through the two markers on the malleolus, an anterior-posterior as the cross product between the two former calculated orthogonal axes, and the origin at the ankle joint center. a) Calculate the anatomical coordinate system for the leg as described above. b) Calculate the rotation matrix and the translation vector for the transformation from the anatomical to the laboratory coordinate system. c) Calculate the position of each marker and of each joint center at the anatomical coordinate system. d) Calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent).
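Before working through the example, here is a minimal sketch of how the $4\times4$ transformation matrix described above and its inverse can be assembled and applied with NumPy (the rotation, a $45^o$ elemental rotation around $\mathbf{Z}$, and the translation vector below are made-up values used only for this illustration):
###Code
# Assemble a homogeneous transformation matrix T_Gl from a rotation matrix and
# a translation vector (made-up values, only to illustrate the structure):
angd = np.deg2rad(45)
R_demo = np.array([[np.cos(angd), -np.sin(angd), 0],
                   [np.sin(angd),  np.cos(angd), 0],
                   [0,             0,            1]])
L_demo = np.array([1, 2, 3])
T_Gl = np.eye(4)
T_Gl[:3, :3] = R_demo
T_Gl[:3, 3] = L_demo
# Inverse transformation: transposed rotation and translation -R.T * L:
T_lG = np.eye(4)
T_lG[:3, :3] = R_demo.T
T_lG[:3, 3] = -np.dot(R_demo.T, L_demo)
P_demo = np.array([1, 0, 0, 1])  # point in the local system (homogeneous coordinates)
print('P in the Global system:', np.dot(T_Gl, P_demo))
print('Back to the local system:', np.dot(T_lG, np.dot(T_Gl, P_demo)))
###Output
_____no_output_____
###Markdown
Note that the solution of the example below works directly with the rotation matrix and the translation vector, but it is equivalent to using the $4\times4$ transformation matrix.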
###Code
# calculation of the joint centers
mm = np.array([2.71, 10.22, 26.52])
lm = np.array([2.92, 10.10, 18.85])
fh = np.array([5.05, 41.90, 15.41])
mc = np.array([8.29, 41.88, 26.52])
ajc = (mm + lm)/2
kjc = (fh + mc)/2
print('Position of the ankle joint center:', ajc)
print('Position of the knee joint center:', kjc)
# calculation of the anatomical coordinate system axes (basis)
y = kjc - ajc
x = np.cross(y, mm - lm)
z = np.cross(x, y)
print('Versors:')
x = x/np.linalg.norm(x)
y = y/np.linalg.norm(y)
z = z/np.linalg.norm(z)
print('x =', x)
print('y =', y)
print('z =', z)
Oleg = ajc
print('\nOrigin =', Oleg)
# Rotation matrices
RGl = np.array([x, y , z]).T
print('Rotation matrix from the anatomical to the laboratory coordinate system:\n', RGl)
RlG = RGl.T
print('\nRotation matrix from the laboratory to the anatomical coordinate system:\n', RlG)
# Translational vector
OG = np.array([0, 0, 0]) # Laboratory coordinate system origin
LG = Oleg - OG
print('Translational vector from the anatomical to the laboratory coordinate system:\n', LG)
###Output
Translational vector from the anatomical to the laboratory coordinate system:
[ 2.815 10.16 22.685]
###Markdown
To get the coordinates from the laboratory (global) coordinate system to the anatomical (local) coordinate system:$$ \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) $$
###Code
# position of each marker and of each joint center at the anatomical coordinate system
mml = np.dot(RlG, (mm - LG)) # equivalent to the algebraic expression RlG*(mm - LG).T
lml = np.dot(RlG, (lm - LG))
fhl = np.dot(RlG, (fh - LG))
mcl = np.dot(RlG, (mc - LG))
ajcl = np.dot(RlG, (ajc - LG))
kjcl = np.dot(RlG, (kjc - LG))
print('Coordinates of mm in the anatomical system:\n', mml)
print('Coordinates of lm in the anatomical system:\n', lml)
print('Coordinates of fh in the anatomical system:\n', fhl)
print('Coordinates of mc in the anatomical system:\n', mcl)
print('Coordinates of kjc in the anatomical system:\n', kjcl)
print('Coordinates of ajc in the anatomical system (origin):\n', ajcl)
###Output
Coordinates of mm in the anatomical system:
[-0. -0.1592 3.8336]
Coordinates of lm in the anatomical system:
[-0. 0.1592 -3.8336]
Coordinates of fh in the anatomical system:
[ -1.7703 32.1229 -5.5078]
Coordinates of mc in the anatomical system:
[ 1.7703 31.8963 5.5078]
Coordinates of kjc in the anatomical system:
[ 0. 32.0096 0. ]
Coordinates of ajc in the anatomical system (origin):
[ 0. 0. 0.]
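###Markdown
As a final check, since this is a rigid (Euclidean) transformation, the distances between markers must be the same in the two coordinate systems; a short sketch comparing two of these distances:
###Code
# A rigid transformation preserves distances between points:
print('Distance between lm and mm in the laboratory system: ', np.linalg.norm(mm - lm))
print('Distance between lm and mm in the anatomical system: ', np.linalg.norm(mml - lml))
print('Distance between fh and mc in the laboratory system: ', np.linalg.norm(mc - fh))
print('Distance between fh and mc in the anatomical system: ', np.linalg.norm(mcl - fhl))
###Output
_____no_output_____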
###Markdown
Problems1. For the example about how the order of rotations of a rigid body affects the orientation shown in a figure above, deduce the rotation matrices for each of the 4 cases shown in the figure. For the first two cases, deduce the rotation matrices from the global to the local coordinate system and for the other two examples, deduce the rotation matrices from the local to the global coordinate system. 2. Consider the data from problem 7 in the notebook [Frame of reference](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ReferenceFrame.ipynb) where the following anatomical landmark positions are given (units in meters): RASIS=[0.5,0.8,0.4], LASIS=[0.55,0.78,0.1], RPSIS=[0.3,0.85,0.2], and LPSIS=[0.29,0.78,0.3]. Deduce the rotation matrices for the global to anatomical coordinate system and for the anatomical to global coordinate system. 3. For the data from the last example, calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent). References- Corke P (2011) [Robotics, Vision and Control: Fundamental Algorithms in MATLAB](http://www.petercorke.com/RVC/). Springer-Verlag Berlin. - Robertson G, Caldwell G, Hamill J, Kamen G (2013) [Research Methods in Biomechanics](http://books.google.com.br/books?id=gRn8AAAAQBAJ). 2nd Edition. Human Kinetics. - [Maths - Euler Angles](http://www.euclideanspace.com/maths/geometry/rotations/euler/). - Murray RM, Li Z, Sastry SS (1994) [A Mathematical Introduction to Robotic Manipulation](http://www.cds.caltech.edu/~murray/mlswiki/index.php/Main_Page). Boca Raton, CRC Press. - Ruina A, Rudra P (2013) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press. - Siciliano B, Sciavicco L, Villani L, Oriolo G (2009) [Robotics - Modelling, Planning and Control](http://books.google.com.br/books/about/Robotics.html?hl=pt-BR&id=jPCAFmE-logC). Springer-Verlag London.- Winter DA (2009) [Biomechanics and motor control of human movement](http://books.google.com.br/books?id=_bFHL08IWfwC). 4 ed. Hoboken, USA: Wiley. - Zatsiorsky VM (1997) [Kinematics of Human Motion](http://books.google.com.br/books/about/Kinematics_of_Human_Motion.html?id=Pql_xXdbrMcC&redir_esc=y). Champaign, Human Kinetics. Function `euler_rotmatrix.py`
###Code
# %load ./../functions/euler_rotmat.py
#!/usr/bin/env python
"""Euler rotation matrix given sequence, frame, and angles."""
from __future__ import division, print_function
__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'
__version__ = 'euler_rotmat.py v.1 2014/03/10'
def euler_rotmat(order='xyz', frame='local', angles=None, unit='deg',
str_symbols=None, showA=True, showN=True):
"""Euler rotation matrix given sequence, frame, and angles.
This function calculates the algebraic rotation matrix (3x3) for a given
sequence ('order' argument) of up to three elemental rotations of a given
coordinate system ('frame' argument) around another coordinate system, the
Euler (or Eulerian) angles [1]_.
This function also calculates the numerical values of the rotation matrix
when numerical values for the angles are inputed for each rotation axis.
Use None as value if the rotation angle for the particular axis is unknown.
The symbols for the angles are: alpha, beta, and gamma for the first,
second, and third rotations, respectively.
The matrix product is calculated from right to left and in the specified
sequence for the Euler angles. The first letter will be the first rotation.
The function will print and return the algebraic rotation matrix and the
numerical rotation matrix if angles were inputed.
Parameters
----------
order : string, optional (default = 'xyz')
Sequence for the Euler angles, any combination of the letters
x, y, and z with 1 to 3 letters is accepted to denote the
elemental rotations. The first letter will be the first rotation.
frame : string, optional (default = 'local')
Coordinate system for which the rotations are calculated.
Valid values are 'local' or 'global'.
angles : list, array, or bool, optional (default = None)
Numeric values of the rotation angles ordered as the 'order'
parameter. Enter None for a rotation with unknown value.
unit : str, optional (default = 'deg')
Unit of the input angles.
str_symbols : list of strings, optional (default = None)
New symbols for the angles, for instance, ['theta', 'phi', 'psi']
showA : bool, optional (default = True)
True (1) displays the Algebraic rotation matrix in rich format.
False (0) to not display.
showN : bool, optional (default = True)
True (1) displays the Numeric rotation matrix in rich format.
False (0) to not display.
Returns
-------
R : Matrix Sympy object
Rotation matrix (3x3) in algebraic format.
Rn : Numpy array or Matrix Sympy object (only if angles are inputed)
Numeric rotation matrix (if values for all angles were inputed) or
an algebraic matrix with some of the algebraic angles substituted
by the corresponding inputed numeric values.
Notes
-----
This code uses Sympy, the Python library for symbolic mathematics, to
calculate the algebraic rotation matrix and shows this matrix in latex form
possibly for using with the IPython Notebook, see [1]_.
References
----------
.. [1] http://nbviewer.ipython.org/github/duartexyz/BMC/blob/master/Transformation3D.ipynb
Examples
--------
>>> # import function
>>> from euler_rotmat import euler_rotmat
>>> # Default options: xyz sequence, local frame and show matrix
>>> R = euler_rotmat()
>>> # XYZ sequence (around global (fixed) coordinate system)
>>> R = euler_rotmat(frame='global')
>>> # Enter numeric values for all angles and show both matrices
>>> R, Rn = euler_rotmat(angles=[90, 90, 90])
>>> # show what is returned
>>> euler_rotmat(angles=[90, 90, 90])
>>> # show only the rotation matrix for the elemental rotation at x axis
>>> R = euler_rotmat(order='x')
>>> # zxz sequence and numeric value for only one angle
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, 0, None])
>>> # input values in radians:
>>> import numpy as np
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, np.pi, None], unit='rad')
>>> # shows only the numeric matrix
>>> R, Rn = euler_rotmat(order='zxz', angles=[90, 0, None], showA='False')
>>> # Change the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['theta', 'phi', 'psi'])
>>> # Negativate the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['-theta', '-phi', '-psi'])
>>> # all algebraic matrices for all possible sequences for the local frame
>>> s=['xyz','xzy','yzx','yxz','zxy','zyx','xyx','xzx','yzy','yxy','zxz','zyz']
>>> for seq in s: R = euler_rotmat(order=seq)
>>> # all algebraic matrices for all possible sequences for the global frame
>>> for seq in s: R = euler_rotmat(order=seq, frame='global')
"""
import numpy as np
import sympy as sym
try:
from IPython.core.display import Math, display
ipython = True
except:
ipython = False
angles = np.asarray(np.atleast_1d(angles), dtype=np.float64)
if ~np.isnan(angles).all():
if len(order) != angles.size:
raise ValueError("Parameters 'order' and 'angles' (when " +
"different from None) must have the same size.")
x, y, z = sym.symbols('x, y, z')
sig = [1, 1, 1]
if str_symbols is None:
a, b, g = sym.symbols('alpha, beta, gamma')
else:
s = str_symbols
if s[0][0] == '-': s[0] = s[0][1:]; sig[0] = -1
if s[1][0] == '-': s[1] = s[1][1:]; sig[1] = -1
if s[2][0] == '-': s[2] = s[2][1:]; sig[2] = -1
a, b, g = sym.symbols(s)
var = {'x': x, 'y': y, 'z': z, 0: a, 1: b, 2: g}
# Elemental rotation matrices for xyz (local)
cos, sin = sym.cos, sym.sin
Rx = sym.Matrix([[1, 0, 0], [0, cos(x), sin(x)], [0, -sin(x), cos(x)]])
Ry = sym.Matrix([[cos(y), 0, -sin(y)], [0, 1, 0], [sin(y), 0, cos(y)]])
Rz = sym.Matrix([[cos(z), sin(z), 0], [-sin(z), cos(z), 0], [0, 0, 1]])
if frame.lower() == 'global':
Rs = {'x': Rx.T, 'y': Ry.T, 'z': Rz.T}
order = order.upper()
else:
Rs = {'x': Rx, 'y': Ry, 'z': Rz}
order = order.lower()
R = Rn = sym.Matrix(sym.Identity(3))
str1 = r'\mathbf{R}_{%s}( ' %frame # last space needed for order=''
#str2 = [r'\%s'%var[0], r'\%s'%var[1], r'\%s'%var[2]]
str2 = [1, 1, 1]
for i in range(len(order)):
Ri = Rs[order[i].lower()].subs(var[order[i].lower()], sig[i] * var[i])
R = Ri * R
if sig[i] > 0:
str2[i] = '%s:%s' %(order[i], sym.latex(var[i]))
else:
str2[i] = '%s:-%s' %(order[i], sym.latex(var[i]))
str1 = str1 + str2[i] + ','
if ~np.isnan(angles).all() and ~np.isnan(angles[i]):
if unit[:3].lower() == 'deg':
angles[i] = np.deg2rad(angles[i])
Rn = Ri.subs(var[i], angles[i]) * Rn
#Rn = sym.lambdify(var[i], Ri, 'numpy')(angles[i]) * Rn
str2[i] = str2[i] + '=%.0f^o' %np.around(np.rad2deg(angles[i]), 0)
else:
Rn = Ri * Rn
Rn = sym.simplify(Rn) # for trigonometric relations
try:
# nsimplify only works if there are symbols
Rn2 = sym.latex(sym.nsimplify(Rn, tolerance=1e-8).n(chop=True, prec=4))
except:
Rn2 = sym.latex(Rn.n(chop=True, prec=4))
# there are no symbols, pass it as Numpy array
Rn = np.asarray(Rn)
if showA and ipython:
display(Math(str1[:-1] + ') =' + sym.latex(R, mat_str='matrix')))
if showN and ~np.isnan(angles).all() and ipython:
str2 = ',\;'.join(str2[:angles.size])
display(Math(r'\mathbf{R}_{%s}(%s)=%s' %(frame, str2, Rn2)))
if np.isnan(angles).all():
return R
else:
return R, Rn
###Output
_____no_output_____
###Markdown
Rigid-body transformations in three-dimensions> Marcos Duarte > Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/)) > Federal University of ABC, Brazil The kinematics of a rigid body is completely described by its pose, i.e., its position and orientation in space (and the corresponding changes, translation and rotation). In a three-dimensional space, at least three coordinates and three angles are necessary to describe the pose of the rigid body, for a total of six degrees of freedom.In motion analysis, to describe a translation and rotation of a rigid body with respect to a coordinate system, typically we attach another coordinate system to the rigid body and determine a transformation between these two coordinate systems.A transformation is any function mapping a set to another set. For the description of the kinematics of rigid bodies, we are interested only in what is called rigid or Euclidean transformations (denoted as SE(3) for the three-dimensional space) because they preserve the distance between every pair of points of the body (which is considered rigid by definition). Translations and rotations are examples of rigid transformations (a reflection is also an example of rigid transformation but this changes the right-hand axis convention to a left hand, which usually is not of interest). In turn, rigid transformations are examples of [affine transformations](https://en.wikipedia.org/wiki/Affine_transformation). Examples of other affine transformations are shear and scaling transformations (which preserve parallelism but not necessarily lengths or angles). We will follow the same rationale as in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb) and we will skip the fundamental concepts already covered there. So, if you haven't done so yet, you should read that notebook before continuing here. TranslationA pure three-dimensional translation of a rigid body (or a coordinate system attached to it) in relation to another rigid body (with another coordinate system) is illustrated in the figure below. Figure. A point in three-dimensional space represented in two coordinate systems, with one coordinate system translated. The position of point $\mathbf{P}$ originally described in the $xyz$ (local) coordinate system but now described in the $\mathbf{XYZ}$ (Global) coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{P_l} $$Or in terms of its components:$$ \begin{array}{}\mathbf{P_X} =& \mathbf{L_X} + \mathbf{P}_x \\\mathbf{P_Y} =& \mathbf{L_Y} + \mathbf{P}_y \\\mathbf{P_Z} =& \mathbf{L_Z} + \mathbf{P}_z \end{array} $$And in matrix form:$$\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \end{bmatrix} =\begin{bmatrix}\mathbf{L_X} \\\mathbf{L_Y} \\\mathbf{L_Z} \end{bmatrix} +\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \end{bmatrix}$$From classical mechanics, this is an example of [Galilean transformation](http://en.wikipedia.org/wiki/Galilean_transformation). Let's use Python to compute some numeric examples:
###Code
# Import the necessary libraries
import numpy as np
# suppress scientific notation for small numbers:
np.set_printoptions(precision=4, suppress=True)
###Output
_____no_output_____
###Markdown
For example, if the local coordinate system is translated by $\mathbf{L_G}=[1, 2, 3]$ in relation to the Global coordinate system, a point with coordinates $\mathbf{P_l}=[4, 5, 6]$ at the local coordinate system will have the position $\mathbf{P_G}=[5, 7, 9]$ at the Global coordinate system:
###Code
LG = np.array([1, 2, 3]) # Numpy array
Pl = np.array([4, 5, 6])
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
This operation also works if we have more than one point (NumPy broadcasts arrays with different shapes when their dimensions are compatible):
###Code
Pl = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) # 2D array with 3 rows and 3 columns
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
RotationA pure three-dimensional rotation of a $xyz$ (local) coordinate system in relation to other $\mathbf{XYZ}$ (Global) coordinate system and the position of a point in these two coordinate systems are illustrated in the next figure (remember that this is equivalent to describing a rotation between two rigid bodies). A point in three-dimensional space represented in two coordinate systems, with one system rotated. In analogy to the rotation in two dimensions, we can calculate the rotation matrix that describes the rotation of the $xyz$ (local) coordinate system in relation to the $\mathbf{XYZ}$ (Global) coordinate system using the direction cosines between the axes of the two coordinate systems:$$ \mathbf{R_{Gl}} = \begin{bmatrix}\cos\mathbf{X}x & \cos\mathbf{X}y & \cos\mathbf{X}z \\\cos\mathbf{Y}x & \cos\mathbf{Y}y & \cos\mathbf{Y}z \\\cos\mathbf{Z}x & \cos\mathbf{Z}y & \cos\mathbf{Z}z\end{bmatrix} $$Note however that for rotations around more than one axis, these angles will not lie in the main planes ($\mathbf{XY, YZ, ZX}$) of the $\mathbf{XYZ}$ coordinate system, as illustrated in the figure below for the direction angles of the $y$ axis only. Thus, the determination of these angles by simple inspection, as we have done for the two-dimensional case, would not be simple. Figure. Definition of direction angles for the $y$ axis of the local coordinate system in relation to the $\mathbf{XYZ}$ Global coordinate system.Note that the nine angles shown in the matrix above for the direction cosines are obviously redundant since only three angles are necessary to describe the orientation of a rigid body in the three-dimensional space. An important characteristic of angles in the three-dimensional space is that angles cannot be treated as vectors: the result of a sequence of rotations of a rigid body around different axes depends on the order of the rotations, as illustrated in the next figure. Figure. The result of a sequence of rotations around different axes of a coordinate system depends on the order of the rotations. In the first example (first row), the rotations are around a Global (fixed) coordinate system. In the second example (second row), the rotations are around a local (rotating) coordinate system.Let's focus now on how to understand rotations in the three-dimensional space, looking at the rotations between coordinate systems (or between rigid bodies). Later we will apply what we have learned to describe the position of a point in these different coordinate systems. Euler anglesThere are different ways to describe a three-dimensional rotation of a rigid body (or of a coordinate system). The most straightforward solution would probably be to use a [spherical coordinate system](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ReferenceFrame.ipynbSpherical-coordinate-system), but spherical coordinates would be difficult to give an anatomical or clinical interpretation. A solution that has been often employed in biomechanics to handle rotations in the three-dimensional space is to use Euler angles. Under certain conditions, Euler angles can have an anatomical interpretation, but this representation also has some caveats. 
Let's see the Euler angles now.[Leonhard Euler](https://en.wikipedia.org/wiki/Leonhard_Euler) in the XVIII century showed that two three-dimensional coordinate systems with a common origin can be related by a sequence of up to three elemental rotations about the axes of the local coordinate system, where no two successive rotations may be about the same axis, which now are known as [Euler (or Eulerian) angles](http://en.wikipedia.org/wiki/Euler_angles). Elemental rotationsFirst, let's see rotations around a fixed Global coordinate system as we did for the two-dimensional case. The next figure illustrates elemental rotations of the local coordinate system around each axis of the fixed Global coordinate system. Figure. Elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. Note that for better clarity, the axis around where the rotation occurs is shown perpendicular to this page for each elemental rotation. Rotations around the fixed coordinate systemThe rotation matrices for the elemental rotations around each axis of the fixed $\mathbf{XYZ}$ coordinate system (rotations of the local coordinate system in relation to the Global coordinate system) are shown next.Around $\mathbf{X}$ axis: $$ \mathbf{R_{Gl,\,X}} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix} $$Around $\mathbf{Y}$ axis: $$ \mathbf{R_{Gl,\,Y}} = \begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix} $$Around $\mathbf{Z}$ axis: $$ \mathbf{R_{Gl,\,Z}} = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix} $$These matrices are the rotation matrices for the case of two-dimensional coordinate systems plus the corresponding terms for the third axes of the local and Global coordinate systems, which are parallel. To understand why the terms for the third axes are 1's or 0's, for instance, remember they represent the cosine directors. The cosines between $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ for the elemental rotations around respectively the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes are all 1 because $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ are parallel ($\cos 0^o$). The cosines of the other elements are zero because the axis around where each rotation occurs is perpendicular to the other axes of the coordinate systems ($\cos 90^o$). 
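Before turning to rotations around the local coordinate system, here is a minimal numeric sketch of these three elemental rotation matrices (the helper names `RX_num`, `RY_num`, and `RZ_num` are ours, not part of the original code), together with a check that such a matrix is orthonormal (its product with its transpose is the identity and its determinant is +1):
###Code
import numpy as np

def RX_num(alpha):
    """Elemental rotation matrix around the Global X axis (angle in rad)."""
    return np.array([[1, 0, 0],
                     [0, np.cos(alpha), -np.sin(alpha)],
                     [0, np.sin(alpha),  np.cos(alpha)]])

def RY_num(beta):
    """Elemental rotation matrix around the Global Y axis (angle in rad)."""
    return np.array([[ np.cos(beta), 0, np.sin(beta)],
                     [ 0, 1, 0],
                     [-np.sin(beta), 0, np.cos(beta)]])

def RZ_num(gamma):
    """Elemental rotation matrix around the Global Z axis (angle in rad)."""
    return np.array([[np.cos(gamma), -np.sin(gamma), 0],
                     [np.sin(gamma),  np.cos(gamma), 0],
                     [0, 0, 1]])

R = RZ_num(np.pi/3) @ RY_num(np.pi/4) @ RX_num(np.pi/6)  # arbitrary angles
print('orthonormal:', np.allclose(R @ R.T, np.eye(3)))
print('determinant:', np.linalg.det(R))
###Output
_____no_output_____
###Markdown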
Rotations around the local coordinate systemThe rotation matrices for the elemental rotations this time around each axis of the $xyz$ coordinate system (rotations of the Global coordinate system in relation to the local coordinate system), similarly to the two-dimensional case, are simply the transpose of the above matrices as shown next. Around $x$ axis: $$ \mathbf{R}_{\mathbf{lG},\,x} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix} $$Around $y$ axis: $$ \mathbf{R}_{\mathbf{lG},\,y} = \begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix} $$Around $z$ axis: $$ \mathbf{R}_{\mathbf{lG},\,z} = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix} $$Notice that this is equivalent to, instead of rotating the local coordinate system by $\alpha, \beta, \gamma$ in relation to the axes of the Global coordinate system, rotating the Global coordinate system by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system; remember that $\cos(-\:\cdot)=\cos(\cdot)$ and $\sin(-\:\cdot)=-\sin(\cdot)$. The fact that we chose to rotate the local coordinate system by a counterclockwise (positive) angle in relation to the Global coordinate system is just a matter of convention. Sequence of elemental rotationsConsider now a sequence of elemental rotations around the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes of the fixed $\mathbf{XYZ}$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. This sequence of elemental rotations (each one of the local coordinate system with respect to the fixed Global coordinate system) is mathematically represented by a multiplication between the rotation matrices:$$ \begin{array}{l l}\mathbf{R_{Gl,\;XYZ}} & = \mathbf{R_{Z}} \mathbf{R_{Y}} \mathbf{R_{X}} \\\\ & = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix} \\\\ & =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma-\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:\cos\gamma+\sin\alpha\:\sin\gamma \;\;\; \\\cos\beta\:\sin\gamma \;&\;\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma-\sin\alpha\:\cos\gamma \;\;\; \\-\sin\beta \;&\; \sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array} $$Note the order of the matrices: the product goes from right to left, so the first rotation corresponds to the rightmost matrix. We can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
#import the necessary libraries
from IPython.core.display import Math, display
import sympy as sym
cos, sin = sym.cos, sym.sin
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz in relation to XYZ:
RX = sym.Matrix([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]])
RY = sym.Matrix([[cos(b), 0, sin(b)], [0, 1, 0], [-sin(b), 0, cos(b)]])
RZ = sym.Matrix([[cos(g), -sin(g), 0], [sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz in relation to XYZ:
RXYZ = RZ*RY*RX
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ}}=') + sym.latex(RXYZ, mat_str='matrix')))
###Output
_____no_output_____
###Markdown
For instance, we can calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $\mathbf{X,Y,Z}$:
###Code
R = sym.lambdify((a, b, g), RXYZ, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R_{Gl,\,XYZ\,}}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Examining the matrix above and the correspondent previous figure, one can see they agree: the rotated $x$ axis (first column of the above matrix) has value -1 in the $\mathbf{Z}$ direction $[0,0,-1]$, the rotated $y$ axis (second column) is at the $\mathbf{Y}$ direction $[0,1,0]$, and the rotated $z$ axis (third column) is at the $\mathbf{X}$ direction $[1,0,0]$.We also can calculate the sequence of elemental rotations around the $x$, $y$, $z$ axes of the rotating $xyz$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of a second $xyz$ local coordinate system around each axis, $x$, $y$, $z$, of the rotating $xyz$ coordinate system.Likewise, this sequence of elemental rotations (each one of the local coordinate system with respect to the rotating local coordinate system) is mathematically represented by a multiplication between the rotation matrices (which are the inverse of the matrices for the rotations around $\mathbf{X,Y,Z}$ as we saw earlier):$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz} & = \mathbf{R_{z}} \mathbf{R_{y}} \mathbf{R_{x}} \\\\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix} \\\\& =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma+\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:\cos\gamma-\sin\alpha\:\sin\gamma \;\;\; \\-\cos\beta\:\sin\gamma \;&\;-\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma+\sin\alpha\:\cos\gamma \;\;\; \\\sin\beta \;&\; -\sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array} $$As before, the order of the matrices is from right to left. Once again, we can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rx = sym.Matrix([[1, 0, 0], [0, cos(a), sin(a)], [0, -sin(a), cos(a)]])
Ry = sym.Matrix([[cos(b), 0, -sin(b)], [0, 1, 0], [sin(b), 0, cos(b)]])
Rz = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz' in relation to xyz:
Rxyz = Rz*Ry*Rx
Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz}=') + sym.latex(Rxyz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
For instance, let's calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $x,y,z$:
###Code
R = sym.lambdify((a, b, g), Rxyz, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Once again, let's compare the above matrix and the corresponding previous figure to see if it makes sense. But remember that this matrix is the Global-to-local rotation matrix, $\mathbf{R}_{\mathbf{lG},\,xyz}$, where the coordinates of the local basis' versors are rows, not columns, in this matrix. With this detail in mind, one can see that the previous figure and matrix also agree: the rotated $x$ axis (first row of the above matrix) is at the $\mathbf{Z}$ direction $[0,0,1]$, the rotated $y$ axis (second row) is at the $\mathbf{-Y}$ direction $[0,-1,0]$, and the rotated $z$ axis (third row) is at the $\mathbf{X}$ direction $[1,0,0]$. In fact, this example didn't serve to distinguish versors as rows or columns because the $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrix above is symmetric! Let's look at the resultant matrix for the example above after only the first two rotations, $\mathbf{R}_{\mathbf{lG},\,xy}$, to understand this difference:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xy\,}(90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Comparing this matrix with the third plot in the figure, we see that the coordinates of versor $x$ in the Global coordinate system are $[0,1,0]$, i.e., local axis $x$ is aligned with Global axis $Y$, and this versor is indeed the first row, not first column, of the matrix above. Check the other two rows. What, then, is in the columns of the local-to-Global rotation matrix? The columns are the coordinates of the Global basis' versors in the local coordinate system! For example, the first column of the matrix above contains the coordinates of $X$, which is aligned with $z$: $[0,0,1]$. Rotations in a coordinate system are equivalent to negative rotations in the other coordinate systemRemember that we saw for the elemental rotations that rotating the local coordinate system, $xyz$, by $\alpha, \beta, \gamma$ in relation to the axes of the Global coordinate system is equivalent to rotating the Global coordinate system, $\mathbf{XYZ}$, by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system. The same property applies to a sequence of rotations: rotations of $xyz$ in relation to $\mathbf{XYZ}$ by $\alpha, \beta, \gamma$ result in the same matrix as rotations of $\mathbf{XYZ}$ in relation to $xyz$ by $-\alpha, -\beta, -\gamma$: $$ \begin{array}{l l}\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) & = \mathbf{R_{Gl,\,Z}}(\gamma)\, \mathbf{R_{Gl,\,Y}}(\beta)\, \mathbf{R_{Gl,\,X}}(\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,z\,}(-\gamma)\, \mathbf{R}_{\mathbf{lG},\,y\,}(-\beta)\, \mathbf{R}_{\mathbf{lG},\,x\,}(-\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)\end{array}$$Check this by examining the $\mathbf{R_{Gl,\,XYZ}}$ and $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrices above. Let's verify this property with Sympy:
###Code
RXYZ = RZ*RY*RX
# Rotation matrix of xyz in relation to XYZ:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) =')))
display(Math(sym.latex(RXYZ, mat_str='matrix')))
# Elemental rotation matrices of XYZ in relation to xyz and negate all angles:
Rx_neg = sym.Matrix([[1, 0, 0], [0, cos(-a), -sin(-a)], [0, sin(-a), cos(-a)]]).T
Ry_neg = sym.Matrix([[cos(-b), 0, sin(-b)], [0, 1, 0], [-sin(-b), 0, cos(-b)]]).T
Rz_neg = sym.Matrix([[cos(-g), -sin(-g), 0], [sin(-g), cos(-g), 0], [0, 0, 1]]).T
# Rotation matrix of XYZ in relation to xyz:
Rxyz_neg = Rz_neg*Ry_neg*Rx_neg
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma) =')))
display(Math(sym.latex(Rxyz_neg, mat_str='matrix')))
# Check that the two matrices are equal:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) \;==\;' + \
r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)')))
RXYZ == Rxyz_neg
###Output
_____no_output_____
###Markdown
Rotations in a coordinate system are the transpose of the rotations in inverse order in the other coordinate systemThere is another property of the rotation matrices for the different coordinate systems: the rotation matrix, for example from the Global to the local coordinate system for the $xyz$ sequence, is just the transpose of the rotation matrix for the inverse operation (from the local to the Global coordinate system) of the inverse sequence ($\mathbf{ZYX}$) and vice-versa:$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz}(\alpha,\beta,\gamma) & = \mathbf{R}_{\mathbf{lG},\,z\,} \mathbf{R}_{\mathbf{lG},\,y\,} \mathbf{R}_{\mathbf{lG},\,x} \\& = \mathbf{R_{Gl,\,Z\,}^{-1}} \mathbf{R_{Gl,\,Y\,}^{-1}} \mathbf{R_{Gl,\,X\,}^{-1}} \\& = \mathbf{R_{Gl,\,Z\,}^{T}} \mathbf{R_{Gl,\,Y\,}^{T}} \mathbf{R_{Gl,\,X\,}^{T}} \\& = (\mathbf{R_{Gl,\,X\,}} \mathbf{R_{Gl,\,Y\,}} \mathbf{R_{Gl,\,Z}})^\mathbf{T} \\& = \mathbf{R_{Gl,\,ZYX\,}^{T}}(\gamma,\beta,\alpha)\end{array}$$Here we used the properties that the inverse of a rotation matrix (which is orthonormal) is its transpose and that the transpose of a product of matrices is equal to the product of their transposes in reverse order. Let's verify this property with Sympy:
###Code
RZYX = RX*RY*RZ
Rxyz = Rz*Ry*Rx
display(Math(sym.latex(r'\mathbf{R_{Gl,\,ZYX\,}^T}=') + sym.latex(RZYX.T, mat_str='matrix')))
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(\alpha,\beta,\gamma) \,==\,' + \
r'\mathbf{R_{Gl,\,ZYX\,}^T}(\gamma,\beta,\alpha)')))
Rxyz == RZYX.T
###Output
_____no_output_____
###Markdown
Sequence of rotations of a VectorWe saw in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynbRotation-of-a-Vector) that the rotation matrix can also be used to rotate a vector (in fact, a point, image, solid, etc.) by a given angle around an axis of the coordinate system. Let's investigate that for the 3D case using the example earlier where a book was rotated in different orders and around the Global and local coordinate systems. Before any rotation, the point shown in that figure as a round black dot on the spine of the book has coordinates $\mathbf{P}=[0, 1, 2]$ (the book has thickness 0, width 1, and height 2). After the first sequence of rotations shown in the figure (rotated around $X$ and $Y$ by $90^0$ each time), $\mathbf{P}$ has coordinates $\mathbf{P}=[1, -2, 0]$ in the global coordinate system. Let's verify that:
###Code
P = np.array([[0, 1, 2]]).T
RXY = RY*RX
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. -2. 0.]]
###Markdown
As expected. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, but still around the Global coordinate system.Although we are performing vector rotation, where we don't need the concept of transformation between coordinate systems, in the example above we used the local-to-Global rotation matrix, $\mathbf{R_{Gl}}$. As we saw in the notebook for the 2D transformation, when we use this matrix, it performs a counter-clockwise (positive) rotation. If we want to rotate the vector in the clockwise (negative) direction, we can use the very same rotation matrix entering a negative angle or we can use the inverse rotation matrix, the Global-to-local rotation matrix, $\mathbf{R_{lG}}$ and a positive (negative of negative) angle, because $\mathbf{R_{Gl}}(\alpha) = \mathbf{R_{lG}}(-\alpha)$, but bear in mind that even in this latter case we are rotating around the Global coordinate system! Consider now that we want to deduce algebraically the position of the point $\mathbf{P}$ after the rotations around the local coordinate system as shown in the second set of examples in the figure with the sequence of book rotations. The point has the same initial position, $\mathbf{P}=[0, 1, 2]$, and after the rotations around $x$ and $y$ by $90^0$ each time, what is the position of this point? It's implicit in this question that the new desired position is in the Global coordinate system because the local coordinate system rotates with the book and the point never changes its position in the local coordinate system. So, by inspection of the figure, the new position of the point is $\mathbf{P1}=[2, 0, 1]$. Let's naively try to deduce this position by repeating the steps as before:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. 2. -0.]]
###Markdown
The wrong answer. The problem is that we defined the rotation of a vector using the local-to-Global rotation matrix. One correct solution for this problem is to continue using the multiplication of the Global-to-local rotation matrices, $\mathbf{R}_{xy} = \mathbf{R}_y\,\mathbf{R}_x$, transpose $\mathbf{R}_{xy}$ to get the local-to-Global rotation matrix, $\mathbf{R_{XY}}=\mathbf{R^T}_{xy}$, and then rotate the vector using this matrix:
###Code
Rxy = Ry*Rx
RXY = Rxy.T
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. Another solution is to understand that when using the Global-to-local rotation matrix, counter-clockwise rotations (as performed with the book in the figure) are negative, not positive, and that when dealing with rotations with the Global-to-local rotation matrix the order of matrix multiplication is inverted; for example, it should be $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$ (an added underscore to remind us this is not the convention adopted here).
###Code
R_xy = Rx*Ry
R = sym.lambdify((a, b), R_xy, 'numpy')
R = R(-np.pi/2, -np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, around the local coordinate system. In fact, you will find elsewhere texts about rotations in 3D adopting this latter convention as the standard, i.e., they introduce the Global-to-local rotation matrix and describe sequences of rotations algebraically as matrix multiplication in the direct order, $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$, the inverse of what we have done in this text. It's all a matter of convention, just that. The 12 different sequences of Euler anglesThe Euler angles are defined in terms of rotations around a rotating local coordinate system. As we saw for the sequence of rotations around $x, y, z$, the axes of the local rotated coordinate system are not fixed in space because after the first elemental rotation, the other two axes rotate. Other sequences of rotations could be produced without combining axes of the two different coordinate systems (Global and local) for the definition of the rotation axes. There is a total of 12 different sequences of three elemental rotations that are valid and may be used for describing the rotation of a coordinate system with respect to another coordinate system:$$ xyz \quad xzy \quad yzx \quad yxz \quad zxy \quad zyx $$$$ xyx \quad xzx \quad yzy \quad yxy \quad zxz \quad zyz $$The first six sequences (first row) are all around different axes; they are usually referred to as Cardan or Tait–Bryan angles. The other six sequences (second row) have the first and third rotations around the same axis, but keep in mind that the axis for the third rotation is not at the same place anymore because it changed its orientation after the second rotation. The sequences with repeated axes are known as proper or classic Euler angles. Which order to use is a matter of convention, but because the order affects the results, it's fundamental to follow a convention and report it. In Engineering Mechanics (including Biomechanics), the $xyz$ order is more common; in Physics the $zxz$ order is more common (but the letters chosen to refer to the axes are arbitrary, what matters is the directions they represent). In Biomechanics, the order for the Cardan angles is most often based on the angle of most interest or of most reliable measurement. Accordingly, the axis of flexion/extension is typically selected as the first axis, the axis for abduction/adduction is the second, and the axis for internal/external rotation is the last one. We will see more about this order later. The $zyx$ order is commonly used to describe the orientation of a ship or aircraft and the rotations are known as the nautical angles: yaw, pitch and roll, respectively (see next figure). Figure. The principal axes of an aircraft and the names for the rotations around these axes (image from Wikipedia). If instead of rotations around the rotating local coordinate system we perform rotations around the fixed Global coordinate system, we will have another 12 different sequences of three elemental rotations; these are called simply rotation angles. 
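Because the order matters, it is worth confirming numerically that the same three elemental rotations applied in two different orders produce different rotation matrices; a quick check (ours), reusing the symbolic matrices `RXYZ` and `RZYX` computed above with arbitrary angle values:
###Code
# the same three elemental rotations (30, 45, 60 degrees around X, Y, Z) applied
# in the XYZ order and in the ZYX order do not give the same rotation matrix
R1 = sym.lambdify((a, b, g), RXYZ, 'numpy')(np.pi/6, np.pi/4, np.pi/3)
R2 = sym.lambdify((a, b, g), RZYX, 'numpy')(np.pi/6, np.pi/4, np.pi/3)
print('XYZ order:\n', R1)
print('ZYX order:\n', R2)
print('Same matrix?', np.allclose(R1, R2))
###Output
_____no_output_____
###Markdown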
So, in total there are 24 possible different sequences of three elemental rotations, but the 24 orders are not independent; with the 12 different sequences of Euler angles at the local coordinate system we can obtain the other 12 sequences at the Global coordinate system. The Python function `euler_rotmat.py` (code at the end of this text) determines the rotation matrix in algebraic form for any of the 24 different sequences (and sequences with only one or two axes can be input). This function also determines the rotation matrix in numeric form if a list of up to three angles is input. For instance, the rotation matrix in algebraic form for the $zxz$ order of Euler angles at the local coordinate system and the corresponding rotation matrix in numeric form after three elemental rotations by $90^o$ each are:
###Code
import sys
sys.path.insert(1, r'./../functions')
from euler_rotmat import euler_rotmat
Ra, Rn = euler_rotmat(order='zxz', frame='local', angles=[90, 90, 90])
###Output
_____no_output_____
###Markdown
Line of nodesThe second axis of rotation in the rotating coordinate system is also referred as the nodal axis or line of nodes; this axis coincides with the intersection of two perpendicular planes, one from each Global (fixed) and local (rotating) coordinate systems. The figure below shows an example of rotations and the nodal axis for the $xyz$ sequence of the Cardan angles. Figure. First row: example of rotations for the $xyz$ sequence of the Cardan angles. The Global (fixed) $XYZ$ coordinate system is shown in green, the local (rotating) $xyz$ coordinate system is shown in blue. The nodal axis (N, shown in red) is defined by the intersection of the $YZ$ and $xy$ planes and all rotations can be described in relation to this nodal axis or to a perpendicular axis to it. Second row: starting from no rotation, the local coordinate system is rotated by $\alpha$ around the $x$ axis, then by $\beta$ around the rotated $y$ axis, and finally by $\gamma$ around the twice rotated $z$ axis. Note that the line of nodes coincides with the $y$ axis for the second rotation. Determination of the Euler anglesOnce a convention is adopted, the corresponding three Euler angles of rotation can be found. For example, for the $\mathbf{R}_{xyz}$ rotation matrix:
###Code
R = euler_rotmat(order='xyz', frame='local')
###Output
_____no_output_____
###Markdown
The corresponding Cardan angles for the `xyz` sequence can be given by:$$ \begin{array}{}\alpha = \arctan\left(\dfrac{\sin(\alpha)}{\cos(\alpha)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{21}}{\;\;\;\mathbf{R}_{22}}\right) \\\\\beta = \arctan\left(\dfrac{\sin(\beta)}{\cos(\beta)}\right) = \arctan\left(\dfrac{\mathbf{R}_{20}}{\sqrt{\mathbf{R}_{00}^2+\mathbf{R}_{10}^2}}\right) \\ \\\gamma = \arctan\left(\dfrac{\sin(\gamma)}{\cos(\gamma)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{10}}{\;\;\;\mathbf{R}_{00}}\right)\end{array} $$Note that we prefer to use the mathematical function `arctan` rather than simply `arcsin` because the latter cannot for example distinguish $45^o$ from $135^o$ and also for better numerical accuracy. See the text [Angular kinematics in a plane (2D)](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/KinematicsAngular2D.ipynb) for more on these issues.And here is a Python function to compute the Euler angles of rotations from the Global to the local coordinate system for the $xyz$ Cardan sequence:
###Code
def euler_angles_from_rot_xyz(rot_matrix, unit='deg'):
""" Compute Euler angles from rotation matrix in the xyz sequence."""
import numpy as np
R = np.array(rot_matrix, copy=False).astype(np.float64)[:3, :3]
angles = np.zeros(3)
angles[0] = np.arctan2(-R[2, 1], R[2, 2])
angles[1] = np.arctan2( R[2, 0], np.sqrt(R[0, 0]**2 + R[1, 0]**2))
angles[2] = np.arctan2(-R[1, 0], R[0, 0])
if unit[:3].lower() == 'deg': # convert from rad to degree
angles = np.rad2deg(angles)
return angles
###Output
_____no_output_____
###Markdown
For instance, consider sequential rotations of 45$^o$ around $x,y,z$. The resultant rotation matrix is:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[45, 45, 45], showA=False)
###Output
_____no_output_____
###Markdown
Let's check that calculating back the Cardan angles from this rotation matrix using the `euler_angles_from_rot_xyz()` function:
###Code
euler_angles_from_rot_xyz(Rn, unit='deg')
###Output
_____no_output_____
###Markdown
We could implement a function to calculate the Euler angles for any of the 12 sequences (in fact, plus another 12 sequences if we consider all the rotations from and to the two coordinate systems), but this is tedious. There is a smarter solution using the concept of [quaternion](http://en.wikipedia.org/wiki/Quaternion), but we wont see that now. Let's see a problem with using Euler angles known as gimbal lock. Gimbal lock[Gimbal lock](http://en.wikipedia.org/wiki/Gimbal_lock) is the loss of one degree of freedom in a three-dimensional coordinate system that occurs when an axis of rotation is placed parallel with another previous axis of rotation and two of the three rotations will be around the same direction given a certain convention of the Euler angles. This "locks" the system into rotations in a degenerate two-dimensional space. The system is not really locked in the sense it can't be moved or reach the other degree of freedom, but it will need an extra rotation for that. For instance, let's look at the $zxz$ sequence of rotations by the angles $\alpha, \beta, \gamma$:$$ \begin{array}{l l}\mathbf{R}_{zxz} & = \mathbf{R_{z}} \mathbf{R_{x}} \mathbf{R_{z}} \\ \\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\beta & \sin\beta \\0 & -\sin\beta & \cos\beta\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array} $$Which results in:
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rz = sym.Matrix([[cos(a), sin(a), 0], [-sin(a), cos(a), 0], [0, 0, 1]])
Rx = sym.Matrix([[1, 0, 0], [0, cos(b), sin(b)], [0, -sin(b), cos(b)]])
Rz2 = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix for the zxz sequence:
Rzxz = Rz2*Rx*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}=') + sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Let's examine what happens with this rotation matrix when the rotation around the second axis ($x$) by $\beta$ is zero:$$ \begin{array}{l l}\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma) = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array} $$The second matrix is the identity matrix and has no effect on the product of the matrices, which will be:
###Code
Rzxz = Rz2*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Which simplifies to:
###Code
Rzxz = sym.simplify(Rzxz)
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Despite the different values of $\alpha$ and $\gamma$, the result is a single rotation around the $z$ axis given by the sum $\alpha+\gamma$. In this case, one of the three degrees of freedom was lost (another degree of freedom was fixed by setting $\beta=0$). For movement analysis, this means, for example, that one angle will be undetermined because all we know is the sum of the two angles obtained from the rotation matrix. We can set the unknown angle to zero, but this is arbitrary. In fact, we already dealt with another example of gimbal lock when we looked at the $xyz$ sequence with rotations by $90^o$. See the figure representing these rotations again and notice that the first and third rotations were around the same axis because the second rotation was by $90^o$. Let's do the matrix multiplication replacing only the second angle by $90^o$ (and let's use the `euler_rotmat.py` function):
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[None, 90., None], showA=False)
###Output
_____no_output_____
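###Markdown
We can also check numerically that the individual angles become indeterminate: with the second rotation fixed at $90^o$, two different pairs of first and third angles that have the same sum produce exactly the same rotation matrix (a quick check, ours, using the symbolic `Rxyz` matrix computed earlier):
###Code
# numeric check: for the xyz sequence with beta = 90 deg, only the sum
# alpha + gamma can be recovered from the rotation matrix
Rxyz_num = sym.lambdify((a, b, g), Rxyz, 'numpy')
R1 = Rxyz_num(np.deg2rad(30), np.pi/2, np.deg2rad(40))  # alpha + gamma = 70 deg
R2 = Rxyz_num(np.deg2rad(50), np.pi/2, np.deg2rad(20))  # alpha + gamma = 70 deg
print('Same rotation matrix?', np.allclose(R1, R2))
###Output
_____no_output_____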
###Markdown
Once again, one degree of freedom was lost and we will not be able to uniquely determine the three angles for the given rotation matrix and sequence. Possible solutions to avoid the gimbal lock are: choose a different sequence; do not rotate the system by the angle that puts the system in gimbal lock (in the examples above, avoid $\beta=90^o$); or add an extra fourth parameter in the description of the rotation angles. But if we have a physical system where we measure or specify exactly three Euler angles in a fixed sequence to describe or control it, and we can't prevent the system from assuming certain angles, then we might have to say "Houston, we have a problem". A famous situation where such a problem occurred was during the Apollo 13 mission. This is an actual conversation between crew and mission control during the Apollo 13 mission (Corke, 2011):>`Mission clock: 02 08 12 47` **Flight**: *Go, Guidance.* **Guido**: *He’s getting close to gimbal lock there.* **Flight**: *Roger. CapCom, recommend he bring up C3, C4, B3, B4, C1 and C2 thrusters, and advise he’s getting close to gimbal lock.* **CapCom**: *Roger.* *Of note, it was not a gimbal lock that caused the accident with the Apollo 13 mission; the problem was an oxygen tank explosion.* Determination of the rotation matrixA typical way to determine the rotation matrix for a rigid body in biomechanics is to use motion analysis to measure the position of at least three non-collinear markers placed on the rigid body, and then calculate a basis with these positions, analogous to what we have described in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb). BasisIf we have the position of three markers: **m1**, **m2**, **m3**, a basis (formed by three orthogonal versors) can be found as: - First axis, **v1**, the vector **m2-m1**; - Second axis, **v2**, the cross product between the vectors **v1** and **m3-m1**; - Third axis, **v3**, the cross product between the vectors **v1** and **v2**. Then, each of these vectors is normalized, resulting in three orthogonal versors. For example, given the positions m1 = [1,0,0], m2 = [0,1,0], m3 = [0,0,1], a basis can be found:
###Code
m1 = np.array([1, 0, 0])
m2 = np.array([0, 1, 0])
m3 = np.array([0, 0, 1])
v1 = m2 - m1
v2 = np.cross(v1, m3 - m1)
v3 = np.cross(v1, v2)
print('Versors:')
v1 = v1/np.linalg.norm(v1)
print('v1 =', v1)
v2 = v2/np.linalg.norm(v2)
print('v2 =', v2)
v3 = v3/np.linalg.norm(v3)
print('v3 =', v3)
print('\nNorm of each versor:\n',
np.linalg.norm(np.cross(v1, v2)),
np.linalg.norm(np.cross(v1, v3)),
np.linalg.norm(np.cross(v2, v3)))
###Output
Versors:
v1 = [-0.7071 0.7071 0. ]
v2 = [ 0.5774 0.5774 0.5774]
v3 = [ 0.4082 0.4082 -0.8165]
Norm of each versor:
1.0 1.0 1.0
###Markdown
Remember from the text [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb) that the versors of this basis are the columns of the $\mathbf{R_{Gl}}$ and the rows of the $\mathbf{R_{lG}}$ rotation matrices, for instance:
###Code
RlG = np.array([v1, v2, v3])
print('Rotation matrix from Global to local coordinate system:\n', RlG)
###Output
Rotation matrix from Global to local coordinate system:
[[-0.7071 0.7071 0. ]
[ 0.5774 0.5774 0.5774]
[ 0.4082 0.4082 -0.8165]]
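###Markdown
As a quick sanity check (ours, not in the original text), a valid rotation matrix must be orthonormal: its product with its transpose is the identity matrix and its determinant is +1:
###Code
# sanity check: RlG must be orthonormal with determinant +1
print('RlG @ RlG.T == I:', np.allclose(RlG @ RlG.T, np.eye(3)))
print('det(RlG) =', np.linalg.det(RlG))
###Output
_____no_output_____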
###Markdown
And the corresponding angles of rotation using the $xyz$ sequence are:
###Code
euler_angles_from_rot_xyz(RlG)
###Output
_____no_output_____
###Markdown
These angles don't mean anything now because they are angles of the axes of the arbitrary basis we computed. In biomechanics, if we want an anatomical interpretation of the coordinate system orientation, we define the versors of the basis oriented with anatomical axes (e.g., for the shoulder, one versor would be aligned with the long axis of the upper arm). We will see how to perform this computation later. Now we will combine translation and rotation in a single transformation. Translation and RotationConsider the case where the local coordinate system is translated and rotated in relation to the Global coordinate system as illustrated in the next figure. Figure. A point in three-dimensional space represented in two coordinate systems, with one system translated and rotated. The position of point $\mathbf{P}$ originally described in the local coordinate system, but now described in the Global coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} $$This means that we first *disrotate* the local coordinate system and then correct for the translation between the two coordinate systems. Note that we can't invert this order: the point position is expressed in the local coordinate system and we can't add this vector to another vector expressed in the Global coordinate system, first we have to convert the vectors to the same coordinate system.If now we want to find the position of a point at the local coordinate system given its position in the Global coordinate system, the rotation matrix and the translation vector, we have to invert the expression above:$$ \begin{array}{l l}\mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{R_{Gl}^{-1}}\cdot\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l}\right) \implies \\\\\mathbf{R_{Gl}^{-1}}\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\mathbf{L_G} + \mathbf{R_{Gl}^{-1}}\mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{P_l} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{P_G}-\mathbf{L_G}\right) = \mathbf{R_{Gl}^T}\left(\mathbf{P_G}-\mathbf{L_G}\right) \;\;\;\;\; \text{or} \;\;\;\;\; \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) \end{array} $$The expression above indicates that to perform the inverse operation, to go from the Global to the local coordinate system, we first translate and then rotate the coordinate system. Transformation matrixIt is possible to combine the translation and rotation operations in only one matrix, called the transformation matrix:$$ \begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} =\begin{bmatrix}. & . & . & \mathbf{L_{X}} \\. & \mathbf{R_{Gl}} & . & \mathbf{L_{Y}} \\. & . & . 
& \mathbf{L_{Z}} \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \\1\end{bmatrix} $$Or simply:$$ \mathbf{P_G} = \mathbf{T_{Gl}}\mathbf{P_l} $$Remember that in general the transformation matrix is not orthonormal, i.e., its inverse is not equal to its transpose.The inverse operation, to express the position at the local coordinate system in terms of the Global reference system, is:$$ \mathbf{P_l} = \mathbf{T_{Gl}^{-1}}\mathbf{P_G} $$And in matrix form:$$ \begin{bmatrix}\mathbf{P_x} \\\mathbf{P_y} \\\mathbf{P_z} \\1\end{bmatrix} =\begin{bmatrix}\cdot & \cdot & \cdot & \cdot \\\cdot & \mathbf{R^{-1}_{Gl}} & \cdot & -\mathbf{R^{-1}_{Gl}}\:\mathbf{L_G} \\\cdot & \cdot & \cdot & \cdot \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} $$ Example with actual motion analysis data *The data for this example is taken from page 183 of David Winter's book.* Consider the following marker positions placed on a leg (described in the laboratory coordinate system with coordinates $x, y, z$ in cm, the $x$ axis points forward and the $y$ axes points upward): lateral malleolus (**lm** = [2.92, 10.10, 18.85]), medial malleolus (**mm** = [2.71, 10.22, 26.52]), fibular head (**fh** = [5.05, 41.90, 15.41]), and medial condyle (**mc** = [8.29, 41.88, 26.52]). Define the ankle joint center as the centroid between the **lm** and **mm** markers and the knee joint center as the centroid between the **fh** and **mc** markers. An anatomical coordinate system for the leg can be defined as: the quasi-vertical axis ($y$) passes through the ankle and knee joint centers; a temporary medio-lateral axis ($z$) passes through the two markers on the malleolus, an anterior-posterior as the cross product between the two former calculated orthogonal axes, and the origin at the ankle joint center. a) Calculate the anatomical coordinate system for the leg as described above. b) Calculate the rotation matrix and the translation vector for the transformation from the anatomical to the laboratory coordinate system. c) Calculate the position of each marker and of each joint center at the anatomical coordinate system. d) Calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent).
###Code
# calculation of the joint centers
mm = np.array([2.71, 10.22, 26.52])
lm = np.array([2.92, 10.10, 18.85])
fh = np.array([5.05, 41.90, 15.41])
mc = np.array([8.29, 41.88, 26.52])
ajc = (mm + lm)/2
kjc = (fh + mc)/2
print('Position of the ankle joint center:', ajc)
print('Position of the knee joint center:', kjc)
# calculation of the anatomical coordinate system axes (basis)
y = kjc - ajc
x = np.cross(y, mm - lm)
z = np.cross(x, y)
print('Versors:')
x = x/np.linalg.norm(x)
y = y/np.linalg.norm(y)
z = z/np.linalg.norm(z)
print('x =', x)
print('y =', y)
print('z =', z)
Oleg = ajc
print('\nOrigin =', Oleg)
# Rotation matrices
RGl = np.array([x, y , z]).T
print('Rotation matrix from the anatomical to the laboratory coordinate system:\n', RGl)
RlG = RGl.T
print('\nRotation matrix from the laboratory to the anatomical coordinate system:\n', RlG)
# Translational vector
OG = np.array([0, 0, 0]) # Laboratory coordinate system origin
LG = Oleg - OG
print('Translational vector from the anatomical to the laboratory coordinate system:\n', LG)
###Output
Translational vector from the anatomical to the laboratory coordinate system:
[ 2.815 10.16 22.685]
###Markdown
To get the coordinates from the laboratory (global) coordinate system to the anatomical (local) coordinate system:$$ \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) $$
###Code
# position of each marker and of each joint center at the anatomical coordinate system
mml = np.dot(RlG, (mm - LG)) # equivalent to the algebraic expression RlG*(mm - LG).T
lml = np.dot(RlG, (lm - LG))
fhl = np.dot(RlG, (fh - LG))
mcl = np.dot(RlG, (mc - LG))
ajcl = np.dot(RlG, (ajc - LG))
kjcl = np.dot(RlG, (kjc - LG))
print('Coordinates of mm in the anatomical system:\n', mml)
print('Coordinates of lm in the anatomical system:\n', lml)
print('Coordinates of fh in the anatomical system:\n', fhl)
print('Coordinates of mc in the anatomical system:\n', mcl)
print('Coordinates of kjc in the anatomical system:\n', kjcl)
print('Coordinates of ajc in the anatomical system (origin):\n', ajcl)
###Output
Coordinates of mm in the anatomical system:
[-0. -0.1592 3.8336]
Coordinates of lm in the anatomical system:
[-0. 0.1592 -3.8336]
Coordinates of fh in the anatomical system:
[ -1.7703 32.1229 -5.5078]
Coordinates of mc in the anatomical system:
[ 1.7703 31.8963 5.5078]
Coordinates of kjc in the anatomical system:
[ 0. 32.0096 0. ]
Coordinates of ajc in the anatomical system (origin):
[ 0. 0. 0.]
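###Markdown
To close this example, we can assemble the 4x4 transformation matrix $\mathbf{T_{Gl}}$ described earlier from the rotation matrix `RGl` and the translation vector `LG` computed above, and verify that its inverse maps a marker from the laboratory to the anatomical coordinate system (a minimal sketch; only the construction below is ours, the variables come from the cells above):
###Code
# transformation matrix from the anatomical to the laboratory coordinate system
TGl = np.eye(4)
TGl[:3, :3] = RGl
TGl[:3, 3] = LG
# its inverse combines the transposed rotation and the rotated, negated translation
TGl_inv = np.eye(4)
TGl_inv[:3, :3] = RGl.T
TGl_inv[:3, 3] = -RGl.T @ LG
# check: transforming the mm marker to the anatomical system with TGl_inv
# should reproduce the coordinates mml computed above
mm_h = np.hstack((mm, 1))  # homogeneous coordinates [x, y, z, 1]
print('mm in the anatomical system:', (TGl_inv @ mm_h)[:3])
print('same as mml above?', np.allclose((TGl_inv @ mm_h)[:3], mml))
###Output
_____no_output_____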
###Markdown
Problems1. For the example about how the order of rotations of a rigid body affects the orientation shown in a figure above, deduce the rotation matrices for each of the 4 cases shown in the figure. For the first two cases, deduce the rotation matrices from the global to the local coordinate system and for the other two examples, deduce the rotation matrices from the local to the global coordinate system. 2. Consider the data from problem 7 in the notebook [Frame of reference](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb) where the following anatomical landmark positions are given (units in meters): RASIS=[0.5,0.8,0.4], LASIS=[0.55,0.78,0.1], RPSIS=[0.3,0.85,0.2], and LPSIS=[0.29,0.78,0.3]. Deduce the rotation matrices for the global to anatomical coordinate system and for the anatomical to global coordinate system. 3. For the data from the last example, calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent). References- Corke P (2011) [Robotics, Vision and Control: Fundamental Algorithms in MATLAB](http://www.petercorke.com/RVC/). Springer-Verlag Berlin. - Robertson G, Caldwell G, Hamill J, Kamen G (2013) [Research Methods in Biomechanics](http://books.google.com.br/books?id=gRn8AAAAQBAJ). 2nd Edition. Human Kinetics. - [Maths - Euler Angles](http://www.euclideanspace.com/maths/geometry/rotations/euler/). - Murray RM, Li Z, Sastry SS (1994) [A Mathematical Introduction to Robotic Manipulation](http://www.cds.caltech.edu/~murray/mlswiki/index.php/Main_Page). Boca Raton, CRC Press. - Ruina A, Rudra P (2013) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press. - Siciliano B, Sciavicco L, Villani L, Oriolo G (2009) [Robotics - Modelling, Planning and Control](http://books.google.com.br/books/about/Robotics.html?hl=pt-BR&id=jPCAFmE-logC). Springer-Verlag London.- Winter DA (2009) [Biomechanics and motor control of human movement](http://books.google.com.br/books?id=_bFHL08IWfwC). 4 ed. Hoboken, USA: Wiley. - Zatsiorsky VM (1997) [Kinematics of Human Motion](http://books.google.com.br/books/about/Kinematics_of_Human_Motion.html?id=Pql_xXdbrMcC&redir_esc=y). Champaign, Human Kinetics. Function `euler_rotmatrix.py`
###Code
# %load ./../functions/euler_rotmat.py
#!/usr/bin/env python
"""Euler rotation matrix given sequence, frame, and angles."""
from __future__ import division, print_function
__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'
__version__ = 'euler_rotmat.py v.1 2014/03/10'
def euler_rotmat(order='xyz', frame='local', angles=None, unit='deg',
str_symbols=None, showA=True, showN=True):
"""Euler rotation matrix given sequence, frame, and angles.
This function calculates the algebraic rotation matrix (3x3) for a given
sequence ('order' argument) of up to three elemental rotations of a given
coordinate system ('frame' argument) around another coordinate system, the
Euler (or Eulerian) angles [1]_.
This function also calculates the numerical values of the rotation matrix
when numerical values for the angles are inputed for each rotation axis.
Use None as value if the rotation angle for the particular axis is unknown.
The symbols for the angles are: alpha, beta, and gamma for the first,
second, and third rotations, respectively.
The matrix product is calulated from right to left and in the specified
sequence for the Euler angles. The first letter will be the first rotation.
The function will print and return the algebraic rotation matrix and the
numerical rotation matrix if angles were inputed.
Parameters
----------
order : string, optional (default = 'xyz')
Sequence for the Euler angles, any combination of the letters
x, y, and z with 1 to 3 letters is accepted to denote the
elemental rotations. The first letter will be the first rotation.
frame : string, optional (default = 'local')
Coordinate system for which the rotations are calculated.
Valid values are 'local' or 'global'.
angles : list, array, or bool, optional (default = None)
Numeric values of the rotation angles ordered as the 'order'
parameter. Enter None for a rotation whith unknown value.
unit : str, optional (default = 'deg')
Unit of the input angles.
str_symbols : list of strings, optional (default = None)
New symbols for the angles, for instance, ['theta', 'phi', 'psi']
showA : bool, optional (default = True)
True (1) displays the Algebraic rotation matrix in rich format.
False (0) to not display.
showN : bool, optional (default = True)
True (1) displays the Numeric rotation matrix in rich format.
False (0) to not display.
Returns
-------
R : Matrix Sympy object
Rotation matrix (3x3) in algebraic format.
Rn : Numpy array or Matrix Sympy object (only if angles are inputed)
Numeric rotation matrix (if values for all angles were inputed) or
a algebraic matrix with some of the algebraic angles substituted
by the corresponding inputed numeric values.
Notes
-----
This code uses Sympy, the Python library for symbolic mathematics, to
calculate the algebraic rotation matrix and shows this matrix in latex form
possibly for using with the IPython Notebook, see [1]_.
References
----------
.. [1] http://nbviewer.ipython.org/github/duartexyz/BMC/blob/master/Transformation3D.ipynb
Examples
--------
>>> # import function
>>> from euler_rotmat import euler_rotmat
>>> # Default options: xyz sequence, local frame and show matrix
>>> R = euler_rotmat()
>>> # XYZ sequence (around global (fixed) coordinate system)
>>> R = euler_rotmat(frame='global')
>>> # Enter numeric values for all angles and show both matrices
>>> R, Rn = euler_rotmat(angles=[90, 90, 90])
>>> # show what is returned
>>> euler_rotmat(angles=[90, 90, 90])
>>> # show only the rotation matrix for the elemental rotation at x axis
>>> R = euler_rotmat(order='x')
>>> # zxz sequence and numeric value for only one angle
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, 0, None])
>>> # input values in radians:
>>> import numpy as np
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, np.pi, None], unit='rad')
>>> # shows only the numeric matrix
>>> R, Rn = euler_rotmat(order='zxz', angles=[90, 0, None], showA='False')
>>> # Change the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['theta', 'phi', 'psi'])
>>> # Negativate the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['-theta', '-phi', '-psi'])
>>> # all algebraic matrices for all possible sequences for the local frame
>>> s=['xyz','xzy','yzx','yxz','zxy','zyx','xyx','xzx','yzy','yxy','zxz','zyz']
>>> for seq in s: R = euler_rotmat(order=seq)
>>> # all algebraic matrices for all possible sequences for the global frame
>>> for seq in s: R = euler_rotmat(order=seq, frame='global')
"""
import numpy as np
import sympy as sym
try:
from IPython.core.display import Math, display
ipython = True
except:
ipython = False
angles = np.asarray(np.atleast_1d(angles), dtype=np.float64)
if ~np.isnan(angles).all():
if len(order) != angles.size:
raise ValueError("Parameters 'order' and 'angles' (when " +
"different from None) must have the same size.")
x, y, z = sym.symbols('x, y, z')
sig = [1, 1, 1]
if str_symbols is None:
a, b, g = sym.symbols('alpha, beta, gamma')
else:
s = str_symbols
if s[0][0] == '-': s[0] = s[0][1:]; sig[0] = -1
if s[1][0] == '-': s[1] = s[1][1:]; sig[1] = -1
if s[2][0] == '-': s[2] = s[2][1:]; sig[2] = -1
a, b, g = sym.symbols(s)
var = {'x': x, 'y': y, 'z': z, 0: a, 1: b, 2: g}
# Elemental rotation matrices for xyz (local)
cos, sin = sym.cos, sym.sin
Rx = sym.Matrix([[1, 0, 0], [0, cos(x), sin(x)], [0, -sin(x), cos(x)]])
Ry = sym.Matrix([[cos(y), 0, -sin(y)], [0, 1, 0], [sin(y), 0, cos(y)]])
Rz = sym.Matrix([[cos(z), sin(z), 0], [-sin(z), cos(z), 0], [0, 0, 1]])
if frame.lower() == 'global':
Rs = {'x': Rx.T, 'y': Ry.T, 'z': Rz.T}
order = order.upper()
else:
Rs = {'x': Rx, 'y': Ry, 'z': Rz}
order = order.lower()
R = Rn = sym.Matrix(sym.Identity(3))
str1 = r'\mathbf{R}_{%s}( ' %frame # last space needed for order=''
#str2 = [r'\%s'%var[0], r'\%s'%var[1], r'\%s'%var[2]]
str2 = [1, 1, 1]
for i in range(len(order)):
Ri = Rs[order[i].lower()].subs(var[order[i].lower()], sig[i] * var[i])
R = Ri * R
if sig[i] > 0:
str2[i] = '%s:%s' %(order[i], sym.latex(var[i]))
else:
str2[i] = '%s:-%s' %(order[i], sym.latex(var[i]))
str1 = str1 + str2[i] + ','
if ~np.isnan(angles).all() and ~np.isnan(angles[i]):
if unit[:3].lower() == 'deg':
angles[i] = np.deg2rad(angles[i])
Rn = Ri.subs(var[i], angles[i]) * Rn
#Rn = sym.lambdify(var[i], Ri, 'numpy')(angles[i]) * Rn
str2[i] = str2[i] + '=%.0f^o' %np.around(np.rad2deg(angles[i]), 0)
else:
Rn = Ri * Rn
Rn = sym.simplify(Rn) # for trigonometric relations
try:
# nsimplify only works if there are symbols
Rn2 = sym.latex(sym.nsimplify(Rn, tolerance=1e-8).n(chop=True, prec=4))
except:
Rn2 = sym.latex(Rn.n(chop=True, prec=4))
# there are no symbols, pass it as Numpy array
Rn = np.asarray(Rn)
if showA and ipython:
display(Math(str1[:-1] + ') =' + sym.latex(R, mat_str='matrix')))
if showN and ~np.isnan(angles).all() and ipython:
str2 = ',\;'.join(str2[:angles.size])
display(Math(r'\mathbf{R}_{%s}(%s)=%s' %(frame, str2, Rn2)))
if np.isnan(angles).all():
return R
else:
return R, Rn
###Output
_____no_output_____
###Markdown
Rigid-body transformations in three-dimensions> Marcos Duarte > Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/)) > Federal University of ABC, Brazil The kinematics of a rigid body is completely described by its pose, i.e., its position and orientation in space (and the corresponding changes, translation and rotation). In a three-dimensional space, at least three coordinates and three angles are necessary to describe the pose of the rigid body, totalizing six degrees of freedom for a rigid body.In motion analysis, to describe a translation and rotation of a rigid body with respect to a coordinate system, typically we attach another coordinate system to the rigid body and determine a transformation between these two coordinate systems.A transformation is any function mapping a set to another set. For the description of the kinematics of rigid bodies, we are interested only in what is called rigid or Euclidean transformations (denoted as SE(3) for the three-dimensional space) because they preserve the distance between every pair of points of the body (which is considered rigid by definition). Translations and rotations are examples of rigid transformations (a reflection is also an example of rigid transformation but this changes the right-hand axis convention to a left hand, which usually is not of interest). In turn, rigid transformations are examples of [affine transformations](https://en.wikipedia.org/wiki/Affine_transformation). Examples of other affine transformations are shear and scaling transformations (which preserves angles but not lengths). We will follow the same rationale as in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/Transformation2D.ipynb) and we will skip the fundamental concepts already covered there. So, you if haven't done yet, you should read that notebook before continuing here. TranslationA pure three-dimensional translation of a rigid body (or a coordinate system attached to it) in relation to other rigid body (with other coordinate system) is illustrated in the figure below. Figure. A point in three-dimensional space represented in two coordinate systems, with one coordinate system translated. The position of point $\mathbf{P}$ originally described in the $xyz$ (local) coordinate system but now described in the $\mathbf{XYZ}$ (Global) coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{P_l} $$Or in terms of its components:$$ \begin{array}{}\mathbf{P_X} =& \mathbf{L_X} + \mathbf{P}_x \\\mathbf{P_Y} =& \mathbf{L_Y} + \mathbf{P}_y \\\mathbf{P_Z} =& \mathbf{L_Z} + \mathbf{P}_z \end{array} $$And in matrix form:$$ \begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \end{bmatrix} =\begin{bmatrix}\mathbf{L_X} \\\mathbf{L_Y} \\\mathbf{L_Z} \end{bmatrix} +\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \end{bmatrix} $$From classical mechanics, this is an example of [Galilean transformation](http://en.wikipedia.org/wiki/Galilean_transformation). Let's use Python to compute some numeric examples:
###Code
# Import the necessary libraries
import numpy as np
# suppress scientific notation for small numbers:
np.set_printoptions(precision=4, suppress=True)
###Output
_____no_output_____
###Markdown
For example, if the local coordinate system is translated by $\mathbf{L_G}=[1, 2, 3]$ in relation to the Global coordinate system, a point with coordinates $\mathbf{P_l}=[4, 5, 6]$ at the local coordinate system will have the position $ \mathbf{P_G}=[5, 7, 9] $ at the Global coordinate system:
###Code
LG = np.array([1, 2, 3]) # Numpy array
Pl = np.array([4, 5, 6])
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
This operation also works if we have more than one point (NumPy tries to guess how to handle vectors with different dimensions):
###Code
Pl = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) # 2D array with 3 rows and 3 columns
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
RotationA pure three-dimensional rotation of a $xyz$ (local) coordinate system in relation to other $\mathbf{XYZ}$ (Global) coordinate system and the position of a point in these two coordinate systems are illustrated in the next figure (remember that this is equivalent to describing a rotation between two rigid bodies). A point in three-dimensional space represented in two coordinate systems, with one system rotated. In analogy to the rotation in two dimensions, we can calculate the rotation matrix that describes the rotation of the $xyz$ (local) coordinate system in relation to the $\mathbf{XYZ}$ (Global) coordinate system using the direction cosines between the axes of the two coordinate systems:$$ \mathbf{R_{Gl}} = \begin{bmatrix}cos\mathbf{X}x & cos\mathbf{X}y & cos\mathbf{X}z \\cos\mathbf{Y}x & cos\mathbf{Y}y & cos\mathbf{Y}z \\cos\mathbf{Z}x & cos\mathbf{Z}y & cos\mathbf{Z}z\end{bmatrix} $$Note however that for rotations around more than one axis, these angles will not lie in the main planes ($\mathbf{XY, YZ, ZX}$) of the $\mathbf{XYZ}$ coordinate system, as illustrated in the figure below for the direction angles of the $y$ axis only. Thus, the determination of these angles by simple inspection, as we have done for the two-dimensional case, would not be simple. Figure. Definition of direction angles for the $y$ axis of the local coordinate system in relation to the $\mathbf{XYZ}$ Global coordinate system.Note that the nine angles shown in the matrix above for the direction cosines are obviously redundant since only three angles are necessary to describe the orientation of a rigid body in the three-dimensional space. An important characteristic of angles in the three-dimensional space is that angles cannot be treated as vectors: the result of a sequence of rotations of a rigid body around different axes depends on the order of the rotations, as illustrated in the next figure. Figure. The result of a sequence of rotations around different axes of a coordinate system depends on the order of the rotations. In the first example (first row), the rotations are around a Global (fixed) coordinate system. In the second example (second row), the rotations are around a local (rotating) coordinate system.Let's focus now on how to understand rotations in the three-dimensional space, looking at the rotations between coordinate systems (or between rigid bodies). Later we will apply what we have learned to describe the position of a point in these different coordinate systems. Euler anglesThere are different ways to describe a three-dimensional rotation of a rigid body (or of a coordinate system). Probably, the most straightforward solution would be to use a [spherical coordinate system](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ReferenceFrame.ipynbSpherical-coordinate-system), but spherical coordinates would be difficult to give an anatomical or clinical interpretation. A solution that has been often employed in biomechanics to handle rotations in the three-dimensional space is to use Euler angles. Under certain conditions, Euler angles can have an anatomical interpretation, but this representation also has some caveats. 
Let's see the Euler angles now.[Leonhard Euler](https://en.wikipedia.org/wiki/Leonhard_Euler) in the XVIII century showed that two three-dimensional coordinate systems with a common origin can be related by a sequence of up to three elemental rotations about the axes of the local coordinate system, where no two successive rotations may be about the same axis, which now are known as [Euler (or Eulerian) angles](http://en.wikipedia.org/wiki/Euler_angles). Elemental rotationsFirst, let's see rotations around a fixed Global coordinate system as we did for the two-dimensional case. The next figure illustrates elemental rotations of the local coordinate system around each axis of the fixed Global coordinate system. Figure. Elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. Note that for better clarity, the axis around where the rotation occurs is shown perpendicular to this page for each elemental rotation.The rotation matrices for the elemental rotations around each axis of the fixed $\mathbf{XYZ}$ coordinate system (rotations of the local coordinate system in relation to the Global coordinate system) are shown next.Around $\mathbf{X}$ axis: $$ \mathbf{R_{Gl,\:X}} = \begin{bmatrix}1 & 0 & 0 \\0 & cos\alpha & -sin\alpha \\0 & sin\alpha & cos\alpha\end{bmatrix} $$Around $\mathbf{Y}$ axis: $$ \mathbf{R_{Gl,\:Y}} = \begin{bmatrix}cos\beta & 0 & sin\beta \\0 & 1 & 0 \\-sin\beta & 0 & cos\beta\end{bmatrix} $$Around $\mathbf{Z}$ axis: $$ \mathbf{R_{Gl,\:Z}} = \begin{bmatrix}cos\gamma & -sin\gamma & 0\\sin\gamma & cos\gamma & 0 \\0 & 0 & 1\end{bmatrix} $$These matrices are the rotation matrices for the case of two-dimensional coordinate systems plus the corresponding terms for the third axes of the local and Global coordinate systems, which are parallel. To understand why the terms for the third axes are 1's or 0's, for instance, remember they represent the cosine directors. The cosines between $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ for the elemental rotations around respectively the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes are all 1 because $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ are parallel ($cos 0^o$). 
The cosines of the other elements are zero because the axis around where each rotation occurs is perpendicular to the other axes of the coordinate systems ($cos 90^o$).The rotation matrices for the elemental rotations this time around each axis of the $xyz$ coordinate system (rotations of the Global coordinate system in relation to the local coordinate system), similarly to the two-dimensional case, are simply the transpose of the above matrices as shown next.Around $x$ axis: $$ \mathbf{R}_{\mathbf{lG},\;x} = \begin{bmatrix}1 & 0 & 0 \\0 & cos\alpha & sin\alpha \\0 & -sin\alpha & cos\alpha\end{bmatrix} $$Around $y$ axis: $$ \mathbf{R}_{\mathbf{lG},\;y} = \begin{bmatrix}cos\beta & 0 & -sin\beta \\0 & 1 & 0 \\sin\beta & 0 & cos\beta\end{bmatrix} $$Around $z$ axis: $$ \mathbf{R}_{\mathbf{lG},\;z} = \begin{bmatrix}cos\gamma & sin\gamma & 0\\-sin\gamma & cos\gamma & 0 \\0 & 0 & 1\end{bmatrix} $$Notice this is equivalent to instead of rotating the local coordinate system by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system; remember that $cos(-\:\cdot)=cos(\cdot)$ and $sin(-\:\cdot)=-sin(\cdot)$.The fact that we chose to rotate the local coordinate system by a counterclockwise (positive) angle in relation to the Global coordinate system is just a matter of convention. Sequence of rotationsConsider now a sequence of elemental rotations around the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes of the fixed $\mathbf{XYZ}$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. This sequence of elemental rotations (each one of the local coordinate system with respect to the fixed Global coordinate system) is mathematically represented by a multiplication between the rotation matrices:$$ \begin{array}{l l}\mathbf{R_{Gl,\;XYZ}} & = \mathbf{R_{Z}} \mathbf{R_{Y}} \mathbf{R_{X}} \\\\ & = \begin{bmatrix}cos\gamma & -sin\gamma & 0\\sin\gamma & cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}cos\beta & 0 & sin\beta \\0 & 1 & 0 \\-sin\beta & 0 & cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & cos\alpha & -sin\alpha \\0 & sin\alpha & cos\alpha\end{bmatrix} \\\\ & =\begin{bmatrix}cos\beta\:cos\gamma \;&\;sin\alpha\:sin\beta\:cos\gamma-cos\alpha\:sin\gamma \;&\;cos\alpha\:sin\beta\:cos\gamma+sin\alpha\:sin\gamma \;\;\; \\cos\beta\:sin\gamma \;&\;sin\alpha\:sin\beta\:sin\gamma+cos\alpha\:cos\gamma \;&\;cos\alpha\:sin\beta\:sin\gamma-sin\alpha\:cos\gamma \;\;\; \\-sin\beta \;&\; sin\alpha\:cos\beta \;&\; cos\alpha\:cos\beta \;\;\;\end{bmatrix} \end{array} $$Note that the order of the multiplication of the matrices is from right to left (first the second rightmost matrix times the rightmost matrix, then the leftmost matrix times this result). We can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
#import the necessary libraries
from IPython.core.display import Math, display
import sympy as sym
cos, sin = sym.cos, sym.sin
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz in relation to XYZ:
RX = sym.Matrix([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]])
RY = sym.Matrix([[cos(b), 0, sin(b)], [0, 1, 0], [-sin(b), 0, cos(b)]])
RZ = sym.Matrix([[cos(g), -sin(g), 0], [sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz in relation to XYZ:
RXYZ = RZ*RY*RX
display(Math(sym.latex(r'\mathbf{R_{Gl,\;XYZ}}=') + sym.latex(RXYZ, mat_str='matrix')))
###Output
_____no_output_____
###Markdown
For instance, we can calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $\mathbf{X,Y,Z}$:
###Code
R = sym.lambdify((a, b, g), RXYZ, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R_{Gl,\;XYZ}}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Examining the matrix above and the correspondent previous figure, one can see they agree: the rotated $x$ axis (first column of the above matrix) has value -1 in the $\mathbf{Z}$ direction $[0,0,-1]$, the rotated $y$ axis (second column) is at the $\mathbf{Y}$ direction $[0,1,0]$, and the rotated $z$ axis (third column) is at the $\mathbf{X}$ direction $[1,0,0]$.We also can calculate the sequence of elemental rotations around the $x$, $y$, and $z$ axes of the rotating $xyz$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of a second $xyz$ local coordinate system around each axis, $x$, $y$, and $z$, of the rotating $xyz$ coordinate system.Likewise, this sequence of elemental rotations (each one of the local coordinate system with respect to the rotating local coordinate system) is mathematically represented by a multiplication between the rotation matrices (which are the inverse of the matrices for the rotations around $\mathbf{X,Y,Z}$ as we saw earlier):$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\;xyz} & = \mathbf{R_{z}} \mathbf{R_{y}} \mathbf{R_{x}} \\\\& = \begin{bmatrix}cos\gamma & sin\gamma & 0\\-sin\gamma & cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}cos\beta & 0 & -sin\beta \\0 & 1 & 0 \\sin\beta & 0 & cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & cos\alpha & sin\alpha \\0 & -sin\alpha & cos\alpha\end{bmatrix} \\\\& =\begin{bmatrix}cos\beta\:cos\gamma \;&\;sin\alpha\:sin\beta\:cos\gamma+cos\alpha\:sin\gamma \;&\;cos\alpha\:sin\beta\:cos\gamma-sin\alpha\:sin\gamma \;\;\; \\-cos\beta\:sin\gamma \;&\;-sin\alpha\:sin\beta\:sin\gamma+cos\alpha\:cos\gamma \;&\;cos\alpha\:sin\beta\:sin\gamma+sin\alpha\:cos\gamma \;\;\; \\sin\beta \;&\; -sin\alpha\:cos\beta \;&\; cos\alpha\:cos\beta \;\;\;\end{bmatrix} \end{array} $$As before, the order of the multiplication of the matrices is from right to left (first the second rightmost matrix times the rightmost matrix, then the leftmost matrix times this result). Once again, we can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rx = sym.Matrix([[1, 0, 0], [0, cos(a), sin(a)], [0, -sin(a), cos(a)]])
Ry = sym.Matrix([[cos(b), 0, -sin(b)], [0, 1, 0], [sin(b), 0, cos(b)]])
Rz = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz' in relation to xyz:
Rxyz = Rz*Ry*Rx
Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\;xyz}=') + sym.latex(Rxyz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
For instance, let's calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $x,y,z$:
###Code
R = sym.lambdify((a, b, g), Rxyz, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\;xyz}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Examining the above matrix and the correspondent previous figure, one can see they also agree: the rotated $x$ axis (first column of the above matrix) is at the $\mathbf{Z}$ direction $[0,0,1]$, the rotated $y$ axis (second column) is at the $\mathbf{-Y}$ direction $[0,-1,0]$, and the rotated $z$ axis (third column) is at the $\mathbf{X}$ direction $[1,0,0]$.Examining the $\mathbf{R_{Gl,\;XYZ}}$ and $\mathbf{R}_{lG,\;xyz}$ matrices one can see that negating the angles from one of the matrices results in the other matrix. That is, the rotations of $xyz$ in relation to $\mathbf{XYZ}$ by $\alpha, \beta, \gamma$ result in the same matrix as the rotations of $\mathbf{XYZ}$ in relation to $xyz$ by $-\alpha, -\beta, -\gamma$, as we saw for the elemental rotations. Let's check that: There is another property of the rotation matrices for the different coordinate systems: the rotation matrix, for example from the Global to the local coordinate system for the $xyz$ sequence, is just the transpose of the rotation matrix for the inverse operation (from the local to the Global coordinate system) of the inverse sequence ($\mathbf{ZYX}$) and vice-versa:
###Code
# Rotation matrix of xyz in relation to XYZ:
display(Math(sym.latex(r'\mathbf{R_{GL,\;XYZ}}(\alpha,\beta,\gamma) \quad =') + \
sym.latex(RXYZ, mat_str='matrix')))
# Elemental rotation matrices of XYZ in relation to xyz and negate all the angles:
Rx_n = sym.Matrix([[1, 0, 0], [0, cos(-a), -sin(-a)], [0, sin(-a), cos(-a)]]).T
Ry_n = sym.Matrix([[cos(-b), 0, sin(-b)], [0, 1, 0], [-sin(-b), 0, cos(-b)]]).T
Rz_n = sym.Matrix([[cos(-g), -sin(-g), 0], [sin(-g), cos(-g), 0], [0, 0, 1]]).T
# Rotation matrix of XYZ in relation to xyz:
Rxyz_n = Rz_n*Ry_n*Rx_n
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\;xyz}(-\alpha,-\beta,-\gamma)=') + \
sym.latex(Rxyz_n, mat_str='matrix')))
# Check that the two matrices are equal:
print('\n')
display(Math(sym.latex(r'\mathbf{R_{GL,\;XYZ}}(\alpha,\beta,\gamma) \;==\;' + \
r'\mathbf{R}_{\mathbf{lG},\;xyz}(-\alpha,-\beta,-\gamma)')))
RXYZ == Rxyz_n
RZYX = RX*RY*RZ
display(Math(sym.latex(r'\mathbf{R_{Gl,\;ZYX}^T}=') + sym.latex(RZYX.T, mat_str='matrix')))
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\;xyz}(\alpha,\beta,\gamma) \;==\;' + \
r'\mathbf{R_{Gl,\;ZYX}^T}(\gamma,\beta,\alpha)')))
Rxyz == RZYX.T
###Output
_____no_output_____
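###Markdown
These transpose relations reflect a more general property: a rotation matrix is orthonormal, so its inverse equals its transpose and its determinant is $+1$. Here is a quick numeric check of this property, a minimal sketch with plain NumPy that rebuilds the local $xyz$ sequence for the three rotations of $90^o$ used before (the names `ang`, `Rx90`, `Ry90`, `Rz90`, and `R90` are ours, not part of the original code):
###Code
import numpy as np
# elemental rotation matrices of the local xyz sequence for 90 degree rotations
ang = np.pi/2
Rx90 = np.array([[1, 0, 0], [0, np.cos(ang), np.sin(ang)], [0, -np.sin(ang), np.cos(ang)]])
Ry90 = np.array([[np.cos(ang), 0, -np.sin(ang)], [0, 1, 0], [np.sin(ang), 0, np.cos(ang)]])
Rz90 = np.array([[np.cos(ang), np.sin(ang), 0], [-np.sin(ang), np.cos(ang), 0], [0, 0, 1]])
R90 = Rz90 @ Ry90 @ Rx90
# orthonormality: the product with the transpose is the identity matrix
print('R @ R.T:\n', np.round(R90 @ R90.T, 10))
# a proper rotation has determinant +1
print('det(R):', np.linalg.det(R90))
###Output
_____no_output_____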
###Markdown
The 12 different sequences of Euler anglesThe Euler angles are defined in terms of rotations around a rotating local coordinate system. As we saw for the sequence of rotations around $x, y, z$, the axes of the local rotated coordinate system are not fixed in space because after the first elemental rotation, the other two axes rotate. Other sequences of rotations could be produced without combining axes of the two different coordinate systems (Global and local) for the definition of the rotation axes. There is a total of 12 different sequences of three elemental rotations that are valid and may be used for describing the rotation of a coordinate system with respect to another coordinate system:$$ xyz \quad xzy \quad yzx \quad yxz \quad zxy \quad zyx $$$$ xyx \quad xzx \quad yzy \quad yxy \quad zxz \quad zyz $$The first six sequences (first row) are all around different axes, they are usually referred as Cardan or Tait–Bryan angles. The other six sequences (second row) have the first and third rotations around the same axis, but keep in mind that the axis for the third rotation is not at the same place anymore because it changed its orientation after the second rotation. The sequences with repeated axes are known as proper or classic Euler angles.Which order to use it is a matter of convention, but because the order affects the results, it's fundamental to follow a convention and report it. In Engineering Mechanics (including Biomechanics), the $xyz$ order is more common; in Physics the $zxz$ order is more common (but the letters chosen to refer to the axes are arbitrary, what matters is the directions they represent). In Biomechanics, the order for the Cardan angles is most often based on the angle of most interest or of most reliable measurement. Accordingly, the axis of flexion/extension is typically selected as the first axis, the axis for abduction/adduction is the second, and the axis for internal/external rotation is the last one. We will see about this order later. The $zyx$ order is commonly used to describe the orientation of a ship or aircraft and the rotations are known as the nautical angles: yaw, pitch and roll, respectively (see next figure). Figure. The principal axes of an aircraft and the names for the rotations around these axes (image from Wikipedia). If instead of rotations around the rotating local coordinate system we perform rotations around the fixed Global coordinate system, we will have other 12 different sequences of three elemental rotations, these are called simply rotation angles. So, in total there are 24 possible different sequences of three elemental rotations, but the 24 orders are not independent; with the 12 different sequences of Euler angles at the local coordinate system we can obtain the other 12 sequences at the Global coordinate system.The Python function `euler_rotmat.py` (code at the end of this text) determines the rotation matrix in algebraic form for any of the 24 different sequences (and sequences with only one or two axes can be inputed). This function also determines the rotation matrix in numeric form if a list of up to three angles are inputed.For instance, the rotation matrix in algebraic form for the $zxz$ order of Euler angles at the local coordinate system and the correspondent rotation matrix in numeric form after three elemental rotations by $90^o$ each are:
###Code
import sys
sys.path.insert(1, r'./../functions')
from euler_rotmat import euler_rotmat
Ra, Rn = euler_rotmat(order='zxz', frame='local', angles=[90, 90, 90])
###Output
_____no_output_____
###Markdown
Line of nodesThe second axis of rotation in the rotating coordinate system is also referred as the nodal axis or line of nodes; this axis coincides with the intersection of two perpendicular planes, one from each Global (fixed) and local (rotating) coordinate systems. The figure below shows an example of rotations and the nodal axis for the $xyz$ sequence of the Cardan angles. Figure. First row: example of rotations for the $xyz$ sequence of the Cardan angles. The Global (fixed) $XYZ$ coordinate system is shown in green, the local (rotating) $xyz$ coordinate system is shown in blue. The nodal axis (N, shown in red) is defined by the intersection of the $YZ$ and $xy$ planes and all rotations can be described in relation to this nodal axis or to a perpendicular axis to it. Second row: starting from no rotation, the local coordinate system is rotated by $\alpha$ around the $x$ axis, then by $\beta$ around the rotated $y$ axis, and finally by $\gamma$ around the twice rotated $z$ axis. Note that the line of nodes coincides with the $y$ axis for the second rotation. Determination of the Euler anglesOnce a convention is adopted, the corresponding three Euler angles of rotation can be found. For example, for the $\mathbf{R}_{xyz}$ rotation matrix:
###Code
R = euler_rotmat(order='xyz', frame='local')
###Output
_____no_output_____
###Markdown
The corresponding Cardan angles for the `xyz` sequence can be given by:$$ \begin{array}{}\alpha = arctan\left(\frac{sin(\alpha)}{cos(\alpha)}\right) = arctan\left(\frac{-\mathbf{R}_{21}}{\;\;\;\mathbf{R}_{22}}\right) \\\\\beta = arctan\left(\frac{sin(\beta)}{cos(\beta)}\right) = arctan\left(\frac{\mathbf{R}_{20}}{\sqrt{\mathbf{R}_{00}^2+\mathbf{R}_{10}^2}}\right) \\ \\\gamma = arctan\left(\frac{sin(\gamma)}{cos(\gamma)}\right) = arctan\left(\frac{-\mathbf{R}_{10}}{\;\;\;\mathbf{R}_{00}}\right)\end{array} $$Note that we prefer to use the mathematical function `arctan` rather than simply `arcsin` because the latter cannot for example distinguish $45^o$ from $135^o$ and also for better numerical accuracy. See the text [Angular kinematics in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/AngularKinematics2D.ipynb) for more on these issues.And here is a Python function to compute the Euler angles of rotations from the Global to the local coordinate system for the $xyz$ Cardan sequence:
###Code
def euler_angles_from_rot_xyz(rot_matrix, unit='deg'):
""" Compute Euler angles from rotation matrix in the xyz sequence."""
import numpy as np
R = np.array(rot_matrix, copy=False).astype(np.float64)[:3, :3]
angles = np.zeros(3)
angles[0] = np.arctan2(-R[2, 1], R[2, 2])
angles[1] = np.arctan2( R[2, 0], np.sqrt(R[0, 0]**2 + R[1, 0]**2))
angles[2] = np.arctan2(-R[1, 0], R[0, 0])
if unit[:3].lower() == 'deg': # convert from rad to degree
angles = np.rad2deg(angles)
return angles
###Output
_____no_output_____
###Markdown
For instance, consider sequential rotations of 45$^o$ around $x,y,z$. The resultant rotation matrix is:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[45, 45, 45], showA=False)
###Output
_____no_output_____
###Markdown
Let's check that calculating back the Cardan angles from this rotation matrix using the `euler_angles_from_rot_xyz()` function:
###Code
euler_angles_from_rot_xyz(Rn, unit='deg')
###Output
_____no_output_____
###Markdown
We could implement a function to calculate the Euler angles for any of the 12 sequences (in fact, plus another 12 sequences if we consider all the rotations from and to the two coordinate systems), but this is tedious. There is a smarter solution using the concept of [quaternion](http://en.wikipedia.org/wiki/Quaternion), but we wont see that now. Let's see a problem with using Euler angles known as gimbal lock. Gimbal lock[Gimbal lock](http://en.wikipedia.org/wiki/Gimbal_lock) is the loss of one degree of freedom in a three-dimensional coordinate system that occurs when an axis of rotation is placed parallel with another previous axis of rotation and two of the three rotations will be around the same direction given a certain convention of the Euler angles. This "locks" the system into rotations in a degenerate two-dimensional space. The system is not really locked in the sense it can't be moved or reach the other degree of freedom, but it will need an extra rotation for that. For instance, let's look at the $zxz$ sequence of rotations by the angles $\alpha, \beta, \gamma$:$$ \begin{array}{l l}\mathbf{R}_{zxz} & = \mathbf{R_{z}} \mathbf{R_{x}} \mathbf{R_{z}} \\ \\& = \begin{bmatrix}cos\gamma & sin\gamma & 0\\-sin\gamma & cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & cos\beta & sin\beta \\0 & -sin\beta & cos\beta\end{bmatrix}\begin{bmatrix}cos\alpha & sin\alpha & 0\\-sin\alpha & cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array} $$Which results in:
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rz = sym.Matrix([[cos(a), sin(a), 0], [-sin(a), cos(a), 0], [0, 0, 1]])
Rx = sym.Matrix([[1, 0, 0], [0, cos(b), sin(b)], [0, -sin(b), cos(b)]])
Rz2 = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix for the zxz sequence:
Rzxz = Rz2*Rx*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}=') + sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Let's examine what happens with this rotation matrix when the rotation around the second axis ($x$) by $\beta$ is zero:$$ \begin{array}{l l}\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma) = \begin{bmatrix}cos\gamma & sin\gamma & 0\\-sin\gamma & cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}cos\alpha & sin\alpha & 0\\-sin\alpha & cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array} $$The second matrix is the identity matrix and has no effect on the product of the matrices, which will be:
###Code
Rzxz = Rz2*Rz
Math(sym.latex(r'\mathbf{R}_{xyz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Which simplifies to:
###Code
Rzxz = sym.simplify(Rzxz)
Math(sym.latex(r'\mathbf{R}_{xyz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Despite different values of $\alpha$ and $\gamma$ the result is a single rotation around the $z$ axis given by the sum $\alpha+\gamma$. In this case, of the three degrees of freedom one was lost (the other degree of freedom was set by $\beta=0$). For movement analysis, this means for example that one angle will be undetermined because everything we know is the sum of the two angles obtained from the rotation matrix. We can set the unknown angle to zero but this is arbitrary.In fact, we already dealt with another example of gimbal lock when we looked at the $xyz$ sequence with rotations by $90^o$. See the figure representing these rotations again and perceive that the first and third rotations were around the same axis because the second rotation was by $90^o$. Let's do the matrix multiplication replacing only the second angle by $90^o$ (and let's use the `euler_rotmat.py`:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[None, 90., None], showA=False)
###Output
_____no_output_____
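###Markdown
To see the loss of one degree of freedom numerically, here is a minimal sketch with plain NumPy (the helper name `Rxyz_local` is ours): for the local $xyz$ sequence with $\beta=90^o$, two different pairs of $\alpha$ and $\gamma$ with the same sum $\alpha+\gamma$ produce exactly the same rotation matrix, so only that sum can be recovered from the matrix.
###Code
import numpy as np

def Rxyz_local(alpha, beta, gamma):
    """Numeric rotation matrix for the local xyz sequence (Rz @ Ry @ Rx)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, sa], [0, -sa, ca]])
    Ry = np.array([[cb, 0, -sb], [0, 1, 0], [sb, 0, cb]])
    Rz = np.array([[cg, sg, 0], [-sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# beta = 90 deg: distinct (alpha, gamma) pairs with the same sum give the same matrix
R1 = Rxyz_local(np.deg2rad(30), np.pi/2, np.deg2rad(40))
R2 = Rxyz_local(np.deg2rad(10), np.pi/2, np.deg2rad(60))
print(np.allclose(R1, R2))  # True: 30 + 40 = 10 + 60, the individual angles are not recoverable
###Output
_____no_output_____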
###Markdown
Once again, one degree of freedom was lost and we will not be able to uniquely determine the three angles for the given rotation matrix and sequence.Possible solutions to avoid the gimbal lock are: choose a different sequence; do not rotate the system by the angle that puts the system in gimbal lock (in the examples above, avoid $\beta=90^o$); or add an extra fourth parameter in the description of the rotation angles. But if we have a physical system where we measure or specify exactly three Euler angles in a fixed sequence to describe or control it, and we can't avoid the system to assume certain angles, then we might have to say "Houston, we have a problem". A famous situation where the problem occurred was during the Apollo 13 mission. This is an actual conversation between crew and mission control during the Apollo 13 mission (Corke, 2011):>Mission clock: 02 08 12 47 Flight: *Go, Guidance.* Guido: *He’s getting close to gimbal lock there.* Flight: *Roger. CapCom, recommend he bring up C3, C4, B3, B4, C1 and C2 thrusters, and advise he’s getting close to gimbal lock.* CapCom: *Roger.* *Of note, it was not a gimbal lock that caused the accident with the the Apollo 13 mission, the problem was an oxygen tank explosion.* Determination of the rotation matrixA typical way to determine the rotation matrix for a rigid body in biomechanics is to use motion analysis to measure the position of at least three non-collinear markers placed on the rigid body, and then calculate a basis with these positions, analogue to what we have described in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/Transformation2D.ipynb). BasisIf we have the position of three markers: **m1**, **m2**, **m3**, a basis (formed by three orthogonal versors) can be found as: - First axis, **v1**, the vector **m2-m1**; - Second axis, **v2**, the cross product between the vectors **v1** and **m3-m1**; - Third axis, **v3**, the cross product between the vectors **v1** and **v2**. Then, each of these vectors are normalized resulting in three orthogonal versors. For example, given the positions m1 = [1,0,0], m2 = [0,1,0], m3 = [0,0,1], a basis can be found:
###Code
m1 = np.array([1, 0, 0])
m2 = np.array([0, 1, 0])
m3 = np.array([0, 0, 1])
v1 = m2 - m1
v2 = np.cross(v1, m3 - m1)
v3 = np.cross(v1, v2)
print('Versors:')
v1 = v1/np.linalg.norm(v1)
print('v1 =', v1)
v2 = v2/np.linalg.norm(v2)
print('v2 =', v2)
v3 = v3/np.linalg.norm(v3)
print('v3 =', v3)
print('\nNorm of each versor:\n',
np.linalg.norm(np.cross(v1, v2)),
np.linalg.norm(np.cross(v1, v3)),
np.linalg.norm(np.cross(v2, v3)))
###Output
Versors:
v1 = [-0.7071 0.7071 0. ]
v2 = [ 0.5774 0.5774 0.5774]
v3 = [ 0.4082 0.4082 -0.8165]
Norm of each versor:
1.0 1.0 1.0
###Markdown
Remember from the text [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/Transformation2D.ipynb) that the versors of this basis are the columns of the $\mathbf{R_{Gl}}$ and the rows of the $\mathbf{R_{lG}}$ rotation matrices, for instance:
###Code
RlG = np.array([v1, v2, v3])
print('Rotation matrix from Global to local coordinate system:\n', RlG)
###Output
Rotation matrix from Global to local coordinate system:
[[-0.7071 0.7071 0. ]
[ 0.5774 0.5774 0.5774]
[ 0.4082 0.4082 -0.8165]]
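###Markdown
As a quick sanity check (a small addition using the `RlG` matrix just computed), we can verify that this basis indeed forms a proper rotation matrix: it is orthonormal (its inverse equals its transpose) and its determinant is $+1$:
###Code
print('RlG @ RlG.T:\n', np.round(RlG @ RlG.T, 10))  # identity matrix
print('Determinant:', np.linalg.det(RlG))           # +1 for a proper (right-handed) rotation
###Output
_____no_output_____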
###Markdown
And the corresponding angles of rotation using the $xyz$ sequence are:
###Code
euler_angles_from_rot_xyz(RlG)
###Output
_____no_output_____
###Markdown
These angles don't mean anything now because they are angles of the axes of the arbitrary basis we computed. In biomechanics, if we want an anatomical interpretation of the coordinate system orientation, we define the versors of the basis oriented with anatomical axes (e.g., for the shoulder, one versor would be aligned with the long axis of the upper arm). We will see how to perform this computation later. Now we will combine translation and rotation in a single transformation. Translation and RotationConsider the case where the local coordinate system is translated and rotated in relation to the Global coordinate system as illustrated in the next figure. Figure. A point in three-dimensional space represented in two coordinate systems, with one system translated and rotated. The position of point $\mathbf{P}$ originally described in the local coordinate system, but now described in the Global coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} $$This means that we first *disrotate* the local coordinate system and then correct for the translation between the two coordinate systems. Note that we can't invert this order: the point position is expressed in the local coordinate system and we can't add this vector to another vector expressed in the Global coordinate system, first we have to convert the vectors to the same coordinate system.If now we want to find the position of a point at the local coordinate system given its position in the Global coordinate system, the rotation matrix and the translation vector, we have to invert the expression above:$$ \begin{array}{l l}\mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{R_{Gl}^{-1}}\cdot\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l}\right) \implies \\\\\mathbf{R_{Gl}^{-1}}\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\mathbf{L_G} + \mathbf{R_{Gl}^{-1}}\mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{P_l} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{P_G}-\mathbf{L_G}\right) = \mathbf{R_{Gl}^T}\left(\mathbf{P_G}-\mathbf{L_G}\right) \;\;\;\;\; \text{or} \;\;\;\;\; \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) \end{array} $$The expression above indicates that to perform the inverse operation, to go from the Global to the local coordinate system, we first translate and then rotate the coordinate system. Transformation matrixIt is possible to combine the translation and rotation operations in only one matrix, called the transformation matrix:$$ \begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} =\begin{bmatrix}. & . & . & \mathbf{L_{X}} \\. & \mathbf{R_{Gl}} & . & \mathbf{L_{Y}} \\. & . & . 
& \mathbf{L_{Z}} \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \\1\end{bmatrix} $$Or simply:$$ \mathbf{P_G} = \mathbf{T_{Gl}}\mathbf{P_l} $$Remember that in general the transformation matrix is not orthonormal, i.e., its inverse is not equal to its transpose.The inverse operation, to express the position at the local coordinate system in terms of the Global reference system, is:$$ \mathbf{P_l} = \mathbf{T_{Gl}^{-1}}\mathbf{P_G} $$And in matrix form:$$ \begin{bmatrix}\mathbf{P_x} \\\mathbf{P_y} \\\mathbf{P_z} \\1\end{bmatrix} =\begin{bmatrix}\cdot & \cdot & \cdot & \cdot \\\cdot & \mathbf{R^{-1}_{Gl}} & \cdot & -\mathbf{R^{-1}_{Gl}}\:\mathbf{L_G} \\\cdot & \cdot & \cdot & \cdot \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} $$ Example with actual motion analysis data *The data for this example is taken from page 183 of David Winter's book.* Consider the following marker positions placed on a leg (described in the laboratory coordinate system with coordinates $x, y, z$ in cm, the $x$ axis points forward and the $y$ axes points upward): lateral malleolus (**lm** = [2.92, 10.10, 18.85]), medial malleolus (**mm** = [2.71, 10.22, 26.52]), fibular head (**fh** = [5.05, 41.90, 15.41]), and medial condyle (**mc** = [8.29, 41.88, 26.52]). Define the ankle joint center as the centroid between the **lm** and **mm** markers and the knee joint center as the centroid between the **fh** and **mc** markers. An anatomical coordinate system for the leg can be defined as: the quasi-vertical axis ($y$) passes through the ankle and knee joint centers; a temporary medio-lateral axis ($z$) passes through the two markers on the malleolus, an anterior-posterior as the cross product between the two former calculated orthogonal axes, and the origin at the ankle joint center. a) Calculate the anatomical coordinate system for the leg as described above. b) Calculate the rotation matrix and the translation vector for the transformation from the anatomical to the laboratory coordinate system. c) Calculate the position of each marker and of each joint center at the anatomical coordinate system. d) Calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent).
###Code
# calculation of the joint centers
mm = np.array([2.71, 10.22, 26.52])
lm = np.array([2.92, 10.10, 18.85])
fh = np.array([5.05, 41.90, 15.41])
mc = np.array([8.29, 41.88, 26.52])
ajc = (mm + lm)/2
kjc = (fh + mc)/2
print('Position of the ankle joint center:', ajc)
print('Position of the knee joint center:', kjc)
# calculation of the anatomical coordinate system axes (basis)
y = kjc - ajc
x = np.cross(y, mm - lm)
z = np.cross(x, y)
print('Versors:')
x = x/np.linalg.norm(x)
y = y/np.linalg.norm(y)
z = z/np.linalg.norm(z)
print('x =', x)
print('y =', y)
print('z =', z)
Oleg = ajc
print('\nOrigin =', Oleg)
# Rotation matrices
RGl = np.array([x, y , z]).T
print('Rotation matrix from the anatomical to the laboratory coordinate system:\n', RGl)
RlG = RGl.T
print('\nRotation matrix from the laboratory to the anatomical coordinate system:\n', RlG)
# Translational vector
OG = np.array([0, 0, 0]) # Laboratory coordinate system origin
LG = Oleg - OG
print('Translational vector from the anatomical to the laboratory coordinate system:\n', LG)
###Output
Translational vector from the anatomical to the laboratory coordinate system:
[ 2.815 10.16 22.685]
###Markdown
To get the coordinates from the laboratory (global) coordinate system to the anatomical (local) coordinate system:$$ \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) $$
###Code
# position of each marker and of each joint center at the anatomical coordinate system
mml = np.dot(RlG, mm - LG) # the algebraic expression RlG*(mm - LG)
lml = np.dot(RlG, lm - LG)
fhl = np.dot(RlG, fh - LG)
mcl = np.dot(RlG, mc - LG)
ajcl = np.dot(RlG, ajc - LG)
kjcl = np.dot(RlG, kjc - LG)
print('Coordinates of mm in the anatomical system:\n', mml)
print('Coordinates of lm in the anatomical system:\n', lml)
print('Coordinates of fh in the anatomical system:\n', fhl)
print('Coordinates of mc in the anatomical system:\n', mcl)
print('Coordinates of kjc in the anatomical system:\n', kjcl)
print('Coordinates of ajc in the anatomical system (origin):\n', ajcl)
###Output
Coordinates of mm in the anatomical system:
[-0.1828 0.2899 3.8216]
Coordinates of lm in the anatomical system:
[ 0.1828 -0.2899 -3.8216]
Coordinates of fh in the anatomical system:
[ 6.2036 30.7834 -8.902 ]
Coordinates of mc in the anatomical system:
[ 9.168 31.0093 2.2824]
Coordinates of kjc in the anatomical system:
[ 7.6858 30.8964 -3.3098]
Coordinates of ajc in the anatomical system (origin):
[ 0. 0. 0.]
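###Markdown
The $4\times4$ transformation matrix described above can also be assembled explicitly. Here is a minimal sketch using the `RGl`, `LG`, and `mm` arrays computed in this example (the names `TGl`, `TlG`, and `mm_h` are ours): build $\mathbf{T_{Gl}}$ from the rotation and translation parts, invert it with `np.linalg.inv` (remember that in general it is not orthonormal, so its inverse is not simply its transpose), and apply the inverse to a point expressed in homogeneous coordinates.
###Code
# homogeneous transformation matrix from the anatomical (local) to the laboratory (Global) system
TGl = np.eye(4)
TGl[:3, :3] = RGl  # rotation part
TGl[:3, 3] = LG    # translation part
print('TGl =\n', TGl)
# inverse transformation (Global -> local); TGl is not orthonormal, so invert it explicitly
TlG = np.linalg.inv(TGl)
# position of the medial malleolus in the anatomical coordinate system using homogeneous coordinates
mm_h = np.hstack((mm, 1))
print('mm in the anatomical system:', TlG @ mm_h)
###Output
_____no_output_____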
###Markdown
Problems1. For the example about how the order of rotations of a rigid body affects the orientation shown in a figure above, deduce the rotation matrices for each of the 4 cases shown in the figure. For the first two cases, deduce the rotation matrices from the global to the local coordinate system and for the other two examples, deduce the rotation matrices from the local to the global coordinate system. 2. Consider the data from problem 7 in the notebook [Frame of reference](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ReferenceFrame.ipynb) where the following anatomical landmark positions are given (units in meters): RASIS=[0.5,0.8,0.4], LASIS=[0.55,0.78,0.1], RPSIS=[0.3,0.85,0.2], and LPSIS=[0.29,0.78,0.3]. Deduce the rotation matrices for the global to anatomical coordinate system and for the anatomical to global coordinate system. 3. For the data from the last example, calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent). References- Corke P (2011) [Robotics, Vision and Control: Fundamental Algorithms in MATLAB](http://www.petercorke.com/RVC/). Springer-Verlag Berlin. - Robertson G, Caldwell G, Hamill J, Kamen G (2013) [Research Methods in Biomechanics](http://books.google.com.br/books?id=gRn8AAAAQBAJ). 2nd Edition. Human Kinetics. - [Maths - Euler Angles](http://www.euclideanspace.com/maths/geometry/rotations/euler/). - Murray RM, Li Z, Sastry SS (1994) [A Mathematical Introduction to Robotic Manipulation](http://www.cds.caltech.edu/~murray/mlswiki/index.php/Main_Page). Boca Raton, CRC Press. - Ruina A, Rudra P (2013) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press. - Siciliano B, Sciavicco L, Villani L, Oriolo G (2009) [Robotics - Modelling, Planning and Control](http://books.google.com.br/books/about/Robotics.html?hl=pt-BR&id=jPCAFmE-logC). Springer-Verlag London.- Winter DA (2009) [Biomechanics and motor control of human movement](http://books.google.com.br/books?id=_bFHL08IWfwC). 4 ed. Hoboken, USA: Wiley. - Zatsiorsky VM (1997) [Kinematics of Human Motion](http://books.google.com.br/books/about/Kinematics_of_Human_Motion.html?id=Pql_xXdbrMcC&redir_esc=y). Champaign, Human Kinetics. Function `euler_rotmatrix.py`
###Code
# %load ./../functions/euler_rotmat.py
#!/usr/bin/env python
"""Euler rotation matrix given sequence, frame, and angles."""
from __future__ import division, print_function
__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'
__version__ = 'euler_rotmat.py v.1 2014/03/10'
def euler_rotmat(order='xyz', frame='local', angles=None, unit='deg',
str_symbols=None, showA=True, showN=True):
"""Euler rotation matrix given sequence, frame, and angles.
This function calculates the algebraic rotation matrix (3x3) for a given
sequence ('order' argument) of up to three elemental rotations of a given
coordinate system ('frame' argument) around another coordinate system, the
Euler (or Eulerian) angles [1]_.
This function also calculates the numerical values of the rotation matrix
when numerical values for the angles are input for each rotation axis.
Use None as value if the rotation angle for the particular axis is unknown.
The symbols for the angles are: alpha, beta, and gamma for the first,
second, and third rotations, respectively.
The matrix product is calculated from right to left and in the specified
sequence for the Euler angles. The first letter will be the first rotation.
The function will print and return the algebraic rotation matrix and the
numerical rotation matrix if angles were input.
Parameters
----------
order : string, optional (default = 'xyz')
Sequence for the Euler angles, any combination of the letters
x, y, and z with 1 to 3 letters is accepted to denote the
elemental rotations. The first letter will be the first rotation.
frame : string, optional (default = 'local')
Coordinate system for which the rotations are calculated.
Valid values are 'local' or 'global'.
angles : list, array, or bool, optional (default = None)
Numeric values of the rotation angles ordered as the 'order'
parameter. Enter None for a rotation with an unknown value.
unit : str, optional (default = 'deg')
Unit of the input angles.
str_symbols : list of strings, optional (default = None)
New symbols for the angles, for instance, ['theta', 'phi', 'psi']
showA : bool, optional (default = True)
True (1) displays the Algebraic rotation matrix in rich format.
False (0) to not display.
showN : bool, optional (default = True)
True (1) displays the Numeric rotation matrix in rich format.
False (0) to not display.
Returns
-------
R : Matrix Sympy object
Rotation matrix (3x3) in algebraic format.
Rn : Numpy array or Matrix Sympy object (only if angles are input)
Numeric rotation matrix (if values for all angles were input) or
an algebraic matrix with some of the algebraic angles substituted
by the corresponding input numeric values.
Notes
-----
This code uses Sympy, the Python library for symbolic mathematics, to
calculate the algebraic rotation matrix and shows this matrix in latex form
possibly for using with the IPython Notebook, see [1]_.
References
----------
.. [1] http://nbviewer.ipython.org/github/duartexyz/BMC/blob/master/Transformation3D.ipynb
Examples
--------
>>> # import function
>>> from euler_rotmat import euler_rotmat
>>> # Default options: xyz sequence, local frame and show matrix
>>> R = euler_rotmat()
>>> # XYZ sequence (around global (fixed) coordinate system)
>>> R = euler_rotmat(frame='global')
>>> # Enter numeric values for all angles and show both matrices
>>> R, Rn = euler_rotmat(angles=[90, 90, 90])
>>> # show what is returned
>>> euler_rotmat(angles=[90, 90, 90])
>>> # show only the rotation matrix for the elemental rotation at x axis
>>> R = euler_rotmat(order='x')
>>> # zxz sequence and numeric value for only one angle
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, 0, None])
>>> # input values in radians:
>>> import numpy as np
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, np.pi, None], unit='rad')
>>> # shows only the numeric matrix
>>> R, Rn = euler_rotmat(order='zxz', angles=[90, 0, None], showA='False')
>>> # Change the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['theta', 'phi', 'psi'])
>>> # Negate the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['-theta', '-phi', '-psi'])
>>> # all algebraic matrices for all possible sequences for the local frame
>>> s=['xyz','xzy','yzx','yxz','zxy','zyx','xyx','xzx','yzy','yxy','zxz','zyz']
>>> for seq in s: R = euler_rotmat(order=seq)
>>> # all algebraic matrices for all possible sequences for the global frame
>>> for seq in s: R = euler_rotmat(order=seq, frame='global')
"""
import numpy as np
import sympy as sym
try:
from IPython.core.display import Math, display
ipython = True
except:
ipython = False
angles = np.asarray(np.atleast_1d(angles), dtype=np.float64)
if ~np.isnan(angles).all():
if len(order) != angles.size:
raise ValueError("Parameters 'order' and 'angles' (when " +
"different from None) must have the same size.")
x, y, z = sym.symbols('x, y, z')
sig = [1, 1, 1]
if str_symbols is None:
a, b, g = sym.symbols('alpha, beta, gamma')
else:
s = str_symbols
if s[0][0] == '-': s[0] = s[0][1:]; sig[0] = -1
if s[1][0] == '-': s[1] = s[1][1:]; sig[1] = -1
if s[2][0] == '-': s[2] = s[2][1:]; sig[2] = -1
a, b, g = sym.symbols(s)
var = {'x': x, 'y': y, 'z': z, 0: a, 1: b, 2: g}
# Elemental rotation matrices for xyz (local)
cos, sin = sym.cos, sym.sin
Rx = sym.Matrix([[1, 0, 0], [0, cos(x), sin(x)], [0, -sin(x), cos(x)]])
Ry = sym.Matrix([[cos(y), 0, -sin(y)], [0, 1, 0], [sin(y), 0, cos(y)]])
Rz = sym.Matrix([[cos(z), sin(z), 0], [-sin(z), cos(z), 0], [0, 0, 1]])
if frame.lower() == 'global':
Rs = {'x': Rx.T, 'y': Ry.T, 'z': Rz.T}
order = order.upper()
else:
Rs = {'x': Rx, 'y': Ry, 'z': Rz}
order = order.lower()
R = Rn = sym.Matrix(sym.Identity(3))
str1 = r'\mathbf{R}_{%s}( ' %frame # last space needed for order=''
#str2 = [r'\%s'%var[0], r'\%s'%var[1], r'\%s'%var[2]]
str2 = [1, 1, 1]
for i in range(len(order)):
Ri = Rs[order[i].lower()].subs(var[order[i].lower()], sig[i] * var[i])
R = Ri * R
if sig[i] > 0:
str2[i] = '%s:%s' %(order[i], sym.latex(var[i]))
else:
str2[i] = '%s:-%s' %(order[i], sym.latex(var[i]))
str1 = str1 + str2[i] + ','
if ~np.isnan(angles).all() and ~np.isnan(angles[i]):
if unit[:3].lower() == 'deg':
angles[i] = np.deg2rad(angles[i])
Rn = Ri.subs(var[i], angles[i]) * Rn
#Rn = sym.lambdify(var[i], Ri, 'numpy')(angles[i]) * Rn
str2[i] = str2[i] + '=%.0f^o' %np.around(np.rad2deg(angles[i]), 0)
else:
Rn = Ri * Rn
Rn = sym.simplify(Rn) # for trigonometric relations
try:
# nsimplify only works if there are symbols
Rn2 = sym.latex(sym.nsimplify(Rn, tolerance=1e-8).n(chop=True, prec=4))
except:
Rn2 = sym.latex(Rn.n(chop=True, prec=4))
# there are no symbols, pass it as Numpy array
Rn = np.asarray(Rn)
if showA and ipython:
display(Math(str1[:-1] + ') =' + sym.latex(R, mat_str='matrix')))
if showN and ~np.isnan(angles).all() and ipython:
str2 = ',\;'.join(str2[:angles.size])
display(Math(r'\mathbf{R}_{%s}(%s)=%s' %(frame, str2, Rn2)))
if np.isnan(angles).all():
return R
else:
return R, Rn
###Output
_____no_output_____
###Markdown
Rigid-body transformations in three-dimensions> Marcos Duarte > Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/)) > Federal University of ABC, Brazil The kinematics of a rigid body is completely described by its pose, i.e., its position and orientation in space (and the corresponding changes, translation and rotation). In a three-dimensional space, at least three coordinates and three angles are necessary to describe the pose of the rigid body, totalizing six degrees of freedom for a rigid body.In motion analysis, to describe a translation and rotation of a rigid body with respect to a coordinate system, typically we attach another coordinate system to the rigid body and determine a transformation between these two coordinate systems.A transformation is any function mapping a set to another set. For the description of the kinematics of rigid bodies, we are interested only in what is called rigid or Euclidean transformations (denoted as SE(3) for the three-dimensional space) because they preserve the distance between every pair of points of the body (which is considered rigid by definition). Translations and rotations are examples of rigid transformations (a reflection is also an example of rigid transformation but this changes the right-hand axis convention to a left hand, which usually is not of interest). In turn, rigid transformations are examples of [affine transformations](https://en.wikipedia.org/wiki/Affine_transformation). Examples of other affine transformations are shear and scaling transformations (which preserves angles but not lengths). We will follow the same rationale as in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb) and we will skip the fundamental concepts already covered there. So, you if haven't done yet, you should read that notebook before continuing here. TranslationA pure three-dimensional translation of a rigid body (or a coordinate system attached to it) in relation to other rigid body (with other coordinate system) is illustrated in the figure below. Figure. A point in three-dimensional space represented in two coordinate systems, with one coordinate system translated. The position of point $\mathbf{P}$ originally described in the $xyz$ (local) coordinate system but now described in the $\mathbf{XYZ}$ (Global) coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{P_l} $$Or in terms of its components:$$ \begin{array}{}\mathbf{P_X} =& \mathbf{L_X} + \mathbf{P}_x \\\mathbf{P_Y} =& \mathbf{L_Y} + \mathbf{P}_y \\\mathbf{P_Z} =& \mathbf{L_Z} + \mathbf{P}_z \end{array} $$And in matrix form:$$\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \end{bmatrix} =\begin{bmatrix}\mathbf{L_X} \\\mathbf{L_Y} \\\mathbf{L_Z} \end{bmatrix} +\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \end{bmatrix}$$From classical mechanics, this is an example of [Galilean transformation](http://en.wikipedia.org/wiki/Galilean_transformation). Let's use Python to compute some numeric examples:
###Code
# Import the necessary libraries
import numpy as np
# suppress scientific notation for small numbers:
np.set_printoptions(precision=4, suppress=True)
###Output
_____no_output_____
###Markdown
For example, if the local coordinate system is translated by $\mathbf{L_G}=[1, 2, 3]$ in relation to the Global coordinate system, a point with coordinates $\mathbf{P_l}=[4, 5, 6]$ at the local coordinate system will have the position $\mathbf{P_G}=[5, 7, 9]$ at the Global coordinate system:
###Code
LG = np.array([1, 2, 3]) # Numpy array
Pl = np.array([4, 5, 6])
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
This operation also works if we have more than one point (NumPy tries to guess how to handle vectors with different dimensions):
###Code
Pl = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) # 2D array with 3 rows and 3 columns
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
RotationA pure three-dimensional rotation of a $xyz$ (local) coordinate system in relation to other $\mathbf{XYZ}$ (Global) coordinate system and the position of a point in these two coordinate systems are illustrated in the next figure (remember that this is equivalent to describing a rotation between two rigid bodies). A point in three-dimensional space represented in two coordinate systems, with one system rotated. In analogy to the rotation in two dimensions, we can calculate the rotation matrix that describes the rotation of the $xyz$ (local) coordinate system in relation to the $\mathbf{XYZ}$ (Global) coordinate system using the direction cosines between the axes of the two coordinate systems:$$ \mathbf{R_{Gl}} = \begin{bmatrix}\cos\mathbf{X}x & \cos\mathbf{X}y & \cos\mathbf{X}z \\\cos\mathbf{Y}x & \cos\mathbf{Y}y & \cos\mathbf{Y}z \\\cos\mathbf{Z}x & \cos\mathbf{Z}y & \cos\mathbf{Z}z\end{bmatrix} $$Note however that for rotations around more than one axis, these angles will not lie in the main planes ($\mathbf{XY, YZ, ZX}$) of the $\mathbf{XYZ}$ coordinate system, as illustrated in the figure below for the direction angles of the $y$ axis only. Thus, the determination of these angles by simple inspection, as we have done for the two-dimensional case, would not be simple. Figure. Definition of direction angles for the $y$ axis of the local coordinate system in relation to the $\mathbf{XYZ}$ Global coordinate system.Note that the nine angles shown in the matrix above for the direction cosines are obviously redundant since only three angles are necessary to describe the orientation of a rigid body in the three-dimensional space. An important characteristic of angles in the three-dimensional space is that angles cannot be treated as vectors: the result of a sequence of rotations of a rigid body around different axes depends on the order of the rotations, as illustrated in the next figure. Figure. The result of a sequence of rotations around different axes of a coordinate system depends on the order of the rotations. In the first example (first row), the rotations are around a Global (fixed) coordinate system. In the second example (second row), the rotations are around a local (rotating) coordinate system.Let's focus now on how to understand rotations in the three-dimensional space, looking at the rotations between coordinate systems (or between rigid bodies). Later we will apply what we have learned to describe the position of a point in these different coordinate systems. Euler anglesThere are different ways to describe a three-dimensional rotation of a rigid body (or of a coordinate system). The most straightforward solution would probably be to use a [spherical coordinate system](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ReferenceFrame.ipynbSpherical-coordinate-system), but spherical coordinates would be difficult to give an anatomical or clinical interpretation. A solution that has been often employed in biomechanics to handle rotations in the three-dimensional space is to use Euler angles. Under certain conditions, Euler angles can have an anatomical interpretation, but this representation also has some caveats. 
Let's see the Euler angles now.[Leonhard Euler](https://en.wikipedia.org/wiki/Leonhard_Euler) in the XVIII century showed that two three-dimensional coordinate systems with a common origin can be related by a sequence of up to three elemental rotations about the axes of the local coordinate system, where no two successive rotations may be about the same axis, which now are known as [Euler (or Eulerian) angles](http://en.wikipedia.org/wiki/Euler_angles). Elemental rotationsFirst, let's see rotations around a fixed Global coordinate system as we did for the two-dimensional case. The next figure illustrates elemental rotations of the local coordinate system around each axis of the fixed Global coordinate system. Figure. Elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. Note that for better clarity, the axis around where the rotation occurs is shown perpendicular to this page for each elemental rotation. Rotations around the fixed coordinate systemThe rotation matrices for the elemental rotations around each axis of the fixed $\mathbf{XYZ}$ coordinate system (rotations of the local coordinate system in relation to the Global coordinate system) are shown next.Around $\mathbf{X}$ axis: $$ \mathbf{R_{Gl,\,X}} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix} $$Around $\mathbf{Y}$ axis: $$ \mathbf{R_{Gl,\,Y}} = \begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix} $$Around $\mathbf{Z}$ axis: $$ \mathbf{R_{Gl,\,Z}} = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix} $$These matrices are the rotation matrices for the case of two-dimensional coordinate systems plus the corresponding terms for the third axes of the local and Global coordinate systems, which are parallel. To understand why the terms for the third axes are 1's or 0's, for instance, remember they represent the cosine directors. The cosines between $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ for the elemental rotations around respectively the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes are all 1 because $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ are parallel ($\cos 0^o$). The cosines of the other elements are zero because the axis around where each rotation occurs is perpendicular to the other axes of the coordinate systems ($\cos 90^o$). 
Rotations around the local coordinate systemThe rotation matrices for the elemental rotations this time around each axis of the $xyz$ coordinate system (rotations of the Global coordinate system in relation to the local coordinate system), similarly to the two-dimensional case, are simply the transpose of the above matrices as shown next.Around $x$ axis: $$ \mathbf{R}_{\mathbf{lG},\,x} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix} $$Around $y$ axis: $$ \mathbf{R}_{\mathbf{lG},\,y} = \begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix} $$Around $z$ axis: $$ \mathbf{R}_{\mathbf{lG},\,z} = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix} $$Notice this is equivalent to instead of rotating the local coordinate system by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system; remember that $\cos(-\:\cdot)=\cos(\cdot)$ and $\sin(-\:\cdot)=-\sin(\cdot)$.The fact that we chose to rotate the local coordinate system by a counterclockwise (positive) angle in relation to the Global coordinate system is just a matter of convention. Sequence of elemental rotationsConsider now a sequence of elemental rotations around the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes of the fixed $\mathbf{XYZ}$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. This sequence of elemental rotations (each one of the local coordinate system with respect to the fixed Global coordinate system) is mathematically represented by a multiplication between the rotation matrices:$$ \begin{array}{l l}\mathbf{R_{Gl,\;XYZ}} & = \mathbf{R_{Z}} \mathbf{R_{Y}} \mathbf{R_{X}} \\\\ & = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix} \\\\ & =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma-\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:\cos\gamma+\sin\alpha\:\sin\gamma \;\;\; \\\cos\beta\:\sin\gamma \;&\;\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma-\sin\alpha\:\cos\gamma \;\;\; \\-\sin\beta \;&\; \sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array} $$Note the order of the matrices: the product is read from right to left, so the first rotation is the rightmost matrix. We can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
#import the necessary libraries
from IPython.core.display import Math, display
import sympy as sym
cos, sin = sym.cos, sym.sin
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz in relation to XYZ:
RX = sym.Matrix([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]])
RY = sym.Matrix([[cos(b), 0, sin(b)], [0, 1, 0], [-sin(b), 0, cos(b)]])
RZ = sym.Matrix([[cos(g), -sin(g), 0], [sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz in relation to XYZ:
RXYZ = RZ*RY*RX
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ}}=') + sym.latex(RXYZ, mat_str='matrix')))
###Output
_____no_output_____
###Markdown
For instance, we can calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $\mathbf{X,Y,Z}$:
###Code
R = sym.lambdify((a, b, g), RXYZ, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R_{Gl,\,XYZ\,}}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Examining the matrix above and the corresponding previous figure, one can see they agree: the rotated $x$ axis (first column of the above matrix) is at the $\mathbf{-Z}$ direction $[0,0,-1]$, the rotated $y$ axis (second column) is at the $\mathbf{Y}$ direction $[0,1,0]$, and the rotated $z$ axis (third column) is at the $\mathbf{X}$ direction $[1,0,0]$.We can also calculate the sequence of elemental rotations around the $x$, $y$, $z$ axes of the rotating $xyz$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of a second $xyz$ local coordinate system around each axis, $x$, $y$, $z$, of the rotating $xyz$ coordinate system.Likewise, this sequence of elemental rotations (each one of the local coordinate system with respect to the rotating local coordinate system) is mathematically represented by a multiplication between the rotation matrices (which are the inverse of the matrices for the rotations around $\mathbf{X,Y,Z}$ as we saw earlier):$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz} & = \mathbf{R_{z}} \mathbf{R_{y}} \mathbf{R_{x}} \\\\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix} \\\\& =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma+\cos\alpha\:\sin\gamma \;&\;\sin\alpha\:\sin\gamma-\cos\alpha\:\sin\beta\:\cos\gamma \;\;\; \\-\cos\beta\:\sin\gamma \;&\;-\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma+\sin\alpha\:\cos\gamma \;\;\; \\\sin\beta \;&\; -\sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array} $$As before, the order of the matrices is from right to left. Once again, we can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rx = sym.Matrix([[1, 0, 0], [0, cos(a), sin(a)], [0, -sin(a), cos(a)]])
Ry = sym.Matrix([[cos(b), 0, -sin(b)], [0, 1, 0], [sin(b), 0, cos(b)]])
Rz = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz' in relation to xyz:
Rxyz = Rz*Ry*Rx
Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz}=') + sym.latex(Rxyz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
For instance, let's calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $x,y,z$:
###Code
R = sym.lambdify((a, b, g), Rxyz, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Once again, let's compare the above matrix and the corresponding previous figure to see if it makes sense. But remember that this matrix is the Global-to-local rotation matrix, $\mathbf{R}_{\mathbf{lG},\,xyz}$, where the coordinates of the local basis' versors are rows, not columns, in this matrix. With this detail in mind, one can see that the previous figure and matrix also agree: the rotated $x$ axis (first row of the above matrix) is at the $\mathbf{Z}$ direction $[0,0,1]$, the rotated $y$ axis (second row) is at the $\mathbf{-Y}$ direction $[0,-1,0]$, and the rotated $z$ axis (third row) is at the $\mathbf{X}$ direction $[1,0,0]$.In fact, this example didn't help to distinguish versors as rows or columns because the $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrix above is symmetric! Let's look at the resultant matrix for the example above after only the first two rotations, $\mathbf{R}_{\mathbf{lG},\,xy}$, to understand this difference:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xy\,}(90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Comparing this matrix with the third plot in the figure, we see that the coordinates of versor $x$ in the Global coordinate system are $[0,1,0]$, i.e., local axis $x$ is aligned with Global axis $Y$, and this versor is indeed the first row, not first column, of the matrix above. Confer the other two rows. What are then in the columns of the local-to-Global rotation matrix? The columns are the coordinates of Global basis' versors in the local coordinate system! For example, the first column of the matrix above is the coordinates of $X$, which is aligned with $z$: $[0,0,1]$. Rotations in a coordinate system is equivalent to minus rotations in the other coordinate systemRemember that we saw for the elemental rotations that it's equivalent to instead of rotating the local coordinate system, $xyz$, by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system, $\mathbf{XYZ}$, by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system. The same property applies to a sequence of rotations: rotations of $xyz$ in relation to $\mathbf{XYZ}$ by $\alpha, \beta, \gamma$ result in the same matrix as rotations of $\mathbf{XYZ}$ in relation to $xyz$ by $-\alpha, -\beta, -\gamma$: $$ \begin{array}{l l}\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) & = \mathbf{R_{Gl,\,Z}}(\gamma)\, \mathbf{R_{Gl,\,Y}}(\beta)\, \mathbf{R_{Gl,\,X}}(\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,z\,}(-\gamma)\, \mathbf{R}_{\mathbf{lG},\,y\,}(-\beta)\, \mathbf{R}_{\mathbf{lG},\,x\,}(-\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)\end{array}$$Confer that by examining the $\mathbf{R_{Gl,\,XYZ}}$ and $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrices above.Let's verify this property with Sympy:
###Code
RXYZ = RZ*RY*RX
# Rotation matrix of xyz in relation to XYZ:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) =')))
display(Math(sym.latex(RXYZ, mat_str='matrix')))
# Elemental rotation matrices of XYZ in relation to xyz and negate all angles:
Rx_neg = sym.Matrix([[1, 0, 0], [0, cos(-a), -sin(-a)], [0, sin(-a), cos(-a)]]).T
Ry_neg = sym.Matrix([[cos(-b), 0, sin(-b)], [0, 1, 0], [-sin(-b), 0, cos(-b)]]).T
Rz_neg = sym.Matrix([[cos(-g), -sin(-g), 0], [sin(-g), cos(-g), 0], [0, 0, 1]]).T
# Rotation matrix of XYZ in relation to xyz:
Rxyz_neg = Rz_neg*Ry_neg*Rx_neg
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma) =')))
display(Math(sym.latex(Rxyz_neg, mat_str='matrix')))
# Check that the two matrices are equal:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) \;==\;' + \
r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)')))
RXYZ == Rxyz_neg
###Output
_____no_output_____
###Markdown
Rotations in a coordinate system is the transpose of inverse order of rotations in the other coordinate systemThere is another property of the rotation matrices for the different coordinate systems: the rotation matrix, for example from the Global to the local coordinate system for the $xyz$ sequence, is just the transpose of the rotation matrix for the inverse operation (from the local to the Global coordinate system) of the inverse sequence ($\mathbf{ZYX}$) and vice-versa:$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz}(\alpha,\beta,\gamma) & = \mathbf{R}_{\mathbf{lG},\,z\,} \mathbf{R}_{\mathbf{lG},\,y\,} \mathbf{R}_{\mathbf{lG},\,x} \\& = \mathbf{R_{Gl,\,Z\,}^{-1}} \mathbf{R_{Gl,\,Y\,}^{-1}} \mathbf{R_{Gl,\,X\,}^{-1}} \\& = \mathbf{R_{Gl,\,Z\,}^{T}} \mathbf{R_{Gl,\,Y\,}^{T}} \mathbf{R_{Gl,\,X\,}^{T}} \\& = (\mathbf{R_{Gl,\,X\,}} \mathbf{R_{Gl,\,Y\,}} \mathbf{R_{Gl,\,Z}})^\mathbf{T} \\& = \mathbf{R_{Gl,\,ZYX\,}^{T}}(\gamma,\beta,\alpha)\end{array}$$Where we used the properties that the inverse of the rotation matrix (which is orthonormal) is its transpose and that the transpose of a product of matrices is equal to the product of their transposes in reverse order.Let's verify this property with Sympy:
###Code
RZYX = RX*RY*RZ
Rxyz = Rz*Ry*Rx
display(Math(sym.latex(r'\mathbf{R_{Gl,\,ZYX\,}^T}=') + sym.latex(RZYX.T, mat_str='matrix')))
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(\alpha,\beta,\gamma) \,==\,' + \
r'\mathbf{R_{Gl,\,ZYX\,}^T}(\gamma,\beta,\alpha)')))
Rxyz == RZYX.T
###Output
_____no_output_____
###Markdown
Sequence of rotations of a VectorWe saw in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynbRotation-of-a-Vector) that the rotation matrix can also be used to rotate a vector (in fact, a point, image, solid, etc.) by a given angle around an axis of the coordinate system. Let's investigate that for the 3D case using the example earlier where a book was rotated in different orders and around the Global and local coordinate systems. Before any rotation, the point shown in that figure as a round black dot on the spine of the book has coordinates $\mathbf{P}=[0, 1, 2]$ (the book has thickness 0, width 1, and height 2). After the first sequence of rotations shown in the figure (rotated around $X$ and $Y$ by $90^0$ each time), $\mathbf{P}$ has coordinates $\mathbf{P}=[1, -2, 0]$ in the global coordinate system. Let's verify that:
###Code
P = np.array([[0, 1, 2]]).T
RXY = RY*RX
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. -2. 0.]]
###Markdown
As expected. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, but still around the Global coordinate system.Although we are performing vector rotation, where we don't need the concept of transformation between coordinate systems, in the example above we used the local-to-Global rotation matrix, $\mathbf{R_{Gl}}$. As we saw in the notebook for the 2D transformation, when we use this matrix, it performs a counter-clockwise (positive) rotation. If we want to rotate the vector in the clockwise (negative) direction, we can use the very same rotation matrix entering a negative angle or we can use the inverse rotation matrix, the Global-to-local rotation matrix, $\mathbf{R_{lG}}$ and a positive (negative of negative) angle, because $\mathbf{R_{Gl}}(\alpha) = \mathbf{R_{lG}}(-\alpha)$, but bear in mind that even in this latter case we are rotating around the Global coordinate system! Consider now that we want to deduce algebraically the position of the point $\mathbf{P}$ after the rotations around the local coordinate system as shown in the second set of examples in the figure with the sequence of book rotations. The point has the same initial position, $\mathbf{P}=[0, 1, 2]$, and after the rotations around $x$ and $y$ by $90^0$ each time, what is the position of this point? It's implicit in this question that the new desired position is in the Global coordinate system because the local coordinate system rotates with the book and the point never changes its position in the local coordinate system. So, by inspection of the figure, the new position of the point is $\mathbf{P1}=[2, 0, 1]$. Let's naively try to deduce this position by repeating the steps as before:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. 2. -0.]]
###Markdown
The wrong answer. The problem is that we defined the rotation of a vector using the local-to-Global rotation matrix. One correct solution for this problem is to continue using the multiplication of the Global-to-local rotation matrices, $\mathbf{R}_{xy} = \mathbf{R}_y\,\mathbf{R}_x$, transpose $\mathbf{R}_{xy}$ to get the local-to-Global rotation matrix, $\mathbf{R_{XY}}=\mathbf{R^T}_{xy}$, and then rotate the vector using this matrix:
###Code
Rxy = Ry*Rx
RXY = Rxy.T
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer.Another solution is to understand that when using the Global-to-local rotation matrix, counter-clockwise rotations (as performed with the book in the figure) are negative, not positive, and that with the Global-to-local rotation matrix the order of the matrix multiplication is inverted; for example, it should be $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$ (the underscore is added to remind us this is not the convention adopted here).
###Code
R_xy = Rx*Ry
R = sym.lambdify((a, b), R_xy, 'numpy')
R = R(-np.pi/2, -np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, around the local coordinate system.In fact, you will find elsewhere texts about rotations in 3D adopting this latter convention as the standard, i.e., they introduce the Global-to-local rotation matrix and describe a sequence of rotations algebraically as matrix multiplication in the direct order, $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$, the inverse of what we have done in this text. It's all a matter of convention, just that. The 12 different sequences of Euler anglesThe Euler angles are defined in terms of rotations around a rotating local coordinate system. As we saw for the sequence of rotations around $x, y, z$, the axes of the local rotated coordinate system are not fixed in space because after the first elemental rotation, the other two axes rotate. Other sequences of rotations could be produced without combining axes of the two different coordinate systems (Global and local) for the definition of the rotation axes. There is a total of 12 different sequences of three elemental rotations that are valid and may be used for describing the rotation of a coordinate system with respect to another coordinate system:$$ xyz \quad xzy \quad yzx \quad yxz \quad zxy \quad zyx $$$$ xyx \quad xzx \quad yzy \quad yxy \quad zxz \quad zyz $$The first six sequences (first row) are all around different axes; they are usually referred to as Cardan or Tait–Bryan angles. The other six sequences (second row) have the first and third rotations around the same axis, but keep in mind that the axis for the third rotation is not at the same place anymore because it changed its orientation after the second rotation. The sequences with repeated axes are known as proper or classic Euler angles.Which order to use is a matter of convention, but because the order affects the results, it's fundamental to follow a convention and report it. In Engineering Mechanics (including Biomechanics), the $xyz$ order is more common; in Physics the $zxz$ order is more common (but the letters chosen to refer to the axes are arbitrary, what matters is the directions they represent). In Biomechanics, the order for the Cardan angles is most often based on the angle of most interest or of most reliable measurement. Accordingly, the axis of flexion/extension is typically selected as the first axis, the axis for abduction/adduction is the second, and the axis for internal/external rotation is the last one. We will see more about this order later. The $zyx$ order is commonly used to describe the orientation of a ship or aircraft and the rotations are known as the nautical angles: yaw, pitch and roll, respectively (see next figure). Figure. The principal axes of an aircraft and the names for the rotations around these axes (image from Wikipedia). If instead of rotations around the rotating local coordinate system we perform rotations around the fixed Global coordinate system, we will have another 12 different sequences of three elemental rotations; these are called simply rotation angles.
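Before continuing, here is a quick numeric check (an illustration added here; it reuses the Sympy matrices `Rx` and `Ry` and the symbols `a` and `b` defined in the cells above) that the result indeed depends on the order of the rotations: rotating first around $x$ and then around $y$ is not the same as rotating first around $y$ and then around $x$.
###Code
# Rx, Ry (local elemental rotation matrices) and the symbols a, b were defined in previous cells
R_x_then_y = sym.lambdify((a, b), Ry*Rx, 'numpy')(np.pi/2, np.pi/2)  # first around x, then around y
R_y_then_x = sym.lambdify((a, b), Rx*Ry, 'numpy')(np.pi/2, np.pi/2)  # first around y, then around x
print('x then y:\n', np.around(R_x_then_y, 4))
print('y then x:\n', np.around(R_y_then_x, 4))
###Output
_____no_output_____
###Markdown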
So, in total there are 24 possible different sequences of three elemental rotations, but the 24 orders are not independent; with the 12 different sequences of Euler angles at the local coordinate system we can obtain the other 12 sequences at the Global coordinate system.The Python function `euler_rotmat.py` (code at the end of this text) determines the rotation matrix in algebraic form for any of the 24 different sequences (sequences with only one or two axes can also be given). This function also determines the rotation matrix in numeric form if a list of up to three angles is given.For instance, the rotation matrix in algebraic form for the $zxz$ order of Euler angles at the local coordinate system and the corresponding rotation matrix in numeric form after three elemental rotations by $90^o$ each are:
###Code
import sys
sys.path.insert(1, r'./../functions')
from euler_rotmat import euler_rotmat
Ra, Rn = euler_rotmat(order='zxz', frame='local', angles=[90, 90, 90])
###Output
_____no_output_____
###Markdown
Line of nodesThe second axis of rotation in the rotating coordinate system is also referred to as the nodal axis or line of nodes; this axis coincides with the intersection of two perpendicular planes, one from each of the Global (fixed) and local (rotating) coordinate systems. The figure below shows an example of rotations and the nodal axis for the $xyz$ sequence of the Cardan angles. Figure. First row: example of rotations for the $xyz$ sequence of the Cardan angles. The Global (fixed) $XYZ$ coordinate system is shown in green, the local (rotating) $xyz$ coordinate system is shown in blue. The nodal axis (N, shown in red) is defined by the intersection of the $YZ$ and $xy$ planes and all rotations can be described in relation to this nodal axis or to an axis perpendicular to it. Second row: starting from no rotation, the local coordinate system is rotated by $\alpha$ around the $x$ axis, then by $\beta$ around the rotated $y$ axis, and finally by $\gamma$ around the twice rotated $z$ axis. Note that the line of nodes coincides with the $y$ axis for the second rotation. Determination of the Euler anglesOnce a convention is adopted, the corresponding three Euler angles of rotation can be found. For example, for the $\mathbf{R}_{xyz}$ rotation matrix:
###Code
R = euler_rotmat(order='xyz', frame='local')
###Output
_____no_output_____
###Markdown
The corresponding Cardan angles for the `xyz` sequence can be given by:$$ \begin{array}{}\alpha = \arctan\left(\dfrac{\sin(\alpha)}{\cos(\alpha)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{21}}{\;\;\;\mathbf{R}_{22}}\right) \\\\\beta = \arctan\left(\dfrac{\sin(\beta)}{\cos(\beta)}\right) = \arctan\left(\dfrac{\mathbf{R}_{20}}{\sqrt{\mathbf{R}_{00}^2+\mathbf{R}_{10}^2}}\right) \\ \\\gamma = \arctan\left(\dfrac{\sin(\gamma)}{\cos(\gamma)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{10}}{\;\;\;\mathbf{R}_{00}}\right)\end{array} $$Note that we prefer to use the mathematical function `arctan` rather than simply `arcsin` because the latter cannot for example distinguish $45^o$ from $135^o$ and also for better numerical accuracy. See the text [Angular kinematics in a plane (2D)](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/KinematicsAngular2D.ipynb) for more on these issues.And here is a Python function to compute the Euler angles of rotations from the Global to the local coordinate system for the $xyz$ Cardan sequence:
###Code
def euler_angles_from_rot_xyz(rot_matrix, unit='deg'):
""" Compute Euler angles from rotation matrix in the xyz sequence."""
import numpy as np
R = np.array(rot_matrix, copy=False).astype(np.float64)[:3, :3]
angles = np.zeros(3)
angles[0] = np.arctan2(-R[2, 1], R[2, 2])
angles[1] = np.arctan2( R[2, 0], np.sqrt(R[0, 0]**2 + R[1, 0]**2))
angles[2] = np.arctan2(-R[1, 0], R[0, 0])
if unit[:3].lower() == 'deg': # convert from rad to degree
angles = np.rad2deg(angles)
return angles
###Output
_____no_output_____
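###Markdown
A quick aside illustrating the remark above about `arctan` versus `arcsin` (an added illustration; the angle values are just examples): the sine of $45^o$ and $135^o$ is the same, so `arcsin` alone cannot recover the original angle, whereas `arctan2`, which also uses the cosine and its sign, can:
###Code
import numpy as np  # already imported earlier in this notebook
for ang in [45, 135]:
    s, c = np.sin(np.deg2rad(ang)), np.cos(np.deg2rad(ang))
    print('angle:', ang, ' arcsin(sin):', np.rad2deg(np.arcsin(s)),
          ' arctan2(sin, cos):', np.rad2deg(np.arctan2(s, c)))
###Output
_____no_output_____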
###Markdown
For instance, consider sequential rotations of 45$^o$ around $x,y,z$. The resultant rotation matrix is:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[45, 45, 45], showA=False)
###Output
_____no_output_____
###Markdown
Let's check that calculating back the Cardan angles from this rotation matrix using the `euler_angles_from_rot_xyz()` function:
###Code
euler_angles_from_rot_xyz(Rn, unit='deg')
###Output
_____no_output_____
###Markdown
We could implement a function to calculate the Euler angles for any of the 12 sequences (in fact, plus another 12 sequences if we consider all the rotations from and to the two coordinate systems), but this is tedious. There is a smarter solution using the concept of [quaternion](http://en.wikipedia.org/wiki/Quaternion), but we wont see that now. Let's see a problem with using Euler angles known as gimbal lock. Gimbal lock[Gimbal lock](http://en.wikipedia.org/wiki/Gimbal_lock) is the loss of one degree of freedom in a three-dimensional coordinate system that occurs when an axis of rotation is placed parallel with another previous axis of rotation and two of the three rotations will be around the same direction given a certain convention of the Euler angles. This "locks" the system into rotations in a degenerate two-dimensional space. The system is not really locked in the sense it can't be moved or reach the other degree of freedom, but it will need an extra rotation for that. For instance, let's look at the $zxz$ sequence of rotations by the angles $\alpha, \beta, \gamma$:$$ \begin{array}{l l}\mathbf{R}_{zxz} & = \mathbf{R_{z}} \mathbf{R_{x}} \mathbf{R_{z}} \\ \\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\beta & \sin\beta \\0 & -\sin\beta & \cos\beta\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array} $$Which results in:
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rz = sym.Matrix([[cos(a), sin(a), 0], [-sin(a), cos(a), 0], [0, 0, 1]])
Rx = sym.Matrix([[1, 0, 0], [0, cos(b), sin(b)], [0, -sin(b), cos(b)]])
Rz2 = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix for the zxz sequence:
Rzxz = Rz2*Rx*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}=') + sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Let's examine what happens with this rotation matrix when the rotation around the second axis ($x$) by $\beta$ is zero:$$ \begin{array}{l l}\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma) = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array} $$The second matrix is the identity matrix and has no effect on the product of the matrices, which will be:
###Code
Rzxz = Rz2*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Which simplifies to:
###Code
Rzxz = sym.simplify(Rzxz)
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Despite different values of $\alpha$ and $\gamma$, the result is a single rotation around the $z$ axis given by the sum $\alpha+\gamma$. In this case, of the three degrees of freedom one was lost (the other degree of freedom was set by $\beta=0$). For movement analysis, this means for example that one angle will be undetermined because all we know is the sum of the two angles obtained from the rotation matrix. We can set the unknown angle to zero but this is arbitrary.In fact, we already dealt with another example of gimbal lock when we looked at the $xyz$ sequence with rotations by $90^o$. Look again at the figure representing these rotations and notice that the first and third rotations were around the same axis because the second rotation was by $90^o$. Let's do the matrix multiplication replacing only the second angle by $90^o$ (this time using the `euler_rotmat.py` function):
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[None, 90., None], showA=False)
###Output
_____no_output_____
###Markdown
Once again, one degree of freedom was lost and we will not be able to uniquely determine the three angles for the given rotation matrix and sequence.Possible solutions to avoid the gimbal lock are: choose a different sequence; do not rotate the system by the angle that puts the system in gimbal lock (in the examples above, avoid $\beta=90^o$); or add an extra fourth parameter in the description of the rotation angles. But if we have a physical system where we measure or specify exactly three Euler angles in a fixed sequence to describe or control it, and we can't avoid the system assuming certain angles, then we might have to say "Houston, we have a problem". A famous situation where such a problem occurred was during the Apollo 13 mission. This is an actual conversation between the crew and mission control (Corke, 2011):>`Mission clock: 02 08 12 47` **Flight**: *Go, Guidance.* **Guido**: *He’s getting close to gimbal lock there.* **Flight**: *Roger. CapCom, recommend he bring up C3, C4, B3, B4, C1 and C2 thrusters, and advise he’s getting close to gimbal lock.* **CapCom**: *Roger.* *Of note, it was not a gimbal lock that caused the accident with the Apollo 13 mission; the problem was an oxygen tank explosion.* Determination of the rotation matrixA typical way to determine the rotation matrix for a rigid body in biomechanics is to use motion analysis to measure the position of at least three non-collinear markers placed on the rigid body, and then calculate a basis with these positions, analogous to what we have described in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb). BasisIf we have the position of three markers: **m1**, **m2**, **m3**, a basis (formed by three orthogonal versors) can be found as: - First axis, **v1**, the vector **m2-m1**; - Second axis, **v2**, the cross product between the vectors **v1** and **m3-m1**; - Third axis, **v3**, the cross product between the vectors **v1** and **v2**. Then, each of these vectors is normalized, resulting in three orthogonal versors. For example, given the positions m1 = [1,0,0], m2 = [0,1,0], m3 = [0,0,1], a basis can be found:
###Code
m1 = np.array([1, 0, 0])
m2 = np.array([0, 1, 0])
m3 = np.array([0, 0, 1])
v1 = m2 - m1
v2 = np.cross(v1, m3 - m1)
v3 = np.cross(v1, v2)
print('Versors:')
v1 = v1/np.linalg.norm(v1)
print('v1 =', v1)
v2 = v2/np.linalg.norm(v2)
print('v2 =', v2)
v3 = v3/np.linalg.norm(v3)
print('v3 =', v3)
print('\nNorm of each versor:\n',
np.linalg.norm(np.cross(v1, v2)),
np.linalg.norm(np.cross(v1, v3)),
np.linalg.norm(np.cross(v2, v3)))
###Output
Versors:
v1 = [-0.7071 0.7071 0. ]
v2 = [ 0.5774 0.5774 0.5774]
v3 = [ 0.4082 0.4082 -0.8165]
Norm of each versor:
1.0 1.0 1.0
###Markdown
Remember from the text [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb) that the versors of this basis are the columns of the $\mathbf{R_{Gl}}$ and the rows of the $\mathbf{R_{lG}}$ rotation matrices, for instance:
###Code
RlG = np.array([v1, v2, v3])
print('Rotation matrix from Global to local coordinate system:\n', RlG)
###Output
Rotation matrix from Global to local coordinate system:
[[-0.7071 0.7071 0. ]
[ 0.5774 0.5774 0.5774]
[ 0.4082 0.4082 -0.8165]]
###Markdown
And the corresponding angles of rotation using the $xyz$ sequence are:
###Code
euler_angles_from_rot_xyz(RlG)
###Output
_____no_output_____
###Markdown
These angles don't mean anything now because they are angles of the axes of the arbitrary basis we computed. In biomechanics, if we want an anatomical interpretation of the coordinate system orientation, we define the versors of the basis oriented with anatomical axes (e.g., for the shoulder, one versor would be aligned with the long axis of the upper arm). We will see how to perform this computation later. Now we will combine translation and rotation in a single transformation. Translation and RotationConsider the case where the local coordinate system is translated and rotated in relation to the Global coordinate system as illustrated in the next figure. Figure. A point in three-dimensional space represented in two coordinate systems, with one system translated and rotated. The position of point $\mathbf{P}$ originally described in the local coordinate system, but now described in the Global coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} $$This means that we first *disrotate* the local coordinate system and then correct for the translation between the two coordinate systems. Note that we can't invert this order: the point position is expressed in the local coordinate system and we can't add this vector to another vector expressed in the Global coordinate system, first we have to convert the vectors to the same coordinate system.If now we want to find the position of a point at the local coordinate system given its position in the Global coordinate system, the rotation matrix and the translation vector, we have to invert the expression above:$$ \begin{array}{l l}\mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{R_{Gl}^{-1}}\cdot\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l}\right) \implies \\\\\mathbf{R_{Gl}^{-1}}\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\mathbf{L_G} + \mathbf{R_{Gl}^{-1}}\mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{P_l} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{P_G}-\mathbf{L_G}\right) = \mathbf{R_{Gl}^T}\left(\mathbf{P_G}-\mathbf{L_G}\right) \;\;\;\;\; \text{or} \;\;\;\;\; \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) \end{array} $$The expression above indicates that to perform the inverse operation, to go from the Global to the local coordinate system, we first translate and then rotate the coordinate system. Transformation matrixIt is possible to combine the translation and rotation operations in only one matrix, called the transformation matrix:$$ \begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} =\begin{bmatrix}. & . & . & \mathbf{L_{X}} \\. & \mathbf{R_{Gl}} & . & \mathbf{L_{Y}} \\. & . & . 
& \mathbf{L_{Z}} \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \\1\end{bmatrix} $$Or simply:$$ \mathbf{P_G} = \mathbf{T_{Gl}}\mathbf{P_l} $$Remember that in general the transformation matrix is not orthonormal, i.e., its inverse is not equal to its transpose.The inverse operation, to express the position at the local coordinate system in terms of the Global reference system, is:$$ \mathbf{P_l} = \mathbf{T_{Gl}^{-1}}\mathbf{P_G} $$And in matrix form:$$ \begin{bmatrix}\mathbf{P_x} \\\mathbf{P_y} \\\mathbf{P_z} \\1\end{bmatrix} =\begin{bmatrix}\cdot & \cdot & \cdot & \cdot \\\cdot & \mathbf{R^{-1}_{Gl}} & \cdot & -\mathbf{R^{-1}_{Gl}}\:\mathbf{L_G} \\\cdot & \cdot & \cdot & \cdot \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} $$ Example with actual motion analysis data *The data for this example is taken from page 183 of David Winter's book.* Consider the following marker positions placed on a leg (described in the laboratory coordinate system with coordinates $x, y, z$ in cm, the $x$ axis points forward and the $y$ axes points upward): lateral malleolus (**lm** = [2.92, 10.10, 18.85]), medial malleolus (**mm** = [2.71, 10.22, 26.52]), fibular head (**fh** = [5.05, 41.90, 15.41]), and medial condyle (**mc** = [8.29, 41.88, 26.52]). Define the ankle joint center as the centroid between the **lm** and **mm** markers and the knee joint center as the centroid between the **fh** and **mc** markers. An anatomical coordinate system for the leg can be defined as: the quasi-vertical axis ($y$) passes through the ankle and knee joint centers; a temporary medio-lateral axis ($z$) passes through the two markers on the malleolus, an anterior-posterior as the cross product between the two former calculated orthogonal axes, and the origin at the ankle joint center. a) Calculate the anatomical coordinate system for the leg as described above. b) Calculate the rotation matrix and the translation vector for the transformation from the anatomical to the laboratory coordinate system. c) Calculate the position of each marker and of each joint center at the anatomical coordinate system. d) Calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent).
###Code
# calculation of the joint centers
mm = np.array([2.71, 10.22, 26.52])
lm = np.array([2.92, 10.10, 18.85])
fh = np.array([5.05, 41.90, 15.41])
mc = np.array([8.29, 41.88, 26.52])
ajc = (mm + lm)/2
kjc = (fh + mc)/2
print('Position of the ankle joint center:', ajc)
print('Position of the knee joint center:', kjc)
# calculation of the anatomical coordinate system axes (basis)
y = kjc - ajc
x = np.cross(y, mm - lm)
z = np.cross(x, y)
print('Versors:')
x = x/np.linalg.norm(x)
y = y/np.linalg.norm(y)
z = z/np.linalg.norm(z)
print('x =', x)
print('y =', y)
print('z =', z)
Oleg = ajc
print('\nOrigin =', Oleg)
# Rotation matrices
RGl = np.array([x, y , z]).T
print('Rotation matrix from the anatomical to the laboratory coordinate system:\n', RGl)
RlG = RGl.T
print('\nRotation matrix from the laboratory to the anatomical coordinate system:\n', RlG)
# Translational vector
OG = np.array([0, 0, 0]) # Laboratory coordinate system origin
LG = Oleg - OG
print('Translational vector from the anatomical to the laboratory coordinate system:\n', LG)
###Output
Translational vector from the anatomical to the laboratory coordinate system:
[ 2.815 10.16 22.685]
###Markdown
To get the coordinates from the laboratory (global) coordinate system to the anatomical (local) coordinate system:$$ \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) $$
###Code
# position of each marker and of each joint center at the anatomical coordinate system
mml = np.dot(RlG, (mm - LG)) # equivalent to the algebraic expression RlG*(mm - LG).T
lml = np.dot(RlG, (lm - LG))
fhl = np.dot(RlG, (fh - LG))
mcl = np.dot(RlG, (mc - LG))
ajcl = np.dot(RlG, (ajc - LG))
kjcl = np.dot(RlG, (kjc - LG))
print('Coordinates of mm in the anatomical system:\n', mml)
print('Coordinates of lm in the anatomical system:\n', lml)
print('Coordinates of fh in the anatomical system:\n', fhl)
print('Coordinates of mc in the anatomical system:\n', mcl)
print('Coordinates of kjc in the anatomical system:\n', kjcl)
print('Coordinates of ajc in the anatomical system (origin):\n', ajcl)
###Output
Coordinates of mm in the anatomical system:
[-0. -0.1592 3.8336]
Coordinates of lm in the anatomical system:
[-0. 0.1592 -3.8336]
Coordinates of fh in the anatomical system:
[ -1.7703 32.1229 -5.5078]
Coordinates of mc in the anatomical system:
[ 1.7703 31.8963 5.5078]
Coordinates of kjc in the anatomical system:
[ 0. 32.0096 0. ]
Coordinates of ajc in the anatomical system (origin):
[ 0. 0. 0.]
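###Markdown
As an extra illustration (added here; it reuses the variables `RGl`, `LG`, `mm` and `mml` computed in the cells above), the same change of coordinates can be performed with the $4\times4$ transformation matrix described earlier, which combines the rotation and the translation in a single operation:
###Code
# build the transformation matrix TGl from the rotation matrix RGl and the translation vector LG:
TGl = np.eye(4)
TGl[:3, :3] = RGl
TGl[:3, 3] = LG
print('Transformation matrix from the anatomical to the laboratory coordinate system:\n', TGl)
# laboratory -> anatomical: multiply the homogeneous coordinates of a point by the inverse of TGl
mm_h = np.hstack((mm, 1))                     # homogeneous coordinates of the medial malleolus
mml_h = np.dot(np.linalg.inv(TGl), mm_h)[:3]  # should match the result obtained with RlG*(mm - LG)
print('Coordinates of mm in the anatomical system (homogeneous form):\n', mml_h)
print('Coordinates of mm in the anatomical system (as computed above):\n', mml)
###Output
_____no_output_____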
###Markdown
Problems1. For the example about how the order of rotations of a rigid body affects the orientation shown in a figure above, deduce the rotation matrices for each of the 4 cases shown in the figure. For the first two cases, deduce the rotation matrices from the global to the local coordinate system and for the other two examples, deduce the rotation matrices from the local to the global coordinate system. 2. Consider the data from problem 7 in the notebook [Frame of reference](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb) where the following anatomical landmark positions are given (units in meters): RASIS=[0.5,0.8,0.4], LASIS=[0.55,0.78,0.1], RPSIS=[0.3,0.85,0.2], and LPSIS=[0.29,0.78,0.3]. Deduce the rotation matrices for the global to anatomical coordinate system and for the anatomical to global coordinate system. 3. For the data from the last example, calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent). References- Corke P (2011) [Robotics, Vision and Control: Fundamental Algorithms in MATLAB](http://www.petercorke.com/RVC/). Springer-Verlag Berlin. - Robertson G, Caldwell G, Hamill J, Kamen G (2013) [Research Methods in Biomechanics](http://books.google.com.br/books?id=gRn8AAAAQBAJ). 2nd Edition. Human Kinetics. - [Maths - Euler Angles](http://www.euclideanspace.com/maths/geometry/rotations/euler/). - Murray RM, Li Z, Sastry SS (1994) [A Mathematical Introduction to Robotic Manipulation](http://www.cds.caltech.edu/~murray/mlswiki/index.php/Main_Page). Boca Raton, CRC Press. - Ruina A, Rudra P (2013) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press. - Siciliano B, Sciavicco L, Villani L, Oriolo G (2009) [Robotics - Modelling, Planning and Control](http://books.google.com.br/books/about/Robotics.html?hl=pt-BR&id=jPCAFmE-logC). Springer-Verlag London.- Winter DA (2009) [Biomechanics and motor control of human movement](http://books.google.com.br/books?id=_bFHL08IWfwC). 4 ed. Hoboken, USA: Wiley. - Zatsiorsky VM (1997) [Kinematics of Human Motion](http://books.google.com.br/books/about/Kinematics_of_Human_Motion.html?id=Pql_xXdbrMcC&redir_esc=y). Champaign, Human Kinetics. Function `euler_rotmatrix.py`
###Code
# %load ./../functions/euler_rotmat.py
#!/usr/bin/env python
"""Euler rotation matrix given sequence, frame, and angles."""
from __future__ import division, print_function
__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'
__version__ = 'euler_rotmat.py v.1 2014/03/10'
def euler_rotmat(order='xyz', frame='local', angles=None, unit='deg',
str_symbols=None, showA=True, showN=True):
"""Euler rotation matrix given sequence, frame, and angles.
This function calculates the algebraic rotation matrix (3x3) for a given
sequence ('order' argument) of up to three elemental rotations of a given
coordinate system ('frame' argument) around another coordinate system, the
Euler (or Eulerian) angles [1]_.
This function also calculates the numerical values of the rotation matrix
when numerical values for the angles are inputed for each rotation axis.
Use None as value if the rotation angle for the particular axis is unknown.
The symbols for the angles are: alpha, beta, and gamma for the first,
second, and third rotations, respectively.
The matrix product is calculated from right to left and in the specified
sequence for the Euler angles. The first letter will be the first rotation.
The function will print and return the algebraic rotation matrix and the
numerical rotation matrix if angles were inputed.
Parameters
----------
order : string, optional (default = 'xyz')
Sequence for the Euler angles, any combination of the letters
x, y, and z with 1 to 3 letters is accepted to denote the
elemental rotations. The first letter will be the first rotation.
frame : string, optional (default = 'local')
Coordinate system for which the rotations are calculated.
Valid values are 'local' or 'global'.
angles : list, array, or bool, optional (default = None)
Numeric values of the rotation angles ordered as the 'order'
parameter. Enter None for a rotation with unknown value.
unit : str, optional (default = 'deg')
Unit of the input angles.
str_symbols : list of strings, optional (default = None)
New symbols for the angles, for instance, ['theta', 'phi', 'psi']
showA : bool, optional (default = True)
True (1) displays the Algebraic rotation matrix in rich format.
False (0) to not display.
showN : bool, optional (default = True)
True (1) displays the Numeric rotation matrix in rich format.
False (0) to not display.
Returns
-------
R : Matrix Sympy object
Rotation matrix (3x3) in algebraic format.
Rn : Numpy array or Matrix Sympy object (only if angles are inputed)
Numeric rotation matrix (if values for all angles were inputed) or
an algebraic matrix with some of the algebraic angles substituted
by the corresponding inputed numeric values.
Notes
-----
This code uses Sympy, the Python library for symbolic mathematics, to
calculate the algebraic rotation matrix and shows this matrix in latex form
possibly for using with the IPython Notebook, see [1]_.
References
----------
.. [1] http://nbviewer.ipython.org/github/duartexyz/BMC/blob/master/Transformation3D.ipynb
Examples
--------
>>> # import function
>>> from euler_rotmat import euler_rotmat
>>> # Default options: xyz sequence, local frame and show matrix
>>> R = euler_rotmat()
>>> # XYZ sequence (around global (fixed) coordinate system)
>>> R = euler_rotmat(frame='global')
>>> # Enter numeric values for all angles and show both matrices
>>> R, Rn = euler_rotmat(angles=[90, 90, 90])
>>> # show what is returned
>>> euler_rotmat(angles=[90, 90, 90])
>>> # show only the rotation matrix for the elemental rotation at x axis
>>> R = euler_rotmat(order='x')
>>> # zxz sequence and numeric value for only one angle
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, 0, None])
>>> # input values in radians:
>>> import numpy as np
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, np.pi, None], unit='rad')
>>> # shows only the numeric matrix
>>> R, Rn = euler_rotmat(order='zxz', angles=[90, 0, None], showA='False')
>>> # Change the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['theta', 'phi', 'psi'])
>>> # Negativate the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['-theta', '-phi', '-psi'])
>>> # all algebraic matrices for all possible sequences for the local frame
>>> s=['xyz','xzy','yzx','yxz','zxy','zyx','xyx','xzx','yzy','yxy','zxz','zyz']
>>> for seq in s: R = euler_rotmat(order=seq)
>>> # all algebraic matrices for all possible sequences for the global frame
>>> for seq in s: R = euler_rotmat(order=seq, frame='global')
"""
import numpy as np
import sympy as sym
try:
from IPython.core.display import Math, display
ipython = True
except:
ipython = False
angles = np.asarray(np.atleast_1d(angles), dtype=np.float64)
if ~np.isnan(angles).all():
if len(order) != angles.size:
raise ValueError("Parameters 'order' and 'angles' (when " +
"different from None) must have the same size.")
x, y, z = sym.symbols('x, y, z')
sig = [1, 1, 1]
if str_symbols is None:
a, b, g = sym.symbols('alpha, beta, gamma')
else:
s = str_symbols
if s[0][0] == '-': s[0] = s[0][1:]; sig[0] = -1
if s[1][0] == '-': s[1] = s[1][1:]; sig[1] = -1
if s[2][0] == '-': s[2] = s[2][1:]; sig[2] = -1
a, b, g = sym.symbols(s)
var = {'x': x, 'y': y, 'z': z, 0: a, 1: b, 2: g}
# Elemental rotation matrices for xyz (local)
cos, sin = sym.cos, sym.sin
Rx = sym.Matrix([[1, 0, 0], [0, cos(x), sin(x)], [0, -sin(x), cos(x)]])
Ry = sym.Matrix([[cos(y), 0, -sin(y)], [0, 1, 0], [sin(y), 0, cos(y)]])
Rz = sym.Matrix([[cos(z), sin(z), 0], [-sin(z), cos(z), 0], [0, 0, 1]])
if frame.lower() == 'global':
Rs = {'x': Rx.T, 'y': Ry.T, 'z': Rz.T}
order = order.upper()
else:
Rs = {'x': Rx, 'y': Ry, 'z': Rz}
order = order.lower()
R = Rn = sym.Matrix(sym.Identity(3))
str1 = r'\mathbf{R}_{%s}( ' %frame # last space needed for order=''
#str2 = [r'\%s'%var[0], r'\%s'%var[1], r'\%s'%var[2]]
str2 = [1, 1, 1]
for i in range(len(order)):
Ri = Rs[order[i].lower()].subs(var[order[i].lower()], sig[i] * var[i])
R = Ri * R
if sig[i] > 0:
str2[i] = '%s:%s' %(order[i], sym.latex(var[i]))
else:
str2[i] = '%s:-%s' %(order[i], sym.latex(var[i]))
str1 = str1 + str2[i] + ','
if ~np.isnan(angles).all() and ~np.isnan(angles[i]):
if unit[:3].lower() == 'deg':
angles[i] = np.deg2rad(angles[i])
Rn = Ri.subs(var[i], angles[i]) * Rn
#Rn = sym.lambdify(var[i], Ri, 'numpy')(angles[i]) * Rn
str2[i] = str2[i] + '=%.0f^o' %np.around(np.rad2deg(angles[i]), 0)
else:
Rn = Ri * Rn
Rn = sym.simplify(Rn) # for trigonometric relations
try:
# nsimplify only works if there are symbols
Rn2 = sym.latex(sym.nsimplify(Rn, tolerance=1e-8).n(chop=True, prec=4))
except:
Rn2 = sym.latex(Rn.n(chop=True, prec=4))
# there are no symbols, pass it as Numpy array
Rn = np.asarray(Rn)
if showA and ipython:
display(Math(str1[:-1] + ') =' + sym.latex(R, mat_str='matrix')))
if showN and ~np.isnan(angles).all() and ipython:
str2 = ',\;'.join(str2[:angles.size])
display(Math(r'\mathbf{R}_{%s}(%s)=%s' %(frame, str2, Rn2)))
if np.isnan(angles).all():
return R
else:
return R, Rn
###Output
_____no_output_____
###Markdown
Rigid-body transformations in three-dimensions> Marcos Duarte, Renato Naville Watanabe > [Laboratory of Biomechanics and Motor Control](http://pesquisa.ufabc.edu.br/bmclab) > Federal University of ABC, Brazil Contents1 Python setup2 Translation3 Rotation4 Euler angles4.1 Elemental rotations4.2 Rotation around the fixed coordinate system4.3 Rotation around the local coordinate system4.4 Sequence of elemental rotations4.5 Rotations in a coordinate system is equivalent to minus rotations in the other coordinate system4.6 Rotations in a coordinate system is the transpose of inverse order of rotations in the other coordinate system4.7 Sequence of rotations of a Vector4.8 The 12 different sequences of Euler angles4.9 Line of nodes5 Determination of the Euler angles5.1 Gimbal lock6 Determination of the rotation matrix6.1 Basis7 Determination of the rotation matrix between two local coordinate systems8 Translation and Rotation8.1 Transformation matrix8.2 Example with actual motion analysis data9 Further reading10 Video lectures on the Internet11 Problems12 References The kinematics of a rigid body is completely described by its pose, i.e., its position and orientation in space (and the corresponding changes, translation and rotation). In a three-dimensional space, at least three coordinates and three angles are necessary to describe the pose of the rigid body, totaling six degrees of freedom for a rigid body.In motion analysis, to describe a translation and rotation of a rigid body with respect to a coordinate system, typically we attach another coordinate system to the rigid body and determine a transformation between these two coordinate systems.A transformation is any function mapping a set to another set. For the description of the kinematics of rigid bodies, we are interested only in what is called rigid or Euclidean transformations (denoted as SE(3) for the three-dimensional space) because they preserve the distance between every pair of points of the body (which is considered rigid by definition). Translations and rotations are examples of rigid transformations (a reflection is also an example of rigid transformation but this changes the right-hand axis convention to a left hand, which usually is not of interest). In turn, rigid transformations are examples of [affine transformations](https://en.wikipedia.org/wiki/Affine_transformation). Examples of other affine transformations are shear and scaling transformations (which preserve parallelism, but not necessarily lengths or angles). We will follow the same rationale as in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb) and we will skip the fundamental concepts already covered there. So, if you haven't done so yet, you should read that notebook before continuing here. Python setup
###Code
# Import the necessary libraries
import numpy as np
# suppress scientific notation for small numbers:
np.set_printoptions(precision=4, suppress=True)
###Output
_____no_output_____
###Markdown
TranslationA pure three-dimensional translation of a rigid body (or a coordinate system attached to it) in relation to other rigid body (with other coordinate system) is illustrated in the figure below. Figure. A point in three-dimensional space represented in two coordinate systems, with one coordinate system translated. The position of point $\mathbf{P}$ originally described in the $xyz$ (local) coordinate system but now described in the $\mathbf{XYZ}$ (Global) coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{P_l} $$Or in terms of its components:\begin{equation}\begin{array}{}\mathbf{P_X} =& \mathbf{L_X} + \mathbf{P}_x \\\mathbf{P_Y} =& \mathbf{L_Y} + \mathbf{P}_y \\\mathbf{P_Z} =& \mathbf{L_Z} + \mathbf{P}_z \end{array}\end{equation}And in matrix form:\begin{equation}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \end{bmatrix} =\begin{bmatrix}\mathbf{L_X} \\\mathbf{L_Y} \\\mathbf{L_Z} \end{bmatrix} +\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \end{bmatrix}\end{equation}From classical mechanics, this is an example of [Galilean transformation](http://en.wikipedia.org/wiki/Galilean_transformation). Let's use Python to compute some numeric examples: For example, if the local coordinate system is translated by $\mathbf{L_G}=[1, 2, 3]$ in relation to the Global coordinate system, a point with coordinates $\mathbf{P_l}=[4, 5, 6]$ at the local coordinate system will have the position $\mathbf{P_G}=[5, 7, 9]$ at the Global coordinate system:
###Code
LG = np.array([1, 2, 3]) # Numpy array
Pl = np.array([4, 5, 6])
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
This operation also works if we have more than one point (NumPy broadcasts the translation vector over each row of the array of points):
###Code
Pl = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) # 2D array with 3 rows and 3 columns (one point per row)
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
RotationA pure three-dimensional rotation of a $xyz$ (local) coordinate system in relation to other $\mathbf{XYZ}$ (Global) coordinate system and the position of a point in these two coordinate systems are illustrated in the next figure (remember that this is equivalent to describing a rotation between two rigid bodies). A point in three-dimensional space represented in two coordinate systems, with one system rotated. In analogy to the rotation in two dimensions, we can calculate the rotation matrix that describes the rotation of the $xyz$ (local) coordinate system in relation to the $\mathbf{XYZ}$ (Global) coordinate system using the direction cosines between the axes of the two coordinate systems:\begin{equation}\mathbf{R_{Gl}} = \begin{bmatrix}\cos\mathbf{X}x & \cos\mathbf{X}y & \cos\mathbf{X}z \\\cos\mathbf{Y}x & \cos\mathbf{Y}y & \cos\mathbf{Y}z \\\cos\mathbf{Z}x & \cos\mathbf{Z}y & \cos\mathbf{Z}z\end{bmatrix}\end{equation}Note however that for rotations around more than one axis, these angles will not lie in the main planes ($\mathbf{XY, YZ, ZX}$) of the $\mathbf{XYZ}$ coordinate system, as illustrated in the figure below for the direction angles of the $y$ axis only. Thus, the determination of these angles by simple inspection, as we have done for the two-dimensional case, would not be simple. Figure. Definition of direction angles for the $y$ axis of the local coordinate system in relation to the $\mathbf{XYZ}$ Global coordinate system.Note that the nine angles shown in the matrix above for the direction cosines are obviously redundant since only three angles are necessary to describe the orientation of a rigid body in the three-dimensional space. An important characteristic of angles in the three-dimensional space is that angles cannot be treated as vectors: the result of a sequence of rotations of a rigid body around different axes depends on the order of the rotations, as illustrated in the next figure. Figure. The result of a sequence of rotations around different axes of a coordinate system depends on the order of the rotations. In the first example (first row), the rotations are around a Global (fixed) coordinate system. In the second example (second row), the rotations are around a local (rotating) coordinate system.Let's focus now on how to understand rotations in the three-dimensional space, looking at the rotations between coordinate systems (or between rigid bodies). Later we will apply what we have learned to describe the position of a point in these different coordinate systems. Euler anglesThere are different ways to describe a three-dimensional rotation of a rigid body (or of a coordinate system). The most straightforward solution would probably be to use a [spherical coordinate system](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ReferenceFrame.ipynbSpherical-coordinate-system), but spherical coordinates would be difficult to give an anatomical or clinical interpretation. A solution that has been often employed in biomechanics to handle rotations in the three-dimensional space is to use Euler angles. Under certain conditions, Euler angles can have an anatomical interpretation, but this representation also has some caveats. 
Let's see the Euler angles now.[Leonhard Euler](https://en.wikipedia.org/wiki/Leonhard_Euler) in the XVIII century showed that two three-dimensional coordinate systems with a common origin can be related by a sequence of up to three elemental rotations about the axes of the local coordinate system, where no two successive rotations may be about the same axis, which now are known as [Euler (or Eulerian) angles](http://en.wikipedia.org/wiki/Euler_angles). Elemental rotationsFirst, let's see rotations around a fixed Global coordinate system as we did for the two-dimensional case. The next figure illustrates elemental rotations of the local coordinate system around each axis of the fixed Global coordinate system. Figure. Elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. Note that for better clarity, the axis around where the rotation occurs is shown perpendicular to this page for each elemental rotation. Rotations around the fixed coordinate systemThe rotation matrices for the elemental rotations around each axis of the fixed $\mathbf{XYZ}$ coordinate system (rotations of the local coordinate system in relation to the Global coordinate system) are shown next.Around $\mathbf{X}$ axis: \begin{equation}\mathbf{R_{Gl,\,X}} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix}\end{equation}Around $\mathbf{Y}$ axis: \begin{equation}\mathbf{R_{Gl,\,Y}} = \begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix}\end{equation}Around $\mathbf{Z}$ axis: \begin{equation}\mathbf{R_{Gl,\,Z}} = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\end{equation}These matrices are the rotation matrices for the case of two-dimensional coordinate systems plus the corresponding terms for the third axes of the local and Global coordinate systems, which are parallel. To understand why the terms for the third axes are 1's or 0's, for instance, remember they represent the cosine directors. The cosines between $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ for the elemental rotations around respectively the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes are all 1 because $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ are parallel ($\cos 0^o$). The cosines of the other elements are zero because the axis around where each rotation occurs is perpendicular to the other axes of the coordinate systems ($\cos 90^o$). 
Rotations around the local coordinate systemThe rotation matrices for the elemental rotations this time around each axis of the $xyz$ coordinate system (rotations of the Global coordinate system in relation to the local coordinate system), similarly to the two-dimensional case, are simply the transpose of the above matrices as shown next.Around $x$ axis: \begin{equation}\mathbf{R}_{\mathbf{lG},\,x} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix}\end{equation}Around $y$ axis: \begin{equation}\mathbf{R}_{\mathbf{lG},\,y} = \begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix}\end{equation}Around $z$ axis: \begin{equation}\mathbf{R}_{\mathbf{lG},\,z} = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\end{equation}Notice this is equivalent to instead of rotating the local coordinate system by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system; remember that $\cos(-\:\cdot)=\cos(\cdot)$ and $\sin(-\:\cdot)=-\sin(\cdot)$. Sequence of elemental rotationsConsider now a sequence of elemental rotations around the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes of the fixed $\mathbf{XYZ}$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. This sequence of elemental rotations (each one of the local coordinate system with respect to the fixed Global coordinate system) is mathematically represented by a multiplication between the rotation matrices:\begin{equation}\begin{array}{l l}\mathbf{R_{Gl,\;XYZ}} & = \mathbf{R_{Z}} \mathbf{R_{Y}} \mathbf{R_{X}} \\\\ & = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix} \\\\ & =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma-\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:cos\gamma+\sin\alpha\:\sin\gamma \;\;\; \\\cos\beta\:\sin\gamma \;&\;\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma-\sin\alpha\:\cos\gamma \;\;\; \\-\sin\beta \;&\; \sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array}\end{equation}Note the order of the matrices. We can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
#import the necessary libraries
from IPython.core.display import Math, display
import sympy as sym
cos, sin = sym.cos, sym.sin
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz in relation to XYZ:
RX = sym.Matrix([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]])
RY = sym.Matrix([[cos(b), 0, sin(b)], [0, 1, 0], [-sin(b), 0, cos(b)]])
RZ = sym.Matrix([[cos(g), -sin(g), 0], [sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz in relation to XYZ:
RXYZ = RZ*RY*RX
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ}}=') + sym.latex(RXYZ, mat_str='matrix')))
###Output
_____no_output_____
###Markdown
For instance, we can calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $\mathbf{X,Y,Z}$:
###Code
R = sym.lambdify((a, b, g), RXYZ, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R_{Gl,\,XYZ\,}}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).applyfunc(lambda x: sym.Symbol('{:.3f}'.format(x))))))
###Output
_____no_output_____
###Markdown
Below you can test any sequence of rotations around the global coordinates. Just change the matrix R and the values of the angles $\alpha$, $\beta$ and $\gamma$. The example below shows the rotation around the global basis, in the sequence x,y,z, with the angles $\alpha=\pi/3$ rad, $\beta=\pi/4$ rad and $\gamma=\pi/6$ rad.
###Code
import sys
sys.path.insert(1, r'./../functions') # add to python path
%matplotlib notebook
from CCSbasis import CCSbasis
R = RZ*RY*RX
R = sym.lambdify((a, b, g), R, 'numpy')
alpha = np.pi/3
beta = np.pi/4
gamma = np.pi/6
R = R(alpha, beta, gamma)
e1 = np.array([[1,0,0]])
e2 = np.array([[0,1,0]])
e3 = np.array([[0,0,1]])
basis = np.vstack((e1, e2, e3))
basisRot = R@basis
CCSbasis(Oijk=np.array([0, 0, 0]), Oxyz=np.array([0, 0, 0]), ijk=basis.T,
xyz=basisRot.T, vector=False);
###Output
_____no_output_____
###Markdown
Examining the matrix above and the correspondent previous figure, one can see they agree: the rotated $x$ axis (first column of the above matrix) has value -1 in the $\mathbf{Z}$ direction $[0,0,-1]$, the rotated $y$ axis (second column) is at the $\mathbf{Y}$ direction $[0,1,0]$, and the rotated $z$ axis (third column) is at the $\mathbf{X}$ direction $[1,0,0]$.We also can calculate the sequence of elemental rotations around the $x$, $y$, $z$ axes of the rotating $xyz$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of a second $xyz$ local coordinate system around each axis, $x$, $y$, $z$, of the rotating $xyz$ coordinate system.Likewise, this sequence of elemental rotations (each one of the local coordinate system with respect to the rotating local coordinate system) is mathematically represented by a multiplication between the rotation matrices (which are the inverse of the matrices for the rotations around $\mathbf{X,Y,Z}$ as we saw earlier):\begin{equation}\begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz} & = \mathbf{R_{z}} \mathbf{R_{y}} \mathbf{R_{x}} \\\\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix} \\\\& =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma+\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:\cos\gamma-\sin\alpha\:\sin\gamma \;\;\; \\-\cos\beta\:\sin\gamma \;&\;-\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma+\sin\alpha\:\cos\gamma \;\;\; \\\sin\beta \;&\; -\sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array}\end{equation}As before, the order of the matrices is from right to left. Once again, we can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rx = sym.Matrix([[1, 0, 0], [0, cos(a), sin(a)], [0, -sin(a), cos(a)]])
Ry = sym.Matrix([[cos(b), 0, -sin(b)], [0, 1, 0], [sin(b), 0, cos(b)]])
Rz = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz' in relation to xyz:
Rxyz = Rz*Ry*Rx
Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz}=') + sym.latex(Rxyz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
For instance, let's calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $x,y,z$:
###Code
R = sym.lambdify((a, b, g), Rxyz, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).applyfunc(lambda x: sym.Symbol('{:.3f}'.format(x))))))
###Output
_____no_output_____
###Markdown
Once again, let's compare the above matrix and the corresponding previous figure to see if it makes sense. But remember that this matrix is the Global-to-local rotation matrix, $\mathbf{R}_{\mathbf{lG},\,xyz}$, where the coordinates of the local basis' versors are rows, not columns, in this matrix. With this detail in mind, one can see that the previous figure and matrix also agree: the rotated $x$ axis (first row of the above matrix) is at the $\mathbf{Z}$ direction $[0,0,1]$, the rotated $y$ axis (second row) is at the $\mathbf{-Y}$ direction $[0,-1,0]$, and the rotated $z$ axis (third row) is at the $\mathbf{X}$ direction $[1,0,0]$.In fact, this example didn't serve to distinguish versors as rows or columns because the $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrix above is symmetric! Let's look at the resultant matrix for the example above after only the first two rotations, $\mathbf{R}_{\mathbf{lG},\,xy}$, to understand this difference:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xy\,}(90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).applyfunc(lambda x: sym.Symbol('{:.3f}'.format(x))))))
###Output
_____no_output_____
###Markdown
Comparing this matrix with the third plot in the figure, we see that the coordinates of versor $x$ in the Global coordinate system are $[0,1,0]$, i.e., local axis $x$ is aligned with Global axis $Y$, and this versor is indeed the first row, not first column, of the matrix above. Confer the other two rows. What, then, is in the columns of the Global-to-local rotation matrix? The columns are the coordinates of the Global basis' versors in the local coordinate system! For example, the first column of the matrix above is the coordinates of $X$, which is aligned with $z$: $[0,0,1]$. Below you can test any sequence of rotations around the local coordinates. Just change the matrix R and the values of the angles $\alpha$, $\beta$ and $\gamma$. The example below shows the rotation around the local basis, in the sequence x,y,z, with the angles $\alpha=\pi/3$ rad, $\beta=\pi/4$ rad and $\gamma=\pi/6$ rad.
###Code
R = Rz*Ry*Rx
R = sym.lambdify((a, b, g), R, 'numpy')
alpha = np.pi/3
beta = np.pi/4
gamma = np.pi/6
R = R(alpha, beta, gamma)
e1 = np.array([[1, 0, 0]])
e2 = np.array([[0, 1, 0]])
e3 = np.array([[0, 0, 1]])
basis = np.vstack((e1,e2,e3))
basisRot = R@basis
CCSbasis(Oijk=np.array([0, 0, 0]), Oxyz=np.array([0, 0, 0]), ijk=basisRot.T,
xyz=basis.T, vector=False);
###Output
_____no_output_____
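###Markdown
As stated before, the result of a sequence of rotations depends on the order of the rotations; here is a quick numerical check of this (a sketch reusing the symbols, elemental matrices and angles defined in the cells above):
###Code
# Sketch: rotations are not commutative; with the same two angles, rotating
# first around x and then around y differs from the reverse order.
Rab = sym.lambdify((a, b), Ry*Rx, 'numpy')(alpha, beta)  # first x, then y
Rba = sym.lambdify((a, b), Rx*Ry, 'numpy')(alpha, beta)  # first y, then x
print('Same rotation matrix?', np.allclose(Rab, Rba))
###Output
_____no_output_____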
###Markdown
Rotations in a coordinate system are equivalent to minus rotations in the other coordinate systemRemember that we saw for the elemental rotations that rotating the local coordinate system, $xyz$, by $\alpha, \beta, \gamma$ in relation to the axes of the Global coordinate system is equivalent to rotating the Global coordinate system, $\mathbf{XYZ}$, by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system. The same property applies to a sequence of rotations: rotations of $xyz$ in relation to $\mathbf{XYZ}$ by $\alpha, \beta, \gamma$ result in the same matrix as rotations of $\mathbf{XYZ}$ in relation to $xyz$ by $-\alpha, -\beta, -\gamma$: $$ \begin{array}{l l}\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) & = \mathbf{R_{Gl,\,Z}}(\gamma)\, \mathbf{R_{Gl,\,Y}}(\beta)\, \mathbf{R_{Gl,\,X}}(\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,z\,}(-\gamma)\, \mathbf{R}_{\mathbf{lG},\,y\,}(-\beta)\, \mathbf{R}_{\mathbf{lG},\,x\,}(-\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)\end{array}$$Check this by examining the $\mathbf{R_{Gl,\,XYZ}}$ and $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrices above.Let's verify this property with Sympy:
###Code
RXYZ = RZ*RY*RX
# Rotation matrix of xyz in relation to XYZ:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) =')))
display(Math(sym.latex(RXYZ, mat_str='matrix')))
# Elemental rotation matrices of XYZ in relation to xyz and negate all angles:
Rx_neg = sym.Matrix([[1, 0, 0], [0, cos(-a), -sin(-a)], [0, sin(-a), cos(-a)]]).T
Ry_neg = sym.Matrix([[cos(-b), 0, sin(-b)], [0, 1, 0], [-sin(-b), 0, cos(-b)]]).T
Rz_neg = sym.Matrix([[cos(-g), -sin(-g), 0], [sin(-g), cos(-g), 0], [0, 0, 1]]).T
# Rotation matrix of XYZ in relation to xyz:
Rxyz_neg = Rz_neg*Ry_neg*Rx_neg
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma) =')))
display(Math(sym.latex(Rxyz_neg, mat_str='matrix')))
# Check that the two matrices are equal:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) \;==\;' + \
r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)')))
RXYZ == Rxyz_neg
###Output
_____no_output_____
###Markdown
Rotations in a coordinate system are the transpose of the inverse order of rotations in the other coordinate systemThere is another property of the rotation matrices for the different coordinate systems: the rotation matrix, for example from the Global to the local coordinate system for the $xyz$ sequence, is just the transpose of the rotation matrix for the inverse operation (from the local to the Global coordinate system) of the inverse sequence ($\mathbf{ZYX}$) and vice-versa:$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz}(\alpha,\beta,\gamma) & = \mathbf{R}_{\mathbf{lG},\,z\,} \mathbf{R}_{\mathbf{lG},\,y\,} \mathbf{R}_{\mathbf{lG},\,x} \\& = \mathbf{R_{Gl,\,Z\,}^{-1}} \mathbf{R_{Gl,\,Y\,}^{-1}} \mathbf{R_{Gl,\,X\,}^{-1}} \\& = \mathbf{R_{Gl,\,Z\,}^{T}} \mathbf{R_{Gl,\,Y\,}^{T}} \mathbf{R_{Gl,\,X\,}^{T}} \\& = (\mathbf{R_{Gl,\,X\,}} \mathbf{R_{Gl,\,Y\,}} \mathbf{R_{Gl,\,Z}})^\mathbf{T} \\& = \mathbf{R_{Gl,\,ZYX\,}^{T}}(\gamma,\beta,\alpha)\end{array}$$Where we used the properties that the inverse of the rotation matrix (which is orthonormal) is its transpose and that the transpose of a product of matrices is equal to the product of their transposes in reverse order.Let's verify this property with Sympy:
###Code
RZYX = RX*RY*RZ
Rxyz = Rz*Ry*Rx
display(Math(sym.latex(r'\mathbf{R_{Gl,\,ZYX\,}^T}=') + sym.latex(RZYX.T, mat_str='matrix')))
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(\alpha,\beta,\gamma) \,==\,' + \
r'\mathbf{R_{Gl,\,ZYX\,}^T}(\gamma,\beta,\alpha)')))
Rxyz == RZYX.T
###Output
_____no_output_____
###Markdown
Sequence of rotations of a VectorWe saw in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynbRotation-of-a-Vector) that the rotation matrix can also be used to rotate a vector (in fact, a point, image, solid, etc.) by a given angle around an axis of the coordinate system. Let's investigate that for the 3D case using the example earlier where a book was rotated in different orders and around the Global and local coordinate systems. Before any rotation, the point shown in that figure as a round black dot on the spine of the book has coordinates $\mathbf{P}=[0, 1, 2]$ (the book has thickness 0, width 1, and height 2). After the first sequence of rotations shown in the figure (rotated around $X$ and $Y$ by $90^0$ each time), $\mathbf{P}$ has coordinates $\mathbf{P}=[1, -2, 0]$ in the global coordinate system. Let's verify that:
###Code
P = np.array([[0, 1, 2]]).T
RXY = RY*RX
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. -2. 0.]]
###Markdown
As expected. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, but still around the Global coordinate system.Although we are performing vector rotation, where we don't need the concept of transformation between coordinate systems, in the example above we used the local-to-Global rotation matrix, $\mathbf{R_{Gl}}$. As we saw in the notebook for the 2D transformation, when we use this matrix, it performs a counter-clockwise (positive) rotation. If we want to rotate the vector in the clockwise (negative) direction, we can use the very same rotation matrix entering a negative angle or we can use the inverse rotation matrix, the Global-to-local rotation matrix, $\mathbf{R_{lG}}$ and a positive (negative of negative) angle, because $\mathbf{R_{Gl}}(\alpha) = \mathbf{R_{lG}}(-\alpha)$, but bear in mind that even in this latter case we are rotating around the Global coordinate system! Consider now that we want to deduce algebraically the position of the point $\mathbf{P}$ after the rotations around the local coordinate system as shown in the second set of examples in the figure with the sequence of book rotations. The point has the same initial position, $\mathbf{P}=[0, 1, 2]$, and after the rotations around $x$ and $y$ by $90^0$ each time, what is the position of this point? It's implicit in this question that the new desired position is in the Global coordinate system because the local coordinate system rotates with the book and the point never changes its position in the local coordinate system. So, by inspection of the figure, the new position of the point is $\mathbf{P1}=[2, 0, 1]$. Let's naively try to deduce this position by repeating the steps as before:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. 2. -0.]]
###Markdown
The wrong answer. The problem is that we defined the rotation of a vector using the local-to-Global rotation matrix. One correct solution for this problem is to continue using the multiplication of the Global-to-local rotation matrices, $\mathbf{R}_{xy} = \mathbf{R}_y\,\mathbf{R}_x$, transpose $\mathbf{R}_{xy}$ to obtain the local-to-Global rotation matrix, $\mathbf{R_{XY}}=\mathbf{R^T}_{xy}$, and then rotate the vector using this matrix:
###Code
Rxy = Ry*Rx
RXY = Rxy.T
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer.Another solution is to understand that when using the Global-to-local rotation matrix, counter-clockwise rotations (as performed with the book in the figure) are negative, not positive, and that when dealing with rotations with the Global-to-local rotation matrix the order of matrix multiplication is inverted, for example, it should be $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$ (an added underscore to remind us this is not the convention adopted here).
###Code
R_xy = Rx*Ry
R = sym.lambdify((a, b), R_xy, 'numpy')
R = R(-np.pi/2, -np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, around the local coordinate system.In fact, you will find elsewhere texts about rotations in 3D adopting this latter convention as the standard, i.e., they introduce the Global-to-local rotation matrix and describe sequence of rotations algebraically as matrix multiplication in the direct order, $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$, the inverse we have done in this text. It's all a matter of convention, just that. The 12 different sequences of Euler anglesThe Euler angles are defined in terms of rotations around a rotating local coordinate system. As we saw for the sequence of rotations around $x, y, z$, the axes of the local rotated coordinate system are not fixed in space because after the first elemental rotation, the other two axes rotate. Other sequences of rotations could be produced without combining axes of the two different coordinate systems (Global and local) for the definition of the rotation axes. There is a total of 12 different sequences of three elemental rotations that are valid and may be used for describing the rotation of a coordinate system with respect to another coordinate system:$$ xyz \quad xzy \quad yzx \quad yxz \quad zxy \quad zyx $$$$ xyx \quad xzx \quad yzy \quad yxy \quad zxz \quad zyz $$The first six sequences (first row) are all around different axes, they are usually referred as Cardan or Tait–Bryan angles. The other six sequences (second row) have the first and third rotations around the same axis, but keep in mind that the axis for the third rotation is not at the same place anymore because it changed its orientation after the second rotation. The sequences with repeated axes are known as proper or classic Euler angles.Which order to use it is a matter of convention, but because the order affects the results, it's fundamental to follow a convention and report it. In Engineering Mechanics (including Biomechanics), the $xyz$ order is more common; in Physics the $zxz$ order is more common (but the letters chosen to refer to the axes are arbitrary, what matters is the directions they represent). In Biomechanics, the order for the Cardan angles is most often based on the angle of most interest or of most reliable measurement. Accordingly, the axis of flexion/extension is typically selected as the first axis, the axis for abduction/adduction is the second, and the axis for internal/external rotation is the last one. We will see about this order later. The $zyx$ order is commonly used to describe the orientation of a ship or aircraft and the rotations are known as the nautical angles: yaw, pitch and roll, respectively (see next figure). Figure. The principal axes of an aircraft and the names for the rotations around these axes (image from Wikipedia). If instead of rotations around the rotating local coordinate system we perform rotations around the fixed Global coordinate system, we will have other 12 different sequences of three elemental rotations, these are called simply rotation angles. 
So, in total there are 24 possible different sequences of three elemental rotations, but the 24 orders are not independent; with the 12 different sequences of Euler angles at the local coordinate system we can obtain the other 12 sequences at the Global coordinate system.The Python function `euler_rotmat.py` (code at the end of this text) determines the rotation matrix in algebraic form for any of the 24 different sequences (and sequences with only one or two axes can be input). This function also determines the rotation matrix in numeric form if a list of up to three angles is input.For instance, the rotation matrix in algebraic form for the $zxz$ order of Euler angles at the local coordinate system and the corresponding rotation matrix in numeric form after three elemental rotations by $90^o$ each are:
###Code
from euler_rotmat import euler_rotmat
Ra, Rn = euler_rotmat(order='zxz', frame='local', angles=[90, 90, 90])
###Output
_____no_output_____
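###Markdown
For instance, the nautical $zyx$ (yaw, pitch and roll) sequence mentioned above could be generated with the same function (a sketch; the angle values below are arbitrary, chosen only for illustration):
###Code
# Sketch: rotation matrix for the zyx (yaw-pitch-roll) sequence at the local
# coordinate system, with arbitrary angles in degrees.
Ra, Rn = euler_rotmat(order='zyx', frame='local', angles=[10, 20, 30])
###Output
_____no_output_____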
###Markdown
Line of nodesThe second axis of rotation in the rotating coordinate system is also referred as the nodal axis or line of nodes; this axis coincides with the intersection of two perpendicular planes, one from each Global (fixed) and local (rotating) coordinate systems. The figure below shows an example of rotations and the nodal axis for the $xyz$ sequence of the Cardan angles. Figure. First row: example of rotations for the $xyz$ sequence of the Cardan angles. The Global (fixed) $XYZ$ coordinate system is shown in green, the local (rotating) $xyz$ coordinate system is shown in blue. The nodal axis (N, shown in red) is defined by the intersection of the $YZ$ and $xy$ planes and all rotations can be described in relation to this nodal axis or to a perpendicular axis to it. Second row: starting from no rotation, the local coordinate system is rotated by $\alpha$ around the $x$ axis, then by $\beta$ around the rotated $y$ axis, and finally by $\gamma$ around the twice rotated $z$ axis. Note that the line of nodes coincides with the $y$ axis for the second rotation. Determination of the Euler anglesOnce a convention is adopted, the corresponding three Euler angles of rotation can be found. For example, for the $\mathbf{R}_{xyz}$ rotation matrix:
###Code
R = euler_rotmat(order='xyz', frame='local')
###Output
_____no_output_____
###Markdown
The corresponding Cardan angles for the `xyz` sequence can be given by:\begin{equation}\begin{array}{}\alpha = \arctan\left(\dfrac{\sin(\alpha)}{\cos(\alpha)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{21}}{\;\;\;\mathbf{R}_{22}}\right) \\\\\beta = \arctan\left(\dfrac{\sin(\beta)}{\cos(\beta)}\right) = \arctan\left(\dfrac{\mathbf{R}_{20}}{\sqrt{\mathbf{R}_{00}^2+\mathbf{R}_{10}^2}}\right) \\ \\\gamma = \arctan\left(\dfrac{\sin(\gamma)}{\cos(\gamma)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{10}}{\;\;\;\mathbf{R}_{00}}\right)\end{array}\end{equation}Note that we prefer to use the mathematical function `arctan2` rather than simply `arcsin`, `arccos` or `arctan` because the latter cannot for example distinguish $45^o$ from $135^o$ and also for better numerical accuracy. See the text [Angular kinematics in a plane (2D)](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/KinematicsAngular2D.ipynb) for more on these issues.And here is a Python function to compute the Euler angles of rotations from the Global to the local coordinate system for the $xyz$ Cardan sequence:
###Code
def euler_angles_from_rot_xyz(rot_matrix, unit='deg'):
""" Compute Euler angles from rotation matrix in the xyz sequence."""
import numpy as np
R = np.array(rot_matrix, copy=False).astype(np.float64)[:3, :3]
angles = np.zeros(3)
angles[0] = np.arctan2(-R[2, 1], R[2, 2])
angles[1] = np.arctan2( R[2, 0], np.sqrt(R[0, 0]**2 + R[1, 0]**2))
angles[2] = np.arctan2(-R[1, 0], R[0, 0])
if unit[:3].lower() == 'deg': # convert from rad to degree
angles = np.rad2deg(angles)
return angles
###Output
_____no_output_____
###Markdown
For instance, consider sequential rotations of 45$^o$ around $x,y,z$. The resultant rotation matrix is:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[45, 45, 45], showA=False)
###Output
_____no_output_____
###Markdown
Let's check that calculating back the Cardan angles from this rotation matrix using the `euler_angles_from_rot_xyz()` function:
###Code
euler_angles_from_rot_xyz(Rn, unit='deg')
###Output
_____no_output_____
###Markdown
We could implement a function to calculate the Euler angles for any of the 12 sequences (in fact, plus another 12 sequences if we consider all the rotations from and to the two coordinate systems), but this is tedious. There is a smarter solution using the concept of [quaternion](http://en.wikipedia.org/wiki/Quaternion), but we will not see that now. Let's see a problem with using Euler angles known as gimbal lock. Gimbal lock[Gimbal lock](http://en.wikipedia.org/wiki/Gimbal_lock) is the loss of one degree of freedom in a three-dimensional coordinate system that occurs when an axis of rotation is placed parallel with another previous axis of rotation and two of the three rotations will be around the same direction given a certain convention of the Euler angles. This "locks" the system into rotations in a degenerate two-dimensional space. The system is not really locked in the sense it can't be moved or reach the other degree of freedom, but it will need an extra rotation for that. For instance, let's look at the $zxz$ sequence of rotations by the angles $\alpha, \beta, \gamma$:\begin{equation}\begin{array}{l l}\mathbf{R}_{zxz} & = \mathbf{R_{z}} \mathbf{R_{x}} \mathbf{R_{z}} \\ \\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\beta & \sin\beta \\0 & -\sin\beta & \cos\beta\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array}\end{equation}Which results in:
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rz = sym.Matrix([[cos(a), sin(a), 0], [-sin(a), cos(a), 0], [0, 0, 1]])
Rx = sym.Matrix([[1, 0, 0], [0, cos(b), sin(b)], [0, -sin(b), cos(b)]])
Rz2 = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix for the zxz sequence:
Rzxz = Rz2*Rx*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}=') + sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Let's examine what happens with this rotation matrix when the rotation around the second axis ($x$) by $\beta$ is zero:\begin{equation}\begin{array}{l l}\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma) = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array}\end{equation}The second matrix is the identity matrix and has no effect on the product of the matrices, which will be:
###Code
Rzxz = Rz2*Rz
Math(sym.latex(r'\mathbf{R}_{xyz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Which simplifies to:
###Code
Rzxz = sym.simplify(Rzxz)
Math(sym.latex(r'\mathbf{R}_{xyz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Despite different values of $\alpha$ and $\gamma$ the result is a single rotation around the $z$ axis given by the sum $\alpha+\gamma$. In this case, of the three degrees of freedom one was lost (the other degree of freedom was set by $\beta=0$). For movement analysis, this means for example that one angle will be undetermined because everything we know is the sum of the two angles obtained from the rotation matrix. We can set the unknown angle to zero but this is arbitrary.In fact, we already dealt with another example of gimbal lock when we looked at the $xyz$ sequence with rotations by $90^o$. See the figure representing these rotations again and notice that the first and third rotations were around the same axis because the second rotation was by $90^o$. Let's do the matrix multiplication replacing only the second angle by $90^o$ (and let's use the `euler_rotmat.py` function):
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[None, 90., None], showA=False)
###Output
_____no_output_____
###Markdown
Once again, one degree of freedom was lost and we will not be able to uniquely determine the three angles for the given rotation matrix and sequence.Possible solutions to avoid the gimbal lock are: choose a different sequence; do not rotate the system by the angle that puts the system in gimbal lock (in the examples above, avoid $\beta=90^o$); or add an extra fourth parameter in the description of the rotation angles. But if we have a physical system where we measure or specify exactly three Euler angles in a fixed sequence to describe or control it, and we can't avoid the system assuming certain angles, then we might have to say "Houston, we have a problem". A famous situation where such a problem occurred was during the Apollo 13 mission. This is an actual conversation between crew and mission control during the Apollo 13 mission (Corke, 2011):>`Mission clock: 02 08 12 47` **Flight**: *Go, Guidance.* **Guido**: *He’s getting close to gimbal lock there.* **Flight**: *Roger. CapCom, recommend he bring up C3, C4, B3, B4, C1 and C2 thrusters, and advise he’s getting close to gimbal lock.* **CapCom**: *Roger.* *Of note, it was not a gimbal lock that caused the accident with the Apollo 13 mission, the problem was an oxygen tank explosion.* Determination of the rotation matrixA typical way to determine the rotation matrix for a rigid body in biomechanics is to use motion analysis to measure the position of at least three non-collinear markers placed on the rigid body, and then calculate a basis with these positions, analogous to what we have described in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb). BasisIf we have the position of three markers: **m1**, **m2**, **m3**, a basis (formed by three orthogonal versors) can be found as: - First axis, **v1**, the vector **m2-m1**; - Second axis, **v2**, the cross product between the vectors **v1** and **m3-m1**; - Third axis, **v3**, the cross product between the vectors **v1** and **v2**. Then, each of these vectors is normalized resulting in three orthogonal versors. For example, given the positions m1 = [1,0,0], m2 = [0,1,0], m3 = [0,0,1], a basis can be found:
###Code
m1 = np.array([1, 0, 0])
m2 = np.array([0, 1, 0])
m3 = np.array([0, 0, 1])
v1 = m2 - m1
v2 = np.cross(v1, m3 - m1)
v3 = np.cross(v1, v2)
print('Versors:')
v1 = v1/np.linalg.norm(v1)
print('v1 =', v1)
v2 = v2/np.linalg.norm(v2)
print('v2 =', v2)
v3 = v3/np.linalg.norm(v3)
print('v3 =', v3)
print('\nNorm of each versor:\n',
np.linalg.norm(np.cross(v1, v2)),
np.linalg.norm(np.cross(v1, v3)),
np.linalg.norm(np.cross(v2, v3)))
###Output
Versors:
v1 = [-0.7071 0.7071 0. ]
v2 = [0.5774 0.5774 0.5774]
v3 = [ 0.4082 0.4082 -0.8165]
Norm of each versor:
1.0 1.0 1.0000000000000002
###Markdown
Remember from the text [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb) that the versors of this basis are the columns of the $\mathbf{R_{Gl}}$ and the rows of the $\mathbf{R_{lG}}$ rotation matrices, for instance:
###Code
RlG = np.array([v1, v2, v3])
print('Rotation matrix from Global to local coordinate system:\n', RlG)
###Output
Rotation matrix from Global to local coordinate system:
[[-0.7071 0.7071 0. ]
[ 0.5774 0.5774 0.5774]
[ 0.4082 0.4082 -0.8165]]
###Markdown
And the corresponding angles of rotation using the $xyz$ sequence are:
###Code
euler_angles_from_rot_xyz(RlG)
###Output
_____no_output_____
###Markdown
These angles don't mean anything now because they are angles of the axes of the arbitrary basis we computed. In biomechanics, if we want an anatomical interpretation of the coordinate system orientation, we define the versors of the basis oriented with anatomical axes (e.g., for the shoulder, one versor would be aligned with the long axis of the upper arm) as seen [in this notebook about reference frames](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb). Determination of the rotation matrix between two local coordinate systemsSimilarly to the [bidimensional case](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation2D.ipynb), to compute the rotation matrix between two local coordinate systems we can use the rotation matrices of both coordinate systems:\begin{equation} R_{l_1l_2} = R_{Gl_1}^TR_{Gl_2}\end{equation}After this, the Euler angles between both coordinate systems can be found using the `arctan2` function as shown previously. Translation and RotationConsider the case where the local coordinate system is translated and rotated in relation to the Global coordinate system as illustrated in the next figure. Figure. A point in three-dimensional space represented in two coordinate systems, with one system translated and rotated. The position of point $\mathbf{P}$ originally described in the local coordinate system, but now described in the Global coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} $$This means that we first *disrotate* the local coordinate system and then correct for the translation between the two coordinate systems. Note that we can't invert this order: the point position is expressed in the local coordinate system and we can't add this vector to another vector expressed in the Global coordinate system, first we have to convert the vectors to the same coordinate system.If now we want to find the position of a point at the local coordinate system given its position in the Global coordinate system, the rotation matrix and the translation vector, we have to invert the expression above:$$ \begin{array}{l l}\mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{R_{Gl}^{-1}}\cdot\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l}\right) \implies \\\\\mathbf{R_{Gl}^{-1}}\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\mathbf{L_G} + \mathbf{R_{Gl}^{-1}}\mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{P_l} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{P_G}-\mathbf{L_G}\right) = \mathbf{R_{Gl}^T}\left(\mathbf{P_G}-\mathbf{L_G}\right) \;\;\;\;\; \text{or} \;\;\;\;\; \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) \end{array} $$The expression above indicates that to perform the inverse operation, to go from the Global to the local coordinate system, we first translate and then rotate the coordinate system. Transformation matrixIt is possible to combine the translation and rotation operations in only one matrix, called the transformation matrix:\begin{equation}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} =\begin{bmatrix}. & . & . & \mathbf{L_{X}} \\. & \mathbf{R_{Gl}} & . & \mathbf{L_{Y}} \\. & . & . 
& \mathbf{L_{Z}} \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \\1\end{bmatrix}\end{equation}Or simply:$$ \mathbf{P_G} = \mathbf{T_{Gl}}\mathbf{P_l} $$Remember that in general the transformation matrix is not orthonormal, i.e., its inverse is not equal to its transpose.The inverse operation, to express the position at the local coordinate system in terms of the Global reference system, is:$$ \mathbf{P_l} = \mathbf{T_{Gl}^{-1}}\mathbf{P_G} $$And in matrix form:\begin{equation}\begin{bmatrix}\mathbf{P_x} \\\mathbf{P_y} \\\mathbf{P_z} \\1\end{bmatrix} =\begin{bmatrix}\cdot & \cdot & \cdot & \cdot \\\cdot & \mathbf{R^{-1}_{Gl}} & \cdot & -\mathbf{R^{-1}_{Gl}}\:\mathbf{L_G} \\\cdot & \cdot & \cdot & \cdot \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix}\end{equation} Example with actual motion analysis data *The data for this example is taken from page 183 of David Winter's book.* Consider the following marker positions placed on a leg (described in the laboratory coordinate system with coordinates $x, y, z$ in cm, the $x$ axis points forward and the $y$ axes points upward): lateral malleolus (**lm** = [2.92, 10.10, 18.85]), medial malleolus (**mm** = [2.71, 10.22, 26.52]), fibular head (**fh** = [5.05, 41.90, 15.41]), and medial condyle (**mc** = [8.29, 41.88, 26.52]). Define the ankle joint center as the centroid between the **lm** and **mm** markers and the knee joint center as the centroid between the **fh** and **mc** markers. An anatomical coordinate system for the leg can be defined as: the quasi-vertical axis ($y$) passes through the ankle and knee joint centers; a temporary medio-lateral axis ($z$) passes through the two markers on the malleolus, an anterior-posterior as the cross product between the two former calculated orthogonal axes, and the origin at the ankle joint center. a) Calculate the anatomical coordinate system for the leg as described above. b) Calculate the rotation matrix and the translation vector for the transformation from the anatomical to the laboratory coordinate system. c) Calculate the position of each marker and of each joint center at the anatomical coordinate system. d) Calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent).
###Code
# calculation of the joint centers
mm = np.array([2.71, 10.22, 26.52])
lm = np.array([2.92, 10.10, 18.85])
fh = np.array([5.05, 41.90, 15.41])
mc = np.array([8.29, 41.88, 26.52])
ajc = (mm + lm)/2
kjc = (fh + mc)/2
print('Position of the ankle joint center:', ajc)
print('Position of the knee joint center:', kjc)
# calculation of the anatomical coordinate system axes (basis)
y = kjc - ajc
x = np.cross(y, mm - lm)
z = np.cross(x, y)
print('Versors:')
x = x/np.linalg.norm(x)
y = y/np.linalg.norm(y)
z = z/np.linalg.norm(z)
print('x =', x)
print('y =', y)
print('z =', z)
Oleg = ajc
print('\nOrigin =', Oleg)
# Rotation matrices
RGl = np.array([x, y , z]).T
print('Rotation matrix from the anatomical to the laboratory coordinate system:\n', RGl)
RlG = RGl.T
print('\nRotation matrix from the laboratory to the anatomical coordinate system:\n', RlG)
# Translational vector
OG = np.array([0, 0, 0]) # Laboratory coordinate system origin
LG = Oleg - OG
print('Translational vector from the anatomical to the laboratory coordinate system:\n', LG)
###Output
Translational vector from the anatomical to the laboratory coordinate system:
[ 2.815 10.16 22.685]
###Markdown
To get the coordinates from the laboratory (global) coordinate system to the anatomical (local) coordinate system:$$ \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) $$
###Code
# position of each marker and of each joint center at the anatomical coordinate system
mml = np.dot(RlG, (mm - LG)) # equivalent to the algebraic expression RlG*(mm - LG).T
lml = np.dot(RlG, (lm - LG))
fhl = np.dot(RlG, (fh - LG))
mcl = np.dot(RlG, (mc - LG))
ajcl = np.dot(RlG, (ajc - LG))
kjcl = np.dot(RlG, (kjc - LG))
print('Coordinates of mm in the anatomical system:\n', mml)
print('Coordinates of lm in the anatomical system:\n', lml)
print('Coordinates of fh in the anatomical system:\n', fhl)
print('Coordinates of mc in the anatomical system:\n', mcl)
print('Coordinates of kjc in the anatomical system:\n', kjcl)
print('Coordinates of ajc in the anatomical system (origin):\n', ajcl)
###Output
Coordinates of mm in the anatomical system:
[-0. -0.1592 3.8336]
Coordinates of lm in the anatomical system:
[-0. 0.1592 -3.8336]
Coordinates of fh in the anatomical system:
[-1.7703 32.1229 -5.5078]
Coordinates of mc in the anatomical system:
[ 1.7703 31.8963 5.5078]
Coordinates of kjc in the anatomical system:
[ 0. 32.0096 0. ]
Coordinates of ajc in the anatomical system (origin):
[0. 0. 0.]
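###Markdown
As a consistency check (a sketch reusing the variables computed above), we can map one marker back to the laboratory coordinate system with $\mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l}$ and confirm that the original measured coordinates are recovered:
###Code
# Sketch: transform the medial malleolus marker from the anatomical (local)
# coordinate system back to the laboratory (Global) coordinate system.
mm_back = LG + np.dot(RGl, mml)
print('mm mapped back to the laboratory system:', mm_back)
print('original mm:', mm)
###Output
_____no_output_____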
###Markdown
Rigid-body transformations in three-dimensions> Marcos Duarte > Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/)) > Federal University of ABC, Brazil The kinematics of a rigid body is completely described by its pose, i.e., its position and orientation in space (and the corresponding changes, translation and rotation). In a three-dimensional space, at least three coordinates and three angles are necessary to describe the pose of the rigid body, totaling six degrees of freedom for a rigid body.In motion analysis, to describe a translation and rotation of a rigid body with respect to a coordinate system, typically we attach another coordinate system to the rigid body and determine a transformation between these two coordinate systems.A transformation is any function mapping a set to another set. For the description of the kinematics of rigid bodies, we are interested only in what is called rigid or Euclidean transformations (denoted as SE(3) for the three-dimensional space) because they preserve the distance between every pair of points of the body (which is considered rigid by definition). Translations and rotations are examples of rigid transformations (a reflection is also an example of rigid transformation but this changes the right-hand axis convention to a left hand, which usually is not of interest). In turn, rigid transformations are examples of [affine transformations](https://en.wikipedia.org/wiki/Affine_transformation). Examples of other affine transformations are shear and scaling transformations (which preserve parallelism but not necessarily angles or lengths). We will follow the same rationale as in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/Transformation2D.ipynb) and we will skip the fundamental concepts already covered there. So, if you haven't done so yet, you should read that notebook before continuing here. TranslationA pure three-dimensional translation of a rigid body (or a coordinate system attached to it) in relation to another rigid body (with another coordinate system) is illustrated in the figure below. Figure. A point in three-dimensional space represented in two coordinate systems, with one coordinate system translated. The position of point $\mathbf{P}$ originally described in the $xyz$ (local) coordinate system but now described in the $\mathbf{XYZ}$ (Global) coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{P_l} $$Or in terms of its components:$$ \begin{array}{}\mathbf{P_X} =& \mathbf{L_X} + \mathbf{P}_x \\\mathbf{P_Y} =& \mathbf{L_Y} + \mathbf{P}_y \\\mathbf{P_Z} =& \mathbf{L_Z} + \mathbf{P}_z \end{array} $$And in matrix form:$$\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \end{bmatrix} =\begin{bmatrix}\mathbf{L_X} \\\mathbf{L_Y} \\\mathbf{L_Z} \end{bmatrix} +\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \end{bmatrix}$$From classical mechanics, this is an example of [Galilean transformation](http://en.wikipedia.org/wiki/Galilean_transformation). Let's use Python to compute some numeric examples:
###Code
# Import the necessary libraries
import numpy as np
# suppress scientific notation for small numbers:
np.set_printoptions(precision=4, suppress=True)
###Output
_____no_output_____
###Markdown
For example, if the local coordinate system is translated by $\mathbf{L_G}=[1, 2, 3]$ in relation to the Global coordinate system, a point with coordinates $\mathbf{P_l}=[4, 5, 6]$ at the local coordinate system will have the position $\mathbf{P_G}=[5, 7, 9]$ at the Global coordinate system:
###Code
LG = np.array([1, 2, 3]) # Numpy array
Pl = np.array([4, 5, 6])
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
This operation also works if we have more than one point (NumPy broadcasts the translation vector across the rows of the array of points):
###Code
Pl = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) # 2D array with 3 rows and 3 columns (one point per row)
PG = LG + Pl
PG
###Output
_____no_output_____
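###Markdown
The inverse operation, to express a point given in the Global coordinate system at the (translated) local coordinate system, is simply the subtraction of the translation vector; a minimal sketch using the arrays defined above:
###Code
# Sketch: going from the Global back to the local coordinate system for a pure
# translation is just the subtraction of the translation vector.
Pl_back = PG - LG
Pl_back
###Output
_____no_output_____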
###Markdown
RotationA pure three-dimensional rotation of a $xyz$ (local) coordinate system in relation to other $\mathbf{XYZ}$ (Global) coordinate system and the position of a point in these two coordinate systems are illustrated in the next figure (remember that this is equivalent to describing a rotation between two rigid bodies). A point in three-dimensional space represented in two coordinate systems, with one system rotated. In analogy to the rotation in two dimensions, we can calculate the rotation matrix that describes the rotation of the $xyz$ (local) coordinate system in relation to the $\mathbf{XYZ}$ (Global) coordinate system using the direction cosines between the axes of the two coordinate systems:$$ \mathbf{R_{Gl}} = \begin{bmatrix}\cos\mathbf{X}x & \cos\mathbf{X}y & \cos\mathbf{X}z \\\cos\mathbf{Y}x & \cos\mathbf{Y}y & \cos\mathbf{Y}z \\\cos\mathbf{Z}x & \cos\mathbf{Z}y & \cos\mathbf{Z}z\end{bmatrix} $$Note however that for rotations around more than one axis, these angles will not lie in the main planes ($\mathbf{XY, YZ, ZX}$) of the $\mathbf{XYZ}$ coordinate system, as illustrated in the figure below for the direction angles of the $y$ axis only. Thus, the determination of these angles by simple inspection, as we have done for the two-dimensional case, would not be simple. Figure. Definition of direction angles for the $y$ axis of the local coordinate system in relation to the $\mathbf{XYZ}$ Global coordinate system.Note that the nine angles shown in the matrix above for the direction cosines are obviously redundant since only three angles are necessary to describe the orientation of a rigid body in the three-dimensional space. An important characteristic of angles in the three-dimensional space is that angles cannot be treated as vectors: the result of a sequence of rotations of a rigid body around different axes depends on the order of the rotations, as illustrated in the next figure. Figure. The result of a sequence of rotations around different axes of a coordinate system depends on the order of the rotations. In the first example (first row), the rotations are around a Global (fixed) coordinate system. In the second example (second row), the rotations are around a local (rotating) coordinate system.Let's focus now on how to understand rotations in the three-dimensional space, looking at the rotations between coordinate systems (or between rigid bodies). Later we will apply what we have learned to describe the position of a point in these different coordinate systems. Euler anglesThere are different ways to describe a three-dimensional rotation of a rigid body (or of a coordinate system). The most straightforward solution would probably be to use a [spherical coordinate system](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ReferenceFrame.ipynbSpherical-coordinate-system), but spherical coordinates would be difficult to give an anatomical or clinical interpretation. A solution that has been often employed in biomechanics to handle rotations in the three-dimensional space is to use Euler angles. Under certain conditions, Euler angles can have an anatomical interpretation, but this representation also has some caveats. 
Let's see the Euler angles now.[Leonhard Euler](https://en.wikipedia.org/wiki/Leonhard_Euler) in the XVIII century showed that two three-dimensional coordinate systems with a common origin can be related by a sequence of up to three elemental rotations about the axes of the local coordinate system, where no two successive rotations may be about the same axis, which now are known as [Euler (or Eulerian) angles](http://en.wikipedia.org/wiki/Euler_angles). Elemental rotationsFirst, let's see rotations around a fixed Global coordinate system as we did for the two-dimensional case. The next figure illustrates elemental rotations of the local coordinate system around each axis of the fixed Global coordinate system. Figure. Elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. Note that for better clarity, the axis around where the rotation occurs is shown perpendicular to this page for each elemental rotation. Rotations around the fixed coordinate systemThe rotation matrices for the elemental rotations around each axis of the fixed $\mathbf{XYZ}$ coordinate system (rotations of the local coordinate system in relation to the Global coordinate system) are shown next.Around $\mathbf{X}$ axis: $$ \mathbf{R_{Gl,\,X}} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix} $$Around $\mathbf{Y}$ axis: $$ \mathbf{R_{Gl,\,Y}} = \begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix} $$Around $\mathbf{Z}$ axis: $$ \mathbf{R_{Gl,\,Z}} = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix} $$These matrices are the rotation matrices for the case of two-dimensional coordinate systems plus the corresponding terms for the third axes of the local and Global coordinate systems, which are parallel. To understand why the terms for the third axes are 1's or 0's, for instance, remember they represent the cosine directors. The cosines between $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ for the elemental rotations around respectively the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes are all 1 because $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ are parallel ($\cos 0^o$). The cosines of the other elements are zero because the axis around where each rotation occurs is perpendicular to the other axes of the coordinate systems ($\cos 90^o$). 
Rotations around the local coordinate systemThe rotation matrices for the elemental rotations this time around each axis of the $xyz$ coordinate system (rotations of the Global coordinate system in relation to the local coordinate system), similarly to the two-dimensional case, are simply the transpose of the above matrices as shown next.Around $x$ axis: $$ \mathbf{R}_{\mathbf{lG},\,x} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix} $$Around $y$ axis: $$ \mathbf{R}_{\mathbf{lG},\,y} = \begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix} $$Around $z$ axis: $$ \mathbf{R}_{\mathbf{lG},\,z} = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix} $$Notice this is equivalent to instead of rotating the local coordinate system by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system; remember that $\cos(-\:\cdot)=\cos(\cdot)$ and $\sin(-\:\cdot)=-\sin(\cdot)$.The fact that we chose to rotate the local coordinate system by a counterclockwise (positive) angle in relation to the Global coordinate system is just a matter of convention. Sequence of elemental rotationsConsider now a sequence of elemental rotations around the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes of the fixed $\mathbf{XYZ}$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. This sequence of elemental rotations (each one of the local coordinate system with respect to the fixed Global coordinate system) is mathematically represented by a multiplication between the rotation matrices:$$ \begin{array}{l l}\mathbf{R_{Gl,\;XYZ}} & = \mathbf{R_{Z}} \mathbf{R_{Y}} \mathbf{R_{X}} \\\\ & = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix} \\\\ & =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma-\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:\cos\gamma+\sin\alpha\:\sin\gamma \;\;\; \\\cos\beta\:\sin\gamma \;&\;\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma-\sin\alpha\:\cos\gamma \;\;\; \\-\sin\beta \;&\; \sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array} $$Note the order of the matrices. We can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
#import the necessary libraries
from IPython.core.display import Math, display
import sympy as sym
cos, sin = sym.cos, sym.sin
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz in relation to XYZ:
RX = sym.Matrix([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]])
RY = sym.Matrix([[cos(b), 0, sin(b)], [0, 1, 0], [-sin(b), 0, cos(b)]])
RZ = sym.Matrix([[cos(g), -sin(g), 0], [sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz in relation to XYZ:
RXYZ = RZ*RY*RX
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ}}=') + sym.latex(RXYZ, mat_str='matrix')))
###Output
_____no_output_____
###Markdown
For instance, we can calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $\mathbf{X,Y,Z}$:
###Code
R = sym.lambdify((a, b, g), RXYZ, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R_{Gl,\,XYZ\,}}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Examining the matrix above and the corresponding previous figure, one can see they agree: the rotated $x$ axis (first column of the above matrix) has value -1 in the $\mathbf{Z}$ direction $[0,0,-1]$, the rotated $y$ axis (second column) is at the $\mathbf{Y}$ direction $[0,1,0]$, and the rotated $z$ axis (third column) is at the $\mathbf{X}$ direction $[1,0,0]$.We can also calculate the sequence of elemental rotations around the $x$, $y$, $z$ axes of the rotating $xyz$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of a second $xyz$ local coordinate system around each axis, $x$, $y$, $z$, of the rotating $xyz$ coordinate system.Likewise, this sequence of elemental rotations (each one of the local coordinate system with respect to the rotating local coordinate system) is mathematically represented by a multiplication between the rotation matrices (which are the inverse of the matrices for the rotations around $\mathbf{X,Y,Z}$ as we saw earlier):$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz} & = \mathbf{R_{z}} \mathbf{R_{y}} \mathbf{R_{x}} \\\\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix} \\\\& =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma+\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:\cos\gamma-\sin\alpha\:\sin\gamma \;\;\; \\-\cos\beta\:\sin\gamma \;&\;-\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma+\sin\alpha\:\cos\gamma \;\;\; \\\sin\beta \;&\; -\sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array} $$As before, the order of the matrices is from right to left. Once again, we can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rx = sym.Matrix([[1, 0, 0], [0, cos(a), sin(a)], [0, -sin(a), cos(a)]])
Ry = sym.Matrix([[cos(b), 0, -sin(b)], [0, 1, 0], [sin(b), 0, cos(b)]])
Rz = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz' in relation to xyz:
Rxyz = Rz*Ry*Rx
Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz}=') + sym.latex(Rxyz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
For instance, let's calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $x,y,z$:
###Code
R = sym.lambdify((a, b, g), Rxyz, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Once again, let's compare the above matrix and the corresponding previous figure to see if it makes sense. But remember that this matrix is the Global-to-local rotation matrix, $\mathbf{R}_{\mathbf{lG},\,xyz}$, where the coordinates of the local basis' versors are rows, not columns, in this matrix. With this detail in mind, one can see that the previous figure and matrix also agree: the rotated $x$ axis (first row of the above matrix) is at the $\mathbf{Z}$ direction $[0,0,1]$, the rotated $y$ axis (second row) is at the $\mathbf{-Y}$ direction $[0,-1,0]$, and the rotated $z$ axis (third row) is at the $\mathbf{X}$ direction $[1,0,0]$.In fact, this example didn't serve to distinguish versors as rows or columns because the $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrix above is symmetric! Let's look at the resultant matrix for the example above after only the first two rotations, $\mathbf{R}_{\mathbf{lG},\,xy}$, to understand this difference:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xy\,}(90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Comparing this matrix with the third plot in the figure, we see that the coordinates of versor $x$ in the Global coordinate system are $[0,1,0]$, i.e., local axis $x$ is aligned with Global axis $Y$, and this versor is indeed the first row, not first column, of the matrix above. Check the other two rows. What, then, is in the columns of this Global-to-local rotation matrix? The columns are the coordinates of the Global basis' versors in the local coordinate system! For example, the first column of the matrix above gives the coordinates of $X$, which is aligned with $z$: $[0,0,1]$. Rotations in a coordinate system are equivalent to minus rotations in the other coordinate systemRemember that we saw for the elemental rotations that it's equivalent to instead of rotating the local coordinate system, $xyz$, by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system, $\mathbf{XYZ}$, by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system. The same property applies to a sequence of rotations: rotations of $xyz$ in relation to $\mathbf{XYZ}$ by $\alpha, \beta, \gamma$ result in the same matrix as rotations of $\mathbf{XYZ}$ in relation to $xyz$ by $-\alpha, -\beta, -\gamma$: $$ \begin{array}{l l}\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) & = \mathbf{R_{Gl,\,Z}}(\gamma)\, \mathbf{R_{Gl,\,Y}}(\beta)\, \mathbf{R_{Gl,\,X}}(\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,z\,}(-\gamma)\, \mathbf{R}_{\mathbf{lG},\,y\,}(-\beta)\, \mathbf{R}_{\mathbf{lG},\,x\,}(-\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)\end{array}$$Check that by examining the $\mathbf{R_{Gl,\,XYZ}}$ and $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrices above.Let's verify this property with Sympy:
###Code
RXYZ = RZ*RY*RX
# Rotation matrix of xyz in relation to XYZ:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) =')))
display(Math(sym.latex(RXYZ, mat_str='matrix')))
# Elemental rotation matrices of XYZ in relation to xyz and negate all angles:
Rx_neg = sym.Matrix([[1, 0, 0], [0, cos(-a), -sin(-a)], [0, sin(-a), cos(-a)]]).T
Ry_neg = sym.Matrix([[cos(-b), 0, sin(-b)], [0, 1, 0], [-sin(-b), 0, cos(-b)]]).T
Rz_neg = sym.Matrix([[cos(-g), -sin(-g), 0], [sin(-g), cos(-g), 0], [0, 0, 1]]).T
# Rotation matrix of XYZ in relation to xyz:
Rxyz_neg = Rz_neg*Ry_neg*Rx_neg
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma) =')))
display(Math(sym.latex(Rxyz_neg, mat_str='matrix')))
# Check that the two matrices are equal:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) \;==\;' + \
r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)')))
RXYZ == Rxyz_neg
###Output
_____no_output_____
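###Markdown
The same equality can also be spot-checked numerically for arbitrary angles; a small sketch reusing the symbolic matrices `RXYZ` and `Rxyz` and the symbols `a, b, g` defined in the cells above:
###Code
import numpy as np
ang = (0.4, -0.8, 1.3)  # arbitrary angles in radians
RG_num = sym.lambdify((a, b, g), RXYZ, 'numpy')(*ang)
Rl_num = sym.lambdify((a, b, g), Rxyz, 'numpy')(-ang[0], -ang[1], -ang[2])
print(np.allclose(RG_num, Rl_num))  # expected: True
###Output
_____no_output_____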
###Markdown
Rotations in a coordinate system are the transpose of the inverse order of rotations in the other coordinate systemThere is another property of the rotation matrices for the different coordinate systems: the rotation matrix, for example from the Global to the local coordinate system for the $xyz$ sequence, is just the transpose of the rotation matrix for the inverse operation (from the local to the Global coordinate system) of the inverse sequence ($\mathbf{ZYX}$), and vice-versa:$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz}(\alpha,\beta,\gamma) & = \mathbf{R}_{\mathbf{lG},\,z\,} \mathbf{R}_{\mathbf{lG},\,y\,} \mathbf{R}_{\mathbf{lG},\,x} \\& = \mathbf{R_{Gl,\,Z\,}^{-1}} \mathbf{R_{Gl,\,Y\,}^{-1}} \mathbf{R_{Gl,\,X\,}^{-1}} \\& = \mathbf{R_{Gl,\,Z\,}^{T}} \mathbf{R_{Gl,\,Y\,}^{T}} \mathbf{R_{Gl,\,X\,}^{T}} \\& = (\mathbf{R_{Gl,\,X\,}} \mathbf{R_{Gl,\,Y\,}} \mathbf{R_{Gl,\,Z}})^\mathbf{T} \\& = \mathbf{R_{Gl,\,ZYX\,}^{T}}(\gamma,\beta,\alpha)\end{array}$$Here we used the properties that the inverse of a rotation matrix (which is orthonormal) is its transpose and that the transpose of a product of matrices is equal to the product of their transposes in reverse order.Let's verify this property with Sympy:
###Code
RZYX = RX*RY*RZ
Rxyz = Rz*Ry*Rx
display(Math(sym.latex(r'\mathbf{R_{Gl,\,ZYX\,}^T}=') + sym.latex(RZYX.T, mat_str='matrix')))
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(\alpha,\beta,\gamma) \,==\,' + \
r'\mathbf{R_{Gl,\,ZYX\,}^T}(\gamma,\beta,\alpha)')))
Rxyz == RZYX.T
###Output
_____no_output_____
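###Markdown
The property invoked in this derivation, that the inverse of a rotation matrix equals its transpose, can also be checked numerically; a one-line sketch reusing `RXYZ` from above with arbitrary angles:
###Code
import numpy as np
Rtmp = sym.lambdify((a, b, g), RXYZ, 'numpy')(0.3, -1.2, 2.0)  # arbitrary angles
print(np.allclose(np.linalg.inv(Rtmp), Rtmp.T))  # expected: True
###Output
_____no_output_____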
###Markdown
Sequence of rotations of a VectorWe saw in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.jupyter.org/github/demotu/BMC/blob/master/notebooks/Transformation2D.ipynb#Rotation-of-a-Vector) that the rotation matrix can also be used to rotate a vector (in fact, a point, image, solid, etc.) by a given angle around an axis of the coordinate system. Let's investigate that for the 3D case using the example earlier where a book was rotated in different orders and around the Global and local coordinate systems. Before any rotation, the point shown in that figure as a round black dot on the spine of the book has coordinates $\mathbf{P}=[0, 1, 2]$ (the book has thickness 0, width 1, and height 2). After the first sequence of rotations shown in the figure (rotated around $X$ and $Y$ by $90^o$ each time), $\mathbf{P}$ has coordinates $\mathbf{P}=[1, -2, 0]$ in the Global coordinate system. Let's verify that:
###Code
P = np.array([[0, 1, 2]]).T
RXY = RY*RX
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. -2. 0.]]
###Markdown
As expected. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, but still around the Global coordinate system.Although we are performing vector rotation, where we don't need the concept of transformation between coordinate systems, in the example above we used the local-to-Global rotation matrix, $\mathbf{R_{Gl}}$. As we saw in the notebook for the 2D transformation, when we use this matrix, it performs a counter-clockwise (positive) rotation. If we want to rotate the vector in the clockwise (negative) direction, we can use the very same rotation matrix entering a negative angle, or we can use the inverse rotation matrix, the Global-to-local rotation matrix, $\mathbf{R_{lG}}$, with a positive (negative of negative) angle, because $\mathbf{R_{Gl}}(\alpha) = \mathbf{R_{lG}}(-\alpha)$; but bear in mind that even in this latter case we are rotating around the Global coordinate system! Consider now that we want to deduce algebraically the position of the point $\mathbf{P}$ after the rotations around the local coordinate system, as shown in the second set of examples in the figure with the sequence of book rotations. The point has the same initial position, $\mathbf{P}=[0, 1, 2]$, and after the rotations around $x$ and $y$ by $90^o$ each time, what is the position of this point? It's implicit in this question that the new desired position is in the Global coordinate system, because the local coordinate system rotates with the book and the point never changes its position in the local coordinate system. So, by inspection of the figure, the new position of the point is $\mathbf{P1}=[2, 0, 1]$. Let's naively try to deduce this position by repeating the steps as before:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. 2. -0.]]
###Markdown
The wrong answer. The problem is that we defined the rotation of a vector using the local-to-Global rotation matrix. One correct solution for this problem is to continue using the multiplication of the Global-to-local rotation matrices, $\mathbf{R}_{xy} = \mathbf{R}_y\,\mathbf{R}_x$, transpose $\mathbf{R}_{xy}$ to get the local-to-Global rotation matrix, $\mathbf{R_{XY}}=\mathbf{R^T}_{xy}$, and then rotate the vector using this matrix:
###Code
Rxy = Ry*Rx
RXY = Rxy.T
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer.Another solution is to understand that when using the Global-to-local rotation matrix, counter-clockwise rotations (as performed with the book in the figure) are negative, not positive, and that when dealing with rotations with the Global-to-local rotation matrix the order of matrix multiplication is inverted, for example, it should be $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$ (the added underscore reminds us this is not the convention adopted here).
###Code
R_xy = Rx*Ry
R = sym.lambdify((a, b), R_xy, 'numpy')
R = R(-np.pi/2, -np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, around the local coordinate system.In fact, you will find elsewhere texts about rotations in 3D adopting this latter convention as the standard, i.e., they introduce the Global-to-local rotation matrix and describe a sequence of rotations algebraically as matrix multiplication in the direct order, $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$, the inverse of what we have done in this text. It's all a matter of convention, just that. The 12 different sequences of Euler anglesThe Euler angles are defined in terms of rotations around a rotating local coordinate system. As we saw for the sequence of rotations around $x, y, z$, the axes of the local rotated coordinate system are not fixed in space because after the first elemental rotation, the other two axes rotate. Other sequences of rotations could be produced without combining axes of the two different coordinate systems (Global and local) for the definition of the rotation axes. There is a total of 12 different sequences of three elemental rotations that are valid and may be used for describing the rotation of a coordinate system with respect to another coordinate system:$$ xyz \quad xzy \quad yzx \quad yxz \quad zxy \quad zyx $$$$ xyx \quad xzx \quad yzy \quad yxy \quad zxz \quad zyz $$The first six sequences (first row) are all around different axes; they are usually referred to as Cardan or Tait–Bryan angles. The other six sequences (second row) have the first and third rotations around the same axis, but keep in mind that the axis for the third rotation is not at the same place anymore because it changed its orientation after the second rotation. The sequences with repeated axes are known as proper or classic Euler angles.Which order to use is a matter of convention, but because the order affects the results, it's fundamental to follow a convention and report it. In Engineering Mechanics (including Biomechanics), the $xyz$ order is more common; in Physics the $zxz$ order is more common (but the letters chosen to refer to the axes are arbitrary, what matters is the directions they represent). In Biomechanics, the order for the Cardan angles is most often based on the angle of most interest or of most reliable measurement. Accordingly, the axis of flexion/extension is typically selected as the first axis, the axis for abduction/adduction is the second, and the axis for internal/external rotation is the last one. We will see more about this order later. The $zyx$ order is commonly used to describe the orientation of a ship or aircraft and the rotations are known as the nautical angles: yaw, pitch and roll, respectively (see next figure). Figure. The principal axes of an aircraft and the names for the rotations around these axes (image from Wikipedia). If instead of rotations around the rotating local coordinate system we perform rotations around the fixed Global coordinate system, we will have another 12 different sequences of three elemental rotations; these are called simply rotation angles.
So, in total there are 24 possible different sequences of three elemental rotations, but the 24 orders are not independent; with the 12 different sequences of Euler angles at the local coordinate system we can obtain the other 12 sequences at the Global coordinate system.The Python function `euler_rotmat.py` (code at the end of this text) determines the rotation matrix in algebraic form for any of the 24 different sequences (and sequences with only one or two axes can be input). This function also determines the rotation matrix in numeric form if a list of up to three angles is input.For instance, the rotation matrix in algebraic form for the $zxz$ order of Euler angles at the local coordinate system and the corresponding rotation matrix in numeric form after three elemental rotations by $90^o$ each are:
###Code
import sys
sys.path.insert(1, r'./../functions')
from euler_rotmat import euler_rotmat
Ra, Rn = euler_rotmat(order='zxz', frame='local', angles=[90, 90, 90])
###Output
_____no_output_____
###Markdown
Line of nodesThe second axis of rotation in the rotating coordinate system is also referred to as the nodal axis or line of nodes; this axis coincides with the intersection of two perpendicular planes, one from each of the Global (fixed) and local (rotating) coordinate systems. The figure below shows an example of rotations and the nodal axis for the $xyz$ sequence of the Cardan angles. Figure. First row: example of rotations for the $xyz$ sequence of the Cardan angles. The Global (fixed) $XYZ$ coordinate system is shown in green, the local (rotating) $xyz$ coordinate system is shown in blue. The nodal axis (N, shown in red) is defined by the intersection of the $YZ$ and $xy$ planes and all rotations can be described in relation to this nodal axis or to an axis perpendicular to it. Second row: starting from no rotation, the local coordinate system is rotated by $\alpha$ around the $x$ axis, then by $\beta$ around the rotated $y$ axis, and finally by $\gamma$ around the twice rotated $z$ axis. Note that the line of nodes coincides with the $y$ axis for the second rotation. Determination of the Euler anglesOnce a convention is adopted, the corresponding three Euler angles of rotation can be found. For example, for the $\mathbf{R}_{xyz}$ rotation matrix:
###Code
R = euler_rotmat(order='xyz', frame='local')
###Output
_____no_output_____
###Markdown
The corresponding Cardan angles for the `xyz` sequence can be given by:$$ \begin{array}{}\alpha = \arctan\left(\dfrac{\sin(\alpha)}{\cos(\alpha)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{21}}{\;\;\;\mathbf{R}_{22}}\right) \\\\\beta = \arctan\left(\dfrac{\sin(\beta)}{\cos(\beta)}\right) = \arctan\left(\dfrac{\mathbf{R}_{20}}{\sqrt{\mathbf{R}_{00}^2+\mathbf{R}_{10}^2}}\right) \\ \\\gamma = \arctan\left(\dfrac{\sin(\gamma)}{\cos(\gamma)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{10}}{\;\;\;\mathbf{R}_{00}}\right)\end{array} $$Note that we prefer to use the mathematical function `arctan` rather than simply `arcsin` because the latter cannot for example distinguish $45^o$ from $135^o$ and also for better numerical accuracy. See the text [Angular kinematics in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/AngularKinematics2D.ipynb) for more on these issues.And here is a Python function to compute the Euler angles of rotations from the Global to the local coordinate system for the $xyz$ Cardan sequence:
###Code
def euler_angles_from_rot_xyz(rot_matrix, unit='deg'):
""" Compute Euler angles from rotation matrix in the xyz sequence."""
import numpy as np
R = np.array(rot_matrix, copy=False).astype(np.float64)[:3, :3]
angles = np.zeros(3)
angles[0] = np.arctan2(-R[2, 1], R[2, 2])
angles[1] = np.arctan2( R[2, 0], np.sqrt(R[0, 0]**2 + R[1, 0]**2))
angles[2] = np.arctan2(-R[1, 0], R[0, 0])
if unit[:3].lower() == 'deg': # convert from rad to degree
angles = np.rad2deg(angles)
return angles
###Output
_____no_output_____
###Markdown
For instance, consider sequential rotations of $45^o$ around $x,y,z$. The resultant rotation matrix is:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[45, 45, 45], showA=False)
###Output
_____no_output_____
###Markdown
Let's check that by calculating back the Cardan angles from this rotation matrix using the `euler_angles_from_rot_xyz()` function:
###Code
euler_angles_from_rot_xyz(Rn, unit='deg')
###Output
_____no_output_____
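###Markdown
As a further check (a sketch; the angle values below are arbitrary), the same round trip also works for angles that are not all equal:
###Code
# build the rotation matrix for arbitrary Cardan angles and recover them back
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[10, -35, 70],
                      showA=False, showN=False)
euler_angles_from_rot_xyz(Rn, unit='deg')  # expected: [ 10. -35.  70.]
###Output
_____no_output_____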
###Markdown
We could implement a function to calculate the Euler angles for any of the 12 sequences (in fact, plus another 12 sequences if we consider all the rotations from and to the two coordinate systems), but this is tedious. There is a smarter solution using the concept of [quaternion](http://en.wikipedia.org/wiki/Quaternion), but we won't see that now. Let's see a problem with using Euler angles known as gimbal lock. Gimbal lock[Gimbal lock](http://en.wikipedia.org/wiki/Gimbal_lock) is the loss of one degree of freedom in a three-dimensional coordinate system that occurs when an axis of rotation is placed parallel to a previous axis of rotation, so that two of the three rotations will be around the same direction given a certain convention of the Euler angles. This "locks" the system into rotations in a degenerate two-dimensional space. The system is not really locked in the sense that it can't be moved or reach the other degree of freedom, but it will need an extra rotation for that. For instance, let's look at the $zxz$ sequence of rotations by the angles $\alpha, \beta, \gamma$:$$ \begin{array}{l l}\mathbf{R}_{zxz} & = \mathbf{R_{z}} \mathbf{R_{x}} \mathbf{R_{z}} \\ \\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\beta & \sin\beta \\0 & -\sin\beta & \cos\beta\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array} $$Which results in:
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rz = sym.Matrix([[cos(a), sin(a), 0], [-sin(a), cos(a), 0], [0, 0, 1]])
Rx = sym.Matrix([[1, 0, 0], [0, cos(b), sin(b)], [0, -sin(b), cos(b)]])
Rz2 = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix for the zxz sequence:
Rzxz = Rz2*Rx*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}=') + sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Let's examine what happens with this rotation matrix when the rotation around the second axis ($x$) by $\beta$ is zero:$$ \begin{array}{l l}\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma) = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array} $$The second matrix is the identity matrix and has no effect on the product of the matrices, which will be:
###Code
Rzxz = Rz2*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Which simplifies to:
###Code
Rzxz = sym.simplify(Rzxz)
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Despite different values of $\alpha$ and $\gamma$, the result is a single rotation around the $z$ axis given by the sum $\alpha+\gamma$. In this case, of the three degrees of freedom one was lost (the other degree of freedom was set by $\beta=0$). For movement analysis, this means for example that one angle will be undetermined because all we know is the sum of the two angles obtained from the rotation matrix. We can set the unknown angle to zero but this is arbitrary.In fact, we already dealt with another example of gimbal lock when we looked at the $xyz$ sequence with rotations by $90^o$. See the figure representing these rotations again and note that the first and third rotations were around the same axis because the second rotation was by $90^o$. Let's do the matrix multiplication replacing only the second angle by $90^o$ (and let's use the `euler_rotmat.py` function):
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[None, 90., None], showA=False)
###Output
_____no_output_____
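###Markdown
A numeric illustration of the lost degree of freedom (a minimal sketch; the function names below are ours and follow the local, Global-to-local, convention of this text): with $\beta=90^o$ in the $xyz$ sequence, any two pairs of $\alpha$ and $\gamma$ with the same sum produce exactly the same rotation matrix, so the individual angles cannot be recovered from it:
###Code
import numpy as np

def rx_l(a):  # elemental Global-to-local (local frame) rotation matrices
    return np.array([[1, 0, 0], [0, np.cos(a), np.sin(a)], [0, -np.sin(a), np.cos(a)]])
def ry_l(b):
    return np.array([[np.cos(b), 0, -np.sin(b)], [0, 1, 0], [np.sin(b), 0, np.cos(b)]])
def rz_l(g):
    return np.array([[np.cos(g), np.sin(g), 0], [-np.sin(g), np.cos(g), 0], [0, 0, 1]])

def rxyz_l(a, b, g):  # xyz sequence around the rotating local axes
    return rz_l(g) @ ry_l(b) @ rx_l(a)

R1 = rxyz_l(np.deg2rad(30), np.pi/2, np.deg2rad(50))  # alpha + gamma = 80 degrees
R2 = rxyz_l(np.deg2rad(10), np.pi/2, np.deg2rad(70))  # different angles, same sum
print(np.allclose(R1, R2))  # expected: True; only the sum alpha + gamma is determined
###Output
_____no_output_____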
###Markdown
Once again, one degree of freedom was lost and we will not be able to uniquely determine the three angles for the given rotation matrix and sequence.Possible solutions to avoid the gimbal lock are: choose a different sequence; do not rotate the system by the angle that puts the system in gimbal lock (in the examples above, avoid $\beta=90^o$); or add an extra fourth parameter in the description of the rotation angles. But if we have a physical system where we measure or specify exactly three Euler angles in a fixed sequence to describe or control it, and we can't avoid the system assuming certain angles, then we might have to say "Houston, we have a problem". A famous situation where such a problem occurred was during the Apollo 13 mission. This is an actual conversation between crew and mission control during the Apollo 13 mission (Corke, 2011):>`Mission clock: 02 08 12 47` **Flight**: *Go, Guidance.* **Guido**: *He’s getting close to gimbal lock there.* **Flight**: *Roger. CapCom, recommend he bring up C3, C4, B3, B4, C1 and C2 thrusters, and advise he’s getting close to gimbal lock.* **CapCom**: *Roger.* *Of note, it was not a gimbal lock that caused the accident with the Apollo 13 mission; the problem was an oxygen tank explosion.* Determination of the rotation matrixA typical way to determine the rotation matrix for a rigid body in biomechanics is to use motion analysis to measure the position of at least three non-collinear markers placed on the rigid body, and then calculate a basis with these positions, analogous to what we have described in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/Transformation2D.ipynb). BasisIf we have the position of three markers: **m1**, **m2**, **m3**, a basis (formed by three orthogonal versors) can be found as: - First axis, **v1**, the vector **m2-m1**; - Second axis, **v2**, the cross product between the vectors **v1** and **m3-m1**; - Third axis, **v3**, the cross product between the vectors **v1** and **v2**. Then, each of these vectors is normalized, resulting in three orthogonal versors. For example, given the positions m1 = [1,0,0], m2 = [0,1,0], m3 = [0,0,1], a basis can be found:
###Code
m1 = np.array([1, 0, 0])
m2 = np.array([0, 1, 0])
m3 = np.array([0, 0, 1])
v1 = m2 - m1
v2 = np.cross(v1, m3 - m1)
v3 = np.cross(v1, v2)
print('Versors:')
v1 = v1/np.linalg.norm(v1)
print('v1 =', v1)
v2 = v2/np.linalg.norm(v2)
print('v2 =', v2)
v3 = v3/np.linalg.norm(v3)
print('v3 =', v3)
print('\nNorm of each versor:\n',
np.linalg.norm(np.cross(v1, v2)),
np.linalg.norm(np.cross(v1, v3)),
np.linalg.norm(np.cross(v2, v3)))
###Output
Versors:
v1 = [-0.7071 0.7071 0. ]
v2 = [ 0.5774 0.5774 0.5774]
v3 = [ 0.4082 0.4082 -0.8165]
Norm of each versor:
1.0 1.0 1.0
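###Markdown
A quick sanity check (a sketch) that the computed versors indeed form a right-handed orthonormal basis:
###Code
# right-handed: v1 x v2 should equal v3; orthonormal: the matrix of versors (as rows)
# times its transpose should be the identity matrix
print(np.allclose(np.cross(v1, v2), v3))
print(np.allclose(np.array([v1, v2, v3]) @ np.array([v1, v2, v3]).T, np.eye(3)))
###Output
_____no_output_____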
###Markdown
Remember from the text [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/Transformation2D.ipynb) that the versors of this basis are the columns of the $\mathbf{R_{Gl}}$ and the rows of the $\mathbf{R_{lG}}$ rotation matrices, for instance:
###Code
RlG = np.array([v1, v2, v3])
print('Rotation matrix from Global to local coordinate system:\n', RlG)
###Output
Rotation matrix from Global to local coordinate system:
[[-0.7071 0.7071 0. ]
[ 0.5774 0.5774 0.5774]
[ 0.4082 0.4082 -0.8165]]
###Markdown
And the corresponding angles of rotation using the $xyz$ sequence are:
###Code
euler_angles_from_rot_xyz(RlG)
###Output
_____no_output_____
###Markdown
These angles don't mean anything now because they are angles of the axes of the arbitrary basis we computed. In biomechanics, if we want an anatomical interpretation of the coordinate system orientation, we define the versors of the basis oriented with anatomical axes (e.g., for the shoulder, one versor would be aligned with the long axis of the upper arm). We will see how to perform this computation later. Now we will combine translation and rotation in a single transformation. Translation and RotationConsider the case where the local coordinate system is translated and rotated in relation to the Global coordinate system as illustrated in the next figure. Figure. A point in three-dimensional space represented in two coordinate systems, with one system translated and rotated. The position of point $\mathbf{P}$ originally described in the local coordinate system, but now described in the Global coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} $$This means that we first *disrotate* the local coordinate system and then correct for the translation between the two coordinate systems. Note that we can't invert this order: the point position is expressed in the local coordinate system and we can't add this vector to another vector expressed in the Global coordinate system, first we have to convert the vectors to the same coordinate system.If now we want to find the position of a point at the local coordinate system given its position in the Global coordinate system, the rotation matrix and the translation vector, we have to invert the expression above:$$ \begin{array}{l l}\mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{R_{Gl}^{-1}}\cdot\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l}\right) \implies \\\\\mathbf{R_{Gl}^{-1}}\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\mathbf{L_G} + \mathbf{R_{Gl}^{-1}}\mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{P_l} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{P_G}-\mathbf{L_G}\right) = \mathbf{R_{Gl}^T}\left(\mathbf{P_G}-\mathbf{L_G}\right) \;\;\;\;\; \text{or} \;\;\;\;\; \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) \end{array} $$The expression above indicates that to perform the inverse operation, to go from the Global to the local coordinate system, we first translate and then rotate the coordinate system. Transformation matrixIt is possible to combine the translation and rotation operations in only one matrix, called the transformation matrix:$$ \begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} =\begin{bmatrix}. & . & . & \mathbf{L_{X}} \\. & \mathbf{R_{Gl}} & . & \mathbf{L_{Y}} \\. & . & . 
& \mathbf{L_{Z}} \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \\1\end{bmatrix} $$Or simply:$$ \mathbf{P_G} = \mathbf{T_{Gl}}\mathbf{P_l} $$Remember that in general the transformation matrix is not orthonormal, i.e., its inverse is not equal to its transpose.The inverse operation, to express the position at the local coordinate system in terms of the Global reference system, is:$$ \mathbf{P_l} = \mathbf{T_{Gl}^{-1}}\mathbf{P_G} $$And in matrix form:$$ \begin{bmatrix}\mathbf{P_x} \\\mathbf{P_y} \\\mathbf{P_z} \\1\end{bmatrix} =\begin{bmatrix}\cdot & \cdot & \cdot & \cdot \\\cdot & \mathbf{R^{-1}_{Gl}} & \cdot & -\mathbf{R^{-1}_{Gl}}\:\mathbf{L_G} \\\cdot & \cdot & \cdot & \cdot \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} $$ Example with actual motion analysis data *The data for this example is taken from page 183 of David Winter's book.* Consider the following marker positions placed on a leg (described in the laboratory coordinate system with coordinates $x, y, z$ in cm, the $x$ axis points forward and the $y$ axes points upward): lateral malleolus (**lm** = [2.92, 10.10, 18.85]), medial malleolus (**mm** = [2.71, 10.22, 26.52]), fibular head (**fh** = [5.05, 41.90, 15.41]), and medial condyle (**mc** = [8.29, 41.88, 26.52]). Define the ankle joint center as the centroid between the **lm** and **mm** markers and the knee joint center as the centroid between the **fh** and **mc** markers. An anatomical coordinate system for the leg can be defined as: the quasi-vertical axis ($y$) passes through the ankle and knee joint centers; a temporary medio-lateral axis ($z$) passes through the two markers on the malleolus, an anterior-posterior as the cross product between the two former calculated orthogonal axes, and the origin at the ankle joint center. a) Calculate the anatomical coordinate system for the leg as described above. b) Calculate the rotation matrix and the translation vector for the transformation from the anatomical to the laboratory coordinate system. c) Calculate the position of each marker and of each joint center at the anatomical coordinate system. d) Calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent).
###Code
# calculation of the joint centers
mm = np.array([2.71, 10.22, 26.52])
lm = np.array([2.92, 10.10, 18.85])
fh = np.array([5.05, 41.90, 15.41])
mc = np.array([8.29, 41.88, 26.52])
ajc = (mm + lm)/2
kjc = (fh + mc)/2
print('Position of the ankle joint center:', ajc)
print('Position of the knee joint center:', kjc)
# calculation of the anatomical coordinate system axes (basis)
y = kjc - ajc
x = np.cross(y, mm - lm)
z = np.cross(x, y)
print('Versors:')
x = x/np.linalg.norm(x)
y = y/np.linalg.norm(y)
z = z/np.linalg.norm(z)
print('x =', x)
print('y =', y)
print('z =', z)
Oleg = ajc
print('\nOrigin =', Oleg)
# Rotation matrices
RGl = np.array([x, y , z]).T
print('Rotation matrix from the anatomical to the laboratory coordinate system:\n', RGl)
RlG = RGl.T
print('\nRotation matrix from the laboratory to the anatomical coordinate system:\n', RlG)
# Translational vector
OG = np.array([0, 0, 0]) # Laboratory coordinate system origin
LG = Oleg - OG
print('Translational vector from the anatomical to the laboratory coordinate system:\n', LG)
###Output
Translational vector from the anatomical to the laboratory coordinate system:
[ 2.815 10.16 22.685]
###Markdown
To get the coordinates from the laboratory (global) coordinate system to the anatomical (local) coordinate system:$$ \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) $$
###Code
# position of each marker and of each joint center at the anatomical coordinate system
mml = np.dot(RlG, (mm - LG)) # equivalent to the algebraic expression RlG*(mm - LG).T
lml = np.dot(RlG, (lm - LG))
fhl = np.dot(RlG, (fh - LG))
mcl = np.dot(RlG, (mc - LG))
ajcl = np.dot(RlG, (ajc - LG))
kjcl = np.dot(RlG, (kjc - LG))
print('Coordinates of mm in the anatomical system:\n', mml)
print('Coordinates of lm in the anatomical system:\n', lml)
print('Coordinates of fh in the anatomical system:\n', fhl)
print('Coordinates of mc in the anatomical system:\n', mcl)
print('Coordinates of kjc in the anatomical system:\n', kjcl)
print('Coordinates of ajc in the anatomical system (origin):\n', ajcl)
###Output
Coordinates of mm in the anatomical system:
[-0. -0.1592 3.8336]
Coordinates of lm in the anatomical system:
[-0. 0.1592 -3.8336]
Coordinates of fh in the anatomical system:
[ -1.7703 32.1229 -5.5078]
Coordinates of mc in the anatomical system:
[ 1.7703 31.8963 5.5078]
Coordinates of kjc in the anatomical system:
[ 0. 32.0096 0. ]
Coordinates of ajc in the anatomical system (origin):
[ 0. 0. 0.]
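###Markdown
The translation and rotation above can also be packed into the 4x4 transformation matrix described earlier; a minimal sketch using the `RGl` and `LG` already computed (the variable name `TGl` is ours):
###Code
# homogeneous transformation matrix from the anatomical (local) to the laboratory (Global) system
TGl = np.eye(4)
TGl[:3, :3] = RGl
TGl[:3, 3] = LG
# map the medial malleolus from the anatomical system back to the laboratory system:
print(TGl @ np.hstack((mml, 1)))  # expected: [ 2.71  10.22  26.52  1. ]
###Output
_____no_output_____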
###Markdown
Problems1. For the example about how the order of rotations of a rigid body affects the orientation shown in a figure above, deduce the rotation matrices for each of the 4 cases shown in the figure. For the first two cases, deduce the rotation matrices from the global to the local coordinate system and for the other two examples, deduce the rotation matrices from the local to the global coordinate system. 2. Consider the data from problem 7 in the notebook [Frame of reference](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ReferenceFrame.ipynb) where the following anatomical landmark positions are given (units in meters): RASIS=[0.5,0.8,0.4], LASIS=[0.55,0.78,0.1], RPSIS=[0.3,0.85,0.2], and LPSIS=[0.29,0.78,0.3]. Deduce the rotation matrices for the global to anatomical coordinate system and for the anatomical to global coordinate system. 3. For the data from the last example, calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent). References- Corke P (2011) [Robotics, Vision and Control: Fundamental Algorithms in MATLAB](http://www.petercorke.com/RVC/). Springer-Verlag Berlin. - Robertson G, Caldwell G, Hamill J, Kamen G (2013) [Research Methods in Biomechanics](http://books.google.com.br/books?id=gRn8AAAAQBAJ). 2nd Edition. Human Kinetics. - [Maths - Euler Angles](http://www.euclideanspace.com/maths/geometry/rotations/euler/). - Murray RM, Li Z, Sastry SS (1994) [A Mathematical Introduction to Robotic Manipulation](http://www.cds.caltech.edu/~murray/mlswiki/index.php/Main_Page). Boca Raton, CRC Press. - Ruina A, Rudra P (2013) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press. - Siciliano B, Sciavicco L, Villani L, Oriolo G (2009) [Robotics - Modelling, Planning and Control](http://books.google.com.br/books/about/Robotics.html?hl=pt-BR&id=jPCAFmE-logC). Springer-Verlag London.- Winter DA (2009) [Biomechanics and motor control of human movement](http://books.google.com.br/books?id=_bFHL08IWfwC). 4 ed. Hoboken, USA: Wiley. - Zatsiorsky VM (1997) [Kinematics of Human Motion](http://books.google.com.br/books/about/Kinematics_of_Human_Motion.html?id=Pql_xXdbrMcC&redir_esc=y). Champaign, Human Kinetics. Function `euler_rotmatrix.py`
###Code
# %load ./../functions/euler_rotmat.py
#!/usr/bin/env python
"""Euler rotation matrix given sequence, frame, and angles."""
from __future__ import division, print_function
__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'
__version__ = 'euler_rotmat.py v.1 2014/03/10'
def euler_rotmat(order='xyz', frame='local', angles=None, unit='deg',
str_symbols=None, showA=True, showN=True):
"""Euler rotation matrix given sequence, frame, and angles.
This function calculates the algebraic rotation matrix (3x3) for a given
sequence ('order' argument) of up to three elemental rotations of a given
coordinate system ('frame' argument) around another coordinate system, the
Euler (or Eulerian) angles [1]_.
This function also calculates the numerical values of the rotation matrix
when numerical values for the angles are inputed for each rotation axis.
Use None as value if the rotation angle for the particular axis is unknown.
The symbols for the angles are: alpha, beta, and gamma for the first,
second, and third rotations, respectively.
The matrix product is calulated from right to left and in the specified
sequence for the Euler angles. The first letter will be the first rotation.
The function will print and return the algebraic rotation matrix and the
numerical rotation matrix if angles were inputed.
Parameters
----------
order : string, optional (default = 'xyz')
Sequence for the Euler angles, any combination of the letters
x, y, and z with 1 to 3 letters is accepted to denote the
elemental rotations. The first letter will be the first rotation.
frame : string, optional (default = 'local')
Coordinate system for which the rotations are calculated.
Valid values are 'local' or 'global'.
angles : list, array, or bool, optional (default = None)
Numeric values of the rotation angles ordered as the 'order'
parameter. Enter None for a rotation whith unknown value.
unit : str, optional (default = 'deg')
Unit of the input angles.
str_symbols : list of strings, optional (default = None)
New symbols for the angles, for instance, ['theta', 'phi', 'psi']
showA : bool, optional (default = True)
True (1) displays the Algebraic rotation matrix in rich format.
False (0) to not display.
showN : bool, optional (default = True)
True (1) displays the Numeric rotation matrix in rich format.
False (0) to not display.
Returns
-------
R : Matrix Sympy object
Rotation matrix (3x3) in algebraic format.
Rn : Numpy array or Matrix Sympy object (only if angles are inputed)
Numeric rotation matrix (if values for all angles were inputed) or
a algebraic matrix with some of the algebraic angles substituted
by the corresponding inputed numeric values.
Notes
-----
This code uses Sympy, the Python library for symbolic mathematics, to
calculate the algebraic rotation matrix and shows this matrix in latex form
possibly for using with the IPython Notebook, see [1]_.
References
----------
.. [1] http://nbviewer.ipython.org/github/duartexyz/BMC/blob/master/Transformation3D.ipynb
Examples
--------
>>> # import function
>>> from euler_rotmat import euler_rotmat
>>> # Default options: xyz sequence, local frame and show matrix
>>> R = euler_rotmat()
>>> # XYZ sequence (around global (fixed) coordinate system)
>>> R = euler_rotmat(frame='global')
>>> # Enter numeric values for all angles and show both matrices
>>> R, Rn = euler_rotmat(angles=[90, 90, 90])
>>> # show what is returned
>>> euler_rotmat(angles=[90, 90, 90])
>>> # show only the rotation matrix for the elemental rotation at x axis
>>> R = euler_rotmat(order='x')
>>> # zxz sequence and numeric value for only one angle
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, 0, None])
>>> # input values in radians:
>>> import numpy as np
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, np.pi, None], unit='rad')
>>> # shows only the numeric matrix
>>> R, Rn = euler_rotmat(order='zxz', angles=[90, 0, None], showA='False')
>>> # Change the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['theta', 'phi', 'psi'])
>>> # Negativate the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['-theta', '-phi', '-psi'])
>>> # all algebraic matrices for all possible sequences for the local frame
>>> s=['xyz','xzy','yzx','yxz','zxy','zyx','xyx','xzx','yzy','yxy','zxz','zyz']
>>> for seq in s: R = euler_rotmat(order=seq)
>>> # all algebraic matrices for all possible sequences for the global frame
>>> for seq in s: R = euler_rotmat(order=seq, frame='global')
"""
import numpy as np
import sympy as sym
try:
from IPython.core.display import Math, display
ipython = True
except:
ipython = False
angles = np.asarray(np.atleast_1d(angles), dtype=np.float64)
if ~np.isnan(angles).all():
if len(order) != angles.size:
raise ValueError("Parameters 'order' and 'angles' (when " +
"different from None) must have the same size.")
x, y, z = sym.symbols('x, y, z')
sig = [1, 1, 1]
if str_symbols is None:
a, b, g = sym.symbols('alpha, beta, gamma')
else:
s = str_symbols
if s[0][0] == '-': s[0] = s[0][1:]; sig[0] = -1
if s[1][0] == '-': s[1] = s[1][1:]; sig[1] = -1
if s[2][0] == '-': s[2] = s[2][1:]; sig[2] = -1
a, b, g = sym.symbols(s)
var = {'x': x, 'y': y, 'z': z, 0: a, 1: b, 2: g}
# Elemental rotation matrices for xyz (local)
cos, sin = sym.cos, sym.sin
Rx = sym.Matrix([[1, 0, 0], [0, cos(x), sin(x)], [0, -sin(x), cos(x)]])
Ry = sym.Matrix([[cos(y), 0, -sin(y)], [0, 1, 0], [sin(y), 0, cos(y)]])
Rz = sym.Matrix([[cos(z), sin(z), 0], [-sin(z), cos(z), 0], [0, 0, 1]])
if frame.lower() == 'global':
Rs = {'x': Rx.T, 'y': Ry.T, 'z': Rz.T}
order = order.upper()
else:
Rs = {'x': Rx, 'y': Ry, 'z': Rz}
order = order.lower()
R = Rn = sym.Matrix(sym.Identity(3))
str1 = r'\mathbf{R}_{%s}( ' %frame # last space needed for order=''
#str2 = [r'\%s'%var[0], r'\%s'%var[1], r'\%s'%var[2]]
str2 = [1, 1, 1]
for i in range(len(order)):
Ri = Rs[order[i].lower()].subs(var[order[i].lower()], sig[i] * var[i])
R = Ri * R
if sig[i] > 0:
str2[i] = '%s:%s' %(order[i], sym.latex(var[i]))
else:
str2[i] = '%s:-%s' %(order[i], sym.latex(var[i]))
str1 = str1 + str2[i] + ','
if ~np.isnan(angles).all() and ~np.isnan(angles[i]):
if unit[:3].lower() == 'deg':
angles[i] = np.deg2rad(angles[i])
Rn = Ri.subs(var[i], angles[i]) * Rn
#Rn = sym.lambdify(var[i], Ri, 'numpy')(angles[i]) * Rn
str2[i] = str2[i] + '=%.0f^o' %np.around(np.rad2deg(angles[i]), 0)
else:
Rn = Ri * Rn
Rn = sym.simplify(Rn) # for trigonometric relations
try:
# nsimplify only works if there are symbols
Rn2 = sym.latex(sym.nsimplify(Rn, tolerance=1e-8).n(chop=True, prec=4))
except:
Rn2 = sym.latex(Rn.n(chop=True, prec=4))
# there are no symbols, pass it as Numpy array
Rn = np.asarray(Rn)
if showA and ipython:
display(Math(str1[:-1] + ') =' + sym.latex(R, mat_str='matrix')))
if showN and ~np.isnan(angles).all() and ipython:
str2 = ',\;'.join(str2[:angles.size])
display(Math(r'\mathbf{R}_{%s}(%s)=%s' %(frame, str2, Rn2)))
if np.isnan(angles).all():
return R
else:
return R, Rn
###Output
_____no_output_____
###Markdown
Rigid-body transformations in three-dimensions> Marcos Duarte, Renato Naville Watanabe > Laboratory of Biomechanics and Motor Control ([http://pesquisa.ufabc.edu.br/bmclab](http://pesquisa.ufabc.edu.br/bmclab)) > Federal University of ABC, Brazil The kinematics of a rigid body is completely described by its pose, i.e., its position and orientation in space (and the corresponding changes, translation and rotation). In a three-dimensional space, at least three coordinates and three angles are necessary to describe the pose of the rigid body, totalizing six degrees of freedom for a rigid body.In motion analysis, to describe a translation and rotation of a rigid body with respect to a coordinate system, typically we attach another coordinate system to the rigid body and determine a transformation between these two coordinate systems.A transformation is any function mapping a set to another set. For the description of the kinematics of rigid bodies, we are interested only in what is called rigid or Euclidean transformations (denoted as SE(3) for the three-dimensional space) because they preserve the distance between every pair of points of the body (which is considered rigid by definition). Translations and rotations are examples of rigid transformations (a reflection is also an example of rigid transformation but this changes the right-hand axis convention to a left hand, which usually is not of interest). In turn, rigid transformations are examples of [affine transformations](https://en.wikipedia.org/wiki/Affine_transformation). Examples of other affine transformations are shear and scaling transformations (which preserves angles but not lengths). We will follow the same rationale as in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb) and we will skip the fundamental concepts already covered there. So, you if haven't done yet, you should read that notebook before continuing here. TranslationA pure three-dimensional translation of a rigid body (or a coordinate system attached to it) in relation to other rigid body (with other coordinate system) is illustrated in the figure below. Figure. A point in three-dimensional space represented in two coordinate systems, with one coordinate system translated. The position of point $\mathbf{P}$ originally described in the $xyz$ (local) coordinate system but now described in the $\mathbf{XYZ}$ (Global) coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{P_l} $$Or in terms of its components:\begin{equation}\begin{array}{}\mathbf{P_X} =& \mathbf{L_X} + \mathbf{P}_x \\\mathbf{P_Y} =& \mathbf{L_Y} + \mathbf{P}_y \\\mathbf{P_Z} =& \mathbf{L_Z} + \mathbf{P}_z \end{array}\end{equation}And in matrix form:\begin{equation}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \end{bmatrix} =\begin{bmatrix}\mathbf{L_X} \\\mathbf{L_Y} \\\mathbf{L_Z} \end{bmatrix} +\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \end{bmatrix}\end{equation}From classical mechanics, this is an example of [Galilean transformation](http://en.wikipedia.org/wiki/Galilean_transformation). Let's use Python to compute some numeric examples:
###Code
# Import the necessary libraries
import numpy as np
# suppress scientific notation for small numbers:
np.set_printoptions(precision=4, suppress=True)
###Output
_____no_output_____
###Markdown
For example, if the local coordinate system is translated by $\mathbf{L_G}=[1, 2, 3]$ in relation to the Global coordinate system, a point with coordinates $\mathbf{P_l}=[4, 5, 6]$ at the local coordinate system will have the position $\mathbf{P_G}=[5, 7, 9]$ at the Global coordinate system:
###Code
LG = np.array([1, 2, 3]) # Numpy array
Pl = np.array([4, 5, 6])
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
This operation also works if we have more than one point (NumPy broadcasts the translation vector over each row of the array of points):
###Code
Pl = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) # 2D array with 3 rows and 3 columns (one point per row)
PG = LG + Pl
PG
###Output
_____no_output_____
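###Markdown
For a pure translation, the inverse mapping (from the Global back to the local coordinate system) is simply the subtraction of the translation vector; a one-line sketch using the arrays above:
###Code
PG - LG  # recovers the coordinates of the points in the local coordinate system
###Output
_____no_output_____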
###Markdown
RotationA pure three-dimensional rotation of a $xyz$ (local) coordinate system in relation to other $\mathbf{XYZ}$ (Global) coordinate system and the position of a point in these two coordinate systems are illustrated in the next figure (remember that this is equivalent to describing a rotation between two rigid bodies). A point in three-dimensional space represented in two coordinate systems, with one system rotated. In analogy to the rotation in two dimensions, we can calculate the rotation matrix that describes the rotation of the $xyz$ (local) coordinate system in relation to the $\mathbf{XYZ}$ (Global) coordinate system using the direction cosines between the axes of the two coordinate systems:\begin{equation}\mathbf{R_{Gl}} = \begin{bmatrix}\cos\mathbf{X}x & \cos\mathbf{X}y & \cos\mathbf{X}z \\\cos\mathbf{Y}x & \cos\mathbf{Y}y & \cos\mathbf{Y}z \\\cos\mathbf{Z}x & \cos\mathbf{Z}y & \cos\mathbf{Z}z\end{bmatrix}\end{equation}Note however that for rotations around more than one axis, these angles will not lie in the main planes ($\mathbf{XY, YZ, ZX}$) of the $\mathbf{XYZ}$ coordinate system, as illustrated in the figure below for the direction angles of the $y$ axis only. Thus, the determination of these angles by simple inspection, as we have done for the two-dimensional case, would not be simple. Figure. Definition of direction angles for the $y$ axis of the local coordinate system in relation to the $\mathbf{XYZ}$ Global coordinate system.Note that the nine angles shown in the matrix above for the direction cosines are obviously redundant since only three angles are necessary to describe the orientation of a rigid body in the three-dimensional space. An important characteristic of angles in the three-dimensional space is that angles cannot be treated as vectors: the result of a sequence of rotations of a rigid body around different axes depends on the order of the rotations, as illustrated in the next figure. Figure. The result of a sequence of rotations around different axes of a coordinate system depends on the order of the rotations. In the first example (first row), the rotations are around a Global (fixed) coordinate system. In the second example (second row), the rotations are around a local (rotating) coordinate system.Let's focus now on how to understand rotations in the three-dimensional space, looking at the rotations between coordinate systems (or between rigid bodies). Later we will apply what we have learned to describe the position of a point in these different coordinate systems. Euler anglesThere are different ways to describe a three-dimensional rotation of a rigid body (or of a coordinate system). The most straightforward solution would probably be to use a [spherical coordinate system](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ReferenceFrame.ipynbSpherical-coordinate-system), but spherical coordinates would be difficult to give an anatomical or clinical interpretation. A solution that has been often employed in biomechanics to handle rotations in the three-dimensional space is to use Euler angles. Under certain conditions, Euler angles can have an anatomical interpretation, but this representation also has some caveats. 
Let's see the Euler angles now.[Leonhard Euler](https://en.wikipedia.org/wiki/Leonhard_Euler) in the XVIII century showed that two three-dimensional coordinate systems with a common origin can be related by a sequence of up to three elemental rotations about the axes of the local coordinate system, where no two successive rotations may be about the same axis, which now are known as [Euler (or Eulerian) angles](http://en.wikipedia.org/wiki/Euler_angles). Elemental rotationsFirst, let's see rotations around a fixed Global coordinate system as we did for the two-dimensional case. The next figure illustrates elemental rotations of the local coordinate system around each axis of the fixed Global coordinate system. Figure. Elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. Note that for better clarity, the axis around where the rotation occurs is shown perpendicular to this page for each elemental rotation. Rotations around the fixed coordinate systemThe rotation matrices for the elemental rotations around each axis of the fixed $\mathbf{XYZ}$ coordinate system (rotations of the local coordinate system in relation to the Global coordinate system) are shown next.Around $\mathbf{X}$ axis: \begin{equation}\mathbf{R_{Gl,\,X}} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix}\end{equation}Around $\mathbf{Y}$ axis: \begin{equation}\mathbf{R_{Gl,\,Y}} = \begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix}\end{equation}Around $\mathbf{Z}$ axis: \begin{equation}\mathbf{R_{Gl,\,Z}} = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\end{equation}These matrices are the rotation matrices for the case of two-dimensional coordinate systems plus the corresponding terms for the third axes of the local and Global coordinate systems, which are parallel. To understand why the terms for the third axes are 1's or 0's, for instance, remember they represent the cosine directors. The cosines between $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ for the elemental rotations around respectively the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes are all 1 because $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ are parallel ($\cos 0^o$). The cosines of the other elements are zero because the axis around where each rotation occurs is perpendicular to the other axes of the coordinate systems ($\cos 90^o$). 
Rotations around the local coordinate systemThe rotation matrices for the elemental rotations this time around each axis of the $xyz$ coordinate system (rotations of the Global coordinate system in relation to the local coordinate system), similarly to the two-dimensional case, are simply the transpose of the above matrices as shown next.Around $x$ axis: \begin{equation}\mathbf{R}_{\mathbf{lG},\,x} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix}\end{equation}Around $y$ axis: \begin{equation}\mathbf{R}_{\mathbf{lG},\,y} = \begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix}\end{equation}Around $z$ axis: \begin{equation}\mathbf{R}_{\mathbf{lG},\,z} = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\end{equation}Notice this is equivalent to instead of rotating the local coordinate system by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system; remember that $\cos(-\:\cdot)=\cos(\cdot)$ and $\sin(-\:\cdot)=-\sin(\cdot)$. Sequence of elemental rotationsConsider now a sequence of elemental rotations around the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes of the fixed $\mathbf{XYZ}$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. This sequence of elemental rotations (each one of the local coordinate system with respect to the fixed Global coordinate system) is mathematically represented by a multiplication between the rotation matrices:\begin{equation}\begin{array}{l l}\mathbf{R_{Gl,\;XYZ}} & = \mathbf{R_{Z}} \mathbf{R_{Y}} \mathbf{R_{X}} \\\\ & = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix} \\\\ & =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma-\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:cos\gamma+\sin\alpha\:\sin\gamma \;\;\; \\\cos\beta\:\sin\gamma \;&\;\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma-\sin\alpha\:\cos\gamma \;\;\; \\-\sin\beta \;&\; \sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array}\end{equation}Note the order of the matrices. We can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
#import the necessary libraries
from IPython.core.display import Math, display
import sympy as sym
cos, sin = sym.cos, sym.sin
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz in relation to XYZ:
RX = sym.Matrix([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]])
RY = sym.Matrix([[cos(b), 0, sin(b)], [0, 1, 0], [-sin(b), 0, cos(b)]])
RZ = sym.Matrix([[cos(g), -sin(g), 0], [sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz in relation to XYZ:
RXYZ = RZ*RY*RX
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ}}=') + sym.latex(RXYZ, mat_str='matrix')))
###Output
_____no_output_____
###Markdown
For instance, we can calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $\mathbf{X,Y,Z}$:
###Code
R = sym.lambdify((a, b, g), RXYZ, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R_{Gl,\,XYZ\,}}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Below you can test any sequence of rotations around the global coordinates. Just change the matrix R and the angles of the variables $\alpha$, $\beta$ and $\gamma$. The example below shows the rotation around the global basis, in the sequence x,y,z, with the angles $\alpha=\pi/3$ rad, $\beta=\pi/4$ rad and $\gamma=\pi/6$ rad.
###Code
import sys
sys.path.insert(1, r'./../functions') # add to pythonpath
%matplotlib notebook
from CCSbasis import CCSbasis
R = RZ*RY*RX
R = sym.lambdify((a, b, g), R, 'numpy')
alpha = np.pi/3
beta = np.pi/4
gamma = np.pi/6
R = R(alpha, beta, gamma)
e1 = np.array([[1,0,0]])
e2 = np.array([[0,1,0]])
e3 = np.array([[0,0,1]])
basis = np.vstack((e1,e2,e3))
basisRot = R@basis
CCSbasis(Oijk=np.array([0,0,0]), Oxyz=np.array([0,0,0]), ijk=basis.T, xyz=basisRot.T, vector=False)
###Output
_____no_output_____
###Markdown
Examining the matrix above and the corresponding previous figure, one can see they agree: the rotated $x$ axis (first column of the above matrix) has value -1 in the $\mathbf{Z}$ direction $[0,0,-1]$, the rotated $y$ axis (second column) is at the $\mathbf{Y}$ direction $[0,1,0]$, and the rotated $z$ axis (third column) is at the $\mathbf{X}$ direction $[1,0,0]$.We can also calculate the sequence of elemental rotations around the $x$, $y$, $z$ axes of the rotating $xyz$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of a second $xyz$ local coordinate system around each axis, $x$, $y$, $z$, of the rotating $xyz$ coordinate system.Likewise, this sequence of elemental rotations (each one of the local coordinate system with respect to the rotating local coordinate system) is mathematically represented by a multiplication between the rotation matrices (which are the inverse of the matrices for the rotations around $\mathbf{X,Y,Z}$ as we saw earlier):\begin{equation}\begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz} & = \mathbf{R_{z}} \mathbf{R_{y}} \mathbf{R_{x}} \\\\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix} \\\\& =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma+\cos\alpha\:\sin\gamma \;&\;\sin\alpha\:\sin\gamma-\cos\alpha\:\sin\beta\:\cos\gamma \;\;\; \\-\cos\beta\:\sin\gamma \;&\;-\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma+\sin\alpha\:\cos\gamma \;\;\; \\\sin\beta \;&\; -\sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array}\end{equation}As before, the order of the matrices is from right to left. Once again, we can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rx = sym.Matrix([[1, 0, 0], [0, cos(a), sin(a)], [0, -sin(a), cos(a)]])
Ry = sym.Matrix([[cos(b), 0, -sin(b)], [0, 1, 0], [sin(b), 0, cos(b)]])
Rz = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz' in relation to xyz:
Rxyz = Rz*Ry*Rx
Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz}=') + sym.latex(Rxyz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
For instance, let's calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $x,y,z$:
###Code
R = sym.lambdify((a, b, g), Rxyz, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Once again, let's compare the above matrix and the corresponding previous figure to see if it makes sense. But remember that this matrix is the Global-to-local rotation matrix, $\mathbf{R}_{\mathbf{lG},\,xyz}$, where the coordinates of the local basis' versors are rows, not columns, in this matrix. With this detail in mind, one can see that the previous figure and matrix also agree: the rotated $x$ axis (first row of the above matrix) is at the $\mathbf{Z}$ direction $[0,0,1]$, the rotated $y$ axis (second row) is at the $\mathbf{-Y}$ direction $[0,-1,0]$, and the rotated $z$ axis (third row) is at the $\mathbf{X}$ direction $[1,0,0]$.In fact, this example didn't serve to distinguish versors as rows or columns because the $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrix above is symmetric! Let's look at the resultant matrix for the example above after only the first two rotations, $\mathbf{R}_{\mathbf{lG},\,xy}$, to understand this difference:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xy\,}(90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Comparing this matrix with the third plot in the figure, we see that the coordinates of versor $x$ in the Global coordinate system are $[0,1,0]$, i.e., local axis $x$ is aligned with Global axis $Y$, and this versor is indeed the first row, not first column, of the matrix above. Confer the other two rows. What, then, is in the columns of the Global-to-local rotation matrix? The columns are the coordinates of Global basis' versors in the local coordinate system! For example, the first column of the matrix above is the coordinates of $X$, which is aligned with $z$: $[0,0,1]$. Below you can test any sequence of rotations around the local coordinate axes. Just change the matrix R and the angles $\alpha$, $\beta$ and $\gamma$. The example below shows the rotation around the local basis, in the sequence x,y,z, with the angles $\alpha=\pi/3$ rad, $\beta=\pi/4$ rad and $\gamma=\pi/6$ rad.
###Code
sys.path.insert(1, r'./../functions') # add to pythonpath
%matplotlib notebook
from CCSbasis import CCSbasis
R = Rz*Ry*Rx
R = sym.lambdify((a, b, g), R, 'numpy')
alpha = np.pi/3
beta = np.pi/4
gamma = np.pi/6
R = R(alpha, beta, gamma)
e1 = np.array([[1,0,0]])
e2 = np.array([[0,1,0]])
e3 = np.array([[0,0,1]])
basis = np.vstack((e1,e2,e3))
basisRot = R@basis
CCSbasis(Oijk=np.array([0,0,0]), Oxyz=np.array([0,0,0]), ijk=basisRot.T, xyz=basis.T, vector=False)
###Output
_____no_output_____
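###Markdown
Before moving on, a minimal numeric reminder (reusing the elemental matrices already defined above) that the order of the elemental rotations matters: the same three angles applied around the Global axes in the order $\mathbf{X,Y,Z}$ or in the order $\mathbf{Z,Y,X}$ produce different rotation matrices:
###Code
angs = (np.pi/3, np.pi/4, np.pi/6)
R_XYZ_order = sym.lambdify((a, b, g), RZ*RY*RX, 'numpy')(*angs)  # rotations in the order X, Y, Z
R_ZYX_order = sym.lambdify((a, b, g), RX*RY*RZ, 'numpy')(*angs)  # rotations in the order Z, Y, X
print(np.allclose(R_XYZ_order, R_ZYX_order))  # False: the order of the rotations matters
###Output
_____no_output_____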
###Markdown
Rotations in a coordinate system is equivalent to minus rotations in the other coordinate systemRemember that we saw for the elemental rotations that it's equivalent to instead of rotating the local coordinate system, $xyz$, by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system, $\mathbf{XYZ}$, by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system. The same property applies to a sequence of rotations: rotations of $xyz$ in relation to $\mathbf{XYZ}$ by $\alpha, \beta, \gamma$ result in the same matrix as rotations of $\mathbf{XYZ}$ in relation to $xyz$ by $-\alpha, -\beta, -\gamma$: $$ \begin{array}{l l}\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) & = \mathbf{R_{Gl,\,Z}}(\gamma)\, \mathbf{R_{Gl,\,Y}}(\beta)\, \mathbf{R_{Gl,\,X}}(\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,z\,}(-\gamma)\, \mathbf{R}_{\mathbf{lG},\,y\,}(-\beta)\, \mathbf{R}_{\mathbf{lG},\,x\,}(-\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)\end{array}$$Confer that by examining the $\mathbf{R_{Gl,\,XYZ}}$ and $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrices above.Let's verify this property with Sympy:
###Code
RXYZ = RZ*RY*RX
# Rotation matrix of xyz in relation to XYZ:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) =')))
display(Math(sym.latex(RXYZ, mat_str='matrix')))
# Elemental rotation matrices of XYZ in relation to xyz and negate all angles:
Rx_neg = sym.Matrix([[1, 0, 0], [0, cos(-a), -sin(-a)], [0, sin(-a), cos(-a)]]).T
Ry_neg = sym.Matrix([[cos(-b), 0, sin(-b)], [0, 1, 0], [-sin(-b), 0, cos(-b)]]).T
Rz_neg = sym.Matrix([[cos(-g), -sin(-g), 0], [sin(-g), cos(-g), 0], [0, 0, 1]]).T
# Rotation matrix of XYZ in relation to xyz:
Rxyz_neg = Rz_neg*Ry_neg*Rx_neg
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma) =')))
display(Math(sym.latex(Rxyz_neg, mat_str='matrix')))
# Check that the two matrices are equal:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) \;==\;' + \
r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)')))
RXYZ == Rxyz_neg
###Output
_____no_output_____
###Markdown
Rotations in a coordinate system is the transpose of inverse order of rotations in the other coordinate systemThere is another property of the rotation matrices for the different coordinate systems: the rotation matrix, for example from the Global to the local coordinate system for the $xyz$ sequence, is just the transpose of the rotation matrix for the inverse operation (from the local to the Global coordinate system) of the inverse sequence ($\mathbf{ZYX}$) and vice-versa:$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz}(\alpha,\beta,\gamma) & = \mathbf{R}_{\mathbf{lG},\,z\,} \mathbf{R}_{\mathbf{lG},\,y\,} \mathbf{R}_{\mathbf{lG},\,x} \\& = \mathbf{R_{Gl,\,Z\,}^{-1}} \mathbf{R_{Gl,\,Y\,}^{-1}} \mathbf{R_{Gl,\,X\,}^{-1}} \\& = \mathbf{R_{Gl,\,Z\,}^{T}} \mathbf{R_{Gl,\,Y\,}^{T}} \mathbf{R_{Gl,\,X\,}^{T}} \\& = (\mathbf{R_{Gl,\,X\,}} \mathbf{R_{Gl,\,Y\,}} \mathbf{R_{Gl,\,Z}})^\mathbf{T} \\& = \mathbf{R_{Gl,\,ZYX\,}^{T}}(\gamma,\beta,\alpha)\end{array}$$Where we used the properties that the inverse of the rotation matrix (which is orthonormal) is its transpose and that the transpose of a product of matrices is equal to the product of their transposes in reverse order.Let's verify this property with Sympy:
###Code
RZYX = RX*RY*RZ
Rxyz = Rz*Ry*Rx
display(Math(sym.latex(r'\mathbf{R_{Gl,\,ZYX\,}^T}=') + sym.latex(RZYX.T, mat_str='matrix')))
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(\alpha,\beta,\gamma) \,==\,' + \
r'\mathbf{R_{Gl,\,ZYX\,}^T}(\gamma,\beta,\alpha)')))
Rxyz == RZYX.T
###Output
_____no_output_____
###Markdown
Sequence of rotations of a VectorWe saw in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynbRotation-of-a-Vector) that the rotation matrix can also be used to rotate a vector (in fact, a point, image, solid, etc.) by a given angle around an axis of the coordinate system. Let's investigate that for the 3D case using the example earlier where a book was rotated in different orders and around the Global and local coordinate systems. Before any rotation, the point shown in that figure as a round black dot on the spine of the book has coordinates $\mathbf{P}=[0, 1, 2]$ (the book has thickness 0, width 1, and height 2). After the first sequence of rotations shown in the figure (rotated around $X$ and $Y$ by $90^0$ each time), $\mathbf{P}$ has coordinates $\mathbf{P}=[1, -2, 0]$ in the global coordinate system. Let's verify that:
###Code
P = np.array([[0, 1, 2]]).T
RXY = RY*RX
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. -2. 0.]]
###Markdown
As expected. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, but still around the Global coordinate system.Although we are performing vector rotation, where we don't need the concept of transformation between coordinate systems, in the example above we used the local-to-Global rotation matrix, $\mathbf{R_{Gl}}$. As we saw in the notebook for the 2D transformation, when we use this matrix, it performs a counter-clockwise (positive) rotation. If we want to rotate the vector in the clockwise (negative) direction, we can use the very same rotation matrix entering a negative angle or we can use the inverse rotation matrix, the Global-to-local rotation matrix, $\mathbf{R_{lG}}$ and a positive (negative of negative) angle, because $\mathbf{R_{Gl}}(\alpha) = \mathbf{R_{lG}}(-\alpha)$, but bear in mind that even in this latter case we are rotating around the Global coordinate system! Consider now that we want to deduce algebraically the position of the point $\mathbf{P}$ after the rotations around the local coordinate system as shown in the second set of examples in the figure with the sequence of book rotations. The point has the same initial position, $\mathbf{P}=[0, 1, 2]$, and after the rotations around $x$ and $y$ by $90^0$ each time, what is the position of this point? It's implicit in this question that the new desired position is in the Global coordinate system because the local coordinate system rotates with the book and the point never changes its position in the local coordinate system. So, by inspection of the figure, the new position of the point is $\mathbf{P1}=[2, 0, 1]$. Let's naively try to deduce this position by repeating the steps as before:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. 2. -0.]]
###Markdown
The wrong answer. The problem is that we defined the rotation of a vector using the local-to-Global rotation matrix. One correct solution for this problem is to continue using the multiplication of the Global-to-local rotation matrices, $\mathbf{R}_{xy} = \mathbf{R}_y\,\mathbf{R}_x$, transpose $\mathbf{R}_{xy}$ to get the local-to-Global rotation matrix, $\mathbf{R_{XY}}=\mathbf{R^T}_{xy}$, and then rotate the vector using this matrix:
###Code
Rxy = Ry*Rx
RXY = Rxy.T
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. Another solution is to understand that when using the Global-to-local rotation matrix, counter-clockwise rotations (as performed with the book in the figure) are negative, not positive, and that when dealing with rotations with the Global-to-local rotation matrix the order of matrix multiplication is inverted, for example, it should be $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$ (an added underscore to remind us this is not the convention adopted here).
###Code
R_xy = Rx*Ry
R = sym.lambdify((a, b), R_xy, 'numpy')
R = R(-np.pi/2, -np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, around the local coordinate system.In fact, you will find elsewhere texts about rotations in 3D adopting this latter convention as the standard, i.e., they introduce the Global-to-local rotation matrix and describe sequence of rotations algebraically as matrix multiplication in the direct order, $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$, the inverse we have done in this text. It's all a matter of convention, just that. The 12 different sequences of Euler anglesThe Euler angles are defined in terms of rotations around a rotating local coordinate system. As we saw for the sequence of rotations around $x, y, z$, the axes of the local rotated coordinate system are not fixed in space because after the first elemental rotation, the other two axes rotate. Other sequences of rotations could be produced without combining axes of the two different coordinate systems (Global and local) for the definition of the rotation axes. There is a total of 12 different sequences of three elemental rotations that are valid and may be used for describing the rotation of a coordinate system with respect to another coordinate system:$$ xyz \quad xzy \quad yzx \quad yxz \quad zxy \quad zyx $$$$ xyx \quad xzx \quad yzy \quad yxy \quad zxz \quad zyz $$The first six sequences (first row) are all around different axes, they are usually referred as Cardan or Tait–Bryan angles. The other six sequences (second row) have the first and third rotations around the same axis, but keep in mind that the axis for the third rotation is not at the same place anymore because it changed its orientation after the second rotation. The sequences with repeated axes are known as proper or classic Euler angles.Which order to use it is a matter of convention, but because the order affects the results, it's fundamental to follow a convention and report it. In Engineering Mechanics (including Biomechanics), the $xyz$ order is more common; in Physics the $zxz$ order is more common (but the letters chosen to refer to the axes are arbitrary, what matters is the directions they represent). In Biomechanics, the order for the Cardan angles is most often based on the angle of most interest or of most reliable measurement. Accordingly, the axis of flexion/extension is typically selected as the first axis, the axis for abduction/adduction is the second, and the axis for internal/external rotation is the last one. We will see about this order later. The $zyx$ order is commonly used to describe the orientation of a ship or aircraft and the rotations are known as the nautical angles: yaw, pitch and roll, respectively (see next figure). Figure. The principal axes of an aircraft and the names for the rotations around these axes (image from Wikipedia). If instead of rotations around the rotating local coordinate system we perform rotations around the fixed Global coordinate system, we will have other 12 different sequences of three elemental rotations, these are called simply rotation angles. 
So, in total there are 24 possible different sequences of three elemental rotations, but the 24 orders are not independent; with the 12 different sequences of Euler angles at the local coordinate system we can obtain the other 12 sequences at the Global coordinate system.The Python function `euler_rotmat.py` (code at the end of this text) determines the rotation matrix in algebraic form for any of the 24 different sequences (and sequences with only one or two axes can be input). This function also determines the rotation matrix in numeric form if a list of up to three angles is input.For instance, the rotation matrix in algebraic form for the $zxz$ order of Euler angles at the local coordinate system and the corresponding rotation matrix in numeric form after three elemental rotations by $90^o$ each are:
###Code
from euler_rotmat import euler_rotmat
Ra, Rn = euler_rotmat(order='zxz', frame='local', angles=[90, 90, 90])
###Output
_____no_output_____
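###Markdown
As another illustration, the algebraic matrix for the $zyx$ sequence mentioned above (the nautical yaw, pitch and roll angles) can be obtained with the same function (an extra call included here just for illustration):
###Code
R = euler_rotmat(order='zyx', frame='local')
###Output
_____no_output_____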
###Markdown
Line of nodesThe second axis of rotation in the rotating coordinate system is also referred as the nodal axis or line of nodes; this axis coincides with the intersection of two perpendicular planes, one from each Global (fixed) and local (rotating) coordinate systems. The figure below shows an example of rotations and the nodal axis for the $xyz$ sequence of the Cardan angles. Figure. First row: example of rotations for the $xyz$ sequence of the Cardan angles. The Global (fixed) $XYZ$ coordinate system is shown in green, the local (rotating) $xyz$ coordinate system is shown in blue. The nodal axis (N, shown in red) is defined by the intersection of the $YZ$ and $xy$ planes and all rotations can be described in relation to this nodal axis or to a perpendicular axis to it. Second row: starting from no rotation, the local coordinate system is rotated by $\alpha$ around the $x$ axis, then by $\beta$ around the rotated $y$ axis, and finally by $\gamma$ around the twice rotated $z$ axis. Note that the line of nodes coincides with the $y$ axis for the second rotation. Determination of the Euler anglesOnce a convention is adopted, the corresponding three Euler angles of rotation can be found. For example, for the $\mathbf{R}_{xyz}$ rotation matrix:
###Code
R = euler_rotmat(order='xyz', frame='local')
###Output
_____no_output_____
###Markdown
The corresponding Cardan angles for the `xyz` sequence can be given by:\begin{equation}\begin{array}{}\alpha = \arctan\left(\dfrac{\sin(\alpha)}{\cos(\alpha)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{21}}{\;\;\;\mathbf{R}_{22}}\right) \\\\\beta = \arctan\left(\dfrac{\sin(\beta)}{\cos(\beta)}\right) = \arctan\left(\dfrac{\mathbf{R}_{20}}{\sqrt{\mathbf{R}_{00}^2+\mathbf{R}_{10}^2}}\right) \\ \\\gamma = \arctan\left(\dfrac{\sin(\gamma)}{\cos(\gamma)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{10}}{\;\;\;\mathbf{R}_{00}}\right)\end{array}\end{equation}Note that we prefer to use the mathematical function `arctan2` rather than simply `arcsin`, `arccos` or `arctan` because the latter cannot for example distinguish $45^o$ from $135^o$ and also for better numerical accuracy. See the text [Angular kinematics in a plane (2D)](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/KinematicsAngular2D.ipynb) for more on these issues.And here is a Python function to compute the Euler angles of rotations from the Global to the local coordinate system for the $xyz$ Cardan sequence:
###Code
def euler_angles_from_rot_xyz(rot_matrix, unit='deg'):
    """ Compute Euler angles from rotation matrix in the xyz sequence."""
    import numpy as np
    R = np.array(rot_matrix, copy=False).astype(np.float64)[:3, :3]
    angles = np.zeros(3)
    angles[0] = np.arctan2(-R[2, 1], R[2, 2])
    angles[1] = np.arctan2( R[2, 0], np.sqrt(R[0, 0]**2 + R[1, 0]**2))
    angles[2] = np.arctan2(-R[1, 0], R[0, 0])
    if unit[:3].lower() == 'deg': # convert from rad to degree
        angles = np.rad2deg(angles)
    return angles
###Output
_____no_output_____
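###Markdown
To see why `arctan2` is preferable, here is a small numeric illustration: from the sine and cosine of $135^o$, plain `arctan` of their ratio returns the wrong quadrant, whereas `arctan2` recovers the correct angle:
###Code
ang = np.deg2rad(135)
print(np.rad2deg(np.arctan(np.sin(ang)/np.cos(ang))))    # -45 degrees, wrong quadrant
print(np.rad2deg(np.arctan2(np.sin(ang), np.cos(ang))))  # 135 degrees
###Output
_____no_output_____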
###Markdown
For instance, consider sequential rotations of 45$^o$ around $x,y,z$. The resultant rotation matrix is:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[45, 45, 45], showA=False)
###Output
_____no_output_____
###Markdown
Let's check that calculating back the Cardan angles from this rotation matrix using the `euler_angles_from_rot_xyz()` function:
###Code
euler_angles_from_rot_xyz(Rn, unit='deg')
###Output
_____no_output_____
###Markdown
We could implement a function to calculate the Euler angles for any of the 12 sequences (in fact, plus another 12 sequences if we consider all the rotations from and to the two coordinate systems), but this is tedious. There is a smarter solution using the concept of [quaternion](http://en.wikipedia.org/wiki/Quaternion), but we will not see that now. Let's see a problem with using Euler angles known as gimbal lock. Gimbal lock[Gimbal lock](http://en.wikipedia.org/wiki/Gimbal_lock) is the loss of one degree of freedom in a three-dimensional coordinate system that occurs when an axis of rotation is placed parallel with another previous axis of rotation and two of the three rotations will be around the same direction given a certain convention of the Euler angles. This "locks" the system into rotations in a degenerate two-dimensional space. The system is not really locked in the sense it can't be moved or reach the other degree of freedom, but it will need an extra rotation for that. For instance, let's look at the $zxz$ sequence of rotations by the angles $\alpha, \beta, \gamma$:\begin{equation}\begin{array}{l l}\mathbf{R}_{zxz} & = \mathbf{R_{z}} \mathbf{R_{x}} \mathbf{R_{z}} \\ \\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\beta & \sin\beta \\0 & -\sin\beta & \cos\beta\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array}\end{equation}Which results in:
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rz = sym.Matrix([[cos(a), sin(a), 0], [-sin(a), cos(a), 0], [0, 0, 1]])
Rx = sym.Matrix([[1, 0, 0], [0, cos(b), sin(b)], [0, -sin(b), cos(b)]])
Rz2 = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix for the zxz sequence:
Rzxz = Rz2*Rx*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}=') + sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Let's examine what happens with this rotation matrix when the rotation around the second axis ($x$) by $\beta$ is zero:\begin{equation}\begin{array}{l l}\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma) = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array}\end{equation}The second matrix is the identity matrix and has no effect on the product of the matrices, which will be:
###Code
Rzxz = Rz2*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Which simplifies to:
###Code
Rzxz = sym.simplify(Rzxz)
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
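###Markdown
A minimal numeric check of this simplification (the helper functions below are defined only for this test): with $\beta=0$, rotation matrices built from different $\alpha$ and $\gamma$ that have the same sum $\alpha+\gamma$ are identical:
###Code
def Rz_local(ang):
    c, s = np.cos(ang), np.sin(ang)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])
def Rx_local(ang):
    c, s = np.cos(ang), np.sin(ang)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])
def Rzxz_num(alpha, beta, gamma):
    return Rz_local(gamma) @ Rx_local(beta) @ Rz_local(alpha)
print(np.allclose(Rzxz_num(np.deg2rad(10), 0, np.deg2rad(50)),
                  Rzxz_num(np.deg2rad(30), 0, np.deg2rad(30))))  # True: only the sum matters
###Output
_____no_output_____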
###Markdown
Despite different values of $\alpha$ and $\gamma$ the result is a single rotation around the $z$ axis given by the sum $\alpha+\gamma$. In this case, of the three degrees of freedom one was lost (the other degree of freedom was set by $\beta=0$). For movement analysis, this means for example that one angle will be undetermined because everything we know is the sum of the two angles obtained from the rotation matrix. We can set the unknown angle to zero but this is arbitrary.In fact, we already dealt with another example of gimbal lock when we looked at the $xyz$ sequence with rotations by $90^o$. See the figure representing these rotations again and perceive that the first and third rotations were around the same axis because the second rotation was by $90^o$. Let's do the matrix multiplication replacing only the second angle by $90^o$ (and let's use the `euler_rotmat.py` function):
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[None, 90., None], showA=False)
###Output
_____no_output_____
###Markdown
Once again, one degree of freedom was lost and we will not be able to uniquely determine the three angles for the given rotation matrix and sequence.Possible solutions to avoid the gimbal lock are: choose a different sequence; do not rotate the system by the angle that puts the system in gimbal lock (in the examples above, avoid $\beta=90^o$); or add an extra fourth parameter in the description of the rotation angles. But if we have a physical system where we measure or specify exactly three Euler angles in a fixed sequence to describe or control it, and we can't prevent the system from assuming certain angles, then we might have to say "Houston, we have a problem". A famous situation where such a problem occurred was during the Apollo 13 mission. This is an actual conversation between crew and mission control during the Apollo 13 mission (Corke, 2011):>`Mission clock: 02 08 12 47` **Flight**: *Go, Guidance.* **Guido**: *He’s getting close to gimbal lock there.* **Flight**: *Roger. CapCom, recommend he bring up C3, C4, B3, B4, C1 and C2 thrusters, and advise he’s getting close to gimbal lock.* **CapCom**: *Roger.* *Of note, it was not a gimbal lock that caused the accident with the Apollo 13 mission, the problem was an oxygen tank explosion.* Determination of the rotation matrixA typical way to determine the rotation matrix for a rigid body in biomechanics is to use motion analysis to measure the position of at least three non-collinear markers placed on the rigid body, and then calculate a basis with these positions, analogous to what we have described in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb). BasisIf we have the position of three markers: **m1**, **m2**, **m3**, a basis (formed by three orthogonal versors) can be found as: - First axis, **v1**, the vector **m2-m1**; - Second axis, **v2**, the cross product between the vectors **v1** and **m3-m1**; - Third axis, **v3**, the cross product between the vectors **v1** and **v2**. Then, each of these vectors is normalized, resulting in three orthogonal versors. For example, given the positions m1 = [1,0,0], m2 = [0,1,0], m3 = [0,0,1], a basis can be found:
###Code
m1 = np.array([1, 0, 0])
m2 = np.array([0, 1, 0])
m3 = np.array([0, 0, 1])
v1 = m2 - m1
v2 = np.cross(v1, m3 - m1)
v3 = np.cross(v1, v2)
print('Versors:')
v1 = v1/np.linalg.norm(v1)
print('v1 =', v1)
v2 = v2/np.linalg.norm(v2)
print('v2 =', v2)
v3 = v3/np.linalg.norm(v3)
print('v3 =', v3)
print('\nNorm of each versor:\n',
np.linalg.norm(np.cross(v1, v2)),
np.linalg.norm(np.cross(v1, v3)),
np.linalg.norm(np.cross(v2, v3)))
###Output
Versors:
v1 = [-0.7071 0.7071 0. ]
v2 = [0.5774 0.5774 0.5774]
v3 = [ 0.4082 0.4082 -0.8165]
Norm of each versor:
1.0 1.0 1.0000000000000002
###Markdown
Remember from the text [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb) that the versors of this basis are the columns of the $\mathbf{R_{Gl}}$ and the rows of the $\mathbf{R_{lG}}$ rotation matrices, for instance:
###Code
RlG = np.array([v1, v2, v3])
print('Rotation matrix from Global to local coordinate system:\n', RlG)
###Output
Rotation matrix from Global to local coordinate system:
[[-0.7071 0.7071 0. ]
[ 0.5774 0.5774 0.5774]
[ 0.4082 0.4082 -0.8165]]
###Markdown
And the corresponding angles of rotation using the $xyz$ sequence are:
###Code
euler_angles_from_rot_xyz(RlG)
###Output
_____no_output_____
###Markdown
These angles don't mean anything now because they are angles of the axes of the arbitrary basis we computed. In biomechanics, if we want an anatomical interpretation of the coordinate system orientation, we define the versors of the basis oriented with anatomical axes (e.g., for the shoulder, one versor would be aligned with the long axis of the upper arm) as seen [in this notebook about reference frames](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb). Determination of the rotation matrix between two local coordinate systemsSimilarly to the [bidimensional case](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation2D.ipynb), to compute the rotation matrix between two local coordinate systems we can use the rotation matrices of both coordinate systems:\begin{equation} R_{l_1l_2} = R_{Gl_1}^TR_{Gl_2}\end{equation}After this, the Euler angles between both coordinate systems can be found using the `arctan2` function as shown previously. Translation and RotationConsider the case where the local coordinate system is translated and rotated in relation to the Global coordinate system as illustrated in the next figure. Figure. A point in three-dimensional space represented in two coordinate systems, with one system translated and rotated. The position of point $\mathbf{P}$ originally described in the local coordinate system, but now described in the Global coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} $$This means that we first *disrotate* the local coordinate system and then correct for the translation between the two coordinate systems. Note that we can't invert this order: the point position is expressed in the local coordinate system and we can't add this vector to another vector expressed in the Global coordinate system, first we have to convert the vectors to the same coordinate system.If now we want to find the position of a point at the local coordinate system given its position in the Global coordinate system, the rotation matrix and the translation vector, we have to invert the expression above:$$ \begin{array}{l l}\mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{R_{Gl}^{-1}}\cdot\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l}\right) \implies \\\\\mathbf{R_{Gl}^{-1}}\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\mathbf{L_G} + \mathbf{R_{Gl}^{-1}}\mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{P_l} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{P_G}-\mathbf{L_G}\right) = \mathbf{R_{Gl}^T}\left(\mathbf{P_G}-\mathbf{L_G}\right) \;\;\;\;\; \text{or} \;\;\;\;\; \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) \end{array} $$The expression above indicates that to perform the inverse operation, to go from the Global to the local coordinate system, we first translate and then rotate the coordinate system. Transformation matrixIt is possible to combine the translation and rotation operations in only one matrix, called the transformation matrix:\begin{equation}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} =\begin{bmatrix}. & . & . & \mathbf{L_{X}} \\. & \mathbf{R_{Gl}} & . & \mathbf{L_{Y}} \\. & . & . 
& \mathbf{L_{Z}} \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \\1\end{bmatrix}\end{equation}Or simply:$$ \mathbf{P_G} = \mathbf{T_{Gl}}\mathbf{P_l} $$Remember that in general the transformation matrix is not orthonormal, i.e., its inverse is not equal to its transpose.The inverse operation, to express the position at the local coordinate system in terms of the Global reference system, is:$$ \mathbf{P_l} = \mathbf{T_{Gl}^{-1}}\mathbf{P_G} $$And in matrix form:\begin{equation}\begin{bmatrix}\mathbf{P_x} \\\mathbf{P_y} \\\mathbf{P_z} \\1\end{bmatrix} =\begin{bmatrix}\cdot & \cdot & \cdot & \cdot \\\cdot & \mathbf{R^{-1}_{Gl}} & \cdot & -\mathbf{R^{-1}_{Gl}}\:\mathbf{L_G} \\\cdot & \cdot & \cdot & \cdot \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix}\end{equation} Example with actual motion analysis data *The data for this example is taken from page 183 of David Winter's book.* Consider the following marker positions placed on a leg (described in the laboratory coordinate system with coordinates $x, y, z$ in cm, the $x$ axis points forward and the $y$ axes points upward): lateral malleolus (**lm** = [2.92, 10.10, 18.85]), medial malleolus (**mm** = [2.71, 10.22, 26.52]), fibular head (**fh** = [5.05, 41.90, 15.41]), and medial condyle (**mc** = [8.29, 41.88, 26.52]). Define the ankle joint center as the centroid between the **lm** and **mm** markers and the knee joint center as the centroid between the **fh** and **mc** markers. An anatomical coordinate system for the leg can be defined as: the quasi-vertical axis ($y$) passes through the ankle and knee joint centers; a temporary medio-lateral axis ($z$) passes through the two markers on the malleolus, an anterior-posterior as the cross product between the two former calculated orthogonal axes, and the origin at the ankle joint center. a) Calculate the anatomical coordinate system for the leg as described above. b) Calculate the rotation matrix and the translation vector for the transformation from the anatomical to the laboratory coordinate system. c) Calculate the position of each marker and of each joint center at the anatomical coordinate system. d) Calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent).
###Code
# calculation of the joint centers
mm = np.array([2.71, 10.22, 26.52])
lm = np.array([2.92, 10.10, 18.85])
fh = np.array([5.05, 41.90, 15.41])
mc = np.array([8.29, 41.88, 26.52])
ajc = (mm + lm)/2
kjc = (fh + mc)/2
print('Position of the ankle joint center:', ajc)
print('Position of the knee joint center:', kjc)
# calculation of the anatomical coordinate system axes (basis)
y = kjc - ajc
x = np.cross(y, mm - lm)
z = np.cross(x, y)
print('Versors:')
x = x/np.linalg.norm(x)
y = y/np.linalg.norm(y)
z = z/np.linalg.norm(z)
print('x =', x)
print('y =', y)
print('z =', z)
Oleg = ajc
print('\nOrigin =', Oleg)
# Rotation matrices
RGl = np.array([x, y , z]).T
print('Rotation matrix from the anatomical to the laboratory coordinate system:\n', RGl)
RlG = RGl.T
print('\nRotation matrix from the laboratory to the anatomical coordinate system:\n', RlG)
# Translational vector
OG = np.array([0, 0, 0]) # Laboratory coordinate system origin
LG = Oleg - OG
print('Translational vector from the anatomical to the laboratory coordinate system:\n', LG)
###Output
Translational vector from the anatomical to the laboratory coordinate system:
[ 2.815 10.16 22.685]
###Markdown
To get the coordinates from the laboratory (global) coordinate system to the anatomical (local) coordinate system:$$ \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) $$
###Code
# position of each marker and of each joint center at the anatomical coordinate system
mml = np.dot(RlG, (mm - LG)) # equivalent to the algebraic expression RlG*(mm - LG).T
lml = np.dot(RlG, (lm - LG))
fhl = np.dot(RlG, (fh - LG))
mcl = np.dot(RlG, (mc - LG))
ajcl = np.dot(RlG, (ajc - LG))
kjcl = np.dot(RlG, (kjc - LG))
print('Coordinates of mm in the anatomical system:\n', mml)
print('Coordinates of lm in the anatomical system:\n', lml)
print('Coordinates of fh in the anatomical system:\n', fhl)
print('Coordinates of mc in the anatomical system:\n', mcl)
print('Coordinates of kjc in the anatomical system:\n', kjcl)
print('Coordinates of ajc in the anatomical system (origin):\n', ajcl)
###Output
_____no_output_____
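###Markdown
To also illustrate the transformation matrix presented earlier, we can assemble the $4\times4$ matrix $\mathbf{T_{Gl}}$ from the `RGl` and `LG` computed above for the leg and check that it maps the coordinates of a marker between the two coordinate systems (a minimal sketch; the names `TGl` and `fh_h` are defined only here):
###Code
TGl = np.eye(4)
TGl[:3, :3] = RGl   # rotation part
TGl[:3, 3] = LG     # translation part
fh_h = np.hstack((fhl, 1))  # fibular head in the anatomical system, homogeneous coordinates
print('fh back in the laboratory system:', (TGl @ fh_h)[:3])
print('fh measured in the laboratory system:', fh)
print('fh in the anatomical system:', (np.linalg.inv(TGl) @ np.hstack((fh, 1)))[:3])
###Output
_____no_output_____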
###Markdown
Problems1. For the example about how the order of rotations of a rigid body affects the orientation shown in a figure above, deduce the rotation matrices for each of the 4 cases shown in the figure. For the first two cases, deduce the rotation matrices from the global to the local coordinate system and for the other two examples, deduce the rotation matrices from the local to the global coordinate system. 2. Consider the data from problem 7 in the notebook [Frame of reference](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb) where the following anatomical landmark positions are given (units in meters): RASIS=[0.5,0.8,0.4], LASIS=[0.55,0.78,0.1], RPSIS=[0.3,0.85,0.2], and LPSIS=[0.29,0.78,0.3]. Deduce the rotation matrices for the global to anatomical coordinate system and for the anatomical to global coordinate system. 3. For the data from the last example, calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent). References- Corke P (2011) [Robotics, Vision and Control: Fundamental Algorithms in MATLAB](http://www.petercorke.com/RVC/). Springer-Verlag Berlin. - Robertson G, Caldwell G, Hamill J, Kamen G (2013) [Research Methods in Biomechanics](http://books.google.com.br/books?id=gRn8AAAAQBAJ). 2nd Edition. Human Kinetics. - [Maths - Euler Angles](http://www.euclideanspace.com/maths/geometry/rotations/euler/). - Murray RM, Li Z, Sastry SS (1994) [A Mathematical Introduction to Robotic Manipulation](http://www.cds.caltech.edu/~murray/mlswiki/index.php/Main_Page). Boca Raton, CRC Press. - Ruina A, Rudra P (2013) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press. - Siciliano B, Sciavicco L, Villani L, Oriolo G (2009) [Robotics - Modelling, Planning and Control](http://books.google.com.br/books/about/Robotics.html?hl=pt-BR&id=jPCAFmE-logC). Springer-Verlag London.- Winter DA (2009) [Biomechanics and motor control of human movement](http://books.google.com.br/books?id=_bFHL08IWfwC). 4 ed. Hoboken, USA: Wiley. - Zatsiorsky VM (1997) [Kinematics of Human Motion](http://books.google.com.br/books/about/Kinematics_of_Human_Motion.html?id=Pql_xXdbrMcC&redir_esc=y). Champaign, Human Kinetics. Function `euler_rotmatrix.py`
###Code
# %load ./../functions/euler_rotmat.py
#!/usr/bin/env python
"""Euler rotation matrix given sequence, frame, and angles."""
from __future__ import division, print_function
__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'
__version__ = 'euler_rotmat.py v.1 2014/03/10'
def euler_rotmat(order='xyz', frame='local', angles=None, unit='deg',
str_symbols=None, showA=True, showN=True):
"""Euler rotation matrix given sequence, frame, and angles.
This function calculates the algebraic rotation matrix (3x3) for a given
sequence ('order' argument) of up to three elemental rotations of a given
coordinate system ('frame' argument) around another coordinate system, the
Euler (or Eulerian) angles [1]_.
This function also calculates the numerical values of the rotation matrix
when numerical values for the angles are inputed for each rotation axis.
Use None as value if the rotation angle for the particular axis is unknown.
The symbols for the angles are: alpha, beta, and gamma for the first,
second, and third rotations, respectively.
The matrix product is calulated from right to left and in the specified
sequence for the Euler angles. The first letter will be the first rotation.
The function will print and return the algebraic rotation matrix and the
numerical rotation matrix if angles were inputed.
Parameters
----------
order : string, optional (default = 'xyz')
Sequence for the Euler angles, any combination of the letters
x, y, and z with 1 to 3 letters is accepted to denote the
elemental rotations. The first letter will be the first rotation.
frame : string, optional (default = 'local')
Coordinate system for which the rotations are calculated.
Valid values are 'local' or 'global'.
angles : list, array, or bool, optional (default = None)
Numeric values of the rotation angles ordered as the 'order'
parameter. Enter None for a rotation whith unknown value.
unit : str, optional (default = 'deg')
Unit of the input angles.
str_symbols : list of strings, optional (default = None)
New symbols for the angles, for instance, ['theta', 'phi', 'psi']
showA : bool, optional (default = True)
True (1) displays the Algebraic rotation matrix in rich format.
False (0) to not display.
showN : bool, optional (default = True)
True (1) displays the Numeric rotation matrix in rich format.
False (0) to not display.
Returns
-------
R : Matrix Sympy object
Rotation matrix (3x3) in algebraic format.
Rn : Numpy array or Matrix Sympy object (only if angles are inputed)
Numeric rotation matrix (if values for all angles were inputed) or
a algebraic matrix with some of the algebraic angles substituted
by the corresponding inputed numeric values.
Notes
-----
This code uses Sympy, the Python library for symbolic mathematics, to
calculate the algebraic rotation matrix and shows this matrix in latex form
possibly for using with the IPython Notebook, see [1]_.
References
----------
.. [1] http://nbviewer.ipython.org/github/duartexyz/BMC/blob/master/Transformation3D.ipynb
Examples
--------
>>> # import function
>>> from euler_rotmat import euler_rotmat
>>> # Default options: xyz sequence, local frame and show matrix
>>> R = euler_rotmat()
>>> # XYZ sequence (around global (fixed) coordinate system)
>>> R = euler_rotmat(frame='global')
>>> # Enter numeric values for all angles and show both matrices
>>> R, Rn = euler_rotmat(angles=[90, 90, 90])
>>> # show what is returned
>>> euler_rotmat(angles=[90, 90, 90])
>>> # show only the rotation matrix for the elemental rotation at x axis
>>> R = euler_rotmat(order='x')
>>> # zxz sequence and numeric value for only one angle
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, 0, None])
>>> # input values in radians:
>>> import numpy as np
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, np.pi, None], unit='rad')
>>> # shows only the numeric matrix
>>> R, Rn = euler_rotmat(order='zxz', angles=[90, 0, None], showA='False')
>>> # Change the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['theta', 'phi', 'psi'])
>>> # Negativate the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['-theta', '-phi', '-psi'])
>>> # all algebraic matrices for all possible sequences for the local frame
>>> s=['xyz','xzy','yzx','yxz','zxy','zyx','xyx','xzx','yzy','yxy','zxz','zyz']
>>> for seq in s: R = euler_rotmat(order=seq)
>>> # all algebraic matrices for all possible sequences for the global frame
>>> for seq in s: R = euler_rotmat(order=seq, frame='global')
"""
import numpy as np
import sympy as sym
try:
from IPython.core.display import Math, display
ipython = True
except:
ipython = False
angles = np.asarray(np.atleast_1d(angles), dtype=np.float64)
if ~np.isnan(angles).all():
if len(order) != angles.size:
raise ValueError("Parameters 'order' and 'angles' (when " +
"different from None) must have the same size.")
x, y, z = sym.symbols('x, y, z')
sig = [1, 1, 1]
if str_symbols is None:
a, b, g = sym.symbols('alpha, beta, gamma')
else:
s = str_symbols
if s[0][0] == '-': s[0] = s[0][1:]; sig[0] = -1
if s[1][0] == '-': s[1] = s[1][1:]; sig[1] = -1
if s[2][0] == '-': s[2] = s[2][1:]; sig[2] = -1
a, b, g = sym.symbols(s)
var = {'x': x, 'y': y, 'z': z, 0: a, 1: b, 2: g}
# Elemental rotation matrices for xyz (local)
cos, sin = sym.cos, sym.sin
Rx = sym.Matrix([[1, 0, 0], [0, cos(x), sin(x)], [0, -sin(x), cos(x)]])
Ry = sym.Matrix([[cos(y), 0, -sin(y)], [0, 1, 0], [sin(y), 0, cos(y)]])
Rz = sym.Matrix([[cos(z), sin(z), 0], [-sin(z), cos(z), 0], [0, 0, 1]])
if frame.lower() == 'global':
Rs = {'x': Rx.T, 'y': Ry.T, 'z': Rz.T}
order = order.upper()
else:
Rs = {'x': Rx, 'y': Ry, 'z': Rz}
order = order.lower()
R = Rn = sym.Matrix(sym.Identity(3))
str1 = r'\mathbf{R}_{%s}( ' %frame # last space needed for order=''
#str2 = [r'\%s'%var[0], r'\%s'%var[1], r'\%s'%var[2]]
str2 = [1, 1, 1]
for i in range(len(order)):
Ri = Rs[order[i].lower()].subs(var[order[i].lower()], sig[i] * var[i])
R = Ri * R
if sig[i] > 0:
str2[i] = '%s:%s' %(order[i], sym.latex(var[i]))
else:
str2[i] = '%s:-%s' %(order[i], sym.latex(var[i]))
str1 = str1 + str2[i] + ','
if ~np.isnan(angles).all() and ~np.isnan(angles[i]):
if unit[:3].lower() == 'deg':
angles[i] = np.deg2rad(angles[i])
Rn = Ri.subs(var[i], angles[i]) * Rn
#Rn = sym.lambdify(var[i], Ri, 'numpy')(angles[i]) * Rn
str2[i] = str2[i] + '=%.0f^o' %np.around(np.rad2deg(angles[i]), 0)
else:
Rn = Ri * Rn
Rn = sym.simplify(Rn) # for trigonometric relations
try:
# nsimplify only works if there are symbols
Rn2 = sym.latex(sym.nsimplify(Rn, tolerance=1e-8).n(chop=True, prec=4))
except:
Rn2 = sym.latex(Rn.n(chop=True, prec=4))
# there are no symbols, pass it as Numpy array
Rn = np.asarray(Rn)
if showA and ipython:
display(Math(str1[:-1] + ') =' + sym.latex(R, mat_str='matrix')))
if showN and ~np.isnan(angles).all() and ipython:
str2 = ',\;'.join(str2[:angles.size])
display(Math(r'\mathbf{R}_{%s}(%s)=%s' %(frame, str2, Rn2)))
if np.isnan(angles).all():
return R
else:
return R, Rn
###Output
_____no_output_____
Rotations around the local coordinate systemThe rotation matrices for the elemental rotations this time around each axis of the $xyz$ coordinate system (rotations of the Global coordinate system in relation to the local coordinate system), similarly to the two-dimensional case, are simply the transpose of the above matrices as shown next.Around $x$ axis: \begin{equation}\mathbf{R}_{\mathbf{lG},\,x} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix}\end{equation}Around $y$ axis: \begin{equation}\mathbf{R}_{\mathbf{lG},\,y} = \begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix}\end{equation}Around $z$ axis: \begin{equation}\mathbf{R}_{\mathbf{lG},\,z} = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\end{equation}Notice this is equivalent to instead of rotating the local coordinate system by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system; remember that $\cos(-\:\cdot)=\cos(\cdot)$ and $\sin(-\:\cdot)=-\sin(\cdot)$. Sequence of elemental rotationsConsider now a sequence of elemental rotations around the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes of the fixed $\mathbf{XYZ}$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. This sequence of elemental rotations (each one of the local coordinate system with respect to the fixed Global coordinate system) is mathematically represented by a multiplication between the rotation matrices:\begin{equation}\begin{array}{l l}\mathbf{R_{Gl,\;XYZ}} & = \mathbf{R_{Z}} \mathbf{R_{Y}} \mathbf{R_{X}} \\\\ & = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix} \\\\ & =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma-\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:cos\gamma+\sin\alpha\:\sin\gamma \;\;\; \\\cos\beta\:\sin\gamma \;&\;\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma-\sin\alpha\:\cos\gamma \;\;\; \\-\sin\beta \;&\; \sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array}\end{equation}Note the order of the matrices. We can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
#import the necessary libraries
from IPython.core.display import Math, display
import sympy as sym
cos, sin = sym.cos, sym.sin
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz in relation to XYZ:
RX = sym.Matrix([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]])
RY = sym.Matrix([[cos(b), 0, sin(b)], [0, 1, 0], [-sin(b), 0, cos(b)]])
RZ = sym.Matrix([[cos(g), -sin(g), 0], [sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz in relation to XYZ:
RXYZ = RZ*RY*RX
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ}}=') + sym.latex(RXYZ, mat_str='matrix')))
###Output
_____no_output_____
###Markdown
For instance, we can calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $\mathbf{X,Y,Z}$:
###Code
R = sym.lambdify((a, b, g), RXYZ, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R_{Gl,\,XYZ\,}}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Below you can test any sequence of rotations around the Global coordinate axes. Just change the matrix R and the values of the angles $\alpha$, $\beta$ and $\gamma$. The example below shows the rotation around the Global basis in the sequence $\mathbf{X},\mathbf{Y},\mathbf{Z}$, with the angles $\alpha=\pi/3$ rad, $\beta=\pi/4$ rad and $\gamma=\pi/6$ rad.
###Code
import sys
sys.path.insert(1, r'./../functions') # add to pythonpath
%matplotlib notebook
from CCSbasis import CCSbasis
R = RZ*RY*RX
R = sym.lambdify((a, b, g), R, 'numpy')
alpha = np.pi/3
beta = np.pi/4
gamma = np.pi/6
R = R(alpha, beta, gamma)
e1 = np.array([[1,0,0]])
e2 = np.array([[0,1,0]])
e3 = np.array([[0,0,1]])
basis = np.vstack((e1,e2,e3))
basisRot = R@basis
CCSbasis(Oijk=np.array([0,0,0]), Oxyz=np.array([0,0,0]), ijk=basis.T, xyz=basisRot.T, vector=False)
###Output
_____no_output_____
###Markdown
Examining the matrix above and the correspondent previous figure, one can see they agree: the rotated $x$ axis (first column of the above matrix) has value -1 in the $\mathbf{Z}$ direction $[0,0,-1]$, the rotated $y$ axis (second column) is at the $\mathbf{Y}$ direction $[0,1,0]$, and the rotated $z$ axis (third column) is at the $\mathbf{X}$ direction $[1,0,0]$.We also can calculate the sequence of elemental rotations around the $x$, $y$, $z$ axes of the rotating $xyz$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of a second $xyz$ local coordinate system around each axis, $x$, $y$, $z$, of the rotating $xyz$ coordinate system.Likewise, this sequence of elemental rotations (each one of the local coordinate system with respect to the rotating local coordinate system) is mathematically represented by a multiplication between the rotation matrices (which are the inverse of the matrices for the rotations around $\mathbf{X,Y,Z}$ as we saw earlier):\begin{equation}\begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz} & = \mathbf{R_{z}} \mathbf{R_{y}} \mathbf{R_{x}} \\\\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix} \\\\& =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma+\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:\cos\gamma-\sin\alpha\:\sin\gamma \;\;\; \\-\cos\beta\:\sin\gamma \;&\;-\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma+\sin\alpha\:\cos\gamma \;\;\; \\\sin\beta \;&\; -\sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array}\end{equation}As before, the order of the matrices is from right to left. Once again, we can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rx = sym.Matrix([[1, 0, 0], [0, cos(a), sin(a)], [0, -sin(a), cos(a)]])
Ry = sym.Matrix([[cos(b), 0, -sin(b)], [0, 1, 0], [sin(b), 0, cos(b)]])
Rz = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz' in relation to xyz:
Rxyz = Rz*Ry*Rx
Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz}=') + sym.latex(Rxyz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
For instance, let's calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $x,y,z$:
###Code
R = sym.lambdify((a, b, g), Rxyz, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Once again, let's compare the above matrix and the corresponding previous figure to see if they make sense. But remember that this matrix is the Global-to-local rotation matrix, $\mathbf{R}_{\mathbf{lG},\,xyz}$, where the coordinates of the local basis' versors are rows, not columns, in this matrix. With this detail in mind, one can see that the previous figure and matrix also agree: the rotated $x$ axis (first row of the above matrix) is at the $\mathbf{Z}$ direction $[0,0,1]$, the rotated $y$ axis (second row) is at the $\mathbf{-Y}$ direction $[0,-1,0]$, and the rotated $z$ axis (third row) is at the $\mathbf{X}$ direction $[1,0,0]$. In fact, this example doesn't serve to distinguish versors as rows or columns because the $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrix above is symmetric! Let's look at the resulting matrix for the example above after only the first two rotations, $\mathbf{R}_{\mathbf{lG},\,xy}$, to understand this difference:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xy\,}(90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Comparing this matrix with the third plot in the figure, we see that the coordinates of versor $x$ in the Global coordinate system are $[0,1,0]$, i.e., the local axis $x$ is aligned with the Global axis $Y$, and this versor is indeed the first row, not the first column, of the matrix above. Confer the other two rows. What, then, is in the columns of the Global-to-local rotation matrix? The columns are the coordinates of the Global basis' versors in the local coordinate system! For example, the first column of the matrix above is the coordinates of $X$, which is aligned with $z$: $[0,0,1]$. Below you can test any sequence of rotations around the local coordinate axes. Just change the matrix R and the values of the angles $\alpha$, $\beta$ and $\gamma$. The example below shows the rotation around the local basis in the sequence $x,y,z$, with the angles $\alpha=\pi/3$ rad, $\beta=\pi/4$ rad and $\gamma=\pi/6$ rad.
###Code
sys.path.insert(1, r'./../functions') # add to pythonpath
%matplotlib notebook
from CCSbasis import CCSbasis
R = Rz*Ry*Rx
R = sym.lambdify((a, b, g), R, 'numpy')
alpha = np.pi/3
beta = np.pi/4
gamma = np.pi/6
R = R(alpha, beta, gamma)
e1 = np.array([[1,0,0]])
e2 = np.array([[0,1,0]])
e3 = np.array([[0,0,1]])
basis = np.vstack((e1,e2,e3))
basisRot = R@basis
CCSbasis(Oijk=np.array([0,0,0]), Oxyz=np.array([0,0,0]), ijk=basisRot.T, xyz=basis.T, vector=False)
###Output
_____no_output_____
###Markdown
Rotations in a coordinate system is equivalent to minus rotations in the other coordinate systemRemember that we saw for the elemental rotations that it's equivalent to instead of rotating the local coordinate system, $xyz$, by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system, $\mathbf{XYZ}$, by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system. The same property applies to a sequence of rotations: rotations of $xyz$ in relation to $\mathbf{XYZ}$ by $\alpha, \beta, \gamma$ result in the same matrix as rotations of $\mathbf{XYZ}$ in relation to $xyz$ by $-\alpha, -\beta, -\gamma$: $$ \begin{array}{l l}\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) & = \mathbf{R_{Gl,\,Z}}(\gamma)\, \mathbf{R_{Gl,\,Y}}(\beta)\, \mathbf{R_{Gl,\,X}}(\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,z\,}(-\gamma)\, \mathbf{R}_{\mathbf{lG},\,y\,}(-\beta)\, \mathbf{R}_{\mathbf{lG},\,x\,}(-\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)\end{array}$$Confer that by examining the $\mathbf{R_{Gl,\,XYZ}}$ and $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrices above.Let's verify this property with Sympy:
###Code
RXYZ = RZ*RY*RX
# Rotation matrix of xyz in relation to XYZ:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) =')))
display(Math(sym.latex(RXYZ, mat_str='matrix')))
# Elemental rotation matrices of XYZ in relation to xyz and negate all angles:
Rx_neg = sym.Matrix([[1, 0, 0], [0, cos(-a), -sin(-a)], [0, sin(-a), cos(-a)]]).T
Ry_neg = sym.Matrix([[cos(-b), 0, sin(-b)], [0, 1, 0], [-sin(-b), 0, cos(-b)]]).T
Rz_neg = sym.Matrix([[cos(-g), -sin(-g), 0], [sin(-g), cos(-g), 0], [0, 0, 1]]).T
# Rotation matrix of XYZ in relation to xyz:
Rxyz_neg = Rz_neg*Ry_neg*Rx_neg
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma) =')))
display(Math(sym.latex(Rxyz_neg, mat_str='matrix')))
# Check that the two matrices are equal:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) \;==\;' + \
r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)')))
RXYZ == Rxyz_neg
###Output
_____no_output_____
###Markdown
Rotations in a coordinate system is the transpose of inverse order of rotations in the other coordinate systemThere is another property of the rotation matrices for the different coordinate systems: the rotation matrix, for example from the Global to the local coordinate system for the $xyz$ sequence, is just the transpose of the rotation matrix for the inverse operation (from the local to the Global coordinate system) of the inverse sequence ($\mathbf{ZYX}$) and vice-versa:$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz}(\alpha,\beta,\gamma) & = \mathbf{R}_{\mathbf{lG},\,z\,} \mathbf{R}_{\mathbf{lG},\,y\,} \mathbf{R}_{\mathbf{lG},\,x} \\& = \mathbf{R_{Gl,\,Z\,}^{-1}} \mathbf{R_{Gl,\,Y\,}^{-1}} \mathbf{R_{Gl,\,X\,}^{-1}} \\& = \mathbf{R_{Gl,\,Z\,}^{T}} \mathbf{R_{Gl,\,Y\,}^{T}} \mathbf{R_{Gl,\,X\,}^{T}} \\& = (\mathbf{R_{Gl,\,X\,}} \mathbf{R_{Gl,\,Y\,}} \mathbf{R_{Gl,\,Z}})^\mathbf{T} \\& = \mathbf{R_{Gl,\,ZYX\,}^{T}}(\gamma,\beta,\alpha)\end{array}$$Where we used the properties that the inverse of the rotation matrix (which is orthonormal) is its transpose and that the transpose of a product of matrices is equal to the product of their transposes in reverse order.Let's verify this property with Sympy:
###Code
RZYX = RX*RY*RZ
Rxyz = Rz*Ry*Rx
display(Math(sym.latex(r'\mathbf{R_{Gl,\,ZYX\,}^T}=') + sym.latex(RZYX.T, mat_str='matrix')))
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(\alpha,\beta,\gamma) \,==\,' + \
r'\mathbf{R_{Gl,\,ZYX\,}^T}(\gamma,\beta,\alpha)')))
Rxyz == RZYX.T
###Output
_____no_output_____
###Markdown
Sequence of rotations of a VectorWe saw in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynbRotation-of-a-Vector) that the rotation matrix can also be used to rotate a vector (in fact, a point, image, solid, etc.) by a given angle around an axis of the coordinate system. Let's investigate that for the 3D case using the example earlier where a book was rotated in different orders and around the Global and local coordinate systems. Before any rotation, the point shown in that figure as a round black dot on the spine of the book has coordinates $\mathbf{P}=[0, 1, 2]$ (the book has thickness 0, width 1, and height 2). After the first sequence of rotations shown in the figure (rotated around $X$ and $Y$ by $90^0$ each time), $\mathbf{P}$ has coordinates $\mathbf{P}=[1, -2, 0]$ in the global coordinate system. Let's verify that:
###Code
P = np.array([[0, 1, 2]]).T
RXY = RY*RX
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. -2. 0.]]
###Markdown
As expected. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, but still around the Global coordinate system.Although we are performing vector rotation, where we don't need the concept of transformation between coordinate systems, in the example above we used the local-to-Global rotation matrix, $\mathbf{R_{Gl}}$. As we saw in the notebook for the 2D transformation, when we use this matrix, it performs a counter-clockwise (positive) rotation. If we want to rotate the vector in the clockwise (negative) direction, we can use the very same rotation matrix entering a negative angle or we can use the inverse rotation matrix, the Global-to-local rotation matrix, $\mathbf{R_{lG}}$ and a positive (negative of negative) angle, because $\mathbf{R_{Gl}}(\alpha) = \mathbf{R_{lG}}(-\alpha)$, but bear in mind that even in this latter case we are rotating around the Global coordinate system! Consider now that we want to deduce algebraically the position of the point $\mathbf{P}$ after the rotations around the local coordinate system as shown in the second set of examples in the figure with the sequence of book rotations. The point has the same initial position, $\mathbf{P}=[0, 1, 2]$, and after the rotations around $x$ and $y$ by $90^0$ each time, what is the position of this point? It's implicit in this question that the new desired position is in the Global coordinate system because the local coordinate system rotates with the book and the point never changes its position in the local coordinate system. So, by inspection of the figure, the new position of the point is $\mathbf{P1}=[2, 0, 1]$. Let's naively try to deduce this position by repeating the steps as before:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. 2. -0.]]
###Markdown
The wrong answer. The problem is that we defined the rotation of a vector using the local-to-Global rotation matrix. One correct solution for this problem is to continue using the multiplication of the Global-to-local rotation matrices, $\mathbf{R}_{xy} = \mathbf{R}_y\,\mathbf{R}_x$, then transpose $\mathbf{R}_{xy}$ to get the local-to-Global rotation matrix, $\mathbf{R_{XY}}=\mathbf{R^T}_{xy}$, and then rotate the vector using this matrix:
###Code
Rxy = Ry*Rx
RXY = Rxy.T
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. Another solution is to understand that when using the Global-to-local rotation matrix, counter-clockwise rotations (as performed with the book in the figure) are negative, not positive, and that when dealing with rotations with the Global-to-local rotation matrix the order of matrix multiplication is inverted, for example, it should be $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$ (an added underscore to remind us this is not the convention adopted here).
###Code
R_xy = Rx*Ry
R = sym.lambdify((a, b), R_xy, 'numpy')
R = R(-np.pi/2, -np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, around the local coordinate system.In fact, you will find elsewhere texts about rotations in 3D adopting this latter convention as the standard, i.e., they introduce the Global-to-local rotation matrix and describe sequence of rotations algebraically as matrix multiplication in the direct order, $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$, the inverse we have done in this text. It's all a matter of convention, just that. The 12 different sequences of Euler anglesThe Euler angles are defined in terms of rotations around a rotating local coordinate system. As we saw for the sequence of rotations around $x, y, z$, the axes of the local rotated coordinate system are not fixed in space because after the first elemental rotation, the other two axes rotate. Other sequences of rotations could be produced without combining axes of the two different coordinate systems (Global and local) for the definition of the rotation axes. There is a total of 12 different sequences of three elemental rotations that are valid and may be used for describing the rotation of a coordinate system with respect to another coordinate system:$$ xyz \quad xzy \quad yzx \quad yxz \quad zxy \quad zyx $$$$ xyx \quad xzx \quad yzy \quad yxy \quad zxz \quad zyz $$The first six sequences (first row) are all around different axes, they are usually referred as Cardan or Tait–Bryan angles. The other six sequences (second row) have the first and third rotations around the same axis, but keep in mind that the axis for the third rotation is not at the same place anymore because it changed its orientation after the second rotation. The sequences with repeated axes are known as proper or classic Euler angles.Which order to use it is a matter of convention, but because the order affects the results, it's fundamental to follow a convention and report it. In Engineering Mechanics (including Biomechanics), the $xyz$ order is more common; in Physics the $zxz$ order is more common (but the letters chosen to refer to the axes are arbitrary, what matters is the directions they represent). In Biomechanics, the order for the Cardan angles is most often based on the angle of most interest or of most reliable measurement. Accordingly, the axis of flexion/extension is typically selected as the first axis, the axis for abduction/adduction is the second, and the axis for internal/external rotation is the last one. We will see about this order later. The $zyx$ order is commonly used to describe the orientation of a ship or aircraft and the rotations are known as the nautical angles: yaw, pitch and roll, respectively (see next figure). Figure. The principal axes of an aircraft and the names for the rotations around these axes (image from Wikipedia). If instead of rotations around the rotating local coordinate system we perform rotations around the fixed Global coordinate system, we will have other 12 different sequences of three elemental rotations, these are called simply rotation angles. 
So, in total there are 24 possible different sequences of three elemental rotations, but the 24 orders are not independent; with the 12 different sequences of Euler angles at the local coordinate system we can obtain the other 12 sequences at the Global coordinate system. The Python function `euler_rotmat.py` (code at the end of this text) determines the rotation matrix in algebraic form for any of the 24 different sequences (and sequences with only one or two axes can be input). This function also determines the rotation matrix in numeric form if a list of up to three angles is input. For instance, the rotation matrix in algebraic form for the $zxz$ order of Euler angles at the local coordinate system and the corresponding rotation matrix in numeric form after three elemental rotations by $90^o$ each are:
###Code
from euler_rotmat import euler_rotmat
Ra, Rn = euler_rotmat(order='zxz', frame='local', angles=[90, 90, 90])
###Output
_____no_output_____
###Markdown
Line of nodesThe second axis of rotation in the rotating coordinate system is also referred to as the nodal axis or line of nodes; this axis coincides with the intersection of two perpendicular planes, one from each of the Global (fixed) and local (rotating) coordinate systems. The figure below shows an example of rotations and the nodal axis for the $xyz$ sequence of the Cardan angles. Figure. First row: example of rotations for the $xyz$ sequence of the Cardan angles. The Global (fixed) $XYZ$ coordinate system is shown in green, the local (rotating) $xyz$ coordinate system is shown in blue. The nodal axis (N, shown in red) is defined by the intersection of the $YZ$ and $xy$ planes and all rotations can be described in relation to this nodal axis or to an axis perpendicular to it. Second row: starting from no rotation, the local coordinate system is rotated by $\alpha$ around the $x$ axis, then by $\beta$ around the rotated $y$ axis, and finally by $\gamma$ around the twice-rotated $z$ axis. Note that the line of nodes coincides with the $y$ axis for the second rotation. Determination of the Euler anglesOnce a convention is adopted, the corresponding three Euler angles of rotation can be found. For example, for the $\mathbf{R}_{xyz}$ rotation matrix:
###Code
R = euler_rotmat(order='xyz', frame='local')
###Output
_____no_output_____
###Markdown
The corresponding Cardan angles for the `xyz` sequence can be given by:\begin{equation}\begin{array}{}\alpha = \arctan\left(\dfrac{\sin(\alpha)}{\cos(\alpha)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{21}}{\;\;\;\mathbf{R}_{22}}\right) \\\\\beta = \arctan\left(\dfrac{\sin(\beta)}{\cos(\beta)}\right) = \arctan\left(\dfrac{\mathbf{R}_{20}}{\sqrt{\mathbf{R}_{00}^2+\mathbf{R}_{10}^2}}\right) \\ \\\gamma = \arctan\left(\dfrac{\sin(\gamma)}{\cos(\gamma)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{10}}{\;\;\;\mathbf{R}_{00}}\right)\end{array}\end{equation}Note that we prefer to use the mathematical function `arctan2` rather than simply `arcsin`, `arccos` or `arctan` because the latter cannot for example distinguish $45^o$ from $135^o$ and also for better numerical accuracy. See the text [Angular kinematics in a plane (2D)](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/KinematicsAngular2D.ipynb) for more on these issues.And here is a Python function to compute the Euler angles of rotations from the Global to the local coordinate system for the $xyz$ Cardan sequence:
###Code
def euler_angles_from_rot_xyz(rot_matrix, unit='deg'):
""" Compute Euler angles from rotation matrix in the xyz sequence."""
import numpy as np
R = np.array(rot_matrix, copy=False).astype(np.float64)[:3, :3]
angles = np.zeros(3)
angles[0] = np.arctan2(-R[2, 1], R[2, 2])
angles[1] = np.arctan2( R[2, 0], np.sqrt(R[0, 0]**2 + R[1, 0]**2))
angles[2] = np.arctan2(-R[1, 0], R[0, 0])
if unit[:3].lower() == 'deg': # convert from rad to degree
angles = np.rad2deg(angles)
return angles
###Output
_____no_output_____
###Markdown
For instance, consider sequential rotations of 45$^o$ around $x,y,z$. The resultant rotation matrix is:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[45, 45, 45], showA=False)
###Output
_____no_output_____
###Markdown
Let's check that calculating back the Cardan angles from this rotation matrix using the `euler_angles_from_rot_xyz()` function:
###Code
euler_angles_from_rot_xyz(Rn, unit='deg')
###Output
_____no_output_____
###Markdown
We could implement a function to calculate the Euler angles for any of the 12 sequences (in fact, plus another 12 sequences if we consider all the rotations from and to the two coordinate systems), but this is tedious. There is a smarter solution using the concept of [quaternion](http://en.wikipedia.org/wiki/Quaternion), but we will not see that now. Let's see a problem with using Euler angles known as gimbal lock. Gimbal lock[Gimbal lock](http://en.wikipedia.org/wiki/Gimbal_lock) is the loss of one degree of freedom in a three-dimensional coordinate system that occurs when an axis of rotation is placed parallel with another previous axis of rotation and two of the three rotations will be around the same direction given a certain convention of the Euler angles. This "locks" the system into rotations in a degenerate two-dimensional space. The system is not really locked in the sense it can't be moved or reach the other degree of freedom, but it will need an extra rotation for that. For instance, let's look at the $zxz$ sequence of rotations by the angles $\alpha, \beta, \gamma$:\begin{equation}\begin{array}{l l}\mathbf{R}_{zxz} & = \mathbf{R_{z}} \mathbf{R_{x}} \mathbf{R_{z}} \\ \\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\beta & \sin\beta \\0 & -\sin\beta & \cos\beta\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array}\end{equation}Which results in:
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rz = sym.Matrix([[cos(a), sin(a), 0], [-sin(a), cos(a), 0], [0, 0, 1]])
Rx = sym.Matrix([[1, 0, 0], [0, cos(b), sin(b)], [0, -sin(b), cos(b)]])
Rz2 = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix for the zxz sequence:
Rzxz = Rz2*Rx*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}=') + sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Let's examine what happens with this rotation matrix when the rotation around the second axis ($x$) by $\beta$ is zero:\begin{equation}\begin{array}{l l}\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma) = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array}\end{equation}The second matrix is the identity matrix and has no effect on the product of the matrices, which will be:
###Code
Rzxz = Rz2*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Which simplifies to:
###Code
Rzxz = sym.simplify(Rzxz)
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Despite different values of $\alpha$ and $\gamma$ the result is a single rotation around the $z$ axis given by the sum $\alpha+\gamma$. In this case, of the three degrees of freedom one was lost (the other degree of freedom was set by $\beta=0$). For movement analysis, this means for example that one angle will be undetermined because all we know is the sum of the two angles obtained from the rotation matrix. We can set the unknown angle to zero, but this is arbitrary. In fact, we already dealt with another example of gimbal lock when we looked at the $xyz$ sequence with rotations by $90^o$. See the figure representing these rotations again and notice that the first and third rotations were around the same axis because the second rotation was by $90^o$. Let's do the matrix multiplication replacing only the second angle by $90^o$ (and let's use the `euler_rotmat.py` function):
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[None, 90., None], showA=False)
###Output
_____no_output_____
###Markdown
Once again, one degree of freedom was lost and we will not be able to uniquely determine the three angles for the given rotation matrix and sequence. Possible solutions to avoid the gimbal lock are: choose a different sequence; do not rotate the system by the angle that puts the system in gimbal lock (in the examples above, avoid $\beta=90^o$); or add an extra fourth parameter in the description of the rotation angles. But if we have a physical system where we measure or specify exactly three Euler angles in a fixed sequence to describe or control it, and we can't prevent the system from assuming certain angles, then we might have to say "Houston, we have a problem". A famous situation where such a problem occurred was during the Apollo 13 mission. This is an actual conversation between crew and mission control during the Apollo 13 mission (Corke, 2011):>`Mission clock: 02 08 12 47` **Flight**: *Go, Guidance.* **Guido**: *He’s getting close to gimbal lock there.* **Flight**: *Roger. CapCom, recommend he bring up C3, C4, B3, B4, C1 and C2 thrusters, and advise he’s getting close to gimbal lock.* **CapCom**: *Roger.* *Of note, it was not a gimbal lock that caused the accident with the Apollo 13 mission, the problem was an oxygen tank explosion.* Determination of the rotation matrixA typical way to determine the rotation matrix for a rigid body in biomechanics is to use motion analysis to measure the position of at least three non-collinear markers placed on the rigid body, and then calculate a basis with these positions, analogous to what we have described in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb). BasisIf we have the position of three markers: **m1**, **m2**, **m3**, a basis (formed by three orthogonal versors) can be found as: - First axis, **v1**, the vector **m2-m1**; - Second axis, **v2**, the cross product between the vectors **v1** and **m3-m1**; - Third axis, **v3**, the cross product between the vectors **v1** and **v2**. Then, each of these vectors is normalized, resulting in three orthogonal versors. For example, given the positions m1 = [1,0,0], m2 = [0,1,0], m3 = [0,0,1], a basis can be found:
###Code
m1 = np.array([1, 0, 0])
m2 = np.array([0, 1, 0])
m3 = np.array([0, 0, 1])
v1 = m2 - m1
v2 = np.cross(v1, m3 - m1)
v3 = np.cross(v1, v2)
print('Versors:')
v1 = v1/np.linalg.norm(v1)
print('v1 =', v1)
v2 = v2/np.linalg.norm(v2)
print('v2 =', v2)
v3 = v3/np.linalg.norm(v3)
print('v3 =', v3)
print('\nNorm of each versor:\n',
np.linalg.norm(np.cross(v1, v2)),
np.linalg.norm(np.cross(v1, v3)),
np.linalg.norm(np.cross(v2, v3)))
###Output
Versors:
v1 = [-0.7071 0.7071 0. ]
v2 = [0.5774 0.5774 0.5774]
v3 = [ 0.4082 0.4082 -0.8165]
Norm of each versor:
1.0 1.0 1.0000000000000002
###Markdown
Remember from the text [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb) that the versors of this basis are the columns of the $\mathbf{R_{Gl}}$ and the rows of the $\mathbf{R_{lG}}$ rotation matrices, for instance:
###Code
RlG = np.array([v1, v2, v3])
print('Rotation matrix from Global to local coordinate system:\n', RlG)
###Output
Rotation matrix from Global to local coordinate system:
[[-0.7071 0.7071 0. ]
[ 0.5774 0.5774 0.5774]
[ 0.4082 0.4082 -0.8165]]
###Markdown
And the corresponding angles of rotation using the $xyz$ sequence are:
###Code
euler_angles_from_rot_xyz(RlG)
###Output
_____no_output_____
###Markdown
These angles don't mean anything now because they are angles of the axes of the arbitrary basis we computed. In biomechanics, if we want an anatomical interpretation of the coordinate system orientation, we define the versors of the basis oriented with anatomical axes (e.g., for the shoulder, one versor would be aligned with the long axis of the upper arm) as seen [in this notebook about reference frames](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb). Determination of the rotation matrix between two local coordinate systemsSimilarly to the [bidimensional case](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation2D.ipynb), to compute the rotation matrix between two local coordinate systems we can use the rotation matrices of both coordinate systems:\begin{equation} R_{l_1l_2} = R_{Gl_1}^TR_{Gl_2}\end{equation}After this, the Euler angles between both coordinate systems can be found using the `arctan2` function as shown previously. Translation and RotationConsider the case where the local coordinate system is translated and rotated in relation to the Global coordinate system as illustrated in the next figure. Figure. A point in three-dimensional space represented in two coordinate systems, with one system translated and rotated. The position of point $\mathbf{P}$ originally described in the local coordinate system, but now described in the Global coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} $$This means that we first *disrotate* the local coordinate system and then correct for the translation between the two coordinate systems. Note that we can't invert this order: the point position is expressed in the local coordinate system and we can't add this vector to another vector expressed in the Global coordinate system, first we have to convert the vectors to the same coordinate system.If now we want to find the position of a point at the local coordinate system given its position in the Global coordinate system, the rotation matrix and the translation vector, we have to invert the expression above:$$ \begin{array}{l l}\mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{R_{Gl}^{-1}}\cdot\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l}\right) \implies \\\\\mathbf{R_{Gl}^{-1}}\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\mathbf{L_G} + \mathbf{R_{Gl}^{-1}}\mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{P_l} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{P_G}-\mathbf{L_G}\right) = \mathbf{R_{Gl}^T}\left(\mathbf{P_G}-\mathbf{L_G}\right) \;\;\;\;\; \text{or} \;\;\;\;\; \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) \end{array} $$The expression above indicates that to perform the inverse operation, to go from the Global to the local coordinate system, we first translate and then rotate the coordinate system. Transformation matrixIt is possible to combine the translation and rotation operations in only one matrix, called the transformation matrix:\begin{equation}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} =\begin{bmatrix}. & . & . & \mathbf{L_{X}} \\. & \mathbf{R_{Gl}} & . & \mathbf{L_{Y}} \\. & . & . 
& \mathbf{L_{Z}} \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \\1\end{bmatrix}\end{equation}Or simply:$$ \mathbf{P_G} = \mathbf{T_{Gl}}\mathbf{P_l} $$Remember that in general the transformation matrix is not orthonormal, i.e., its inverse is not equal to its transpose.The inverse operation, to express the position at the local coordinate system in terms of the Global reference system, is:$$ \mathbf{P_l} = \mathbf{T_{Gl}^{-1}}\mathbf{P_G} $$And in matrix form:\begin{equation}\begin{bmatrix}\mathbf{P_x} \\\mathbf{P_y} \\\mathbf{P_z} \\1\end{bmatrix} =\begin{bmatrix}\cdot & \cdot & \cdot & \cdot \\\cdot & \mathbf{R^{-1}_{Gl}} & \cdot & -\mathbf{R^{-1}_{Gl}}\:\mathbf{L_G} \\\cdot & \cdot & \cdot & \cdot \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix}\end{equation} Example with actual motion analysis data *The data for this example is taken from page 183 of David Winter's book.* Consider the following marker positions placed on a leg (described in the laboratory coordinate system with coordinates $x, y, z$ in cm, the $x$ axis points forward and the $y$ axes points upward): lateral malleolus (**lm** = [2.92, 10.10, 18.85]), medial malleolus (**mm** = [2.71, 10.22, 26.52]), fibular head (**fh** = [5.05, 41.90, 15.41]), and medial condyle (**mc** = [8.29, 41.88, 26.52]). Define the ankle joint center as the centroid between the **lm** and **mm** markers and the knee joint center as the centroid between the **fh** and **mc** markers. An anatomical coordinate system for the leg can be defined as: the quasi-vertical axis ($y$) passes through the ankle and knee joint centers; a temporary medio-lateral axis ($z$) passes through the two markers on the malleolus, an anterior-posterior as the cross product between the two former calculated orthogonal axes, and the origin at the ankle joint center. a) Calculate the anatomical coordinate system for the leg as described above. b) Calculate the rotation matrix and the translation vector for the transformation from the anatomical to the laboratory coordinate system. c) Calculate the position of each marker and of each joint center at the anatomical coordinate system. d) Calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent).
###Code
# calculation of the joint centers
mm = np.array([2.71, 10.22, 26.52])
lm = np.array([2.92, 10.10, 18.85])
fh = np.array([5.05, 41.90, 15.41])
mc = np.array([8.29, 41.88, 26.52])
ajc = (mm + lm)/2
kjc = (fh + mc)/2
print('Position of the ankle joint center:', ajc)
print('Position of the knee joint center:', kjc)
# calculation of the anatomical coordinate system axes (basis)
y = kjc - ajc
x = np.cross(y, mm - lm)
z = np.cross(x, y)
print('Versors:')
x = x/np.linalg.norm(x)
y = y/np.linalg.norm(y)
z = z/np.linalg.norm(z)
print('x =', x)
print('y =', y)
print('z =', z)
Oleg = ajc
print('\nOrigin =', Oleg)
# Rotation matrices
RGl = np.array([x, y , z]).T
print('Rotation matrix from the anatomical to the laboratory coordinate system:\n', RGl)
RlG = RGl.T
print('\nRotation matrix from the laboratory to the anatomical coordinate system:\n', RlG)
# Translational vector
OG = np.array([0, 0, 0]) # Laboratory coordinate system origin
LG = Oleg - OG
print('Translational vector from the anatomical to the laboratory coordinate system:\n', LG)
###Output
Translational vector from the anatomical to the laboratory coordinate system:
[ 2.815 10.16 22.685]
###Markdown
To get the coordinates from the laboratory (global) coordinate system to the anatomical (local) coordinate system:$$ \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) $$
###Code
# position of each marker and of each joint center at the anatomical coordinate system
mml = np.dot(RlG, (mm - LG)) # equivalent to the algebraic expression RlG*(mm - LG).T
lml = np.dot(RlG, (lm - LG))
fhl = np.dot(RlG, (fh - LG))
mcl = np.dot(RlG, (mc - LG))
ajcl = np.dot(RlG, (ajc - LG))
kjcl = np.dot(RlG, (kjc - LG))
print('Coordinates of mm in the anatomical system:\n', mml)
print('Coordinates of lm in the anatomical system:\n', lml)
print('Coordinates of fh in the anatomical system:\n', fhl)
print('Coordinates of mc in the anatomical system:\n', mcl)
print('Coordinates of kjc in the anatomical system:\n', kjcl)
print('Coordinates of ajc in the anatomical system (origin):\n', ajcl)
###Output
_____no_output_____
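###Markdown
As a final sanity check of the transformation matrix presented above (this extra cell is ours, just an illustrative sketch), we can assemble the 4x4 matrix $\mathbf{T_{Gl}}$ from the rotation matrix `RGl` and the translation vector `LG` computed in the cells above and verify that it takes the local (anatomical) coordinates of a marker back to its laboratory coordinates:
###Code
# homogeneous transformation matrix T_Gl assembled from RGl and LG computed above:
TGl = np.eye(4)
TGl[:3, :3] = RGl
TGl[:3, 3] = LG
print('T_Gl =\n', TGl)
# transform the medial malleolus from the anatomical (local) to the laboratory system
# using homogeneous coordinates [P, 1] and compare with the original marker position:
mmG = TGl @ np.hstack((mml, 1))
print('mm recovered with T_Gl:', mmG[:3])
print('original mm:           ', mm)
###Output
_____no_output_____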
###Markdown
Further reading- Read pages 1136-1164 of the 21th chapter of the [Ruina and Rudra's book] (http://ruina.tam.cornell.edu/Book/index.html) about elementary introduction to 3D rigid-body dynamics. Video lectures on the Internet- Khan Academy: [Rotação em R3 ao redor do eixo x](https://pt.khanacademy.org/math/linear-algebra/matrix-transformations/lin-trans-examples/v/rotation-in-r3-around-the-x-axis)- [Sec. 10.9 - Euler Angles](https://www.youtube.com/watch?v=PLWfDgX9E6s)- [Tema 05 - Rotação | Aula 03 - Rotação em torno de um eixo](https://www.youtube.com/watch?v=oKQyBzDVwSU) Problems1. For the example about how the order of rotations of a rigid body affects the orientation shown in a figure above, deduce the rotation matrices for each of the 4 cases shown in the figure. For the first two cases, deduce the rotation matrices from the global to the local coordinate system and for the other two examples, deduce the rotation matrices from the local to the global coordinate system. 2. Consider the data from problem 7 in the notebook [Frame of reference](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb) where the following anatomical landmark positions are given (units in meters): RASIS=[0.5,0.8,0.4], LASIS=[0.55,0.78,0.1], RPSIS=[0.3,0.85,0.2], and LPSIS=[0.29,0.78,0.3]. Deduce the rotation matrices for the global to anatomical coordinate system and for the anatomical to global coordinate system. 3. For the data from the last example, calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent). References- Corke P (2011) [Robotics, Vision and Control: Fundamental Algorithms in MATLAB](http://www.petercorke.com/RVC/). Springer-Verlag Berlin. - Robertson G, Caldwell G, Hamill J, Kamen G (2013) [Research Methods in Biomechanics](http://books.google.com.br/books?id=gRn8AAAAQBAJ). 2nd Edition. Human Kinetics. - [Maths - Euler Angles](http://www.euclideanspace.com/maths/geometry/rotations/euler/). - Murray RM, Li Z, Sastry SS (1994) [A Mathematical Introduction to Robotic Manipulation](http://www.cds.caltech.edu/~murray/mlswiki/index.php/Main_Page). Boca Raton, CRC Press. - Ruina A, Rudra P (2013) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press. - Siciliano B, Sciavicco L, Villani L, Oriolo G (2009) [Robotics - Modelling, Planning and Control](http://books.google.com.br/books/about/Robotics.html?hl=pt-BR&id=jPCAFmE-logC). Springer-Verlag London.- Winter DA (2009) [Biomechanics and motor control of human movement](http://books.google.com.br/books?id=_bFHL08IWfwC). 4 ed. Hoboken, USA: Wiley. - Zatsiorsky VM (1997) [Kinematics of Human Motion](http://books.google.com.br/books/about/Kinematics_of_Human_Motion.html?id=Pql_xXdbrMcC&redir_esc=y). Champaign, Human Kinetics. Function `euler_rotmatrix.py`
###Code
# %load ./../functions/euler_rotmat.py
#!/usr/bin/env python
"""Euler rotation matrix given sequence, frame, and angles."""
from __future__ import division, print_function
__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'
__version__ = 'euler_rotmat.py v.1 2014/03/10'
def euler_rotmat(order='xyz', frame='local', angles=None, unit='deg',
str_symbols=None, showA=True, showN=True):
"""Euler rotation matrix given sequence, frame, and angles.
This function calculates the algebraic rotation matrix (3x3) for a given
sequence ('order' argument) of up to three elemental rotations of a given
coordinate system ('frame' argument) around another coordinate system, the
Euler (or Eulerian) angles [1]_.
This function also calculates the numerical values of the rotation matrix
when numerical values for the angles are input for each rotation axis.
Use None as value if the rotation angle for the particular axis is unknown.
The symbols for the angles are: alpha, beta, and gamma for the first,
second, and third rotations, respectively.
The matrix product is calculated from right to left and in the specified
sequence for the Euler angles. The first letter will be the first rotation.
The function will print and return the algebraic rotation matrix and the
numerical rotation matrix if angles were input.
Parameters
----------
order : string, optional (default = 'xyz')
Sequence for the Euler angles, any combination of the letters
x, y, and z with 1 to 3 letters is accepted to denote the
elemental rotations. The first letter will be the first rotation.
frame : string, optional (default = 'local')
Coordinate system for which the rotations are calculated.
Valid values are 'local' or 'global'.
angles : list, array, or bool, optional (default = None)
Numeric values of the rotation angles ordered as the 'order'
parameter. Enter None for a rotation with an unknown value.
unit : str, optional (default = 'deg')
Unit of the input angles.
str_symbols : list of strings, optional (default = None)
New symbols for the angles, for instance, ['theta', 'phi', 'psi']
showA : bool, optional (default = True)
True (1) displays the Algebraic rotation matrix in rich format.
False (0) to not display.
showN : bool, optional (default = True)
True (1) displays the Numeric rotation matrix in rich format.
False (0) to not display.
Returns
-------
R : Matrix Sympy object
Rotation matrix (3x3) in algebraic format.
Rn : Numpy array or Matrix Sympy object (only if angles are inputed)
Numeric rotation matrix (if values for all angles were input) or
an algebraic matrix with some of the algebraic angles substituted
by the corresponding input numeric values.
Notes
-----
This code uses Sympy, the Python library for symbolic mathematics, to
calculate the algebraic rotation matrix and shows this matrix in latex form
possibly for using with the IPython Notebook, see [1]_.
References
----------
.. [1] http://nbviewer.ipython.org/github/duartexyz/BMC/blob/master/Transformation3D.ipynb
Examples
--------
>>> # import function
>>> from euler_rotmat import euler_rotmat
>>> # Default options: xyz sequence, local frame and show matrix
>>> R = euler_rotmat()
>>> # XYZ sequence (around global (fixed) coordinate system)
>>> R = euler_rotmat(frame='global')
>>> # Enter numeric values for all angles and show both matrices
>>> R, Rn = euler_rotmat(angles=[90, 90, 90])
>>> # show what is returned
>>> euler_rotmat(angles=[90, 90, 90])
>>> # show only the rotation matrix for the elemental rotation at x axis
>>> R = euler_rotmat(order='x')
>>> # zxz sequence and numeric value for only one angle
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, 0, None])
>>> # input values in radians:
>>> import numpy as np
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, np.pi, None], unit='rad')
>>> # shows only the numeric matrix
>>> R, Rn = euler_rotmat(order='zxz', angles=[90, 0, None], showA='False')
>>> # Change the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['theta', 'phi', 'psi'])
>>> # Negativate the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['-theta', '-phi', '-psi'])
>>> # all algebraic matrices for all possible sequences for the local frame
>>> s=['xyz','xzy','yzx','yxz','zxy','zyx','xyx','xzx','yzy','yxy','zxz','zyz']
>>> for seq in s: R = euler_rotmat(order=seq)
>>> # all algebraic matrices for all possible sequences for the global frame
>>> for seq in s: R = euler_rotmat(order=seq, frame='global')
"""
import numpy as np
import sympy as sym
try:
from IPython.core.display import Math, display
ipython = True
except:
ipython = False
angles = np.asarray(np.atleast_1d(angles), dtype=np.float64)
if ~np.isnan(angles).all():
if len(order) != angles.size:
raise ValueError("Parameters 'order' and 'angles' (when " +
"different from None) must have the same size.")
x, y, z = sym.symbols('x, y, z')
sig = [1, 1, 1]
if str_symbols is None:
a, b, g = sym.symbols('alpha, beta, gamma')
else:
s = str_symbols
if s[0][0] == '-': s[0] = s[0][1:]; sig[0] = -1
if s[1][0] == '-': s[1] = s[1][1:]; sig[1] = -1
if s[2][0] == '-': s[2] = s[2][1:]; sig[2] = -1
a, b, g = sym.symbols(s)
var = {'x': x, 'y': y, 'z': z, 0: a, 1: b, 2: g}
# Elemental rotation matrices for xyz (local)
cos, sin = sym.cos, sym.sin
Rx = sym.Matrix([[1, 0, 0], [0, cos(x), sin(x)], [0, -sin(x), cos(x)]])
Ry = sym.Matrix([[cos(y), 0, -sin(y)], [0, 1, 0], [sin(y), 0, cos(y)]])
Rz = sym.Matrix([[cos(z), sin(z), 0], [-sin(z), cos(z), 0], [0, 0, 1]])
if frame.lower() == 'global':
Rs = {'x': Rx.T, 'y': Ry.T, 'z': Rz.T}
order = order.upper()
else:
Rs = {'x': Rx, 'y': Ry, 'z': Rz}
order = order.lower()
R = Rn = sym.Matrix(sym.Identity(3))
str1 = r'\mathbf{R}_{%s}( ' %frame # last space needed for order=''
#str2 = [r'\%s'%var[0], r'\%s'%var[1], r'\%s'%var[2]]
str2 = [1, 1, 1]
for i in range(len(order)):
Ri = Rs[order[i].lower()].subs(var[order[i].lower()], sig[i] * var[i])
R = Ri * R
if sig[i] > 0:
str2[i] = '%s:%s' %(order[i], sym.latex(var[i]))
else:
str2[i] = '%s:-%s' %(order[i], sym.latex(var[i]))
str1 = str1 + str2[i] + ','
if ~np.isnan(angles).all() and ~np.isnan(angles[i]):
if unit[:3].lower() == 'deg':
angles[i] = np.deg2rad(angles[i])
Rn = Ri.subs(var[i], angles[i]) * Rn
#Rn = sym.lambdify(var[i], Ri, 'numpy')(angles[i]) * Rn
str2[i] = str2[i] + '=%.0f^o' %np.around(np.rad2deg(angles[i]), 0)
else:
Rn = Ri * Rn
Rn = sym.simplify(Rn) # for trigonometric relations
try:
# nsimplify only works if there are symbols
Rn2 = sym.latex(sym.nsimplify(Rn, tolerance=1e-8).n(chop=True, prec=4))
except:
Rn2 = sym.latex(Rn.n(chop=True, prec=4))
# there are no symbols, pass it as Numpy array
Rn = np.asarray(Rn)
if showA and ipython:
display(Math(str1[:-1] + ') =' + sym.latex(R, mat_str='matrix')))
if showN and ~np.isnan(angles).all() and ipython:
str2 = ',\;'.join(str2[:angles.size])
display(Math(r'\mathbf{R}_{%s}(%s)=%s' %(frame, str2, Rn2)))
if np.isnan(angles).all():
return R
else:
return R, Rn
###Output
_____no_output_____
###Markdown
Rigid-body transformations in three-dimensions> Marcos Duarte, Renato Naville Watanabe > Laboratory of Biomechanics and Motor Control ([http://pesquisa.ufabc.edu.br/bmclab](http://pesquisa.ufabc.edu.br/bmclab)) > Federal University of ABC, Brazil The kinematics of a rigid body is completely described by its pose, i.e., its position and orientation in space (and the corresponding changes, translation and rotation). In a three-dimensional space, at least three coordinates and three angles are necessary to describe the pose of the rigid body, totalizing six degrees of freedom for a rigid body.In motion analysis, to describe a translation and rotation of a rigid body with respect to a coordinate system, typically we attach another coordinate system to the rigid body and determine a transformation between these two coordinate systems.A transformation is any function mapping a set to another set. For the description of the kinematics of rigid bodies, we are interested only in what is called rigid or Euclidean transformations (denoted as SE(3) for the three-dimensional space) because they preserve the distance between every pair of points of the body (which is considered rigid by definition). Translations and rotations are examples of rigid transformations (a reflection is also an example of rigid transformation but this changes the right-hand axis convention to a left hand, which usually is not of interest). In turn, rigid transformations are examples of [affine transformations](https://en.wikipedia.org/wiki/Affine_transformation). Examples of other affine transformations are shear and scaling transformations (which preserves angles but not lengths). We will follow the same rationale as in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb) and we will skip the fundamental concepts already covered there. So, you if haven't done yet, you should read that notebook before continuing here. TranslationA pure three-dimensional translation of a rigid body (or a coordinate system attached to it) in relation to other rigid body (with other coordinate system) is illustrated in the figure below. Figure. A point in three-dimensional space represented in two coordinate systems, with one coordinate system translated. The position of point $\mathbf{P}$ originally described in the $xyz$ (local) coordinate system but now described in the $\mathbf{XYZ}$ (Global) coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{P_l} $$Or in terms of its components:\begin{equation}\begin{array}{}\mathbf{P_X} =& \mathbf{L_X} + \mathbf{P}_x \\\mathbf{P_Y} =& \mathbf{L_Y} + \mathbf{P}_y \\\mathbf{P_Z} =& \mathbf{L_Z} + \mathbf{P}_z \end{array}\end{equation}And in matrix form:\begin{equation}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \end{bmatrix} =\begin{bmatrix}\mathbf{L_X} \\\mathbf{L_Y} \\\mathbf{L_Z} \end{bmatrix} +\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \end{bmatrix}\end{equation}From classical mechanics, this is an example of [Galilean transformation](http://en.wikipedia.org/wiki/Galilean_transformation). Let's use Python to compute some numeric examples:
###Code
# Import the necessary libraries
import numpy as np
# suppress scientific notation for small numbers:
np.set_printoptions(precision=4, suppress=True)
###Output
_____no_output_____
###Markdown
For example, if the local coordinate system is translated by $\mathbf{L_G}=[1, 2, 3]$ in relation to the Global coordinate system, a point with coordinates $\mathbf{P_l}=[4, 5, 6]$ at the local coordinate system will have the position $\mathbf{P_G}=[5, 7, 9]$ at the Global coordinate system:
###Code
LG = np.array([1, 2, 3]) # Numpy array
Pl = np.array([4, 5, 6])
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
This operation also works if we have more than one point (NumPy tries to figure out how to handle arrays with different shapes, a mechanism known as broadcasting):
###Code
Pl = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) # 2D array with 3 rows and 3 columns
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
###Code
#import the necessary libraries
from IPython.core.display import Math, display
import sympy as sym
cos, sin = sym.cos, sym.sin
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz in relation to XYZ:
RX = sym.Matrix([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]])
RY = sym.Matrix([[cos(b), 0, sin(b)], [0, 1, 0], [-sin(b), 0, cos(b)]])
RZ = sym.Matrix([[cos(g), -sin(g), 0], [sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz in relation to XYZ:
RXYZ = RZ*RY*RX
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ}}=') + sym.latex(RXYZ, mat_str='matrix')))
###Output
_____no_output_____
###Markdown
For instance, we can calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $\mathbf{X,Y,Z}$:
###Code
R = sym.lambdify((a, b, g), RXYZ, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R_{Gl,\,XYZ\,}}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Below you can test any sequence of rotations around the Global coordinates. Just change the matrix R and the angles of the variables $\alpha$, $\beta$ and $\gamma$. The example below shows the rotation around the Global basis, in the sequence x,y,z, with the angles $\alpha=\pi/3$ rad, $\beta=\pi/4$ rad and $\gamma=\pi/6$ rad.
###Code
import sys
sys.path.insert(1, r'./../functions') # add to pythonpath
%matplotlib notebook
from CCSbasis import CCSbasis
R = RZ*RY*RX
R = sym.lambdify((a, b, g), R, 'numpy')
alpha = np.pi/3
beta = np.pi/4
gamma = np.pi/6
R = R(alpha, beta, gamma)
e1 = np.array([[1,0,0]])
e2 = np.array([[0,1,0]])
e3 = np.array([[0,0,1]])
basis = np.vstack((e1,e2,e3))
basisRot = R@basis
CCSbasis(Oijk=np.array([0,0,0]), Oxyz=np.array([0,0,0]), ijk=basis.T, xyz=basisRot.T, vector=False)
###Output
_____no_output_____
###Markdown
Examining the matrix above and the correspondent previous figure, one can see they agree: the rotated $x$ axis (first column of the above matrix) has value -1 in the $\mathbf{Z}$ direction $[0,0,-1]$, the rotated $y$ axis (second column) is at the $\mathbf{Y}$ direction $[0,1,0]$, and the rotated $z$ axis (third column) is at the $\mathbf{X}$ direction $[1,0,0]$.We also can calculate the sequence of elemental rotations around the $x$, $y$, $z$ axes of the rotating $xyz$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of a second $xyz$ local coordinate system around each axis, $x$, $y$, $z$, of the rotating $xyz$ coordinate system.Likewise, this sequence of elemental rotations (each one of the local coordinate system with respect to the rotating local coordinate system) is mathematically represented by a multiplication between the rotation matrices (which are the inverse of the matrices for the rotations around $\mathbf{X,Y,Z}$ as we saw earlier):\begin{equation}\begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz} & = \mathbf{R_{z}} \mathbf{R_{y}} \mathbf{R_{x}} \\\\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix} \\\\& =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma+\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:\cos\gamma-\sin\alpha\:\sin\gamma \;\;\; \\-\cos\beta\:\sin\gamma \;&\;-\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma+\sin\alpha\:\cos\gamma \;\;\; \\\sin\beta \;&\; -\sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array}\end{equation}As before, the order of the matrices is from right to left. Once again, we can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rx = sym.Matrix([[1, 0, 0], [0, cos(a), sin(a)], [0, -sin(a), cos(a)]])
Ry = sym.Matrix([[cos(b), 0, -sin(b)], [0, 1, 0], [sin(b), 0, cos(b)]])
Rz = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix of xyz' in relation to xyz:
Rxyz = Rz*Ry*Rx
Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz}=') + sym.latex(Rxyz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
For instance, let's calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $x,y,z$:
###Code
R = sym.lambdify((a, b, g), Rxyz, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Once again, let's compare the above matrix and the correspondent previous figure to see if it makes sense. But remember that this matrix is the Global-to-local rotation matrix, $\mathbf{R}_{\mathbf{lG},\,xyz}$, where the coordinates of the local basis' versors are rows, not columns, in this matrix. With this detail in mind, one can see that the previous figure and matrix also agree: the rotated $x$ axis (first row of the above matrix) is at the $\mathbf{Z}$ direction $[0,0,1]$, the rotated $y$ axis (second row) is at the $\mathbf{-Y}$ direction $[0,-1,0]$, and the rotated $z$ axis (third row) is at the $\mathbf{X}$ direction $[1,0,0]$.In fact, this example didn't serve to distinguish versors as rows or columns because the $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrix above is symmetric! Let's look on the resultant matrix for the example above after only the first two rotations, $\mathbf{R}_{\mathbf{lG},\,xy}$ to understand this difference:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xy\,}(90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(chop=True, prec=3))))
###Output
_____no_output_____
###Markdown
Comparing this matrix with the third plot in the figure, we see that the coordinates of versor $x$ in the Global coordinate system are $[0,1,0]$, i.e., local axis $x$ is aligned with Global axis $Y$, and this versor is indeed the first row, not the first column, of the matrix above. Confer the other two rows. What, then, is in the columns of the local-to-Global rotation matrix? The columns are the coordinates of the Global basis' versors in the local coordinate system! For example, the first column of the matrix above contains the coordinates of $X$, which is aligned with $z$: $[0,0,1]$. Below you can test any sequence of rotations around the local coordinates. Just change the matrix R and the angles of the variables $\alpha$, $\beta$ and $\gamma$. The example below shows the rotation around the local basis, in the sequence x,y,z, with the angles $\alpha=\pi/3$ rad, $\beta=\pi/4$ rad and $\gamma=\pi/6$ rad.
###Code
sys.path.insert(1, r'./../functions') # add to pythonpath
%matplotlib notebook
from CCSbasis import CCSbasis
R = Rz*Ry*Rx
R = sym.lambdify((a, b, g), R, 'numpy')
alpha = np.pi/3
beta = np.pi/4
gamma = np.pi/6
R = R(alpha, beta, gamma)
e1 = np.array([[1,0,0]])
e2 = np.array([[0,1,0]])
e3 = np.array([[0,0,1]])
basis = np.vstack((e1,e2,e3))
basisRot = R@basis
CCSbasis(Oijk=np.array([0,0,0]), Oxyz=np.array([0,0,0]), ijk=basisRot.T, xyz=basis.T, vector=False)
###Output
_____no_output_____
###Markdown
Rotations in a coordinate system is equivalent to minus rotations in the other coordinate systemRemember that we saw for the elemental rotations that it's equivalent to instead of rotating the local coordinate system, $xyz$, by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system, $\mathbf{XYZ}$, by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system. The same property applies to a sequence of rotations: rotations of $xyz$ in relation to $\mathbf{XYZ}$ by $\alpha, \beta, \gamma$ result in the same matrix as rotations of $\mathbf{XYZ}$ in relation to $xyz$ by $-\alpha, -\beta, -\gamma$: $$ \begin{array}{l l}\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) & = \mathbf{R_{Gl,\,Z}}(\gamma)\, \mathbf{R_{Gl,\,Y}}(\beta)\, \mathbf{R_{Gl,\,X}}(\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,z\,}(-\gamma)\, \mathbf{R}_{\mathbf{lG},\,y\,}(-\beta)\, \mathbf{R}_{\mathbf{lG},\,x\,}(-\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)\end{array}$$Confer that by examining the $\mathbf{R_{Gl,\,XYZ}}$ and $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrices above.Let's verify this property with Sympy:
###Code
RXYZ = RZ*RY*RX
# Rotation matrix of xyz in relation to XYZ:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) =')))
display(Math(sym.latex(RXYZ, mat_str='matrix')))
# Elemental rotation matrices of XYZ in relation to xyz and negate all angles:
Rx_neg = sym.Matrix([[1, 0, 0], [0, cos(-a), -sin(-a)], [0, sin(-a), cos(-a)]]).T
Ry_neg = sym.Matrix([[cos(-b), 0, sin(-b)], [0, 1, 0], [-sin(-b), 0, cos(-b)]]).T
Rz_neg = sym.Matrix([[cos(-g), -sin(-g), 0], [sin(-g), cos(-g), 0], [0, 0, 1]]).T
# Rotation matrix of XYZ in relation to xyz:
Rxyz_neg = Rz_neg*Ry_neg*Rx_neg
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma) =')))
display(Math(sym.latex(Rxyz_neg, mat_str='matrix')))
# Check that the two matrices are equal:
display(Math(sym.latex(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) \;==\;' + \
r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)')))
RXYZ == Rxyz_neg
###Output
_____no_output_____
###Markdown
Rotations in a coordinate system is the transpose of inverse order of rotations in the other coordinate systemThere is another property of the rotation matrices for the different coordinate systems: the rotation matrix, for example from the Global to the local coordinate system for the $xyz$ sequence, is just the transpose of the rotation matrix for the inverse operation (from the local to the Global coordinate system) of the inverse sequence ($\mathbf{ZYX}$) and vice-versa:$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz}(\alpha,\beta,\gamma) & = \mathbf{R}_{\mathbf{lG},\,z\,} \mathbf{R}_{\mathbf{lG},\,y\,} \mathbf{R}_{\mathbf{lG},\,x} \\& = \mathbf{R_{Gl,\,Z\,}^{-1}} \mathbf{R_{Gl,\,Y\,}^{-1}} \mathbf{R_{Gl,\,X\,}^{-1}} \\& = \mathbf{R_{Gl,\,Z\,}^{T}} \mathbf{R_{Gl,\,Y\,}^{T}} \mathbf{R_{Gl,\,X\,}^{T}} \\& = (\mathbf{R_{Gl,\,X\,}} \mathbf{R_{Gl,\,Y\,}} \mathbf{R_{Gl,\,Z}})^\mathbf{T} \\& = \mathbf{R_{Gl,\,ZYX\,}^{T}}(\gamma,\beta,\alpha)\end{array}$$Where we used the properties that the inverse of the rotation matrix (which is orthonormal) is its transpose and that the transpose of a product of matrices is equal to the product of their transposes in reverse order.Let's verify this property with Sympy:
###Code
RZYX = RX*RY*RZ
Rxyz = Rz*Ry*Rx
display(Math(sym.latex(r'\mathbf{R_{Gl,\,ZYX\,}^T}=') + sym.latex(RZYX.T, mat_str='matrix')))
display(Math(sym.latex(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(\alpha,\beta,\gamma) \,==\,' + \
r'\mathbf{R_{Gl,\,ZYX\,}^T}(\gamma,\beta,\alpha)')))
Rxyz == RZYX.T
###Output
_____no_output_____
###Markdown
Sequence of rotations of a VectorWe saw in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynbRotation-of-a-Vector) that the rotation matrix can also be used to rotate a vector (in fact, a point, image, solid, etc.) by a given angle around an axis of the coordinate system. Let's investigate that for the 3D case using the example earlier where a book was rotated in different orders and around the Global and local coordinate systems. Before any rotation, the point shown in that figure as a round black dot on the spine of the book has coordinates $\mathbf{P}=[0, 1, 2]$ (the book has thickness 0, width 1, and height 2). After the first sequence of rotations shown in the figure (rotated around $X$ and $Y$ by $90^0$ each time), $\mathbf{P}$ has coordinates $\mathbf{P}=[1, -2, 0]$ in the global coordinate system. Let's verify that:
###Code
P = np.array([[0, 1, 2]]).T
RXY = RY*RX
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. -2. 0.]]
###Markdown
As expected. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, but still around the Global coordinate system.Although we are performing vector rotation, where we don't need the concept of transformation between coordinate systems, in the example above we used the local-to-Global rotation matrix, $\mathbf{R_{Gl}}$. As we saw in the notebook for the 2D transformation, when we use this matrix, it performs a counter-clockwise (positive) rotation. If we want to rotate the vector in the clockwise (negative) direction, we can use the very same rotation matrix entering a negative angle or we can use the inverse rotation matrix, the Global-to-local rotation matrix, $\mathbf{R_{lG}}$ and a positive (negative of negative) angle, because $\mathbf{R_{Gl}}(\alpha) = \mathbf{R_{lG}}(-\alpha)$, but bear in mind that even in this latter case we are rotating around the Global coordinate system! Consider now that we want to deduce algebraically the position of the point $\mathbf{P}$ after the rotations around the local coordinate system as shown in the second set of examples in the figure with the sequence of book rotations. The point has the same initial position, $\mathbf{P}=[0, 1, 2]$, and after the rotations around $x$ and $y$ by $90^0$ each time, what is the position of this point? It's implicit in this question that the new desired position is in the Global coordinate system because the local coordinate system rotates with the book and the point never changes its position in the local coordinate system. So, by inspection of the figure, the new position of the point is $\mathbf{P1}=[2, 0, 1]$. Let's naively try to deduce this position by repeating the steps as before:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. 2. -0.]]
###Markdown
The wrong answer. The problem is that we defined the rotation of a vector using the local-to-Global rotation matrix. One correct solution for this problem is to continue using the multiplication of the Global-to-local rotation matrices, $\mathbf{R}_{xy} = \mathbf{R}_y\,\mathbf{R}_x$, then transpose $\mathbf{R}_{xy}$ to get the local-to-Global rotation matrix, $\mathbf{R_{XY}}=\mathbf{R^T}_{xy}$, and then rotate the vector using this matrix:
###Code
Rxy = Ry*Rx
RXY = Rxy.T
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. Another solution is to understand that when using the Global-to-local rotation matrix, counter-clockwise rotations (as performed with the book in the figure) are negative, not positive, and that when dealing with rotations with the Global-to-local rotation matrix the order of matrix multiplication is inverted; for example, it should be $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$ (an added underscore to remind us this is not the convention adopted here).
###Code
R_xy = Rx*Ry
R = sym.lambdify((a, b), R_xy, 'numpy')
R = R(-np.pi/2, -np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, around the local coordinate system.In fact, you will find elsewhere texts about rotations in 3D adopting this latter convention as the standard, i.e., they introduce the Global-to-local rotation matrix and describe sequence of rotations algebraically as matrix multiplication in the direct order, $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$, the inverse we have done in this text. It's all a matter of convention, just that. The 12 different sequences of Euler anglesThe Euler angles are defined in terms of rotations around a rotating local coordinate system. As we saw for the sequence of rotations around $x, y, z$, the axes of the local rotated coordinate system are not fixed in space because after the first elemental rotation, the other two axes rotate. Other sequences of rotations could be produced without combining axes of the two different coordinate systems (Global and local) for the definition of the rotation axes. There is a total of 12 different sequences of three elemental rotations that are valid and may be used for describing the rotation of a coordinate system with respect to another coordinate system:$$ xyz \quad xzy \quad yzx \quad yxz \quad zxy \quad zyx $$$$ xyx \quad xzx \quad yzy \quad yxy \quad zxz \quad zyz $$The first six sequences (first row) are all around different axes, they are usually referred as Cardan or Tait–Bryan angles. The other six sequences (second row) have the first and third rotations around the same axis, but keep in mind that the axis for the third rotation is not at the same place anymore because it changed its orientation after the second rotation. The sequences with repeated axes are known as proper or classic Euler angles.Which order to use it is a matter of convention, but because the order affects the results, it's fundamental to follow a convention and report it. In Engineering Mechanics (including Biomechanics), the $xyz$ order is more common; in Physics the $zxz$ order is more common (but the letters chosen to refer to the axes are arbitrary, what matters is the directions they represent). In Biomechanics, the order for the Cardan angles is most often based on the angle of most interest or of most reliable measurement. Accordingly, the axis of flexion/extension is typically selected as the first axis, the axis for abduction/adduction is the second, and the axis for internal/external rotation is the last one. We will see about this order later. The $zyx$ order is commonly used to describe the orientation of a ship or aircraft and the rotations are known as the nautical angles: yaw, pitch and roll, respectively (see next figure). Figure. The principal axes of an aircraft and the names for the rotations around these axes (image from Wikipedia). If instead of rotations around the rotating local coordinate system we perform rotations around the fixed Global coordinate system, we will have other 12 different sequences of three elemental rotations, these are called simply rotation angles. 
So, in total there are 24 possible different sequences of three elemental rotations, but the 24 orders are not independent; with the 12 different sequences of Euler angles at the local coordinate system we can obtain the other 12 sequences at the Global coordinate system.The Python function `euler_rotmat.py` (code at the end of this text) determines the rotation matrix in algebraic form for any of the 24 different sequences (and sequences with only one or two axes can be inputed). This function also determines the rotation matrix in numeric form if a list of up to three angles are inputed.For instance, the rotation matrix in algebraic form for the $zxz$ order of Euler angles at the local coordinate system and the correspondent rotation matrix in numeric form after three elemental rotations by $90^o$ each are:
###Code
from euler_rotmat import euler_rotmat
Ra, Rn = euler_rotmat(order='zxz', frame='local', angles=[90, 90, 90])
###Output
_____no_output_____
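###Markdown
As another illustration, the nautical $zyx$ sequence (yaw, pitch and roll) mentioned earlier can be computed with the same function; the angle values below are arbitrary, chosen only for this sketch:
###Code
# hypothetical yaw, pitch and roll angles (in degrees) for the local zyx sequence:
Ra, Rn = euler_rotmat(order='zyx', frame='local', angles=[30, 10, 20])
###Output
_____no_output_____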
###Markdown
Line of nodesThe second axis of rotation in the rotating coordinate system is also referred as the nodal axis or line of nodes; this axis coincides with the intersection of two perpendicular planes, one from each Global (fixed) and local (rotating) coordinate systems. The figure below shows an example of rotations and the nodal axis for the $xyz$ sequence of the Cardan angles. Figure. First row: example of rotations for the $xyz$ sequence of the Cardan angles. The Global (fixed) $XYZ$ coordinate system is shown in green, the local (rotating) $xyz$ coordinate system is shown in blue. The nodal axis (N, shown in red) is defined by the intersection of the $YZ$ and $xy$ planes and all rotations can be described in relation to this nodal axis or to a perpendicular axis to it. Second row: starting from no rotation, the local coordinate system is rotated by $\alpha$ around the $x$ axis, then by $\beta$ around the rotated $y$ axis, and finally by $\gamma$ around the twice rotated $z$ axis. Note that the line of nodes coincides with the $y$ axis for the second rotation. Determination of the Euler anglesOnce a convention is adopted, the corresponding three Euler angles of rotation can be found. For example, for the $\mathbf{R}_{xyz}$ rotation matrix:
###Code
R = euler_rotmat(order='xyz', frame='local')
###Output
_____no_output_____
###Markdown
The corresponding Cardan angles for the `xyz` sequence can be given by:\begin{equation}\begin{array}{}\alpha = \arctan\left(\dfrac{\sin(\alpha)}{\cos(\alpha)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{21}}{\;\;\;\mathbf{R}_{22}}\right) \\\\\beta = \arctan\left(\dfrac{\sin(\beta)}{\cos(\beta)}\right) = \arctan\left(\dfrac{\mathbf{R}_{20}}{\sqrt{\mathbf{R}_{00}^2+\mathbf{R}_{10}^2}}\right) \\ \\\gamma = \arctan\left(\dfrac{\sin(\gamma)}{\cos(\gamma)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{10}}{\;\;\;\mathbf{R}_{00}}\right)\end{array}\end{equation}Note that we prefer to use the mathematical function `arctan2` rather than simply `arcsin`, `arccos` or `arctan` because the latter cannot for example distinguish $45^o$ from $135^o$ and also for better numerical accuracy. See the text [Angular kinematics in a plane (2D)](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/KinematicsAngular2D.ipynb) for more on these issues.And here is a Python function to compute the Euler angles of rotations from the Global to the local coordinate system for the $xyz$ Cardan sequence:
###Code
def euler_angles_from_rot_xyz(rot_matrix, unit='deg'):
""" Compute Euler angles from rotation matrix in the xyz sequence."""
import numpy as np
R = np.array(rot_matrix, copy=False).astype(np.float64)[:3, :3]
angles = np.zeros(3)
angles[0] = np.arctan2(-R[2, 1], R[2, 2])
angles[1] = np.arctan2( R[2, 0], np.sqrt(R[0, 0]**2 + R[1, 0]**2))
angles[2] = np.arctan2(-R[1, 0], R[0, 0])
if unit[:3].lower() == 'deg': # convert from rad to degree
angles = np.rad2deg(angles)
return angles
###Output
_____no_output_____
###Markdown
For instance, consider sequential rotations of 45$^o$ around $x,y,z$. The resultant rotation matrix is:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[45, 45, 45], showA=False)
###Output
_____no_output_____
###Markdown
Let's check that calculating back the Cardan angles from this rotation matrix using the `euler_angles_from_rot_xyz()` function:
###Code
euler_angles_from_rot_xyz(Rn, unit='deg')
###Output
_____no_output_____
###Markdown
We could implement a function to calculate the Euler angles for any of the 12 sequences (in fact, plus another 12 sequences if we consider all the rotations from and to the two coordinate systems), but this is tedious. There is a smarter solution using the concept of [quaternion](http://en.wikipedia.org/wiki/Quaternion), but we will not see that now. Let's see a problem with using Euler angles known as gimbal lock. Gimbal lock[Gimbal lock](http://en.wikipedia.org/wiki/Gimbal_lock) is the loss of one degree of freedom in a three-dimensional coordinate system that occurs when an axis of rotation is placed parallel with another previous axis of rotation and two of the three rotations will be around the same direction given a certain convention of the Euler angles. This "locks" the system into rotations in a degenerate two-dimensional space. The system is not really locked in the sense it can't be moved or reach the other degree of freedom, but it will need an extra rotation for that. For instance, let's look at the $zxz$ sequence of rotations by the angles $\alpha, \beta, \gamma$:\begin{equation}\begin{array}{l l}\mathbf{R}_{zxz} & = \mathbf{R_{z}} \mathbf{R_{x}} \mathbf{R_{z}} \\ \\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\beta & \sin\beta \\0 & -\sin\beta & \cos\beta\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array}\end{equation}Which results in:
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rz = sym.Matrix([[cos(a), sin(a), 0], [-sin(a), cos(a), 0], [0, 0, 1]])
Rx = sym.Matrix([[1, 0, 0], [0, cos(b), sin(b)], [0, -sin(b), cos(b)]])
Rz2 = sym.Matrix([[cos(g), sin(g), 0], [-sin(g), cos(g), 0], [0, 0, 1]])
# Rotation matrix for the zxz sequence:
Rzxz = Rz2*Rx*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}=') + sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Let's examine what happens with this rotation matrix when the rotation around the second axis ($x$) by $\beta$ is zero:\begin{equation}\begin{array}{l l}\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma) = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array}\end{equation}The second matrix is the identity matrix and has no effect on the product of the matrices, which will be:
###Code
Rzxz = Rz2*Rz
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Which simplifies to:
###Code
Rzxz = sym.simplify(Rzxz)
Math(sym.latex(r'\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma)=') + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Despite different values of $\alpha$ and $\gamma$ the result is a single rotation around the $z$ axis given by the sum $\alpha+\gamma$. In this case, of the three degrees of freedom one was lost (the other degree of freedom was set by $\beta=0$). For movement analysis, this means for example that one angle will be undetermined because everything we know is the sum of the two angles obtained from the rotation matrix. We can set the unknown angle to zero but this is arbitrary.In fact, we already dealt with another example of gimbal lock when we looked at the $xyz$ sequence with rotations by $90^o$. See the figure representing these rotations again and perceive that the first and third rotations were around the same axis because the second rotation was by $90^o$. Let's do the matrix multiplication replacing only the second angle by $90^o$ (and let's use the `euler_rotmat.py`:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local', angles=[None, 90., None], showA=False)
###Output
_____no_output_____
###Markdown
Once again, one degree of freedom was lost and we will not be able to uniquely determine the three angles for the given rotation matrix and sequence.Possible solutions to avoid the gimbal lock are: choose a different sequence; do not rotate the system by the angle that puts the system in gimbal lock (in the examples above, avoid $\beta=90^o$); or add an extra fourth parameter in the description of the rotation angles. But if we have a physical system where we measure or specify exactly three Euler angles in a fixed sequence to describe or control it, and we can't avoid the system to assume certain angles, then we might have to say "Houston, we have a problem". A famous situation where such a problem occurred was during the Apollo 13 mission. This is an actual conversation between crew and mission control during the Apollo 13 mission (Corke, 2011):>`Mission clock: 02 08 12 47` **Flight**: *Go, Guidance.* **Guido**: *He’s getting close to gimbal lock there.* **Flight**: *Roger. CapCom, recommend he bring up C3, C4, B3, B4, C1 and C2 thrusters, and advise he’s getting close to gimbal lock.* **CapCom**: *Roger.* *Of note, it was not a gimbal lock that caused the accident with the the Apollo 13 mission, the problem was an oxygen tank explosion.* Determination of the rotation matrixA typical way to determine the rotation matrix for a rigid body in biomechanics is to use motion analysis to measure the position of at least three non-collinear markers placed on the rigid body, and then calculate a basis with these positions, analogue to what we have described in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb). BasisIf we have the position of three markers: **m1**, **m2**, **m3**, a basis (formed by three orthogonal versors) can be found as: - First axis, **v1**, the vector **m2-m1**; - Second axis, **v2**, the cross product between the vectors **v1** and **m3-m1**; - Third axis, **v3**, the cross product between the vectors **v1** and **v2**. Then, each of these vectors are normalized resulting in three orthogonal versors. For example, given the positions m1 = [1,0,0], m2 = [0,1,0], m3 = [0,0,1], a basis can be found:
###Code
m1 = np.array([1, 0, 0])
m2 = np.array([0, 1, 0])
m3 = np.array([0, 0, 1])
v1 = m2 - m1
v2 = np.cross(v1, m3 - m1)
v3 = np.cross(v1, v2)
print('Versors:')
v1 = v1/np.linalg.norm(v1)
print('v1 =', v1)
v2 = v2/np.linalg.norm(v2)
print('v2 =', v2)
v3 = v3/np.linalg.norm(v3)
print('v3 =', v3)
print('\nNorm of each versor:\n',
np.linalg.norm(np.cross(v1, v2)),
np.linalg.norm(np.cross(v1, v3)),
np.linalg.norm(np.cross(v2, v3)))
###Output
Versors:
v1 = [-0.7071 0.7071 0. ]
v2 = [0.5774 0.5774 0.5774]
v3 = [ 0.4082 0.4082 -0.8165]
Norm of each versor:
1.0 1.0 1.0000000000000002
###Markdown
Remember from the text [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb) that the versors of this basis are the columns of the $\mathbf{R_{Gl}}$ and the rows of the $\mathbf{R_{lG}}$ rotation matrices, for instance:
###Code
RlG = np.array([v1, v2, v3])
print('Rotation matrix from Global to local coordinate system:\n', RlG)
###Output
Rotation matrix from Global to local coordinate system:
[[-0.7071 0.7071 0. ]
[ 0.5774 0.5774 0.5774]
[ 0.4082 0.4082 -0.8165]]
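###Markdown
As a quick check (a minimal verification sketch), because the rotation matrix is orthonormal its inverse is simply its transpose, so the product $\mathbf{R_{lG}}\mathbf{R_{Gl}}$ must be the identity matrix:
###Code
RGl = RlG.T  # rotation matrix from the local to the Global coordinate system
print('RlG @ RGl:\n', np.around(RlG @ RGl, 4))
###Output
_____no_output_____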
###Markdown
And the corresponding angles of rotation using the $xyz$ sequence are:
###Code
euler_angles_from_rot_xyz(RlG)
###Output
_____no_output_____
###Markdown
These angles don't mean anything now because they are angles of the axes of the arbitrary basis we computed. In biomechanics, if we want an anatomical interpretation of the coordinate system orientation, we define the versors of the basis oriented with anatomical axes (e.g., for the shoulder, one versor would be aligned with the long axis of the upper arm) as seen [in this notebook about reference frames](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb). Determination of the rotation matrix between two local coordinate systemsSimilarly to the [bidimensional case](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation2D.ipynb), to compute the rotation matrix between two local coordinate systems we can use the rotation matrices of both coordinate systems:\begin{equation} R_{l_1l_2} = R_{Gl_1}^TR_{Gl_2}\end{equation}After this, the Euler angles between both coordinate systems can be found using the `arctan2` function as shown previously. Translation and RotationConsider the case where the local coordinate system is translated and rotated in relation to the Global coordinate system as illustrated in the next figure. Figure. A point in three-dimensional space represented in two coordinate systems, with one system translated and rotated. The position of point $\mathbf{P}$ originally described in the local coordinate system, but now described in the Global coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} $$This means that we first *disrotate* the local coordinate system and then correct for the translation between the two coordinate systems. Note that we can't invert this order: the point position is expressed in the local coordinate system and we can't add this vector to another vector expressed in the Global coordinate system, first we have to convert the vectors to the same coordinate system.If now we want to find the position of a point at the local coordinate system given its position in the Global coordinate system, the rotation matrix and the translation vector, we have to invert the expression above:$$ \begin{array}{l l}\mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{R_{Gl}^{-1}}\cdot\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l}\right) \implies \\\\\mathbf{R_{Gl}^{-1}}\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\mathbf{L_G} + \mathbf{R_{Gl}^{-1}}\mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{P_l} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{P_G}-\mathbf{L_G}\right) = \mathbf{R_{Gl}^T}\left(\mathbf{P_G}-\mathbf{L_G}\right) \;\;\;\;\; \text{or} \;\;\;\;\; \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) \end{array} $$The expression above indicates that to perform the inverse operation, to go from the Global to the local coordinate system, we first translate and then rotate the coordinate system. Transformation matrixIt is possible to combine the translation and rotation operations in only one matrix, called the transformation matrix:\begin{equation}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} =\begin{bmatrix}. & . & . & \mathbf{L_{X}} \\. & \mathbf{R_{Gl}} & . & \mathbf{L_{Y}} \\. & . & . 
& \mathbf{L_{Z}} \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \\1\end{bmatrix}\end{equation}Or simply:$$ \mathbf{P_G} = \mathbf{T_{Gl}}\mathbf{P_l} $$Remember that in general the transformation matrix is not orthonormal, i.e., its inverse is not equal to its transpose.The inverse operation, to express the position at the local coordinate system in terms of the Global reference system, is:$$ \mathbf{P_l} = \mathbf{T_{Gl}^{-1}}\mathbf{P_G} $$And in matrix form:\begin{equation}\begin{bmatrix}\mathbf{P_x} \\\mathbf{P_y} \\\mathbf{P_z} \\1\end{bmatrix} =\begin{bmatrix}\cdot & \cdot & \cdot & \cdot \\\cdot & \mathbf{R^{-1}_{Gl}} & \cdot & -\mathbf{R^{-1}_{Gl}}\:\mathbf{L_G} \\\cdot & \cdot & \cdot & \cdot \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix}\end{equation} Example with actual motion analysis data *The data for this example is taken from page 183 of David Winter's book.* Consider the following marker positions placed on a leg (described in the laboratory coordinate system with coordinates $x, y, z$ in cm, the $x$ axis points forward and the $y$ axes points upward): lateral malleolus (**lm** = [2.92, 10.10, 18.85]), medial malleolus (**mm** = [2.71, 10.22, 26.52]), fibular head (**fh** = [5.05, 41.90, 15.41]), and medial condyle (**mc** = [8.29, 41.88, 26.52]). Define the ankle joint center as the centroid between the **lm** and **mm** markers and the knee joint center as the centroid between the **fh** and **mc** markers. An anatomical coordinate system for the leg can be defined as: the quasi-vertical axis ($y$) passes through the ankle and knee joint centers; a temporary medio-lateral axis ($z$) passes through the two markers on the malleolus, an anterior-posterior as the cross product between the two former calculated orthogonal axes, and the origin at the ankle joint center. a) Calculate the anatomical coordinate system for the leg as described above. b) Calculate the rotation matrix and the translation vector for the transformation from the anatomical to the laboratory coordinate system. c) Calculate the position of each marker and of each joint center at the anatomical coordinate system. d) Calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent).
###Code
# calculation of the joint centers
mm = np.array([2.71, 10.22, 26.52])
lm = np.array([2.92, 10.10, 18.85])
fh = np.array([5.05, 41.90, 15.41])
mc = np.array([8.29, 41.88, 26.52])
ajc = (mm + lm)/2
kjc = (fh + mc)/2
print('Position of the ankle joint center:', ajc)
print('Position of the knee joint center:', kjc)
# calculation of the anatomical coordinate system axes (basis)
y = kjc - ajc
x = np.cross(y, mm - lm)
z = np.cross(x, y)
print('Versors:')
x = x/np.linalg.norm(x)
y = y/np.linalg.norm(y)
z = z/np.linalg.norm(z)
print('x =', x)
print('y =', y)
print('z =', z)
Oleg = ajc
print('\nOrigin =', Oleg)
# Rotation matrices
RGl = np.array([x, y , z]).T
print('Rotation matrix from the anatomical to the laboratory coordinate system:\n', RGl)
RlG = RGl.T
print('\nRotation matrix from the laboratory to the anatomical coordinate system:\n', RlG)
# Translational vector
OG = np.array([0, 0, 0]) # Laboratory coordinate system origin
LG = Oleg - OG
print('Translational vector from the anatomical to the laboratory coordinate system:\n', LG)
###Output
Translational vector from the anatomical to the laboratory coordinate system:
[ 2.815 10.16 22.685]
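###Markdown
With $\mathbf{R_{Gl}}$ and $\mathbf{L_G}$ at hand, we could also assemble the $4\times4$ transformation matrix $\mathbf{T_{Gl}}$ described above. The sketch below only illustrates the homogeneous form and its inverse; it is not needed for the rest of this example:
###Code
# assemble the 4x4 transformation matrix TGl from RGl and LG:
TGl = np.eye(4)
TGl[:3, :3] = RGl
TGl[:3, 3] = LG
print('Transformation matrix from the anatomical to the laboratory coordinate system:\n', TGl)
# as an illustration, the mm marker expressed in the anatomical coordinate system
# using the inverse transformation and homogeneous coordinates:
mm_h = np.linalg.inv(TGl) @ np.hstack((mm, 1))
print('\nCoordinates of mm in the anatomical system (homogeneous form):\n', mm_h[:3])
###Output
_____no_output_____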
###Markdown
To get the coordinates from the laboratory (global) coordinate system to the anatomical (local) coordinate system:$$ \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) $$
###Code
# position of each marker and of each joint center at the anatomical coordinate system
mml = np.dot(RlG, (mm - LG)) # equivalent to the algebraic expression RlG*(mm - LG).T
lml = np.dot(RlG, (lm - LG))
fhl = np.dot(RlG, (fh - LG))
mcl = np.dot(RlG, (mc - LG))
ajcl = np.dot(RlG, (ajc - LG))
kjcl = np.dot(RlG, (kjc - LG))
print('Coordinates of mm in the anatomical system:\n', mml)
print('Coordinates of lm in the anatomical system:\n', lml)
print('Coordinates of fh in the anatomical system:\n', fhl)
print('Coordinates of mc in the anatomical system:\n', mcl)
print('Coordinates of kjc in the anatomical system:\n', kjcl)
print('Coordinates of ajc in the anatomical system (origin):\n', ajcl)
###Output
_____no_output_____
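###Markdown
As a complement, the formula $\mathbf{R}_{l_1l_2} = \mathbf{R}_{Gl_1}^T\mathbf{R}_{Gl_2}$ presented earlier for the rotation matrix between two local coordinate systems can be sketched using the leg basis computed above and a second, hypothetical, segment basis (the $20^o$ rotation below is an arbitrary choice, for illustration only):
###Code
# hypothetical second segment basis: a rotation of 20 degrees around the Z axis
ang = np.deg2rad(20)
RGl2 = np.array([[np.cos(ang), -np.sin(ang), 0],
                 [np.sin(ang),  np.cos(ang), 0],
                 [0, 0, 1]])
# rotation matrix of the second segment with respect to the leg (first segment):
Rl1l2 = RGl.T @ RGl2
print('Rotation matrix between the two local coordinate systems:\n', Rl1l2)
# corresponding Cardan angles for the xyz sequence, using the function defined above:
print('Cardan angles (xyz, degrees):', euler_angles_from_rot_xyz(Rl1l2))
###Output
_____no_output_____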
###Markdown
Problems1. For the example about how the order of rotations of a rigid body affects the orientation shown in a figure above, deduce the rotation matrices for each of the 4 cases shown in the figure. For the first two cases, deduce the rotation matrices from the global to the local coordinate system and for the other two examples, deduce the rotation matrices from the local to the global coordinate system. 2. Consider the data from problem 7 in the notebook [Frame of reference](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb) where the following anatomical landmark positions are given (units in meters): RASIS=[0.5,0.8,0.4], LASIS=[0.55,0.78,0.1], RPSIS=[0.3,0.85,0.2], and LPSIS=[0.29,0.78,0.3]. Deduce the rotation matrices for the global to anatomical coordinate system and for the anatomical to global coordinate system. 3. For the data from the last example, calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent). References- Corke P (2011) [Robotics, Vision and Control: Fundamental Algorithms in MATLAB](http://www.petercorke.com/RVC/). Springer-Verlag Berlin. - Robertson G, Caldwell G, Hamill J, Kamen G (2013) [Research Methods in Biomechanics](http://books.google.com.br/books?id=gRn8AAAAQBAJ). 2nd Edition. Human Kinetics. - [Maths - Euler Angles](http://www.euclideanspace.com/maths/geometry/rotations/euler/). - Murray RM, Li Z, Sastry SS (1994) [A Mathematical Introduction to Robotic Manipulation](http://www.cds.caltech.edu/~murray/mlswiki/index.php/Main_Page). Boca Raton, CRC Press. - Ruina A, Rudra P (2013) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press. - Siciliano B, Sciavicco L, Villani L, Oriolo G (2009) [Robotics - Modelling, Planning and Control](http://books.google.com.br/books/about/Robotics.html?hl=pt-BR&id=jPCAFmE-logC). Springer-Verlag London.- Winter DA (2009) [Biomechanics and motor control of human movement](http://books.google.com.br/books?id=_bFHL08IWfwC). 4 ed. Hoboken, USA: Wiley. - Zatsiorsky VM (1997) [Kinematics of Human Motion](http://books.google.com.br/books/about/Kinematics_of_Human_Motion.html?id=Pql_xXdbrMcC&redir_esc=y). Champaign, Human Kinetics. Function `euler_rotmatrix.py`
###Code
# %load ./../functions/euler_rotmat.py
#!/usr/bin/env python
"""Euler rotation matrix given sequence, frame, and angles."""
from __future__ import division, print_function
__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'
__version__ = 'euler_rotmat.py v.1 2014/03/10'
def euler_rotmat(order='xyz', frame='local', angles=None, unit='deg',
str_symbols=None, showA=True, showN=True):
"""Euler rotation matrix given sequence, frame, and angles.
This function calculates the algebraic rotation matrix (3x3) for a given
sequence ('order' argument) of up to three elemental rotations of a given
coordinate system ('frame' argument) around another coordinate system, the
Euler (or Eulerian) angles [1]_.
This function also calculates the numerical values of the rotation matrix
when numerical values for the angles are inputed for each rotation axis.
Use None as value if the rotation angle for the particular axis is unknown.
The symbols for the angles are: alpha, beta, and gamma for the first,
second, and third rotations, respectively.
The matrix product is calculated from right to left and in the specified
sequence for the Euler angles. The first letter will be the first rotation.
The function will print and return the algebraic rotation matrix and the
numerical rotation matrix if angles were inputed.
Parameters
----------
order : string, optional (default = 'xyz')
Sequence for the Euler angles, any combination of the letters
x, y, and z with 1 to 3 letters is accepted to denote the
elemental rotations. The first letter will be the first rotation.
frame : string, optional (default = 'local')
Coordinate system for which the rotations are calculated.
Valid values are 'local' or 'global'.
angles : list, array, or bool, optional (default = None)
Numeric values of the rotation angles ordered as the 'order'
parameter. Enter None for a rotation with unknown value.
unit : str, optional (default = 'deg')
Unit of the input angles.
str_symbols : list of strings, optional (default = None)
New symbols for the angles, for instance, ['theta', 'phi', 'psi']
showA : bool, optional (default = True)
True (1) displays the Algebraic rotation matrix in rich format.
False (0) to not display.
showN : bool, optional (default = True)
True (1) displays the Numeric rotation matrix in rich format.
False (0) to not display.
Returns
-------
R : Matrix Sympy object
Rotation matrix (3x3) in algebraic format.
Rn : Numpy array or Matrix Sympy object (only if angles are inputed)
Numeric rotation matrix (if values for all angles were inputed) or
an algebraic matrix with some of the algebraic angles substituted
by the corresponding inputed numeric values.
Notes
-----
This code uses Sympy, the Python library for symbolic mathematics, to
calculate the algebraic rotation matrix and shows this matrix in latex form
possibly for using with the IPython Notebook, see [1]_.
References
----------
.. [1] http://nbviewer.ipython.org/github/duartexyz/BMC/blob/master/Transformation3D.ipynb
Examples
--------
>>> # import function
>>> from euler_rotmat import euler_rotmat
>>> # Default options: xyz sequence, local frame and show matrix
>>> R = euler_rotmat()
>>> # XYZ sequence (around global (fixed) coordinate system)
>>> R = euler_rotmat(frame='global')
>>> # Enter numeric values for all angles and show both matrices
>>> R, Rn = euler_rotmat(angles=[90, 90, 90])
>>> # show what is returned
>>> euler_rotmat(angles=[90, 90, 90])
>>> # show only the rotation matrix for the elemental rotation at x axis
>>> R = euler_rotmat(order='x')
>>> # zxz sequence and numeric value for only one angle
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, 0, None])
>>> # input values in radians:
>>> import numpy as np
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, np.pi, None], unit='rad')
>>> # shows only the numeric matrix
>>> R, Rn = euler_rotmat(order='zxz', angles=[90, 0, None], showA='False')
>>> # Change the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['theta', 'phi', 'psi'])
>>> # Negativate the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['-theta', '-phi', '-psi'])
>>> # all algebraic matrices for all possible sequences for the local frame
>>> s=['xyz','xzy','yzx','yxz','zxy','zyx','xyx','xzx','yzy','yxy','zxz','zyz']
>>> for seq in s: R = euler_rotmat(order=seq)
>>> # all algebraic matrices for all possible sequences for the global frame
>>> for seq in s: R = euler_rotmat(order=seq, frame='global')
"""
import numpy as np
import sympy as sym
try:
from IPython.core.display import Math, display
ipython = True
except:
ipython = False
angles = np.asarray(np.atleast_1d(angles), dtype=np.float64)
if ~np.isnan(angles).all():
if len(order) != angles.size:
raise ValueError("Parameters 'order' and 'angles' (when " +
"different from None) must have the same size.")
x, y, z = sym.symbols('x, y, z')
sig = [1, 1, 1]
if str_symbols is None:
a, b, g = sym.symbols('alpha, beta, gamma')
else:
s = str_symbols
if s[0][0] == '-': s[0] = s[0][1:]; sig[0] = -1
if s[1][0] == '-': s[1] = s[1][1:]; sig[1] = -1
if s[2][0] == '-': s[2] = s[2][1:]; sig[2] = -1
a, b, g = sym.symbols(s)
var = {'x': x, 'y': y, 'z': z, 0: a, 1: b, 2: g}
# Elemental rotation matrices for xyz (local)
cos, sin = sym.cos, sym.sin
Rx = sym.Matrix([[1, 0, 0], [0, cos(x), sin(x)], [0, -sin(x), cos(x)]])
Ry = sym.Matrix([[cos(y), 0, -sin(y)], [0, 1, 0], [sin(y), 0, cos(y)]])
Rz = sym.Matrix([[cos(z), sin(z), 0], [-sin(z), cos(z), 0], [0, 0, 1]])
if frame.lower() == 'global':
Rs = {'x': Rx.T, 'y': Ry.T, 'z': Rz.T}
order = order.upper()
else:
Rs = {'x': Rx, 'y': Ry, 'z': Rz}
order = order.lower()
R = Rn = sym.Matrix(sym.Identity(3))
str1 = r'\mathbf{R}_{%s}( ' %frame # last space needed for order=''
#str2 = [r'\%s'%var[0], r'\%s'%var[1], r'\%s'%var[2]]
str2 = [1, 1, 1]
for i in range(len(order)):
Ri = Rs[order[i].lower()].subs(var[order[i].lower()], sig[i] * var[i])
R = Ri * R
if sig[i] > 0:
str2[i] = '%s:%s' %(order[i], sym.latex(var[i]))
else:
str2[i] = '%s:-%s' %(order[i], sym.latex(var[i]))
str1 = str1 + str2[i] + ','
if ~np.isnan(angles).all() and ~np.isnan(angles[i]):
if unit[:3].lower() == 'deg':
angles[i] = np.deg2rad(angles[i])
Rn = Ri.subs(var[i], angles[i]) * Rn
#Rn = sym.lambdify(var[i], Ri, 'numpy')(angles[i]) * Rn
str2[i] = str2[i] + '=%.0f^o' %np.around(np.rad2deg(angles[i]), 0)
else:
Rn = Ri * Rn
Rn = sym.simplify(Rn) # for trigonometric relations
try:
# nsimplify only works if there are symbols
Rn2 = sym.latex(sym.nsimplify(Rn, tolerance=1e-8).n(chop=True, prec=4))
except:
Rn2 = sym.latex(Rn.n(chop=True, prec=4))
# there are no symbols, pass it as Numpy array
Rn = np.asarray(Rn)
if showA and ipython:
display(Math(str1[:-1] + ') =' + sym.latex(R, mat_str='matrix')))
if showN and ~np.isnan(angles).all() and ipython:
str2 = ',\;'.join(str2[:angles.size])
display(Math(r'\mathbf{R}_{%s}(%s)=%s' %(frame, str2, Rn2)))
if np.isnan(angles).all():
return R
else:
return R, Rn
###Output
_____no_output_____
###Markdown
Rigid-body transformations in three-dimensions> Marcos Duarte, Renato Naville Watanabe > Laboratory of Biomechanics and Motor Control ([http://pesquisa.ufabc.edu.br/bmclab](http://pesquisa.ufabc.edu.br/bmclab)) > Federal University of ABC, Brazil Contents1 Python setup2 Translation3 Rotation3.1 Euler angles3.2 Elemental rotations3.3 Rotations around the fixed coordinate system3.4 Rotations around the local coordinate system3.5 Sequence of elemental rotations3.6 Rotations in a coordinate system is equivalent to minus rotations in the other coordinate system3.7 Rotations in a coordinate system is the transpose of inverse order of rotations in the other coordinate system3.8 Sequence of rotations of a Vector3.9 The 12 different sequences of Euler angles3.10 Line of nodes3.11 Determination of the Euler angles3.12 Gimbal lock4 Determination of the rotation matrix4.1 Basis5 Determination of the rotation matrix between two local coordinate systems6 Translation and Rotation6.1 Transformation matrix6.2 Example with actual motion analysis data7 Further reading8 Video lectures on the Internet9 Problems10 References11 Function euler_rotmatrix.py12 Appendix12.1 How to load .trc files The kinematics of a rigid body is completely described by its pose, i.e., its position and orientation in space (and the corresponding changes, translation and rotation). In a three-dimensional space, at least three coordinates and three angles are necessary to describe the pose of the rigid body, totalizing six degrees of freedom for a rigid body.In motion analysis, to describe a translation and rotation of a rigid body with respect to a coordinate system, typically we attach another coordinate system to the rigid body and determine a transformation between these two coordinate systems.A transformation is any function mapping a set to another set. For the description of the kinematics of rigid bodies, we are interested only in what is called rigid or Euclidean transformations (denoted as SE(3) for the three-dimensional space) because they preserve the distance between every pair of points of the body (which is considered rigid by definition). Translations and rotations are examples of rigid transformations (a reflection is also an example of rigid transformation but this changes the right-hand axis convention to a left hand, which usually is not of interest). In turn, rigid transformations are examples of [affine transformations](https://en.wikipedia.org/wiki/Affine_transformation). Examples of other affine transformations are shear and scaling transformations (which preserves angles but not lengths). We will follow the same rationale as in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb) and we will skip the fundamental concepts already covered there. So, you if haven't done yet, you should read that notebook before continuing here. Python setup
###Code
# Import the necessary libraries
import numpy as np
# suppress scientific notation for small numbers:
np.set_printoptions(precision=4, suppress=True)
###Output
_____no_output_____
###Markdown
TranslationA pure three-dimensional translation of a rigid body (or a coordinate system attached to it) in relation to other rigid body (with other coordinate system) is illustrated in the figure below. Figure. A point in three-dimensional space represented in two coordinate systems, with one coordinate system translated. The position of point $\mathbf{P}$ originally described in the $xyz$ (local) coordinate system but now described in the $\mathbf{XYZ}$ (Global) coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{P_l} $$Or in terms of its components:\begin{equation}\begin{array}{}\mathbf{P_X} =& \mathbf{L_X} + \mathbf{P}_x \\\mathbf{P_Y} =& \mathbf{L_Y} + \mathbf{P}_y \\\mathbf{P_Z} =& \mathbf{L_Z} + \mathbf{P}_z \end{array}\end{equation}And in matrix form:\begin{equation}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \end{bmatrix} =\begin{bmatrix}\mathbf{L_X} \\\mathbf{L_Y} \\\mathbf{L_Z} \end{bmatrix} +\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \end{bmatrix}\end{equation}From classical mechanics, this is an example of [Galilean transformation](http://en.wikipedia.org/wiki/Galilean_transformation). Let's use Python to compute some numeric examples: For example, if the local coordinate system is translated by $\mathbf{L_G}=[1, 2, 3]$ in relation to the Global coordinate system, a point with coordinates $\mathbf{P_l}=[4, 5, 6]$ at the local coordinate system will have the position $\mathbf{P_G}=[5, 7, 9]$ at the Global coordinate system:
###Code
LG = np.array([1, 2, 3]) # Numpy array
Pl = np.array([4, 5, 6])
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
This operation also works if we have more than one point (NumPy broadcasts the arrays to handle operands with different dimensions):
###Code
Pl = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]) # 2D array with 3 rows and 3 columns
PG = LG + Pl
PG
###Output
_____no_output_____
###Markdown
RotationA pure three-dimensional rotation of a $xyz$ (local) coordinate system in relation to other $\mathbf{XYZ}$ (Global) coordinate system and the position of a point in these two coordinate systems are illustrated in the next figure (remember that this is equivalent to describing a rotation between two rigid bodies). A point in three-dimensional space represented in two coordinate systems, with one system rotated. In analogy to the rotation in two dimensions, we can calculate the rotation matrix that describes the rotation of the $xyz$ (local) coordinate system in relation to the $\mathbf{XYZ}$ (Global) coordinate system using the direction cosines between the axes of the two coordinate systems:\begin{equation}\mathbf{R_{Gl}} = \begin{bmatrix}\cos\mathbf{X}x & \cos\mathbf{X}y & \cos\mathbf{X}z \\\cos\mathbf{Y}x & \cos\mathbf{Y}y & \cos\mathbf{Y}z \\\cos\mathbf{Z}x & \cos\mathbf{Z}y & \cos\mathbf{Z}z\end{bmatrix}\end{equation}Note however that for rotations around more than one axis, these angles will not lie in the main planes ($\mathbf{XY, YZ, ZX}$) of the $\mathbf{XYZ}$ coordinate system, as illustrated in the figure below for the direction angles of the $y$ axis only. Thus, the determination of these angles by simple inspection, as we have done for the two-dimensional case, would not be simple. Figure. Definition of direction angles for the $y$ axis of the local coordinate system in relation to the $\mathbf{XYZ}$ Global coordinate system.Note that the nine angles shown in the matrix above for the direction cosines are obviously redundant since only three angles are necessary to describe the orientation of a rigid body in the three-dimensional space. An important characteristic of angles in the three-dimensional space is that angles cannot be treated as vectors: the result of a sequence of rotations of a rigid body around different axes depends on the order of the rotations, as illustrated in the next figure. Figure. The result of a sequence of rotations around different axes of a coordinate system depends on the order of the rotations. In the first example (first row), the rotations are around a Global (fixed) coordinate system. In the second example (second row), the rotations are around a local (rotating) coordinate system.Let's focus now on how to understand rotations in the three-dimensional space, looking at the rotations between coordinate systems (or between rigid bodies). Later we will apply what we have learned to describe the position of a point in these different coordinate systems. Euler anglesThere are different ways to describe a three-dimensional rotation of a rigid body (or of a coordinate system). The most straightforward solution would probably be to use a [spherical coordinate system](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ReferenceFrame.ipynbSpherical-coordinate-system), but spherical coordinates would be difficult to give an anatomical or clinical interpretation. A solution that has been often employed in biomechanics to handle rotations in the three-dimensional space is to use Euler angles. Under certain conditions, Euler angles can have an anatomical interpretation, but this representation also has some caveats. 
Let's see the Euler angles now.[Leonhard Euler](https://en.wikipedia.org/wiki/Leonhard_Euler) in the XVIII century showed that two three-dimensional coordinate systems with a common origin can be related by a sequence of up to three elemental rotations about the axes of the local coordinate system, where no two successive rotations may be about the same axis, which now are known as [Euler (or Eulerian) angles](http://en.wikipedia.org/wiki/Euler_angles). Elemental rotationsFirst, let's see rotations around a fixed Global coordinate system as we did for the two-dimensional case. The next figure illustrates elemental rotations of the local coordinate system around each axis of the fixed Global coordinate system. Figure. Elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. Note that for better clarity, the axis around where the rotation occurs is shown perpendicular to this page for each elemental rotation. Rotations around the fixed coordinate systemThe rotation matrices for the elemental rotations around each axis of the fixed $\mathbf{XYZ}$ coordinate system (rotations of the local coordinate system in relation to the Global coordinate system) are shown next.Around $\mathbf{X}$ axis: \begin{equation}\mathbf{R_{Gl,\,X}} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix}\end{equation}Around $\mathbf{Y}$ axis: \begin{equation}\mathbf{R_{Gl,\,Y}} = \begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix}\end{equation}Around $\mathbf{Z}$ axis: \begin{equation}\mathbf{R_{Gl,\,Z}} = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\end{equation}These matrices are the rotation matrices for the case of two-dimensional coordinate systems plus the corresponding terms for the third axes of the local and Global coordinate systems, which are parallel. To understand why the terms for the third axes are 1's or 0's, for instance, remember they represent the cosine directors. The cosines between $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ for the elemental rotations around respectively the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes are all 1 because $\mathbf{X}x$, $\mathbf{Y}y$, and $\mathbf{Z}z$ are parallel ($\cos 0^o$). The cosines of the other elements are zero because the axis around where each rotation occurs is perpendicular to the other axes of the coordinate systems ($\cos 90^o$). 
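To complement the algebraic matrices above, here is a minimal NumPy sketch (added for illustration; the function names are arbitrary) of the three elemental rotation matrices of the local system with respect to the fixed Global system. As a quick check, a rotation of $90^o$ around $\mathbf{Z}$ takes the versor $[1,0,0]$ to the Global $\mathbf{Y}$ direction:

```python
import numpy as np

def RX(alpha):
    # R_Gl for an elemental rotation around the Global X axis (angle in rad)
    return np.array([[1, 0, 0],
                     [0, np.cos(alpha), -np.sin(alpha)],
                     [0, np.sin(alpha),  np.cos(alpha)]])

def RY(beta):
    # R_Gl for an elemental rotation around the Global Y axis (angle in rad)
    return np.array([[ np.cos(beta), 0, np.sin(beta)],
                     [ 0,            1, 0           ],
                     [-np.sin(beta), 0, np.cos(beta)]])

def RZ(gamma):
    # R_Gl for an elemental rotation around the Global Z axis (angle in rad)
    return np.array([[np.cos(gamma), -np.sin(gamma), 0],
                     [np.sin(gamma),  np.cos(gamma), 0],
                     [0,              0,             1]])

print(np.round(RZ(np.pi/2) @ np.array([1, 0, 0]), 4))  # -> [0. 1. 0.]
```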
Rotations around the local coordinate systemThe rotation matrices for the elemental rotations this time around each axis of the $xyz$ coordinate system (rotations of the Global coordinate system in relation to the local coordinate system), similarly to the two-dimensional case, are simply the transpose of the above matrices as shown next.Around $x$ axis: \begin{equation}\mathbf{R}_{\mathbf{lG},\,x} = \begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix}\end{equation}Around $y$ axis: \begin{equation}\mathbf{R}_{\mathbf{lG},\,y} = \begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix}\end{equation}Around $z$ axis: \begin{equation}\mathbf{R}_{\mathbf{lG},\,z} = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\end{equation}Notice this is equivalent to instead of rotating the local coordinate system by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system; remember that $\cos(-\:\cdot)=\cos(\cdot)$ and $\sin(-\:\cdot)=-\sin(\cdot)$. Sequence of elemental rotationsConsider now a sequence of elemental rotations around the $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ axes of the fixed $\mathbf{XYZ}$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of the $xyz$ coordinate system around each axis, $\mathbf{X}$, $\mathbf{Y}$, $\mathbf{Z}$, of the fixed $\mathbf{XYZ}$ coordinate system. This sequence of elemental rotations (each one of the local coordinate system with respect to the fixed Global coordinate system) is mathematically represented by a multiplication between the rotation matrices:\begin{equation}\begin{array}{l l}\mathbf{R_{Gl,\;XYZ}} & = \mathbf{R_{Z}} \mathbf{R_{Y}} \mathbf{R_{X}} \\\\ & = \begin{bmatrix}\cos\gamma & -\sin\gamma & 0\\\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & \sin\beta \\0 & 1 & 0 \\-\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & -\sin\alpha \\0 & \sin\alpha & \cos\alpha\end{bmatrix} \\\\ & =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma-\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:cos\gamma+\sin\alpha\:\sin\gamma \;\;\; \\\cos\beta\:\sin\gamma \;&\;\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma-\sin\alpha\:\cos\gamma \;\;\; \\-\sin\beta \;&\; \sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array}\end{equation}Note the order of the matrices. We can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
#import the necessary libraries
from IPython.core.display import Math, display
import sympy as sym
cos, sin = sym.cos, sym.sin
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz in relation to XYZ:
RX = sym.Matrix([[1, 0, 0],
[0, cos(a), -sin(a)],
[0, sin(a), cos(a)]])
RY = sym.Matrix([[cos(b), 0, sin(b)],
[0, 1, 0],
[-sin(b), 0, cos(b)]])
RZ = sym.Matrix([[cos(g), -sin(g), 0],
[sin(g), cos(g), 0],
[ 0, 0, 1]])
# Rotation matrix of xyz in relation to XYZ:
RXYZ = RZ @ RY @ RX
display(Math(r'\mathbf{R_{Gl,\,XYZ}}=' + sym.latex(RXYZ,
mat_str='matrix')))
###Output
_____no_output_____
###Markdown
For instance, we can calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $\mathbf{X,Y,Z}$:
###Code
R = sym.lambdify((a, b, g), RXYZ, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R_{Gl,\,XYZ\,}}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(3, chop=True))))
###Output
_____no_output_____
###Markdown
Below you can test any sequence of rotations around the Global coordinates. Just change the matrix R and the angles of the variables $\alpha$, $\beta$ and $\gamma$. The example below shows the rotation around the Global basis, in the sequence x,y,z, with the angles $\alpha=\pi/2$ rad, $\beta=\pi/2$ rad and $\gamma=\pi/2$ rad (matching the values used in the code below).
###Code
import sys
sys.path.insert(1, r'./../functions') # add to pythonpath
%matplotlib notebook
from CCSbasis import CCSbasis
R = RZ*RY*RX
R = sym.lambdify((a, b, g), R, 'numpy')
alpha = np.pi/2
beta = np.pi/2
gamma = np.pi/2
R = R(alpha, beta, gamma)
e1 = np.array([[1,0,0]])
e2 = np.array([[0,1,0]])
e3 = np.array([[0,0,1]])
basis = np.vstack((e1,e2,e3))
basisRot = R @ basis
CCSbasis(Oijk=np.array([0,0,0]), Oxyz=np.array([0,0,0]),
ijk=basis.T, xyz=basisRot.T, vector=False)
###Output
_____no_output_____
###Markdown
Examining the matrix above and the correspondent previous figure, one can see they agree: the rotated $x$ axis (first column of the above matrix) has value -1 in the $\mathbf{Z}$ direction $[0,0,-1]$, the rotated $y$ axis (second column) is at the $\mathbf{Y}$ direction $[0,1,0]$, and the rotated $z$ axis (third column) is at the $\mathbf{X}$ direction $[1,0,0]$.We also can calculate the sequence of elemental rotations around the $x$, $y$, $z$ axes of the rotating $xyz$ coordinate system illustrated in the next figure. Figure. Sequence of elemental rotations of a second $xyz$ local coordinate system around each axis, $x$, $y$, $z$, of the rotating $xyz$ coordinate system.Likewise, this sequence of elemental rotations (each one of the local coordinate system with respect to the rotating local coordinate system) is mathematically represented by a multiplication between the rotation matrices (which are the inverse of the matrices for the rotations around $\mathbf{X,Y,Z}$ as we saw earlier):\begin{equation}\begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz} & = \mathbf{R_{z}} \mathbf{R_{y}} \mathbf{R_{x}} \\\\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\beta & 0 & -\sin\beta \\0 & 1 & 0 \\\sin\beta & 0 & \cos\beta\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\alpha & \sin\alpha \\0 & -\sin\alpha & \cos\alpha\end{bmatrix} \\\\& =\begin{bmatrix}\cos\beta\:\cos\gamma \;&\;\sin\alpha\:\sin\beta\:\cos\gamma+\cos\alpha\:\sin\gamma \;&\;\cos\alpha\:\sin\beta\:\cos\gamma-\sin\alpha\:\sin\gamma \;\;\; \\-\cos\beta\:\sin\gamma \;&\;-\sin\alpha\:\sin\beta\:\sin\gamma+\cos\alpha\:\cos\gamma \;&\;\cos\alpha\:\sin\beta\:\sin\gamma+\sin\alpha\:\cos\gamma \;\;\; \\\sin\beta \;&\; -\sin\alpha\:\cos\beta \;&\; \cos\alpha\:\cos\beta \;\;\;\end{bmatrix} \end{array}\end{equation}As before, the order of the matrices is from right to left. Once again, we can check this matrix multiplication using [Sympy](http://sympy.org/en/index.html):
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rx = sym.Matrix([[1, 0, 0],
[0, cos(a), sin(a)],
[0, -sin(a), cos(a)]])
Ry = sym.Matrix([[cos(b), 0, -sin(b)],
[0, 1, 0], [sin(b), 0, cos(b)]])
Rz = sym.Matrix([[cos(g), sin(g), 0],
[-sin(g), cos(g), 0],
[0, 0, 1]])
# Rotation matrix of xyz' in relation to xyz:
Rxyz = Rz @ Ry @ Rx
Math(r'\mathbf{R}_{\mathbf{lG},\,xyz}=' + sym.latex(Rxyz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
For instance, let's calculate the numerical rotation matrix for these sequential elemental rotations by $90^o$ around $x,y,z$:
###Code
R = sym.lambdify((a, b, g), Rxyz, 'numpy')
R = R(np.pi/2, np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(90^o, 90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(3, chop=True))))
###Output
_____no_output_____
###Markdown
Once again, let's compare the above matrix and the correspondent previous figure to see if it makes sense. But remember that this matrix is the Global-to-local rotation matrix, $\mathbf{R}_{\mathbf{lG},\,xyz}$, where the coordinates of the local basis' versors are rows, not columns, in this matrix. With this detail in mind, one can see that the previous figure and matrix also agree: the rotated $x$ axis (first row of the above matrix) is at the $\mathbf{Z}$ direction $[0,0,1]$, the rotated $y$ axis (second row) is at the $\mathbf{-Y}$ direction $[0,-1,0]$, and the rotated $z$ axis (third row) is at the $\mathbf{X}$ direction $[1,0,0]$.In fact, this example didn't serve to distinguish versors as rows or columns because the $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrix above is symmetric! Let's look on the resultant matrix for the example above after only the first two rotations, $\mathbf{R}_{\mathbf{lG},\,xy}$ to understand this difference:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xy\,}(90^o, 90^o) =' + \
sym.latex(sym.Matrix(R).n(3, chop=True))))
###Output
_____no_output_____
###Markdown
Comparing this matrix with the third plot in the figure, we see that the coordinates of versor $x$ in the Global coordinate system are $[0,1,0]$, i.e., local axis $x$ is aligned with Global axis $Y$, and this versor is indeed the first row, not the first column, of the matrix above. Confer the other two rows. What, then, is in the columns of the Global-to-local rotation matrix? The columns are the coordinates of the Global basis' versors in the local coordinate system! For example, the first column of the matrix above contains the coordinates of $X$, which is aligned with $z$: $[0,0,1]$. Below you can test any sequence of rotations around the local coordinates. Just change the matrix R and the angles of the variables $\alpha$, $\beta$ and $\gamma$. The example below shows the rotation around the local basis, in the sequence x,y,z, with the angles $\alpha=\pi/3$ rad, $\beta=\pi/4$ rad and $\gamma=\pi/6$ rad (matching the values used in the code below).
###Code
sys.path.insert(1, r'./../functions') # add to pythonpath
%matplotlib notebook
from CCSbasis import CCSbasis
R = Rz*Ry*Rx
R = sym.lambdify((a, b, g), R, 'numpy')
alpha = np.pi/3
beta = np.pi/4
gamma = np.pi/6
R = R(alpha, beta, gamma)
e1 = np.array([[1,0,0]])
e2 = np.array([[0,1,0]])
e3 = np.array([[0,0,1]])
basis = np.vstack((e1,e2,e3))
basisRot = R @ basis
CCSbasis(Oijk=np.array([0,0,0]), Oxyz=np.array([0,0,0]),
ijk=basisRot.T, xyz=basis.T, vector=False)
###Output
_____no_output_____
###Markdown
Rotations in a coordinate system is equivalent to minus rotations in the other coordinate systemRemember that we saw for the elemental rotations that it's equivalent to instead of rotating the local coordinate system, $xyz$, by $\alpha, \beta, \gamma$ in relation to axes of the Global coordinate system, to rotate the Global coordinate system, $\mathbf{XYZ}$, by $-\alpha, -\beta, -\gamma$ in relation to the axes of the local coordinate system. The same property applies to a sequence of rotations: rotations of $xyz$ in relation to $\mathbf{XYZ}$ by $\alpha, \beta, \gamma$ result in the same matrix as rotations of $\mathbf{XYZ}$ in relation to $xyz$ by $-\alpha, -\beta, -\gamma$: $$ \begin{array}{l l}\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) & = \mathbf{R_{Gl,\,Z}}(\gamma)\, \mathbf{R_{Gl,\,Y}}(\beta)\, \mathbf{R_{Gl,\,X}}(\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,z\,}(-\gamma)\, \mathbf{R}_{\mathbf{lG},\,y\,}(-\beta)\, \mathbf{R}_{\mathbf{lG},\,x\,}(-\alpha) \\& = \mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)\end{array}$$Confer that by examining the $\mathbf{R_{Gl,\,XYZ}}$ and $\mathbf{R}_{\mathbf{lG},\,xyz}$ matrices above.Let's verify this property with Sympy:
###Code
RXYZ = RZ*RY*RX
# Rotation matrix of xyz in relation to XYZ:
display(Math(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) ='))
display(Math(sym.latex(RXYZ, mat_str='matrix')))
# Elemental rotation matrices of XYZ in relation to xyz and negate all angles:
Rx_neg = sym.Matrix([[1, 0, 0], [0, cos(-a), -sin(-a)], [0, sin(-a), cos(-a)]]).T
Ry_neg = sym.Matrix([[cos(-b), 0, sin(-b)], [0, 1, 0], [-sin(-b), 0, cos(-b)]]).T
Rz_neg = sym.Matrix([[cos(-g), -sin(-g), 0], [sin(-g), cos(-g), 0], [0, 0, 1]]).T
# Rotation matrix of XYZ in relation to xyz:
Rxyz_neg = Rz_neg * Ry_neg * Rx_neg
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma) ='))
display(Math(sym.latex(Rxyz_neg, mat_str='matrix')))
# Check that the two matrices are equal:
display(Math(r'\mathbf{R_{Gl,\,XYZ\,}}(\alpha,\beta,\gamma) \;==\;' + \
r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(-\alpha,-\beta,-\gamma)'))
RXYZ == Rxyz_neg
###Output
_____no_output_____
###Markdown
Rotations in a coordinate system is the transpose of inverse order of rotations in the other coordinate systemThere is another property of the rotation matrices for the different coordinate systems: the rotation matrix, for example from the Global to the local coordinate system for the $xyz$ sequence, is just the transpose of the rotation matrix for the inverse operation (from the local to the Global coordinate system) of the inverse sequence ($\mathbf{ZYX}$) and vice-versa:$$ \begin{array}{l l}\mathbf{R}_{\mathbf{lG},\,xyz}(\alpha,\beta,\gamma) & = \mathbf{R}_{\mathbf{lG},\,z\,} \mathbf{R}_{\mathbf{lG},\,y\,} \mathbf{R}_{\mathbf{lG},\,x} \\& = \mathbf{R_{Gl,\,Z\,}^{-1}} \mathbf{R_{Gl,\,Y\,}^{-1}} \mathbf{R_{Gl,\,X\,}^{-1}} \\& = \mathbf{R_{Gl,\,Z\,}^{T}} \mathbf{R_{Gl,\,Y\,}^{T}} \mathbf{R_{Gl,\,X\,}^{T}} \\& = (\mathbf{R_{Gl,\,X\,}} \mathbf{R_{Gl,\,Y\,}} \mathbf{R_{Gl,\,Z}})^\mathbf{T} \\& = \mathbf{R_{Gl,\,ZYX\,}^{T}}(\gamma,\beta,\alpha)\end{array}$$Where we used the properties that the inverse of the rotation matrix (which is orthonormal) is its transpose and that the transpose of a product of matrices is equal to the product of their transposes in reverse order.Let's verify this property with Sympy:
###Code
RZYX = RX * RY * RZ
Rxyz = Rz * Ry * Rx
display(Math(r'\mathbf{R_{Gl,\,ZYX\,}^T}=' + sym.latex(RZYX.T, mat_str='matrix')))
display(Math(r'\mathbf{R}_{\mathbf{lG},\,xyz\,}(\alpha,\beta,\gamma) \,==\,' + \
r'\mathbf{R_{Gl,\,ZYX\,}^T}(\gamma,\beta,\alpha)'))
Rxyz == RZYX.T
###Output
_____no_output_____
###Markdown
Sequence of rotations of a VectorWe saw in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynbRotation-of-a-Vector) that the rotation matrix can also be used to rotate a vector (in fact, a point, image, solid, etc.) by a given angle around an axis of the coordinate system. Let's investigate that for the 3D case using the example earlier where a book was rotated in different orders and around the Global and local coordinate systems. Before any rotation, the point shown in that figure as a round black dot on the spine of the book has coordinates $\mathbf{P}=[0, 1, 2]$ (the book has thickness 0, width 1, and height 2). After the first sequence of rotations shown in the figure (rotated around $X$ and $Y$ by $90^0$ each time), $\mathbf{P}$ has coordinates $\mathbf{P}=[1, -2, 0]$ in the global coordinate system. Let's verify that:
###Code
P = np.array([[0, 1, 2]]).T
RXY = RY*RX
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. -2. 0.]]
###Markdown
As expected. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, but still around the Global coordinate system.Although we are performing vector rotation, where we don't need the concept of transformation between coordinate systems, in the example above we used the local-to-Global rotation matrix, $\mathbf{R_{Gl}}$. As we saw in the notebook for the 2D transformation, when we use this matrix, it performs a counter-clockwise (positive) rotation. If we want to rotate the vector in the clockwise (negative) direction, we can use the very same rotation matrix entering a negative angle or we can use the inverse rotation matrix, the Global-to-local rotation matrix, $\mathbf{R_{lG}}$ and a positive (negative of negative) angle, because $\mathbf{R_{Gl}}(\alpha) = \mathbf{R_{lG}}(-\alpha)$, but bear in mind that even in this latter case we are rotating around the Global coordinate system! Consider now that we want to deduce algebraically the position of the point $\mathbf{P}$ after the rotations around the local coordinate system as shown in the second set of examples in the figure with the sequence of book rotations. The point has the same initial position, $\mathbf{P}=[0, 1, 2]$, and after the rotations around $x$ and $y$ by $90^0$ each time, what is the position of this point? It's implicit in this question that the new desired position is in the Global coordinate system because the local coordinate system rotates with the book and the point never changes its position in the local coordinate system. So, by inspection of the figure, the new position of the point is $\mathbf{P1}=[2, 0, 1]$. Let's naively try to deduce this position by repeating the steps as before:
###Code
Rxy = Ry*Rx
R = sym.lambdify((a, b), Rxy, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 1. 2. -0.]]
###Markdown
The wrong answer. The problem is that we defined the rotation of a vector using the local-to-Global rotation matrix. One correct solution for this problem is to continue using the multiplication of the Global-to-local rotation matrices, $\mathbf{R}_{xy} = \mathbf{R}_y\,\mathbf{R}_x$, transpose $\mathbf{R}_{xy}$ to get the local-to-Global rotation matrix, $\mathbf{R_{XY}}=\mathbf{R^T}_{xy}$, and then rotate the vector using this matrix:
###Code
Rxy = Ry*Rx
RXY = Rxy.T
R = sym.lambdify((a, b), RXY, 'numpy')
R = R(np.pi/2, np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. Another solution is to understand that when using the Global-to-local rotation matrix, counter-clockwise rotations (as performed with the book in the figure) are negative, not positive, and that when dealing with rotations with the Global-to-local rotation matrix the order of matrix multiplication is inverted; for example, it should be $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$ (an added underscore to remind us this is not the convention adopted here).
###Code
R_xy = Rx*Ry
R = sym.lambdify((a, b), R_xy, 'numpy')
R = R(-np.pi/2, -np.pi/2)
P1 = np.dot(R, P)
print('P1 =', P1.T)
###Output
P1 = [[ 2. -0. 1.]]
###Markdown
The correct answer. The reader is invited to deduce the position of point $\mathbf{P}$ after the inverse order of rotations, around the local coordinate system.In fact, you will find elsewhere texts about rotations in 3D adopting this latter convention as the standard, i.e., they introduce the Global-to-local rotation matrix and describe sequence of rotations algebraically as matrix multiplication in the direct order, $\mathbf{R\_}_{xyz} = \mathbf{R}_x\,\mathbf{R}_y\,\mathbf{R}_z$, the inverse we have done in this text. It's all a matter of convention, just that. The 12 different sequences of Euler anglesThe Euler angles are defined in terms of rotations around a rotating local coordinate system. As we saw for the sequence of rotations around $x, y, z$, the axes of the local rotated coordinate system are not fixed in space because after the first elemental rotation, the other two axes rotate. Other sequences of rotations could be produced without combining axes of the two different coordinate systems (Global and local) for the definition of the rotation axes. There is a total of 12 different sequences of three elemental rotations that are valid and may be used for describing the rotation of a coordinate system with respect to another coordinate system:$$ xyz \quad xzy \quad yzx \quad yxz \quad zxy \quad zyx $$$$ xyx \quad xzx \quad yzy \quad yxy \quad zxz \quad zyz $$The first six sequences (first row) are all around different axes, they are usually referred as Cardan or Tait–Bryan angles. The other six sequences (second row) have the first and third rotations around the same axis, but keep in mind that the axis for the third rotation is not at the same place anymore because it changed its orientation after the second rotation. The sequences with repeated axes are known as proper or classic Euler angles.Which order to use it is a matter of convention, but because the order affects the results, it's fundamental to follow a convention and report it. In Engineering Mechanics (including Biomechanics), the $xyz$ order is more common; in Physics the $zxz$ order is more common (but the letters chosen to refer to the axes are arbitrary, what matters is the directions they represent). In Biomechanics, the order for the Cardan angles is most often based on the angle of most interest or of most reliable measurement. Accordingly, the axis of flexion/extension is typically selected as the first axis, the axis for abduction/adduction is the second, and the axis for internal/external rotation is the last one. We will see about this order later. The $zyx$ order is commonly used to describe the orientation of a ship or aircraft and the rotations are known as the nautical angles: yaw, pitch and roll, respectively (see next figure). Figure. The principal axes of an aircraft and the names for the rotations around these axes (image from Wikipedia). If instead of rotations around the rotating local coordinate system we perform rotations around the fixed Global coordinate system, we will have other 12 different sequences of three elemental rotations, these are called simply rotation angles. 
So, in total there are 24 possible different sequences of three elemental rotations, but the 24 orders are not independent; with the 12 different sequences of Euler angles at the local coordinate system we can obtain the other 12 sequences at the Global coordinate system. The Python function `euler_rotmat.py` (code at the end of this text) determines the rotation matrix in algebraic form for any of the 24 different sequences (sequences with only one or two axes can also be entered). This function also determines the rotation matrix in numeric form if a list of up to three angles is entered. For instance, the rotation matrix in algebraic form for the $zxz$ order of Euler angles at the local coordinate system and the corresponding rotation matrix in numeric form after three elemental rotations by $90^o$ each are:
###Code
from euler_rotmat import euler_rotmat
Ra, Rn = euler_rotmat(order='zxz', frame='local', angles=[90, 90, 90])
###Output
_____no_output_____
###Markdown
Line of nodesThe second axis of rotation in the rotating coordinate system is also referred as the nodal axis or line of nodes; this axis coincides with the intersection of two perpendicular planes, one from each Global (fixed) and local (rotating) coordinate systems. The figure below shows an example of rotations and the nodal axis for the $xyz$ sequence of the Cardan angles. Figure. First row: example of rotations for the $xyz$ sequence of the Cardan angles. The Global (fixed) $XYZ$ coordinate system is shown in green, the local (rotating) $xyz$ coordinate system is shown in blue. The nodal axis (N, shown in red) is defined by the intersection of the $YZ$ and $xy$ planes and all rotations can be described in relation to this nodal axis or to a perpendicular axis to it. Second row: starting from no rotation, the local coordinate system is rotated by $\alpha$ around the $x$ axis, then by $\beta$ around the rotated $y$ axis, and finally by $\gamma$ around the twice rotated $z$ axis. Note that the line of nodes coincides with the $y$ axis for the second rotation. Determination of the Euler anglesOnce a convention is adopted, the corresponding three Euler angles of rotation can be found. For example, for the $\mathbf{R}_{xyz}$ rotation matrix:
###Code
R = euler_rotmat(order='xyz', frame='local')
###Output
_____no_output_____
###Markdown
The corresponding Cardan angles for the `xyz` sequence can be given by:\begin{equation}\begin{array}{}\alpha = \arctan\left(\dfrac{\sin(\alpha)}{\cos(\alpha)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{21}}{\;\;\;\mathbf{R}_{22}}\right) \\\\\beta = \arctan\left(\dfrac{\sin(\beta)}{\cos(\beta)}\right) = \arctan\left(\dfrac{\mathbf{R}_{20}}{\sqrt{\mathbf{R}_{00}^2+\mathbf{R}_{10}^2}}\right) \\ \\\gamma = \arctan\left(\dfrac{\sin(\gamma)}{\cos(\gamma)}\right) = \arctan\left(\dfrac{-\mathbf{R}_{10}}{\;\;\;\mathbf{R}_{00}}\right)\end{array}\end{equation}Note that we prefer to use the mathematical function `arctan2` rather than simply `arcsin`, `arccos` or `arctan` because the latter cannot for example distinguish $45^o$ from $135^o$ and also for better numerical accuracy. See the text [Angular kinematics in a plane (2D)](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/KinematicsAngular2D.ipynb) for more on these issues.And here is a Python function to compute the Euler angles of rotations from the Global to the local coordinate system for the $xyz$ Cardan sequence:
###Code
def euler_angles_from_rot_xyz(rot_matrix, unit='deg'):
""" Compute Euler angles from rotation matrix in the xyz sequence."""
import numpy as np
R = np.array(rot_matrix, copy=False).astype(np.float64)[:3, :3]
angles = np.zeros(3)
angles[0] = np.arctan2(-R[2, 1], R[2, 2])
angles[1] = np.arctan2( R[2, 0], np.sqrt(R[0, 0]**2 + R[1, 0]**2))
angles[2] = np.arctan2(-R[1, 0], R[0, 0])
if unit[:3].lower() == 'deg': # convert from rad to degree
angles = np.rad2deg(angles)
return angles
###Output
_____no_output_____
###Markdown
For instance, consider sequential rotations of 45$^o$ around $x,y,z$. The resultant rotation matrix is:
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local',
angles=[45, 45, 45], showA=False)
###Output
_____no_output_____
###Markdown
Let's check that calculating back the Cardan angles from this rotation matrix using the `euler_angles_from_rot_xyz()` function:
###Code
euler_angles_from_rot_xyz(Rn, unit='deg')
###Output
_____no_output_____
###Markdown
We could implement a function to calculate the Euler angles for any of the 12 sequences (in fact, plus another 12 sequences if we consider all the rotations from and to the two coordinate systems), but this is tedious. There is a smarter solution using the concept of [quaternion](http://en.wikipedia.org/wiki/Quaternion), but we will not see that now. Let's see a problem with using Euler angles known as gimbal lock. Gimbal lock[Gimbal lock](http://en.wikipedia.org/wiki/Gimbal_lock) is the loss of one degree of freedom in a three-dimensional coordinate system that occurs when an axis of rotation is placed parallel with another previous axis of rotation and two of the three rotations will be around the same direction given a certain convention of the Euler angles. This "locks" the system into rotations in a degenerate two-dimensional space. The system is not really locked in the sense it can't be moved or reach the other degree of freedom, but it will need an extra rotation for that. For instance, let's look at the $zxz$ sequence of rotations by the angles $\alpha, \beta, \gamma$:\begin{equation}\begin{array}{l l}\mathbf{R}_{zxz} & = \mathbf{R_{z}} \mathbf{R_{x}} \mathbf{R_{z}} \\ \\& = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & \cos\beta & \sin\beta \\0 & -\sin\beta & \cos\beta\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array}\end{equation}Which results in:
###Code
a, b, g = sym.symbols('alpha, beta, gamma')
# Elemental rotation matrices of xyz (local):
Rz = sym.Matrix([[cos(a), sin(a), 0],
[-sin(a), cos(a), 0],
[0, 0, 1]])
Rx = sym.Matrix([[1, 0, 0],
[0, cos(b), sin(b)],
[0, -sin(b), cos(b)]])
Rz2 = sym.Matrix([[cos(g), sin(g), 0],
[-sin(g), cos(g), 0],
[0, 0, 1]])
# Rotation matrix for the zxz sequence:
Rzxz = Rz2*Rx*Rz
Math(r'\mathbf{R}_{zxz}=' + sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Let's examine what happens with this rotation matrix when the rotation around the second axis ($x$) by $\beta$ is zero:\begin{equation}\begin{array}{l l}\mathbf{R}_{zxz}(\alpha, \beta=0, \gamma) = \begin{bmatrix}\cos\gamma & \sin\gamma & 0\\-\sin\gamma & \cos\gamma & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1\end{bmatrix}\begin{bmatrix}\cos\alpha & \sin\alpha & 0\\-\sin\alpha & \cos\alpha & 0 \\0 & 0 & 1\end{bmatrix}\end{array}\end{equation}The second matrix is the identity matrix and has no effect on the product of the matrices, which will be:
###Code
Rzxz = Rz2*Rz
Math(r'\mathbf{R}_{xyz}(\alpha, \beta=0, \gamma)=' + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Which simplifies to:
###Code
Rzxz = sym.simplify(Rzxz)
Math(r'\mathbf{R}_{xyz}(\alpha, \beta=0, \gamma)=' + \
sym.latex(Rzxz, mat_str='matrix'))
###Output
_____no_output_____
###Markdown
Despite different values of $\alpha$ and $\gamma$ the result is a single rotation around the $z$ axis given by the sum $\alpha+\gamma$. In this case, of the three degrees of freedom one was lost (the other degree of freedom was set by $\beta=0$). For movement analysis, this means for example that one angle will be undetermined because all we know is the sum of the two angles obtained from the rotation matrix. We can set the unknown angle to zero, but this is arbitrary. In fact, we already dealt with another example of gimbal lock when we looked at the $xyz$ sequence with rotations by $90^o$. See the figure representing these rotations again and notice that the first and third rotations were around the same axis because the second rotation was by $90^o$. Let's do the matrix multiplication replacing only the second angle by $90^o$ (and let's use the `euler_rotmat.py` function):
###Code
Ra, Rn = euler_rotmat(order='xyz', frame='local',
angles=[None, 90., None], showA=False)
###Output
_____no_output_____
###Markdown
Once again, one degree of freedom was lost and we will not be able to uniquely determine the three angles for the given rotation matrix and sequence.Possible solutions to avoid the gimbal lock are: choose a different sequence; do not rotate the system by the angle that puts the system in gimbal lock (in the examples above, avoid $\beta=90^o$); or add an extra fourth parameter in the description of the rotation angles. But if we have a physical system where we measure or specify exactly three Euler angles in a fixed sequence to describe or control it, and we can't avoid the system to assume certain angles, then we might have to say "Houston, we have a problem". A famous situation where such a problem occurred was during the Apollo 13 mission. This is an actual conversation between crew and mission control during the Apollo 13 mission (Corke, 2011):>`Mission clock: 02 08 12 47` **Flight**: *Go, Guidance.* **Guido**: *He’s getting close to gimbal lock there.* **Flight**: *Roger. CapCom, recommend he bring up C3, C4, B3, B4, C1 and C2 thrusters, and advise he’s getting close to gimbal lock.* **CapCom**: *Roger.* *Of note, it was not a gimbal lock that caused the accident with the the Apollo 13 mission, the problem was an oxygen tank explosion.* Determination of the rotation matrixA typical way to determine the rotation matrix for a rigid body in biomechanics is to use motion analysis to measure the position of at least three non-collinear markers placed on the rigid body, and then calculate a basis with these positions, analogue to what we have described in the notebook [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb). BasisIf we have the position of three markers: **m1**, **m2**, **m3**, a basis (formed by three orthogonal versors) can be found as: - First axis, **v1**, the vector **m2-m1**; - Second axis, **v2**, the cross product between the vectors **v1** and **m3-m1**; - Third axis, **v3**, the cross product between the vectors **v1** and **v2**. Then, each of these vectors are normalized resulting in three orthogonal versors. For example, given the positions m1 = [1,0,0], m2 = [0,1,0], m3 = [0,0,1], a basis can be found:
###Code
m1 = np.array([1, 0, 0])
m2 = np.array([0, 1, 0])
m3 = np.array([0, 0, 1])
v1 = m2 - m1
v2 = np.cross(v1, m3 - m1)
v3 = np.cross(v1, v2)
print('Versors:')
v1 = v1/np.linalg.norm(v1)
print('v1 =', v1)
v2 = v2/np.linalg.norm(v2)
print('v2 =', v2)
v3 = v3/np.linalg.norm(v3)
print('v3 =', v3)
print('\nNorm of each versor:\n',
np.linalg.norm(np.cross(v1, v2)),
np.linalg.norm(np.cross(v1, v3)),
np.linalg.norm(np.cross(v2, v3)))
###Output
Versors:
v1 = [-0.7071 0.7071 0. ]
v2 = [0.5774 0.5774 0.5774]
v3 = [ 0.4082 0.4082 -0.8165]
Norm of each versor:
1.0 1.0 1.0000000000000002
###Markdown
Remember from the text [Rigid-body transformations in a plane (2D)](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/Transformation2D.ipynb) that the versors of this basis are the columns of the $\mathbf{R_{Gl}}$ and the rows of the $\mathbf{R_{lG}}$ rotation matrices, for instance:
###Code
RlG = np.array([v1, v2, v3])
print('Rotation matrix from Global to local coordinate system:\n', RlG)
###Output
Rotation matrix from Global to local coordinate system:
[[-0.7071 0.7071 0. ]
[ 0.5774 0.5774 0.5774]
[ 0.4082 0.4082 -0.8165]]
###Markdown
And the corresponding angles of rotation using the $xyz$ sequence are:
###Code
euler_angles_from_rot_xyz(RlG)
###Output
_____no_output_____
###Markdown
These angles don't mean anything now because they are angles of the axes of the arbitrary basis we computed. In biomechanics, if we want an anatomical interpretation of the coordinate system orientation, we define the versors of the basis oriented with anatomical axes (e.g., for the shoulder, one versor would be aligned with the long axis of the upper arm) as seen [in this notebook about reference frames](https://nbviewer.jupyter.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb). Determination of the rotation matrix between two local coordinate systemsSimilarly to the [bidimensional case](https://nbviewer.jupyter.org/github/bmclab/bmc/blob/master/notebooks/Transformation2D.ipynb), to compute the rotation matrix between two local coordinate systems we can use the rotation matrices of both coordinate systems:\begin{equation} R_{l_1l_2} = R_{Gl_1}^TR_{Gl_2}\end{equation}After this, the Euler angles between both coordinate systems can be found using the `arctan2` function as shown previously. Translation and RotationConsider the case where the local coordinate system is translated and rotated in relation to the Global coordinate system as illustrated in the next figure. Figure. A point in three-dimensional space represented in two coordinate systems, with one system translated and rotated. The position of point $\mathbf{P}$ originally described in the local coordinate system, but now described in the Global coordinate system in vector form is:$$ \mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} $$This means that we first *disrotate* the local coordinate system and then correct for the translation between the two coordinate systems. Note that we can't invert this order: the point position is expressed in the local coordinate system and we can't add this vector to another vector expressed in the Global coordinate system, first we have to convert the vectors to the same coordinate system.If now we want to find the position of a point at the local coordinate system given its position in the Global coordinate system, the rotation matrix and the translation vector, we have to invert the expression above:$$ \begin{array}{l l}\mathbf{P_G} = \mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{R_{Gl}^{-1}}\cdot\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{L_G} + \mathbf{R_{Gl}}\mathbf{P_l}\right) \implies \\\\\mathbf{R_{Gl}^{-1}}\mathbf{P_G} = \mathbf{R_{Gl}^{-1}}\mathbf{L_G} + \mathbf{R_{Gl}^{-1}}\mathbf{R_{Gl}}\mathbf{P_l} \implies \\\\\mathbf{P_l} = \mathbf{R_{Gl}^{-1}}\left(\mathbf{P_G}-\mathbf{L_G}\right) = \mathbf{R_{Gl}^T}\left(\mathbf{P_G}-\mathbf{L_G}\right) \;\;\;\;\; \text{or} \;\;\;\;\; \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) \end{array} $$The expression above indicates that to perform the inverse operation, to go from the Global to the local coordinate system, we first translate and then rotate the coordinate system. Transformation matrixIt is possible to combine the translation and rotation operations in only one matrix, called the transformation matrix:\begin{equation}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix} =\begin{bmatrix}. & . & . & \mathbf{L_{X}} \\. & \mathbf{R_{Gl}} & . & \mathbf{L_{Y}} \\. & . & . 
& \mathbf{L_{Z}} \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P}_x \\\mathbf{P}_y \\\mathbf{P}_z \\1\end{bmatrix}\end{equation}Or simply:$$ \mathbf{P_G} = \mathbf{T_{Gl}}\mathbf{P_l} $$Remember that in general the transformation matrix is not orthonormal, i.e., its inverse is not equal to its transpose.The inverse operation, to express the position at the local coordinate system in terms of the Global reference system, is:$$ \mathbf{P_l} = \mathbf{T_{Gl}^{-1}}\mathbf{P_G} $$And in matrix form:\begin{equation}\begin{bmatrix}\mathbf{P_x} \\\mathbf{P_y} \\\mathbf{P_z} \\1\end{bmatrix} =\begin{bmatrix}\cdot & \cdot & \cdot & \cdot \\\cdot & \mathbf{R^{-1}_{Gl}} & \cdot & -\mathbf{R^{-1}_{Gl}}\:\mathbf{L_G} \\\cdot & \cdot & \cdot & \cdot \\0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}\mathbf{P_X} \\\mathbf{P_Y} \\\mathbf{P_Z} \\1\end{bmatrix}\end{equation} Example with actual motion analysis data *The data for this example is taken from page 183 of David Winter's book.* Consider the following marker positions placed on a leg (described in the laboratory coordinate system with coordinates $x, y, z$ in cm, the $x$ axis points forward and the $y$ axes points upward): lateral malleolus (**lm** = [2.92, 10.10, 18.85]), medial malleolus (**mm** = [2.71, 10.22, 26.52]), fibular head (**fh** = [5.05, 41.90, 15.41]), and medial condyle (**mc** = [8.29, 41.88, 26.52]). Define the ankle joint center as the centroid between the **lm** and **mm** markers and the knee joint center as the centroid between the **fh** and **mc** markers. An anatomical coordinate system for the leg can be defined as: the quasi-vertical axis ($y$) passes through the ankle and knee joint centers; a temporary medio-lateral axis ($z$) passes through the two markers on the malleolus, an anterior-posterior as the cross product between the two former calculated orthogonal axes, and the origin at the ankle joint center. a) Calculate the anatomical coordinate system for the leg as described above. b) Calculate the rotation matrix and the translation vector for the transformation from the anatomical to the laboratory coordinate system. c) Calculate the position of each marker and of each joint center at the anatomical coordinate system. d) Calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent).
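Before solving this example, here is a minimal numeric sketch (with made-up rotation matrices, not data from this example) of the relation $\mathbf{R_{l_1l_2}} = \mathbf{R^T_{Gl_1}}\mathbf{R_{Gl_2}}$ between two local coordinate systems presented above:

```python
import numpy as np

# Hypothetical local-to-Global rotation matrices of two local coordinate systems
# (arbitrary elemental rotations of 30 deg around Z and 45 deg around X):
a1, a2 = np.deg2rad(30), np.deg2rad(45)
RGl1 = np.array([[np.cos(a1), -np.sin(a1), 0],
                 [np.sin(a1),  np.cos(a1), 0],
                 [0,           0,          1]])
RGl2 = np.array([[1, 0,           0          ],
                 [0, np.cos(a2), -np.sin(a2)],
                 [0, np.sin(a2),  np.cos(a2)]])
# Rotation matrix of the second local system expressed in the first local system:
Rl1l2 = RGl1.T @ RGl2
print(np.round(Rl1l2, 4))
```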
###Code
# calculation of the joint centers
mm = np.array([2.71, 10.22, 26.52])
lm = np.array([2.92, 10.10, 18.85])
fh = np.array([5.05, 41.90, 15.41])
mc = np.array([8.29, 41.88, 26.52])
ajc = (mm + lm)/2
kjc = (fh + mc)/2
print('Position of the ankle joint center:', ajc)
print('Position of the knee joint center:', kjc)
# calculation of the anatomical coordinate system axes (basis)
y = kjc - ajc
x = np.cross(y, mm - lm)
z = np.cross(x, y)
print('Versors:')
x = x/np.linalg.norm(x)
y = y/np.linalg.norm(y)
z = z/np.linalg.norm(z)
print('x =', x)
print('y =', y)
print('z =', z)
Oleg = ajc
print('\nOrigin =', Oleg)
# Rotation matrices
RGl = np.array([x, y , z]).T
print('Rotation matrix from the anatomical to the laboratory coordinate system:\n', RGl)
RlG = RGl.T
print('\nRotation matrix from the laboratory to the anatomical coordinate system:\n', RlG)
# Translational vector
OG = np.array([0, 0, 0]) # Laboratory coordinate system origin
LG = Oleg - OG
print('Translational vector from the anatomical to the laboratory coordinate system:\n', LG)
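# Sketch added for illustration: the corresponding 4x4 transformation matrix
# T_Gl = [[R_Gl, L_G], [0 0 0 1]] and its inverse, built from RGl and LG above
# (see the Transformation matrix section); a quick check with the knee joint center:
TGl = np.eye(4)
TGl[:3, :3] = RGl
TGl[:3, 3] = LG
TGl_inv = np.eye(4)
TGl_inv[:3, :3] = RGl.T
TGl_inv[:3, 3] = -RGl.T @ LG
kjc_h = np.hstack((kjc, 1))  # kjc in homogeneous coordinates
print('kjc in the anatomical coordinate system:', np.round((TGl_inv @ kjc_h)[:3], 4))
print('kjc back in the laboratory coordinate system:', np.round((TGl @ TGl_inv @ kjc_h)[:3], 4))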
###Output
Translational vector from the anatomical to the laboratory coordinate system:
[ 2.815 10.16 22.685]
###Markdown
To get the coordinates from the laboratory (global) coordinate system to the anatomical (local) coordinate system:$$ \mathbf{P_l} = \mathbf{R_{lG}}\left(\mathbf{P_G}-\mathbf{L_G}\right) $$
###Code
# position of each marker and of each joint center at the anatomical coordinate system
mml = np.dot(RlG, (mm - LG)) # equivalent to the algebraic expression RlG*(mm - LG).T
lml = np.dot(RlG, (lm - LG))
fhl = np.dot(RlG, (fh - LG))
mcl = np.dot(RlG, (mc - LG))
ajcl = np.dot(RlG, (ajc - LG))
kjcl = np.dot(RlG, (kjc - LG))
print('Coordinates of mm in the anatomical system:\n', mml)
print('Coordinates of lm in the anatomical system:\n', lml)
print('Coordinates of fh in the anatomical system:\n', fhl)
print('Coordinates of mc in the anatomical system:\n', mcl)
print('Coordinates of kjc in the anatomical system:\n', kjcl)
print('Coordinates of ajc in the anatomical system (origin):\n', ajcl)
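# Sketch added for illustration of item (d): Cardan angles for the zxy sequence,
# extracted from the Global-to-local matrix RlG computed above. Following this
# text's convention, R_lG,zxy = Ry(gamma) Rx(beta) Rz(alpha), which gives
# R[1,2] = sin(beta), R[1,0] = -cos(beta)sin(alpha), R[0,2] = -sin(gamma)cos(beta).
alpha_z = np.arctan2(-RlG[1, 0], RlG[1, 1])                              # first rotation, about z
beta_x = np.arctan2(RlG[1, 2], np.sqrt(RlG[1, 0]**2 + RlG[1, 1]**2))     # second rotation, about x
gamma_y = np.arctan2(-RlG[0, 2], RlG[2, 2])                              # third rotation, about y
print('Cardan angles (zxy sequence) in degrees:', np.rad2deg([alpha_z, beta_x, gamma_y]))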
###Output
Coordinates of mm in the anatomical system:
[-0. -0.1592 3.8336]
Coordinates of lm in the anatomical system:
[-0. 0.1592 -3.8336]
Coordinates of fh in the anatomical system:
[-1.7703 32.1229 -5.5078]
Coordinates of mc in the anatomical system:
[ 1.7703 31.8963 5.5078]
Coordinates of kjc in the anatomical system:
[ 0. 32.0096 0. ]
Coordinates of ajc in the anatomical system (origin):
[0. 0. 0.]
###Markdown
Further reading- Read pages 1136-1164 of the 21th chapter of the [Ruina and Rudra's book] (http://ruina.tam.cornell.edu/Book/index.html) about elementary introduction to 3D rigid-body dynamics. Video lectures on the Internet- Khan Academy: [Rotação em R3 ao redor do eixo x](https://pt.khanacademy.org/math/linear-algebra/matrix-transformations/lin-trans-examples/v/rotation-in-r3-around-the-x-axis)- [Sec. 10.9 - Euler Angles](https://www.youtube.com/watch?v=PLWfDgX9E6s)- [Tema 05 - Rotação | Aula 03 - Rotação em torno de um eixo](https://www.youtube.com/watch?v=oKQyBzDVwSU) Problems1. For the example about how the order of rotations of a rigid body affects the orientation shown in a figure above, deduce the rotation matrices for each of the 4 cases shown in the figure. For the first two cases, deduce the rotation matrices from the global to the local coordinate system and for the other two examples, deduce the rotation matrices from the local to the global coordinate system. 2. Consider the data from problem 7 in the notebook [Frame of reference](http://nbviewer.ipython.org/github/BMClab/bmc/blob/master/notebooks/ReferenceFrame.ipynb) where the following anatomical landmark positions are given (units in meters): RASIS=[0.5,0.8,0.4], LASIS=[0.55,0.78,0.1], RPSIS=[0.3,0.85,0.2], and LPSIS=[0.29,0.78,0.3]. Deduce the rotation matrices for the global to anatomical coordinate system and for the anatomical to global coordinate system. 3. For the data from the last example, calculate the Cardan angles using the $zxy$ sequence for the orientation of the leg with respect to the laboratory (but remember that the letters chosen to refer to axes are arbitrary, what matters is the directions they represent). References- Corke P (2011) [Robotics, Vision and Control: Fundamental Algorithms in MATLAB](http://www.petercorke.com/RVC/). Springer-Verlag Berlin. - Robertson G, Caldwell G, Hamill J, Kamen G (2013) [Research Methods in Biomechanics](http://books.google.com.br/books?id=gRn8AAAAQBAJ). 2nd Edition. Human Kinetics. - [Maths - Euler Angles](http://www.euclideanspace.com/maths/geometry/rotations/euler/). - Murray RM, Li Z, Sastry SS (1994) [A Mathematical Introduction to Robotic Manipulation](http://www.cds.caltech.edu/~murray/mlswiki/index.php/Main_Page). Boca Raton, CRC Press. - Ruina A, Rudra P (2013) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press. - Siciliano B, Sciavicco L, Villani L, Oriolo G (2009) [Robotics - Modelling, Planning and Control](http://books.google.com.br/books/about/Robotics.html?hl=pt-BR&id=jPCAFmE-logC). Springer-Verlag London.- Winter DA (2009) [Biomechanics and motor control of human movement](http://books.google.com.br/books?id=_bFHL08IWfwC). 4 ed. Hoboken, USA: Wiley. - Zatsiorsky VM (1997) [Kinematics of Human Motion](http://books.google.com.br/books/about/Kinematics_of_Human_Motion.html?id=Pql_xXdbrMcC&redir_esc=y). Champaign, Human Kinetics. Function `euler_rotmatrix.py`
###Code
# %load ./../functions/euler_rotmat.py
#!/usr/bin/env python
"""Euler rotation matrix given sequence, frame, and angles."""
from __future__ import division, print_function
__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'
__version__ = 'euler_rotmat.py v.1 2014/03/10'
def euler_rotmat(order='xyz', frame='local', angles=None, unit='deg',
str_symbols=None, showA=True, showN=True):
"""Euler rotation matrix given sequence, frame, and angles.
This function calculates the algebraic rotation matrix (3x3) for a given
sequence ('order' argument) of up to three elemental rotations of a given
coordinate system ('frame' argument) around another coordinate system, the
Euler (or Eulerian) angles [1]_.
This function also calculates the numerical values of the rotation matrix
when numerical values for the angles are entered for each rotation axis.
Use None as value if the rotation angle for the particular axis is unknown.
The symbols for the angles are: alpha, beta, and gamma for the first,
second, and third rotations, respectively.
The matrix product is calculated from right to left and in the specified
sequence for the Euler angles. The first letter will be the first rotation.
The function will print and return the algebraic rotation matrix and the
numerical rotation matrix if angles were entered.
Parameters
----------
order : string, optional (default = 'xyz')
Sequence for the Euler angles, any combination of the letters
x, y, and z with 1 to 3 letters is accepted to denote the
elemental rotations. The first letter will be the first rotation.
frame : string, optional (default = 'local')
Coordinate system for which the rotations are calculated.
Valid values are 'local' or 'global'.
angles : list, array, or bool, optional (default = None)
Numeric values of the rotation angles ordered as the 'order'
parameter. Enter None for a rotation with unknown value.
unit : str, optional (default = 'deg')
Unit of the input angles.
str_symbols : list of strings, optional (default = None)
New symbols for the angles, for instance, ['theta', 'phi', 'psi']
showA : bool, optional (default = True)
True (1) displays the Algebraic rotation matrix in rich format.
False (0) to not display.
showN : bool, optional (default = True)
True (1) displays the Numeric rotation matrix in rich format.
False (0) to not display.
Returns
-------
R : Matrix Sympy object
Rotation matrix (3x3) in algebraic format.
Rn : Numpy array or Matrix Sympy object (only if angles are inputed)
Numeric rotation matrix (if values for all angles were entered) or
an algebraic matrix with some of the algebraic angles substituted
by the corresponding entered numeric values.
Notes
-----
This code uses Sympy, the Python library for symbolic mathematics, to
calculate the algebraic rotation matrix and display this matrix in LaTeX form,
suitable for use with the IPython Notebook, see [1]_.
References
----------
.. [1] http://nbviewer.ipython.org/github/duartexyz/BMC/blob/master/Transformation3D.ipynb
Examples
--------
>>> # import function
>>> from euler_rotmat import euler_rotmat
>>> # Default options: xyz sequence, local frame and show matrix
>>> R = euler_rotmat()
>>> # XYZ sequence (around global (fixed) coordinate system)
>>> R = euler_rotmat(frame='global')
>>> # Enter numeric values for all angles and show both matrices
>>> R, Rn = euler_rotmat(angles=[90, 90, 90])
>>> # show what is returned
>>> euler_rotmat(angles=[90, 90, 90])
>>> # show only the rotation matrix for the elemental rotation at x axis
>>> R = euler_rotmat(order='x')
>>> # zxz sequence and numeric value for only one angle
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, 0, None])
>>> # input values in radians:
>>> import numpy as np
>>> R, Rn = euler_rotmat(order='zxz', angles=[None, np.pi, None], unit='rad')
>>> # shows only the numeric matrix
>>> R, Rn = euler_rotmat(order='zxz', angles=[90, 0, None], showA='False')
>>> # Change the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['theta', 'phi', 'psi'])
>>> # Negativate the angles' symbols
>>> R = euler_rotmat(order='zxz', str_symbols=['-theta', '-phi', '-psi'])
>>> # all algebraic matrices for all possible sequences for the local frame
>>> s=['xyz','xzy','yzx','yxz','zxy','zyx','xyx','xzx','yzy','yxy','zxz','zyz']
>>> for seq in s: R = euler_rotmat(order=seq)
>>> # all algebraic matrices for all possible sequences for the global frame
>>> for seq in s: R = euler_rotmat(order=seq, frame='global')
"""
import numpy as np
import sympy as sym
try:
from IPython.core.display import Math, display
ipython = True
except:
ipython = False
angles = np.asarray(np.atleast_1d(angles), dtype=np.float64)
if ~np.isnan(angles).all():
if len(order) != angles.size:
raise ValueError("Parameters 'order' and 'angles' (when " +
"different from None) must have the same size.")
x, y, z = sym.symbols('x, y, z')
sig = [1, 1, 1]
if str_symbols is None:
a, b, g = sym.symbols('alpha, beta, gamma')
else:
s = str_symbols
if s[0][0] == '-': s[0] = s[0][1:]; sig[0] = -1
if s[1][0] == '-': s[1] = s[1][1:]; sig[1] = -1
if s[2][0] == '-': s[2] = s[2][1:]; sig[2] = -1
a, b, g = sym.symbols(s)
var = {'x': x, 'y': y, 'z': z, 0: a, 1: b, 2: g}
# Elemental rotation matrices for xyz (local)
cos, sin = sym.cos, sym.sin
Rx = sym.Matrix([[1, 0, 0], [0, cos(x), sin(x)], [0, -sin(x), cos(x)]])
Ry = sym.Matrix([[cos(y), 0, -sin(y)], [0, 1, 0], [sin(y), 0, cos(y)]])
Rz = sym.Matrix([[cos(z), sin(z), 0], [-sin(z), cos(z), 0], [0, 0, 1]])
if frame.lower() == 'global':
Rs = {'x': Rx.T, 'y': Ry.T, 'z': Rz.T}
order = order.upper()
else:
Rs = {'x': Rx, 'y': Ry, 'z': Rz}
order = order.lower()
R = Rn = sym.Matrix(sym.Identity(3))
str1 = r'\mathbf{R}_{%s}( ' %frame # last space needed for order=''
#str2 = [r'\%s'%var[0], r'\%s'%var[1], r'\%s'%var[2]]
str2 = [1, 1, 1]
for i in range(len(order)):
Ri = Rs[order[i].lower()].subs(var[order[i].lower()], sig[i] * var[i])
R = Ri * R
if sig[i] > 0:
str2[i] = '%s:%s' %(order[i], sym.latex(var[i]))
else:
str2[i] = '%s:-%s' %(order[i], sym.latex(var[i]))
str1 = str1 + str2[i] + ','
if ~np.isnan(angles).all() and ~np.isnan(angles[i]):
if unit[:3].lower() == 'deg':
angles[i] = np.deg2rad(angles[i])
Rn = Ri.subs(var[i], angles[i]) * Rn
#Rn = sym.lambdify(var[i], Ri, 'numpy')(angles[i]) * Rn
str2[i] = str2[i] + '=%.0f^o' %np.around(np.rad2deg(angles[i]), 0)
else:
Rn = Ri * Rn
Rn = sym.simplify(Rn) # for trigonometric relations
try:
# nsimplify only works if there are symbols
Rn2 = sym.latex(sym.nsimplify(Rn, tolerance=1e-8).n(chop=True, prec=4))
except:
Rn2 = sym.latex(Rn.n(chop=True, prec=4))
# there are no symbols, pass it as Numpy array
Rn = np.asarray(Rn)
if showA and ipython:
display(Math(str1[:-1] + ') =' + sym.latex(R, mat_str='matrix')))
if showN and ~np.isnan(angles).all() and ipython:
str2 = ',\;'.join(str2[:angles.size])
display(Math(r'\mathbf{R}_{%s}(%s)=%s' %(frame, str2, Rn2)))
if np.isnan(angles).all():
return R
else:
return R, Rn
###Output
_____no_output_____
###Markdown
Appendix How to load .trc files Using Pandas, to load a .trc file, we must specify the parameters: - 'sep': separator between columns - 'header': by default, Pandas will infer the header and read the first line as the header - 'skiprows': a .trc file has 6 lines of header text before the numeric data
###Code
import numpy as np
import pandas as pd
data = pd.read_csv('./../data/walk.trc', sep='\t', header=None, skiprows=6)
data
###Output
_____no_output_____
###Markdown
But now the columns of the pandas dataframe don't have names, and it will be easier if each column is named after its marker (line 4 of the .trc file) and its direction (line 5). The solution is to first read only the header of the .trc file to get the markers' names and directions and then read the file a second time to get only the numeric data. We wrote a function to do that, named 'read_trc.py', and it is stored in the functions directory of the BMC repository. Here is how to use this function (a small sketch of the column-naming idea is added after the calls below):
###Code
import sys
sys.path.insert(1, r'./../functions') # add to pythonpath
from read_trc import read_trc
h, data = read_trc('./../data/walk.trc', fname2='', dropna=False, na=0.0, fmt='uni')
data
h, data = read_trc('./../data/walk.trc', fname2='', dropna=False, na=0.0, fmt='multi')
data
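# Added illustration (not the actual read_trc implementation): once the marker names
# (header line 4) and the directions (header line 5) are known, the two column formats
# used above can be built like this. The marker names below are hypothetical.
import pandas as pd
markers = ['RASIS', 'LASIS', 'RPSIS']      # hypothetical marker names
directions = ['x', 'y', 'z']               # coordinate labels
uni_labels = [m + d for m in markers for d in directions]           # single-level labels, as in fmt='uni'
multi_labels = pd.MultiIndex.from_product([markers, directions])    # two-level labels, as in fmt='multi'
print(uni_labels)
print(multi_labels)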
###Output
Opening file "./../data/walk.trc" ... Number of markers changed from 28 to 55.
done.
|
upload_cyber_graph_in_ch.ipynb
|
###Markdown
Upload Cyber Graph into Clickhouse
###Code
from config import execute
DROP_TABLE = False
CREATE_TABLE = False
###Output
_____no_output_____
###Markdown
Create Tables
###Code
if DROP_TABLE:
execute('''DROP TABLE IF EXISTS cyberlinks''')
execute('''DROP TABLE IF EXISTS relevance''')
if CREATE_TABLE:
execute('''
CREATE TABLE IF NOT EXISTS cyberlinks(
id INTEGER,
object_from String,
object_to String,
subject String,
timestamp TIMESTAMP,
height INTEGER,
txhash String,
karma INTEGER
)
ENGINE = ReplacingMergeTree()
ORDER BY (id,object_from,object_to,subject)
''')
execute('''
CREATE TABLE IF NOT EXISTS relevance(
id INTEGER,
object String,
height INTEGER,
rank FLOAT
)
ENGINE = ReplacingMergeTree()
ORDER BY (object,height)
''')
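# Added illustration (not part of the original notebook): once the tables exist,
# cyberlink rows could be inserted through the same `execute` helper. This assumes
# `execute` forwards to clickhouse_driver's Client.execute, which accepts a
# parameterized INSERT followed by a list of row dicts; adapt if the helper differs.
INSERT_EXAMPLE = False
if INSERT_EXAMPLE:
    example_row = {
        'id': 0,
        'object_from': 'QmFrom...',          # hypothetical CID
        'object_to': 'QmTo...',              # hypothetical CID
        'subject': 'cyber1...',              # hypothetical account address
        'timestamp': '2021-01-01 00:00:00',
        'height': 1,
        'txhash': '0xabc...',
        'karma': 0,
    }
    execute('INSERT INTO cyberlinks (id, object_from, object_to, subject, timestamp, height, txhash, karma) VALUES',
            [example_row])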
###Output
_____no_output_____
|
face_detection_colab_webcam.ipynb
|
###Markdown
###Code
# import dependencies
from IPython.display import display, Javascript, Image
from google.colab.output import eval_js
from base64 import b64decode, b64encode
import cv2
import numpy as np
import PIL
import io
import html
import time
face_cascade = cv2.CascadeClassifier('/content/drive/MyDrive/Colab Notebooks/haarcascade_frontalface_default.xml')
# function to convert the JavaScript object into an OpenCV image
def js_to_image(js_reply):
"""
Params:
js_reply: JavaScript object containing image from webcam
Returns:
img: OpenCV BGR image
"""
# decode base64 image
image_bytes = b64decode(js_reply.split(',')[1])
# convert bytes to numpy array
jpg_as_np = np.frombuffer(image_bytes, dtype=np.uint8)
# decode numpy array into OpenCV BGR image
img = cv2.imdecode(jpg_as_np, flags=1)
return img
# function to convert OpenCV Rectangle bounding box image into base64 byte string to be overlayed on video stream
def bbox_to_bytes(bbox_array):
"""
Params:
bbox_array: Numpy array (pixels) containing rectangle to overlay on video stream.
Returns:
bytes: Base64 image byte string
"""
# convert array into PIL image
bbox_PIL = PIL.Image.fromarray(bbox_array, 'RGBA')
iobuf = io.BytesIO()
# format bbox into png for return
bbox_PIL.save(iobuf, format='png')
# format return string
bbox_bytes = 'data:image/png;base64,{}'.format((str(b64encode(iobuf.getvalue()), 'utf-8')))
return bbox_bytes
###Output
_____no_output_____
###Markdown
Webcam Images
###Code
def take_photo(filename='photo.jpg', quality=0.8):
js = Javascript('''
async function takePhoto(quality) {
const div = document.createElement('div');
const capture = document.createElement('button');
capture.textContent = 'Capture';
div.appendChild(capture);
const video = document.createElement('video');
video.style.display = 'block';
const stream = await navigator.mediaDevices.getUserMedia({video: true});
document.body.appendChild(div);
div.appendChild(video);
video.srcObject = stream;
await video.play();
// Resize the output to fit the video element.
google.colab.output.setIframeHeight(document.documentElement.scrollHeight, true);
// Wait for Capture to be clicked.
await new Promise((resolve) => capture.onclick = resolve);
const canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0);
stream.getVideoTracks()[0].stop();
div.remove();
return canvas.toDataURL('image/jpeg', quality);
}
''')
display(js)
# get photo data
data = eval_js('takePhoto({})'.format(quality))
# get OpenCV format image
img = js_to_image(data)
# grayscale img
  gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # js_to_image returns a BGR image
print(gray.shape)
# get face bounding box coordinates using Haar Cascade
faces = face_cascade.detectMultiScale(gray)
# draw face bounding box on image
for (x,y,w,h) in faces:
img = cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
# save image
cv2.imwrite(filename, img)
return filename
try:
filename = take_photo('photo.jpg')
print('Saved to {}'.format(filename))
# Show the image which was just taken.
display(Image(filename))
except Exception as err:
# Errors will be thrown if the user does not have a webcam or if they do not
# grant the page permission to access it.
print(str(err))
###Output
_____no_output_____
###Markdown
Webcam Videos
###Code
# JavaScript to properly create our live video stream using our webcam as input
def video_stream():
js = Javascript('''
var video;
var div = null;
var stream;
var captureCanvas;
var imgElement;
var labelElement;
var pendingResolve = null;
var shutdown = false;
function removeDom() {
stream.getVideoTracks()[0].stop();
video.remove();
div.remove();
video = null;
div = null;
stream = null;
imgElement = null;
captureCanvas = null;
labelElement = null;
}
function onAnimationFrame() {
if (!shutdown) {
window.requestAnimationFrame(onAnimationFrame);
}
if (pendingResolve) {
var result = "";
if (!shutdown) {
captureCanvas.getContext('2d').drawImage(video, 0, 0, 640, 480);
result = captureCanvas.toDataURL('image/jpeg', 0.8)
}
var lp = pendingResolve;
pendingResolve = null;
lp(result);
}
}
async function createDom() {
if (div !== null) {
return stream;
}
div = document.createElement('div');
div.style.border = '2px solid black';
div.style.padding = '3px';
div.style.width = '100%';
div.style.maxWidth = '600px';
document.body.appendChild(div);
const modelOut = document.createElement('div');
modelOut.innerHTML = "<span>Status:</span>";
labelElement = document.createElement('span');
labelElement.innerText = 'No data';
labelElement.style.fontWeight = 'bold';
modelOut.appendChild(labelElement);
div.appendChild(modelOut);
video = document.createElement('video');
video.style.display = 'block';
video.width = div.clientWidth - 6;
video.setAttribute('playsinline', '');
video.onclick = () => { shutdown = true; };
stream = await navigator.mediaDevices.getUserMedia(
{video: { facingMode: "environment"}});
div.appendChild(video);
imgElement = document.createElement('img');
imgElement.style.position = 'absolute';
imgElement.style.zIndex = 1;
imgElement.onclick = () => { shutdown = true; };
div.appendChild(imgElement);
const instruction = document.createElement('div');
instruction.innerHTML =
'<span style="color: red; font-weight: bold;">' +
'When finished, click here or on the video to stop this demo</span>';
div.appendChild(instruction);
instruction.onclick = () => { shutdown = true; };
video.srcObject = stream;
await video.play();
captureCanvas = document.createElement('canvas');
captureCanvas.width = 640; //video.videoWidth;
captureCanvas.height = 480; //video.videoHeight;
window.requestAnimationFrame(onAnimationFrame);
return stream;
}
async function stream_frame(label, imgData) {
if (shutdown) {
removeDom();
shutdown = false;
return '';
}
var preCreate = Date.now();
stream = await createDom();
var preShow = Date.now();
if (label != "") {
labelElement.innerHTML = label;
}
if (imgData != "") {
var videoRect = video.getClientRects()[0];
imgElement.style.top = videoRect.top + "px";
imgElement.style.left = videoRect.left + "px";
imgElement.style.width = videoRect.width + "px";
imgElement.style.height = videoRect.height + "px";
imgElement.src = imgData;
}
var preCapture = Date.now();
var result = await new Promise(function(resolve, reject) {
pendingResolve = resolve;
});
shutdown = false;
return {'create': preShow - preCreate,
'show': preCapture - preShow,
'capture': Date.now() - preCapture,
'img': result};
}
''')
display(js)
def video_frame(label, bbox):
data = eval_js('stream_frame("{}", "{}")'.format(label, bbox))
return data
# start streaming video from webcam
video_stream()
# label for video
label_html = 'Capturing...'
# initialze bounding box to empty
bbox = ''
count = 0
while True:
js_reply = video_frame(label_html, bbox)
if not js_reply:
break
# convert JS response to OpenCV Image
img = js_to_image(js_reply["img"])
# create transparent overlay for bounding box
bbox_array = np.zeros([480,640,4], dtype=np.uint8)
# grayscale image for face detection
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # js_to_image returns a BGR image
# get face region coordinates
faces = face_cascade.detectMultiScale(gray)
# get face bounding box for overlay
for (x,y,w,h) in faces:
bbox_array = cv2.rectangle(bbox_array,(x,y),(x+w,y+h),(255,0,0),2)
bbox_array[:,:,3] = (bbox_array.max(axis = 2) > 0 ).astype(int) * 255
# convert overlay of bbox into bytes
bbox_bytes = bbox_to_bytes(bbox_array)
# update bbox so next frame gets new overlay
bbox = bbox_bytes
###Output
_____no_output_____
|
pandas/general_characteristics.ipynb
|
###Markdown
`index`: shows the `Index` labels
###Code
df.index
###Output
_____no_output_____
###Markdown
`columns`: shows the column labels
###Code
df.columns
df.describe()  # summary statistics of the numeric columns
df.info()      # column names, non-null counts and dtypes
df.dtypes      # dtype of each column
df.size        # total number of elements (rows x columns)
df.shape       # (number of rows, number of columns)
###Output
_____no_output_____
|
source/examples/basics/gog/stat_corr.ipynb
|
###Markdown
CorrelationSome plots visualize a transformation of the original data set. Use a stat parameter to choose a common transformation to visualize.Each stat creates additional variables to map aesthetics to. These variables use a common ..name.. syntax.Look at the examples below.
###Code
import pandas as pd
from lets_plot import *
from lets_plot.bistro.corr import *
LetsPlot.setup_html()
df = pd.read_csv('https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/mpg.csv')
p1 = corr_plot(df, flip=False).points().build() + ggtitle('corr_plot(): stat="corr" (default)')
p2 = ggplot(df) + geom_point(aes(size='..corr_abs..'), stat='corr', shape=21, color='black') + \
scale_fill_gradient2(name='Corr', low='#ca0020', mid='#f7f7f7', high='#0571b0', limits=[-1, 1]) + \
scale_size_area(max_size=1, guide='none') + \
coord_fixed() + \
ggtitle('geom_point(): stat="corr"') + \
theme(axis_title='blank', axis_line='blank', panel_grid='blank')
w, h = 400, 300
bunch = GGBunch()
bunch.add_plot(p1, 0, 0, w, h)
bunch.add_plot(p2, w, 0, w, h)
bunch.show()
###Output
_____no_output_____
|
examples/losses_example.ipynb
|
###Markdown
deeptrack.lossesThis example introduces the module deeptrack.losses. 1. What is a loss?Losses are functions that return some representation of the error of the model during training. In DeepTrack 2.0, we extend the set of loss functions provided by Keras with loss functions specifically for image-to-image transformations. 2. SetupWe will exemplify the provided loss functions using some mock inputs: a 2x2x1 tensor of ones and a 2x2x1 tensor of zeros.
###Code
import deeptrack.losses as losses
from keras import backend as K
import numpy as np
truthly = K.constant(np.ones((2, 2, 1)))
falsely = K.constant(np.zeros((2, 2, 1)))
def evaluate(loss_function):
print("Error with true positive:", K.eval(loss_function(truthly, truthly)))
print("Error with false positive:", K.eval(loss_function(falsely, truthly)))
print("Error with false negative:", K.eval(loss_function(truthly, falsely)))
print("Error with true negative:", K.eval(loss_function(falsely, falsely)))
###Output
_____no_output_____
###Markdown
3. flatten()Flatten wraps a loss function, and converts the input to one dimension arrays. This is essential for certain loss functions.
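Based only on the description above, a wrapper of this kind could be sketched as follows (our own illustration, not DeepTrack's actual source):
```python
# A rough sketch of a flatten-style wrapper (illustration only, not deeptrack.losses.flatten itself)
from keras import backend as K

def flatten_sketch(loss):
    def flat_loss(y_true, y_pred):
        # reshape ground truth and prediction to 1-D before calling the wrapped loss
        return loss(K.reshape(y_true, (-1,)), K.reshape(y_pred, (-1,)))
    return flat_loss
```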
###Code
from tensorflow.keras.losses import mse
evaluate(mse)
evaluate(losses.flatten(mse))
###Output
Error with true positive: 0.0
Error with false positive: 1.0
Error with false negative: 1.0
Error with true negative: 0.0
###Markdown
4. sigmoid()Sigmoid applies a sigmoid transformation to the prediction.
###Code
evaluate(losses.flatten(losses.sigmoid(mse)))
###Output
Error with true positive: 0.07232948
Error with false positive: 0.53444666
Error with false negative: 0.25
Error with true negative: 0.25
###Markdown
5. weighted_crossentropy()Binary crossentropy with weighted classes. Typically for u-net segmentation tasks with uneven classes. Note that a false negative is penalized ten times as harshly as a false positive.
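As a rough numpy illustration of the weighting idea (our own sketch, not DeepTrack's actual implementation), with `weight=(10, 1)` the positive-class term of the binary crossentropy is scaled ten times more than the negative-class term, so missed positives (false negatives) dominate the loss:
```python
import numpy as np

def weighted_bce_sketch(y_true, y_pred, weight=(10, 1), eps=1e-6):
    # weight[0] scales the positive-class term (false negatives),
    # weight[1] scales the negative-class term (false positives)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(weight[0] * y_true * np.log(y_pred)
                    + weight[1] * (1 - y_true) * np.log(1 - y_pred))
```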
###Code
evaluate(losses.flatten(losses.weighted_crossentropy(weight=(10, 1))))
###Output
Error with true positive: -9.091964e-05
Error with false positive: 0.8373037
Error with false negative: 8.373037
Error with true negative: -9.091963e-06
###Markdown
6. nd_mean_squared_errorMean square error with flattened inputs.
###Code
evaluate(losses.nd_mean_squared_error)
###Output
Error with true positive: 0.0
Error with false positive: 1.0
Error with false negative: 1.0
Error with true negative: 0.0
###Markdown
7. nd_mean_squared_logarithmic_errorMean square log error with flattened inputs.
###Code
evaluate(losses.nd_mean_squared_logarithmic_error)
###Output
Error with true positive: 0.0
Error with false positive: 0.48045287
Error with false negative: 0.48045287
Error with true negative: 0.0
###Markdown
8. nd_poissonPoisson error with flattened inputs.
###Code
evaluate(losses.nd_poisson)
###Output
Error with true positive: 0.9999999
Error with false positive: 1.0
Error with false negative: 16.118095
Error with true negative: 0.0
###Markdown
9. nd_squared_hingeSquared hinge error with flattened inputs.
###Code
evaluate(losses.nd_squared_hinge)
###Output
Error with true positive: 0.0
Error with false positive: 4.0
Error with false negative: 1.0
Error with true negative: 1.0
###Markdown
10. nd_binary_crossentropyBinary crossentropy with flattened inputs.
###Code
evaluate(losses.nd_binary_crossentropy)
###Output
Error with true positive: -0.0
Error with false positive: 15.333239
Error with false negative: 15.424949
Error with true negative: -0.0
###Markdown
11. nd_mean_absolute_percentage_errorMean absolute percentage error with flattened inputs.
###Code
evaluate(losses.nd_mean_absolute_percentage_error)
###Output
Error with true positive: 0.0
Error with false positive: 1000000000.0
Error with false negative: 100.0
Error with true negative: 0.0
###Markdown
deeptrack.lossesThis example introduces the module deeptrack.losses. 1. What is a loss?Losses are functions that return some representation of the error of the model during training. In DeepTrack 2.0, we extend the set of loss functions provided by Keras with loss functions specifically for image-to-image transformations. 2. SetupWe will exemplify the provided loss functions using some mock inputs: a 2x2x1 tensor of ones and a 2x2x1 tensor of zeros.
###Code
import deeptrack.losses as losses
from tensorflow.keras import backend as K
import numpy as np
truthly = K.constant(np.ones((2, 2, 1)))
falsely = K.constant(np.zeros((2, 2, 1)))
def evaluate(loss_function):
print("Error with true positive:", K.eval(loss_function(truthly, truthly)))
print("Error with false positive:", K.eval(loss_function(falsely, truthly)))
print("Error with false negative:", K.eval(loss_function(truthly, falsely)))
print("Error with true negative:", K.eval(loss_function(falsely, falsely)))
###Output
_____no_output_____
###Markdown
3. flatten()Flatten wraps a loss function, and converts the input to one dimension arrays. This is essential for certain loss functions.
###Code
from tensorflow.keras.losses import mse
evaluate(mse)
evaluate(losses.flatten(mse))
###Output
Error with true positive: 0.0
Error with false positive: 1.0
Error with false negative: 1.0
Error with true negative: 0.0
###Markdown
4. sigmoid()Sigmoid applies a sigmoid transformation to the prediction.
###Code
evaluate(losses.flatten(losses.sigmoid(mse)))
###Output
Error with true positive: 0.072329514
Error with false positive: 0.5344466
Error with false negative: 0.25
Error with true negative: 0.25
###Markdown
5. weighted_crossentropy()Binary crossentropy with weighted classes. Typically for u-net segmentation tasks with uneven classes. Note that a false negative is penalized ten times as harshly as a false positive.
###Code
evaluate(losses.flatten(losses.weighted_crossentropy(weight=(10, 1))))
###Output
Error with true positive: -9.091964e-05
Error with false positive: 0.8373037
Error with false negative: 8.373037
Error with true negative: -9.091963e-06
###Markdown
6. nd_mean_squared_errorMean square error with flattened inputs.
###Code
evaluate(losses.nd_mean_squared_error)
###Output
Error with true positive: 0.0
Error with false positive: 1.0
Error with false negative: 1.0
Error with true negative: 0.0
###Markdown
7. nd_mean_squared_logarithmic_errorMean square log error with flattened inputs.
###Code
evaluate(losses.nd_mean_squared_logarithmic_error)
###Output
Error with true positive: 0.0
Error with false positive: 0.48045287
Error with false negative: 0.48045287
Error with true negative: 0.0
###Markdown
8. nd_poissonPoisson error with flattened inputs.
###Code
evaluate(losses.nd_poisson)
###Output
Error with true positive: 0.9999999
Error with false positive: 1.0
Error with false negative: 16.118095
Error with true negative: 0.0
###Markdown
9. nd_squared_hingeSquared hinge error with flattened inputs.
###Code
evaluate(losses.nd_squared_hinge)
###Output
Error with true positive: 0.0
Error with false positive: 4.0
Error with false negative: 1.0
Error with true negative: 1.0
###Markdown
10. nd_binary_crossentropyBinary crossentropy with flattened inputs.
###Code
evaluate(losses.nd_binary_crossentropy)
###Output
Error with true positive: -0.0
Error with false positive: 15.333239
Error with false negative: 15.424949
Error with true negative: -0.0
###Markdown
11. nd_mean_absolute_percentage_errorMean absolute percentage error with flattened inputs.
###Code
evaluate(losses.nd_mean_absolute_percentage_error)
###Output
Error with true positive: 0.0
Error with false positive: 1000000000.0
Error with false negative: 100.0
Error with true negative: 0.0
###Markdown
deeptrack.lossesThis example introduces the module deeptrack.losses. 1. What is a loss?Losses are functions that return some representation of the error of the model during training. In DeepTrack 2.0, we extend the set of loss functions provided by Keras with loss functions specifically for image-to-image transformations. 2. SetupWe will exemplify the provided loss functions using some mock inputs: a 2x2x1 tensor of ones and a 2x2x1 tensor of zeros.
###Code
import deeptrack.losses as losses
from tensorflow.keras import backend as K
import numpy as np
truthly = K.constant(np.ones((2, 2, 1)))
falsely = K.constant(np.zeros((2, 2, 1)))
def evaluate(loss_function):
print("Error with true positive:", K.eval(loss_function(truthly, truthly)))
print("Error with false positive:", K.eval(loss_function(falsely, truthly)))
print("Error with false negative:", K.eval(loss_function(truthly, falsely)))
print("Error with true negative:", K.eval(loss_function(falsely, falsely)))
###Output
_____no_output_____
###Markdown
3. flatten()Flatten wraps a loss function, and converts the input to one dimension arrays. This is essential for certain loss functions.
###Code
from tensorflow.keras.losses import mse
evaluate(mse)
evaluate(losses.flatten(mse))
###Output
_____no_output_____
###Markdown
4. sigmoid()Sigmoid applies a sigmoid transformation to the prediction.
###Code
evaluate(losses.flatten(losses.sigmoid(mse)))
###Output
_____no_output_____
###Markdown
5. weighted_crossentropy()Binary crossentropy with weighted classes. Typically for u-net segmentation tasks with uneven classes. Note that a false negative is penalized ten times as harshly as a false positive.
###Code
evaluate(losses.flatten(losses.weighted_crossentropy(weight=(10, 1))))
###Output
_____no_output_____
###Markdown
6. nd_mean_squared_errorMean square error with flattened inputs.
###Code
evaluate(losses.nd_mean_squared_error)
###Output
_____no_output_____
###Markdown
7. nd_mean_squared_logarithmic_errorMean square log error with flattened inputs.
###Code
evaluate(losses.nd_mean_squared_logarithmic_error)
###Output
_____no_output_____
###Markdown
8. nd_poissonPoisson error with flattened inputs.
###Code
evaluate(losses.nd_poisson)
###Output
_____no_output_____
###Markdown
9. nd_squared_hingeSquared hinge error with flattened inputs.
###Code
evaluate(losses.nd_squared_hinge)
###Output
_____no_output_____
###Markdown
10. nd_binary_crossentropyBinary crossentropy with flattened inputs.
###Code
evaluate(losses.nd_binary_crossentropy)
###Output
_____no_output_____
###Markdown
11. nd_mean_absolute_percentage_errorMean absolute percentage error with flattened inputs.
###Code
evaluate(losses.nd_mean_absolute_percentage_error)
###Output
_____no_output_____
|
utilities/oqrec_recordings/run_in_place_analysis.ipynb
|
###Markdown
Run In Place Analysis
=================================
This notebook contains the analysis for implementing walk in place inside the godot_oculus_quest_toolkit.
The implementation is based on the paper [*A Walking-in-Place Method for Virtual Reality Using Position and Orientation Tracking*](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6165345/pdf/sensors-18-02832.pdf)
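In outline (a condensed sketch of the detection logic implemented further below, added here for orientation), the method keeps a short ring buffer of view-direction-corrected head heights, re-estimates the resting head height whenever the buffer variance is low, and reports a step when the minimum of the buffer sits at its center and lies within a plausible step-depth range:
```python
# Condensed sketch of the step detection implemented below (illustration only)
def detect_step_sketch(heights, rest_height, min_depth=0.02, max_depth=0.1, var_thr=0.0005):
    # heights: list of the most recent corrected head heights (the ring buffer contents)
    average = sum(heights) / len(heights)
    variance = sum(abs(average - h) for h in heights) / len(heights)
    if variance <= var_thr:                      # head is steady: update the resting height estimate
        rest_height = average
    depth = rest_height - min(heights)
    is_turning_point = heights.index(min(heights)) == len(heights) // 2
    return (is_turning_point and min_depth < depth < max_depth), rest_height
```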
###Code
# NOTE: restart kernel when switching between inline and notebook
# inline will store images inside notebook; notebook will allow interactive plots
%matplotlib inline
#%matplotlib notebook
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import json, os
import glm
import numpy as np
os.listdir("data/") # list the files in a subfolder
rec = json.load(open('data/walkInPlace_001_runningAround.oqrec')) # load the recording as a json
# Print some info about the loaded recording
print("num_frames = %d; fps = %d; start_time = %s" % (rec["num_frames"], rec["target_fps"], rec["start_time"]))
print("Keys in the recording are:")
for k in rec.keys(): print('"'+k+'"', end = ' ') # show the keys of all the arrays in the recording
fig=plt.figure(figsize=(12, 8), dpi=80, facecolor='w', edgecolor='k') # setting the size of the output
plt.plot(rec["head_position"][1::3]) # plot only the y coordinate of the head position
# this function uses the euler angles (in radians) and creates a 4x4 matrix that
# has the same layout as a godot basis (hence the transposed product of elemental rotations below)
def godotEulerToMat4(x, y, z):
return glm.transpose(glm.rotate(glm.mat4(), -z, glm.vec3(0.0, 0.0, 1.0)) * \
glm.rotate(glm.mat4(), -x, glm.vec3(1.0, 0.0, 0.0)) * \
glm.rotate(glm.mat4(), -y, glm.vec3(0.0, 1.0, 0.0)));
# note: there is probably a much more elegant solution to this
def rotationArrayToViewDir_Y(a):
ret = [0.0] * (len(a)//3)
for i in range(0, len(a), 3):
ret[i//3] = -(godotEulerToMat4(a[i], a[i+1], a[i+2])[2].y); # basis.z.y ()
return ret;
Cup = -0.06;
Cdown = -0.177;
# this is required to adjust for the different headset height based on if the user is looking up, down or straight
def _get_viewdir_corrected_height(h, viewdir_y):
if (viewdir_y >= 0.0):
return h + Cup * viewdir_y;
else:
return h + Cdown * viewdir_y;
def processHeightCorrection(h, s):
ret = [0.0] * (len(h))
for i in range(0, len(h)):
ret[i] = _get_viewdir_corrected_height(h[i], s[i]);
return ret;
hpy = np.array(rec["head_position"][1::3])
hoy = np.array(rotationArrayToViewDir_Y(rec["head_orientation"]))
hpy_corrected = np.array(processHeightCorrection(hpy, hoy));
fig=plt.figure(figsize=(12, 8), dpi=80, facecolor='w', edgecolor='k') # setting the size of the output
plt.plot(hpy) # plot only the y coordinate of the head position
plt.plot(hpy_corrected) # plot only the y coordinate of the head position
plt.plot([1.8]*rec['num_frames'])
print(np.var(hpy) * 10000.0)
print(np.var(hpy_corrected) * 10000.0)
print(np.average(hpy))
print(np.average(hpy_corrected))
_height_ringbuffer_size = 5; # 5 seems fine in most of the cases so far; but maybe 7 could also work
_height_ringbuffer_pos = 0;
_height_ringbuffer = [0] * _height_ringbuffer_size;
_step_local_detect_threshold = 0.003; # local difference
_step_height_min_detect_threshold = 0.02; # This might need some tweaking now to avoid missed steps
_step_height_max_detect_threshold = 0.1; # This might need some tweaking now to avoid missed steps
_variance_height_detect_threshold = 0.0005;
variance_buffer = [];
_current_height_estimate = 0.0; # an estimate of the current actual head height to be able to detect step height
# I assume there is some shaking
def _get_buffered_height(i):
return _height_ringbuffer[(_height_ringbuffer_pos + i) % _height_ringbuffer_size];
def _store_height_in_buffer(y):
global _height_ringbuffer_pos;
global _height_ringbuffer;
global _height_ringbuffer_size;
_height_ringbuffer[_height_ringbuffer_pos] = y;
_height_ringbuffer_pos = (_height_ringbuffer_pos + 1) % _height_ringbuffer_size;
def detect_step():
min_value = _get_buffered_height(0);
global variance_buffer;
global _current_height_estimate;
average = min_value;
max_diff = 0.0;
p = 0;
for i in range(1, _height_ringbuffer_size):
val = _get_buffered_height(i);
average += val;
if (val < min_value):
min_value = val;
p = i;
average = average / _height_ringbuffer_size;
variance = 0.0;
for i in range(0, _height_ringbuffer_size):
val = _get_buffered_height(i);
variance = variance + abs(average - val);
variance = variance / _height_ringbuffer_size;
variance_buffer.append(variance * 100);
    # if there is not much variation in the last _height_ringbuffer_size values we take the average as our current height
# assuming that we are not in a step process then
if (variance <= _variance_height_detect_threshold):
_current_height_estimate = average;
    # this is now the actual step detection, based on whether the center value of the ring buffer is the actual minimum (the turning point)
# and also the defined thresholds to minimize false detections as much as possible
dist = _current_height_estimate - min_value;
if (p == _height_ringbuffer_size // 2
and dist > _step_height_min_detect_threshold
and dist < _step_height_max_detect_threshold
        #and (_get_buffered_height(0) - min_value) > _step_local_detect_threshold # this can avoid some local mispredictions
):
return 1;
else: return 0;
res = [];
height = [];
def sim():
_current_height_estimate = hpy_corrected[0];
c = 0;
last_c = 0;
for v in hpy_corrected:
_store_height_in_buffer(v)
step = detect_step()
res.append(step);
if (step == 1):
#print(c - last_c, end=' ');
last_c = c;
height.append(_current_height_estimate);
c+=1;
sim();
st = 0;
num = 2800;
fig=plt.figure(figsize=(12, 8), dpi=80, facecolor='w', edgecolor='k') # setting the size of the output
plt.plot(np.array(res[st:st+num]) * 0.1 + 1.9)
plt.plot((hpy_corrected[st:st+num]))
plt.plot((height[st:st+num]))
###Output
_____no_output_____
|
labs/05_conv_nets_2/Fully_Convolutional_Neural_Networks.ipynb
|
###Markdown
Fully Convolutional Neural NetworksObjectives:- Load a CNN model pre-trained on ImageNet- Transform the network into a Fully Convolutional Network - Apply the network to perform weak segmentation on images
###Code
%matplotlib inline
import warnings
import numpy as np
from scipy.misc import imread as scipy_imread, imresize as scipy_imresize
import matplotlib.pyplot as plt
np.random.seed(1)
# Load a pre-trained ResNet50
# We use include_top = False for now,
# as we'll import output Dense Layer later
from keras.applications.resnet50 import ResNet50
base_model = ResNet50(include_top=False)
print(base_model.output_shape)
#print(base_model.summary())
res5c = base_model.layers[-1]
type(res5c)
res5c.output_shape
###Output
_____no_output_____
###Markdown
Fully convolutional ResNet
- Out of the `res5c` residual block, the resnet outputs a tensor of shape $W \times H \times 2048$.
- For the default ImageNet input, $224 \times 224$, the output size is $7 \times 7 \times 2048$

Regular ResNet layers

The regular ResNet head after the base model is as follows:
```py
x = base_model.output
x = Flatten()(x)
x = Dense(1000)(x)
x = Softmax()(x)
```
Here is the full definition of the model: https://github.com/keras-team/keras-applications/blob/master/keras_applications/resnet50.py

Our Version
- We want to retrieve the labels information, which is stored in the Dense layer. We will load these weights afterwards
- We will change the Dense Layer to a Convolution2D layer to keep spatial information, to output a $W \times H \times 1000$.
- We can use a kernel size of (1, 1) for that new Convolution2D layer to pass the spatial organization of the previous layer unchanged (it's called a _pointwise convolution_).
- We want to apply a softmax only on the last dimension so as to preserve the $W \times H$ spatial information.

A custom Softmax

We build the following Custom Layer to apply a softmax only to the last dimension of a tensor:
###Code
import keras
from keras.engine import Layer
import keras.backend as K
# A custom layer in Keras must implement the four following methods:
class SoftmaxMap(Layer):
# Init function
def __init__(self, axis=-1, **kwargs):
self.axis = axis
super(SoftmaxMap, self).__init__(**kwargs)
# There's no parameter, so we don't need this one
def build(self, input_shape):
pass
# This is the layer we're interested in:
    # very similar to the regular softmax but note additionally
    # that we accept x.shape == (batch_size, w, h, n_classes),
    # which is not the case in Keras by default.
    # Note that we subtract the logits by their maximum to
# make the softmax more numerically stable.
def call(self, x, mask=None):
e = K.exp(x - K.max(x, axis=self.axis, keepdims=True))
s = K.sum(e, axis=self.axis, keepdims=True)
return e / s
# The output shape is the same as the input shape
def get_output_shape_for(self, input_shape):
return input_shape
###Output
_____no_output_____
###Markdown
Let's check that we can use this layer to normalize the class probabilities of some random spatial predictions:
###Code
n_samples, w, h, n_classes = 10, 3, 4, 5
random_data = np.random.randn(n_samples, w, h, n_classes)
random_data.shape
###Output
_____no_output_____
###Markdown
Because those predictions are random, if we sum across the classes dimension we get random values instead of class probabilities that would need to sum to 1:
###Code
random_data[0].sum(axis=-1)
###Output
_____no_output_____
###Markdown
Let's wrap the `SoftmaxMap` class into a test model to process our test data:
###Code
from keras.models import Sequential
model = Sequential([SoftmaxMap(input_shape=(w, h, n_classes))])
model.output_shape
softmax_mapped_data = model.predict(random_data)
softmax_mapped_data.shape
###Output
_____no_output_____
###Markdown
All the values are now in the [0, 1] range:
###Code
softmax_mapped_data[0]
###Output
_____no_output_____
###Markdown
The last dimension now approximately sums to one, so it can therefore be used as class probabilities (or parameters for a multinoulli distribution):
###Code
softmax_mapped_data[0].sum(axis=-1)
###Output
_____no_output_____
###Markdown
Note that the highest activated channel for each spatial location is still the same before and after the softmax map. The ranking of the activations is preserved as softmax is a monotonic function (when considered element-wise):
###Code
random_data[0].argmax(axis=-1)
softmax_mapped_data[0].argmax(axis=-1)
###Output
_____no_output_____
###Markdown
Exercise- What is the shape of the convolution kernel we want to apply to replace the Dense?- Build the fully convolutional model as described above. We want the output to preserve the spatial dimensions but output 1000 channels (one channel per class).- You may introspect the last elements of `base_model.layers` to find which layer to remove- You may use the Keras Convolution2D(output_channels, filter_w, filter_h) layer and our SoftmaxMap to normalize the result as per-class probabilities.- For now, ignore the weights of the new layer(s) (leave them initialized at random): just focus on making the right architecture with the right output shape.
###Code
from keras.layers import Convolution2D
from keras.models import Model
input = base_model.layers[0].input
# TODO: compute per-area class probabilites
output = input
fully_conv_ResNet = Model(inputs=input, outputs=output)
# %load solutions/fully_conv.py
###Output
_____no_output_____
###Markdown
You can use the following random data to check that it's possible to run a forward pass on a random RGB image:
###Code
prediction_maps = fully_conv_ResNet.predict(np.random.randn(1, 200, 300, 3))
prediction_maps.shape
###Output
_____no_output_____
###Markdown
How do you explain the resulting output shape?The class probabilities should sum to one in each area of the output map:
###Code
prediction_maps.sum(axis=-1)
###Output
_____no_output_____
###Markdown
Loading Dense weights- We provide the weights and bias of the last Dense layer of ResNet50 in file `weights_dense.h5`- Our last layer is now a 1x1 convolutional layer instead of a fully connected layer
###Code
import h5py
with h5py.File('weights_dense.h5', 'r') as h5f:
w = h5f['w'][:]
b = h5f['b'][:]
last_layer = fully_conv_ResNet.layers[-2]
print("Loaded weight shape:", w.shape)
print("Last conv layer weights shape:", last_layer.get_weights()[0].shape)
# reshape the weights
w_reshaped = w.reshape((1, 1, 2048, 1000))
# set the conv layer weights
last_layer.set_weights([w_reshaped, b])
###Output
_____no_output_____
###Markdown
A forward pass- We define the following function to test our new network. - It resizes the input to a given size, then uses `model.predict` to compute the output
###Code
from keras.applications.imagenet_utils import preprocess_input
from skimage.io import imread
from skimage.transform import resize
def forward_pass_resize(img_path, img_size):
img_raw = imread(img_path)
print("Image shape before resizing: %s" % (img_raw.shape,))
img = resize(img_raw, img_size, mode='reflect', preserve_range=True)
img = preprocess_input(img[np.newaxis])
print("Image batch size shape before forward pass:", img.shape)
prediction_map = fully_conv_ResNet.predict(img)
return prediction_map
output = forward_pass_resize("dog.jpg", (800, 600))
print("prediction map shape", output.shape)
###Output
_____no_output_____
###Markdown
Finding dog-related classesImageNet uses an ontology of concepts, from which classes are derived. A synset corresponds to a node in the ontology.For example all species of dogs are children of the synset [n02084071](http://image-net.org/synset?wnid=n02084071) (Dog, domestic dog, Canis familiaris):
###Code
# Helper file for importing synsets from imagenet
import imagenet_tool
synset = "n02084071" # synset corresponding to dogs
ids = imagenet_tool.synset_to_dfs_ids(synset)
print("All dog classes ids (%d):" % len(ids))
print(ids)
for dog_id in ids[:10]:
print(imagenet_tool.id_to_words(dog_id))
print('...')
###Output
_____no_output_____
###Markdown
Unsupervised heatmap of the class "dog"The following function builds a heatmap from a forward pass. It sums the representation for all ids corresponding to a synset
###Code
def build_heatmap(prediction_map, synset):
class_ids = imagenet_tool.synset_to_dfs_ids(synset)
class_ids = np.array([id_ for id_ in class_ids if id_ is not None])
each_dog_proba_map = prediction_map[0, :, :, class_ids]
    # this style of indexing a tensor by another array has the following shape effect:
# (H, W, 1000) indexed by (118) ==> (118, H, W)
any_dog_proba_map = each_dog_proba_map.sum(axis=0)
print("size of heatmap: " + str(any_dog_proba_map.shape))
return any_dog_proba_map
def display_img_and_heatmap(img_path, heatmap):
dog = imread(img_path)
plt.figure(figsize=(12, 8))
plt.subplot(1, 2, 1)
plt.imshow(dog)
plt.axis('off')
plt.subplot(1, 2, 2)
plt.imshow(heatmap, interpolation='nearest', cmap="viridis")
plt.axis('off')
###Output
_____no_output_____
###Markdown
**Exercise**- What is the size of the heatmap compared to the input image?- Build 3 dog heatmaps from `"dog.jpg"`, with the following sizes: - `(400, 640)` - `(800, 1280)` - `(1600, 2560)`- What do you observe? You may plot a heatmap using the above function `display_img_and_heatmap`. You might also want to reuse `forward_pass_resize` to compute the class maps themselves
###Code
# dog synset
s = "n02084071"
# TODO
# %load solutions/build_heatmaps.py
###Output
_____no_output_____
###Markdown
Combining the 3 heatmapsBy combining the heatmaps at different scales, we obtain much better information about the location of the dog.**Bonus**- Combine the three heatmaps by resizing them to a similar shape, and averaging them- A geometric norm will work better than a standard average!
###Code
# %load solutions/geom_avg.py
from skimage.transform import resize
heatmap_1_r = resize(heatmap_1, (50,80), mode='reflect', preserve_range=True, anti_aliasing=True)
heatmap_2_r = resize(heatmap_2, (50,80), mode='reflect', preserve_range=True, anti_aliasing=True)
heatmap_3_r = resize(heatmap_3, (50,80), mode='reflect', preserve_range=True, anti_aliasing=True)
heatmap_geom_avg = np.power(heatmap_1_r * heatmap_2_r * heatmap_3_r, 0.333)
display_img_and_heatmap("dog.jpg", heatmap_geom_avg)
###Output
_____no_output_____
###Markdown
Fully Convolutional Neural NetworksObjectives:- Load a CNN model pre-trained on ImageNet- Transform the network into a Fully Convolutional Network - Apply the network to perform weak segmentation on images
###Code
%matplotlib inline
import warnings
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1)
# Load a pre-trained ResNet50
# We use include_top = False for now,
# as we'll import output Dense Layer later
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50
base_model = ResNet50(include_top=False)
print(base_model.output_shape)
#print(base_model.summary())
res5c = base_model.layers[-1]
type(res5c)
res5c.output_shape
###Output
_____no_output_____
###Markdown
Fully convolutional ResNet
- Out of the `res5c` residual block, the resnet outputs a tensor of shape $W \times H \times 2048$.
- For the default ImageNet input, $224 \times 224$, the output size is $7 \times 7 \times 2048$

Regular ResNet layers

The regular ResNet head after the base model is as follows:
```py
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1000)(x)
x = Softmax()(x)
```
Here is the full definition of the model: https://github.com/keras-team/keras-applications/blob/master/keras_applications/resnet50.py

Our Version
- We want to retrieve the labels information, which is stored in the Dense layer. We will load these weights afterwards
- We will change the Dense Layer to a Convolution2D layer to keep spatial information, to output a $W \times H \times 1000$.
- We can use a kernel size of (1, 1) for that new Convolution2D layer to pass the spatial organization of the previous layer unchanged (it's called a *pointwise convolution*).
- We want to apply a softmax only on the last dimension so as to preserve the $W \times H$ spatial information.

A custom Softmax

We build the following Custom Layer to apply a softmax only to the last dimension of a tensor:
###Code
from tensorflow.keras import layers
# A custom layer in Keras must implement the four following methods:
class SoftmaxMap(layers.Layer):
# Init function
def __init__(self, axis=-1, **kwargs):
self.axis = axis
super(SoftmaxMap, self).__init__(**kwargs)
# There's no parameter, so we don't need this one
def build(self, input_shape):
pass
# This is the layer we're interested in:
    # very similar to the regular softmax but note additionally
    # that we accept x.shape == (batch_size, w, h, n_classes),
    # which is not the case in Keras by default.
    # Note also that we subtract the logits by their maximum to
# make the softmax numerically stable.
def call(self, x, mask=None):
e = tf.exp(x - tf.math.reduce_max(x, axis=self.axis, keepdims=True))
s = tf.math.reduce_sum(e, axis=self.axis, keepdims=True)
return e / s
# The output shape is the same as the input shape
def get_output_shape_for(self, input_shape):
return input_shape
###Output
_____no_output_____
###Markdown
Let's check that we can use this layer to normalize the class probabilities of some random spatial predictions:
###Code
n_samples, w, h, n_classes = 10, 3, 4, 5
random_data = np.random.randn(n_samples, w, h, n_classes).astype("float32")
random_data.shape
###Output
_____no_output_____
###Markdown
Because those predictions are random, if we sum across the classes dimension we get random values instead of class probabilities that would need to sum to 1:
###Code
random_data[0].sum(axis=-1)
###Output
_____no_output_____
###Markdown
Let's create a `SoftmaxMap` function from the layer and process our test data:
###Code
softmaxMap = SoftmaxMap()
softmax_mapped_data = softmaxMap(random_data).numpy()
softmax_mapped_data.shape
###Output
_____no_output_____
###Markdown
All the values are now in the [0, 1] range:
###Code
softmax_mapped_data[0]
###Output
_____no_output_____
###Markdown
The last dimension now approximately sums to one, so it can therefore be used as class probabilities (or parameters for a multinoulli distribution):
###Code
softmax_mapped_data[0].sum(axis=-1)
###Output
_____no_output_____
###Markdown
Note that the highest activated channel for each spatial location is still the same before and after the softmax map. The ranking of the activations is preserved as softmax is a monotonic function (when considered element-wise):
###Code
random_data[0].argmax(axis=-1)
softmax_mapped_data[0].argmax(axis=-1)
###Output
_____no_output_____
###Markdown
Exercise- What is the shape of the convolution kernel we want to apply to replace the Dense?- Build the fully convolutional model as described above. We want the output to preserve the spatial dimensions but output 1000 channels (one channel per class).- You may introspect the last elements of `base_model.layers` to find which layer to remove- You may use the Keras Convolution2D(output_channels, filter_w, filter_h) layer and our SoftmaxMap to normalize the result as per-class probabilities.- For now, ignore the weights of the new layer(s) (leave them initialized at random): just focus on making the right architecture with the right output shape.
###Code
from tensorflow.keras.layers import Convolution2D
from tensorflow.keras.models import Model
input = base_model.layers[0].input
# TODO: compute per-area class probabilites
output = input
fully_conv_ResNet = Model(inputs=input, outputs=output)
# %load solutions/fully_conv.py
###Output
_____no_output_____
###Markdown
You can use the following random data to check that it's possible to run a forward pass on a random RGB image:
###Code
prediction_maps = fully_conv_ResNet(np.random.randn(1, 200, 300, 3)).numpy()
prediction_maps.shape
###Output
_____no_output_____
###Markdown
How do you explain the resulting output shape?The class probabilities should sum to one in each area of the output map:
###Code
prediction_maps.sum(axis=-1)
###Output
_____no_output_____
###Markdown
Loading Dense weights- We provide the weights and bias of the last Dense layer of ResNet50 in file `weights_dense.h5`- Our last layer is now a 1x1 convolutional layer instead of a fully connected layer
###Code
import h5py
with h5py.File('weights_dense.h5', 'r') as h5f:
w = h5f['w'][:]
b = h5f['b'][:]
last_layer = fully_conv_ResNet.layers[-2]
print("Loaded weight shape:", w.shape)
print("Last conv layer weights shape:", last_layer.get_weights()[0].shape)
# reshape the weights
w_reshaped = w.reshape((1, 1, 2048, 1000))
# set the conv layer weights
last_layer.set_weights([w_reshaped, b])
###Output
_____no_output_____
###Markdown
A forward pass- We define the following function to test our new network. - It resizes the input to a given size, then uses `model.predict` to compute the output
###Code
from tensorflow.keras.applications.imagenet_utils import preprocess_input
from skimage.io import imread
from skimage.transform import resize
def forward_pass_resize(img_path, img_size):
img_raw = imread(img_path)
print("Image shape before resizing: %s" % (img_raw.shape,))
img = resize(img_raw, img_size, mode='reflect', preserve_range=True)
img = preprocess_input(img[np.newaxis])
print("Image batch size shape before forward pass:", img.shape)
prediction_map = fully_conv_ResNet(img).numpy()
return prediction_map
output = forward_pass_resize("dog.jpg", (800, 600))
print("prediction map shape", output.shape)
###Output
_____no_output_____
###Markdown
Finding dog-related classesImageNet uses an ontology of concepts, from which classes are derived. A synset corresponds to a node in the ontology.For example all species of dogs are children of the synset [n02084071](http://image-net.org/synset?wnid=n02084071) (Dog, domestic dog, Canis familiaris):
###Code
# Helper file for importing synsets from imagenet
import imagenet_tool
synset = "n02084071" # synset corresponding to dogs
ids = imagenet_tool.synset_to_dfs_ids(synset)
print("All dog classes ids (%d):" % len(ids))
print(ids)
for dog_id in ids[:10]:
print(imagenet_tool.id_to_words(dog_id))
print('...')
###Output
_____no_output_____
###Markdown
Unsupervised heatmap of the class "dog"The following function builds a heatmap from a forward pass. It sums the representation for all ids corresponding to a synset
###Code
def build_heatmap(prediction_map, synset):
class_ids = imagenet_tool.synset_to_dfs_ids(synset)
class_ids = np.array([id_ for id_ in class_ids if id_ is not None])
each_dog_proba_map = prediction_map[0, :, :, class_ids]
    # this style of indexing a tensor by another array has the following shape effect:
# (H, W, 1000) indexed by (118) ==> (118, H, W)
any_dog_proba_map = each_dog_proba_map.sum(axis=0)
print("size of heatmap: " + str(any_dog_proba_map.shape))
return any_dog_proba_map
def display_img_and_heatmap(img_path, heatmap):
dog = imread(img_path)
plt.figure(figsize=(12, 8))
plt.subplot(1, 2, 1)
plt.imshow(dog)
plt.axis('off')
plt.subplot(1, 2, 2)
plt.imshow(heatmap, interpolation='nearest', cmap="viridis")
plt.axis('off')
###Output
_____no_output_____
###Markdown
**Exercise**- What is the size of the heatmap compared to the input image?- Build 3 or 4 dog heatmaps from `"dog.jpg"`, with the following sizes: - `(200, 320)` - `(400, 640)` - `(800, 1280)` - `(1600, 2560)` (optional, requires a lot of memory)- What do you observe? You may plot a heatmap using the above function `display_img_and_heatmap`. You might also want to reuse `forward_pass_resize` to compute the class maps themselves
###Code
# dog synset
s = "n02084071"
# TODO
# %load solutions/build_heatmaps.py
###Output
_____no_output_____
###Markdown
Combining the 3 heatmapsBy combining the heatmaps at different scales, we obtain much better information about the location of the dog.**Bonus**- Combine the three heatmaps by resizing them to a similar shape, and averaging them- A geometric norm will work better than a standard average!
###Code
from skimage.transform import resize
# TODO
# %load solutions/geom_avg.py
###Output
_____no_output_____
###Markdown
Fully Convolutional Neural NetworksObjectives:- Load a CNN model pre-trained on ImageNet- Transform the network into a Fully Convolutional Network - Apply the network to perform weak segmentation on images
###Code
%matplotlib inline
import warnings
import numpy as np
from scipy.misc import imread as scipy_imread, imresize as scipy_imresize
import matplotlib.pyplot as plt
np.random.seed(1)
# Wrapper functions to disable annoying warnings:
def imread(*args, **kwargs):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
return scipy_imread(*args, **kwargs)
def imresize(*args, **kwargs):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
return scipy_imresize(*args, **kwargs)
# Load a pre-trained ResNet50
# We use include_top = False for now,
# as we'll import output Dense Layer later
from keras.applications.resnet50 import ResNet50
base_model = ResNet50(include_top=False)
print(base_model.output_shape)
#print(base_model.summary())
res5c = base_model.layers[-1]
type(res5c)
res5c.output_shape
###Output
_____no_output_____
###Markdown
Fully convolutional ResNet
- Out of the `res5c` residual block, the resnet outputs a tensor of shape $W \times H \times 2048$.
- For the default ImageNet input, $224 \times 224$, the output size is $7 \times 7 \times 2048$

Regular ResNet layers

The regular ResNet head after the base model is as follows:
```py
x = base_model.output
x = Flatten()(x)
x = Dense(1000)(x)
x = Softmax()(x)
```
Here is the full definition of the model: https://github.com/keras-team/keras-applications/blob/master/keras_applications/resnet50.py

Our Version
- We want to retrieve the labels information, which is stored in the Dense layer. We will load these weights afterwards
- We will change the Dense Layer to a Convolution2D layer to keep spatial information, to output a $W \times H \times 1000$.
- We can use a kernel size of (1, 1) for that new Convolution2D layer to pass the spatial organization of the previous layer unchanged (it's called a _pointwise convolution_).
- We want to apply a softmax only on the last dimension so as to preserve the $W \times H$ spatial information.

A custom Softmax

We build the following Custom Layer to apply a softmax only to the last dimension of a tensor:
###Code
import keras
from keras.engine import Layer
import keras.backend as K
# A custom layer in Keras must implement the four following methods:
class SoftmaxMap(Layer):
# Init function
def __init__(self, axis=-1, **kwargs):
self.axis = axis
super(SoftmaxMap, self).__init__(**kwargs)
# There's no parameter, so we don't need this one
def build(self, input_shape):
pass
# This is the layer we're interested in:
    # very similar to the regular softmax but note additionally
    # that we accept x.shape == (batch_size, w, h, n_classes),
    # which is not the case in Keras by default.
    # Note that we subtract the logits by their maximum to
# make the softmax more numerically stable.
def call(self, x, mask=None):
e = K.exp(x - K.max(x, axis=self.axis, keepdims=True))
s = K.sum(e, axis=self.axis, keepdims=True)
return e / s
# The output shape is the same as the input shape
def get_output_shape_for(self, input_shape):
return input_shape
###Output
_____no_output_____
###Markdown
Let's check that we can use this layer to normalize the class probabilities of some random spatial predictions:
###Code
n_samples, w, h, n_classes = 10, 3, 4, 5
random_data = np.random.randn(n_samples, w, h, n_classes)
random_data.shape
###Output
_____no_output_____
###Markdown
Because those predictions are random, if we sum across the classes dimension we get random values instead of class probabilities that would need to sum to 1:
###Code
random_data[0].sum(axis=-1)
###Output
_____no_output_____
###Markdown
Let's wrap the `SoftmaxMap` class into a test model to process our test data:
###Code
from keras.models import Sequential
model = Sequential([SoftmaxMap(input_shape=(w, h, n_classes))])
model.output_shape
softmax_mapped_data = model.predict(random_data)
softmax_mapped_data.shape
###Output
_____no_output_____
###Markdown
All the values are now in the [0, 1] range:
###Code
softmax_mapped_data[0]
###Output
_____no_output_____
###Markdown
The last dimension now approximately sums to one, so it can therefore be used as class probabilities (or parameters for a multinoulli distribution):
###Code
softmax_mapped_data[0].sum(axis=-1)
###Output
_____no_output_____
###Markdown
Note that the highest activated channel for each spatial location is still the same before and after the softmax map. The ranking of the activations is preserved as softmax is a monotonic function (when considered element-wise):
###Code
random_data[0].argmax(axis=-1)
softmax_mapped_data[0].argmax(axis=-1)
###Output
_____no_output_____
###Markdown
Exercise- What is the shape of the convolution kernel we want to apply to replace the Dense?- Build the fully convolutional model as described above. We want the output to preserve the spatial dimensions but output 1000 channels (one channel per class).- You may introspect the last elements of `base_model.layers` to find which layer to remove- You may use the Keras Convolution2D(output_channels, filter_w, filter_h) layer and our SoftmaxMap to normalize the result as per-class probabilities.- For now, ignore the weights of the new layer(s) (leave them initialized at random): just focus on making the right architecture with the right output shape.
###Code
from keras.layers import Convolution2D
from keras.models import Model
input = base_model.layers[0].input
# TODO: compute per-area class probabilites
output = input
fully_conv_ResNet = Model(inputs=input, outputs=output)
# %load solutions/fully_conv.py
###Output
_____no_output_____
###Markdown
You can use the following random data to check that it's possible to run a forward pass on a random RGB image:
###Code
prediction_maps = fully_conv_ResNet.predict(np.random.randn(1, 200, 300, 3))
prediction_maps.shape
###Output
_____no_output_____
###Markdown
How do you explain the resulting output shape?The class probabilities should sum to one in each area of the output map:
###Code
prediction_maps.sum(axis=-1)
###Output
_____no_output_____
###Markdown
Loading Dense weights- We provide the weights and bias of the last Dense layer of ResNet50 in file `weights_dense.h5`- Our last layer is now a 1x1 convolutional layer instead of a fully connected layer
###Code
import h5py
with h5py.File('weights_dense.h5', 'r') as h5f:
w = h5f['w'][:]
b = h5f['b'][:]
last_layer = fully_conv_ResNet.layers[-2]
print("Loaded weight shape:", w.shape)
print("Last conv layer weights shape:", last_layer.get_weights()[0].shape)
# reshape the weights
w_reshaped = w.reshape((1, 1, 2048, 1000))
# set the conv layer weights
last_layer.set_weights([w_reshaped, b])
###Output
_____no_output_____
###Markdown
A forward pass- We define the following function to test our new network. - It resizes the input to a given size, then uses `model.predict` to compute the output
###Code
from keras.applications.imagenet_utils import preprocess_input
def forward_pass_resize(img_path, img_size):
img_raw = imread(img_path)
print("Image shape before resizing: %s" % (img_raw.shape,))
img = imresize(img_raw, size=img_size).astype("float32")
img = preprocess_input(img[np.newaxis])
print("Image batch size shape before forward pass:", img.shape)
z = fully_conv_ResNet.predict(img)
return z
output = forward_pass_resize("dog.jpg", (800, 600))
print("prediction map shape", output.shape)
###Output
_____no_output_____
###Markdown
Finding dog-related classesImageNet uses an ontology of concepts, from which classes are derived. A synset corresponds to a node in the ontology.For example all species of dogs are children of the synset [n02084071](http://image-net.org/synset?wnid=n02084071) (Dog, domestic dog, Canis familiaris):
###Code
# Helper file for importing synsets from imagenet
import imagenet_tool
synset = "n02084071" # synset corresponding to dogs
ids = imagenet_tool.synset_to_dfs_ids(synset)
print("All dog classes ids (%d):" % len(ids))
print(ids)
for dog_id in ids[:10]:
print(imagenet_tool.id_to_words(dog_id))
print('...')
###Output
_____no_output_____
###Markdown
Unsupervised heatmap of the class "dog"The following function builds a heatmap from a forward pass. It sums the representation for all ids corresponding to a synset
###Code
def build_heatmap(z, synset):
class_ids = imagenet_tool.synset_to_dfs_ids(synset)
class_ids = np.array([id_ for id_ in class_ids if id_ is not None])
x = z[0, :, :, class_ids].sum(axis=0)
print("size of heatmap: " + str(x.shape))
return x
def display_img_and_heatmap(img_path, heatmap):
dog = imread(img_path)
plt.figure(figsize=(12, 8))
plt.subplot(1, 2, 1)
plt.imshow(dog)
plt.axis('off')
plt.subplot(1, 2, 2)
plt.imshow(heatmap, interpolation='nearest', cmap="viridis")
plt.axis('off')
###Output
_____no_output_____
###Markdown
**Exercise**- What is the size of the heatmap compared to the input image?- Build 3 dog heatmaps from `"dog.jpg"`, with the following sizes: - `(400, 640)` - `(800, 1280)` - `(1600, 2560)`- What do you observe? You may plot a heatmap using the above function `display_img_and_heatmap`. You might also want to reuse `forward_pass_resize` to compute the class maps themselves
###Code
# dog synset
s = "n02084071"
# TODO
# %load solutions/build_heatmaps.py
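# A possible sketch (kept as comments so the exercise stays open). It reuses the
# forward_pass_resize, build_heatmap and display_img_and_heatmap helpers defined above:
# probas = forward_pass_resize("dog.jpg", (400, 640))
# heatmap = build_heatmap(probas, synset=s)
# display_img_and_heatmap("dog.jpg", heatmap)
# # ... repeat with (800, 1280) and (1600, 2560) to compare scales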
###Output
_____no_output_____
###Markdown
Combining the 3 heatmapsBy combining the heatmaps at different scales, we obtain much better information about the location of the dog.**Bonus**- Combine the three heatmaps by resizing them to a similar shape, and averaging them- A geometric average will work better than a standard (arithmetic) average!
###Code
# %load solutions/geom_avg.py
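# A possible sketch (kept as comments so the bonus stays open). It assumes heatmap_1/2/3
# were computed at three scales in the previous exercise and uses skimage.transform.resize,
# which is not imported in this notebook:
# from skimage.transform import resize
# h1 = resize(heatmap_1, (50, 80), mode='reflect', preserve_range=True)
# h2 = resize(heatmap_2, (50, 80), mode='reflect', preserve_range=True)
# h3 = resize(heatmap_3, (50, 80), mode='reflect', preserve_range=True)
# heatmap_geom_avg = np.power(h1 * h2 * h3, 1 / 3)   # geometric average of the three maps
# display_img_and_heatmap("dog.jpg", heatmap_geom_avg)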
###Output
_____no_output_____
###Markdown
Fully Convolutional Neural NetworksObjectives:- Load a CNN model pre-trained on ImageNet- Transform the network into a Fully Convolutional Network - Apply the network to perform weak segmentation on images
###Code
%matplotlib inline
import warnings
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1)
# Load a pre-trained ResNet50
# We use include_top = False for now,
# as we'll import output Dense Layer later
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50
base_model = ResNet50(include_top=False)
print(base_model.output_shape)
# batch size x width x height x nb channels
print(base_model.summary())
res5c = base_model.layers[-1]
type(res5c)
res5c.output_shape
base_model(np.random.randn(1, 224, 224, 3)).shape
###Output
_____no_output_____
###Markdown
Fully convolutional ResNet- Out of the `res5c` residual block, the resnet outputs a tensor of shape $W \times H \times 2048$. - For the default ImageNet input, $224 \times 224$, the output size is $7 \times 7 \times 2048$ Regular ResNet layers The regular ResNet head after the base model is as follows:
```py
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1000)(x)
x = Softmax()(x)
```
Here is the full definition of the model: https://github.com/keras-team/keras-applications/blob/master/keras_applications/resnet50.py Our Version- We want to retrieve the labels information, which is stored in the Dense layer. We will load these weights afterwards- We will change the Dense Layer to a Convolution2D layer to keep spatial information, to output a $W \times H \times 1000$.- We can use a kernel size of (1, 1) for that new Convolution2D layer to pass the spatial organization of the previous layer unchanged (it's called a *pointwise convolution*).- We want to apply a softmax only on the last dimension so as to preserve the $W \times H$ spatial information. A custom SoftmaxWe build the following Custom Layer to apply a softmax only to the last dimension of a tensor:
###Code
from tensorflow.keras import layers
# A custom layer in Keras must implement the four following methods:
class SoftmaxMap(layers.Layer):
# Init function
def __init__(self, axis=-1, **kwargs):
self.axis = axis
super(SoftmaxMap, self).__init__(**kwargs)
# There's no parameter, so we don't need this one
def build(self, input_shape):
pass
# This is the layer we're interested in:
    # very similar to the regular softmax, but note additionally
# that we accept x.shape == (batch_size, w, h, n_classes)
# which is not the case in Keras by default.
    # Note also that we subtract their maximum from the logits to
# make the softmax numerically stable.
def call(self, x, mask=None):
e = tf.exp(x - tf.math.reduce_max(x, axis=self.axis, keepdims=True))
s = tf.math.reduce_sum(e, axis=self.axis, keepdims=True)
return e / s
# The output shape is the same as the input shape
def get_output_shape_for(self, input_shape):
return input_shape
###Output
_____no_output_____
###Markdown
Let's check that we can use this layer to normalize the classes probabilities of some random spatial predictions:
###Code
n_samples, w, h, n_classes = 10, 3, 4, 5
random_data = np.random.randn(n_samples, w, h, n_classes).astype("float32")
random_data.shape
###Output
_____no_output_____
###Markdown
Because those predictions are random, if we sum across the classes dimension we get random values instead of class probabilities that would need to sum to 1:
###Code
random_data[0].sum(axis=-1)
###Output
_____no_output_____
###Markdown
Let's create a `SoftmaxMap` function from the layer and process our test data:
###Code
softmaxMap = SoftmaxMap()
softmax_mapped_data = softmaxMap(random_data).numpy()
softmax_mapped_data.shape
###Output
_____no_output_____
###Markdown
All the values are now in the [0, 1] range:
###Code
softmax_mapped_data[0]
###Output
_____no_output_____
###Markdown
The last dimension now approximately sums to one, so it can be used as class probabilities (or parameters for a multinoulli distribution):
###Code
softmax_mapped_data[0].sum(axis=-1)
###Output
_____no_output_____
###Markdown
Note that the highest activated channel for each spatial location is still the same before and after the softmax map. The ranking of the activations is preserved as softmax is a monotonic function (when considered element-wise):
###Code
random_data[0].argmax(axis=-1)
softmax_mapped_data[0].argmax(axis=-1)
###Output
_____no_output_____
###Markdown
Exercise- What is the shape of the convolution kernel we want to apply to replace the Dense layer?- Build the fully convolutional model as described above. We want the output to preserve the spatial dimensions but output 1000 channels (one channel per class).- You may introspect the last elements of `base_model.layers` to find which layer to remove- You may use the Keras Convolution2D(output_channels, filter_w, filter_h) layer and our SoftmaxMap to normalize the result as per-class probabilities.- For now, ignore the weights of the new layer(s) (leave them initialized at random): just focus on making the right architecture with the right output shape.
###Code
from tensorflow.keras.layers import Convolution2D
from tensorflow.keras.models import Model
input = base_model.layers[0].input
# TODO: compute per-area class probabilities
output = input
fully_conv_ResNet = Model(inputs=input, outputs=output)
# %load solutions/fully_conv.py
from tensorflow.keras.layers import Convolution2D
from tensorflow.keras.models import Model
input = base_model.layers[0].input
# Take the output of the last layer of the convnet:
x = base_model.layers[-1].output  # equivalent to base_model(input)
# A 1x1 convolution, with 1000 output channels, one per class
x = Convolution2D(1000, (1, 1), name='conv1000')(x)
# Softmax on last axis of tensor to normalize the class
# predictions in each spatial area
output = SoftmaxMap(axis=-1)(x)
fully_conv_ResNet = Model(inputs=input, outputs=output)
# A 1x1 convolution applies a Dense to each spatial grid location
###Output
_____no_output_____
###Markdown
You can use the following random data to check that it's possible to run a forward pass on a random RGB image:
###Code
prediction_maps = fully_conv_ResNet(np.random.randn(1, 200, 300, 3)).numpy()
prediction_maps.shape
###Output
_____no_output_____
###Markdown
How do you explain the resulting output shape?The class probabilities should sum to one in each area of the output map:
###Code
prediction_maps.sum(axis=-1)
###Output
_____no_output_____
###Markdown
Loading Dense weights- We provide the weights and bias of the last Dense layer of ResNet50 in file `weights_dense.h5`- Our last layer is now a 1x1 convolutional layer instead of a fully connected layer
###Code
import h5py
with h5py.File('weights_dense.h5', 'r') as h5f:
w = h5f['w'][:]
b = h5f['b'][:]
last_layer = fully_conv_ResNet.layers[-2]
print("Loaded weight shape:", w.shape)
print("Last conv layer weights shape:", last_layer.get_weights()[0].shape)
# reshape the weights
w_reshaped = w.reshape((1, 1, 2048, 1000))
# set the conv layer weights
last_layer.set_weights([w_reshaped, b])
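# Optional sanity check (not in the original notebook): after loading the Dense weights
# into the 1x1 convolution, the per-location class scores should still sum to ~1.
check = fully_conv_ResNet(np.random.randn(1, 224, 224, 3).astype("float32")).numpy()
print("prediction map:", check.shape, "- probabilities sum to 1:",
      np.allclose(check.sum(axis=-1), 1.0, atol=1e-3))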
###Output
_____no_output_____
###Markdown
A forward pass- We define the following function to test our new network. - It resizes the input to a given size, then uses `model.predict` to compute the output
###Code
from tensorflow.keras.applications.imagenet_utils import preprocess_input
from skimage.io import imread
from skimage.transform import resize
def forward_pass_resize(img_path, img_size):
img_raw = imread(img_path)
print("Image shape before resizing: %s" % (img_raw.shape,))
img = resize(img_raw, img_size, mode='reflect', preserve_range=True)
img = preprocess_input(img[np.newaxis])
print("Image batch size shape before forward pass:", img.shape)
prediction_map = fully_conv_ResNet(img).numpy()
return prediction_map
output = forward_pass_resize("dog.jpg", (800, 600))
print("prediction map shape", output.shape)
###Output
Image shape before resizing: (1600, 2560, 3)
Image batch size shape before forward pass: (1, 800, 600, 3)
prediction map shape (1, 25, 19, 1000)
###Markdown
Finding dog-related classesImageNet uses an ontology of concepts, from which classes are derived. A synset corresponds to a node in the ontology.For example all species of dogs are children of the synset [n02084071](http://image-net.org/synset?wnid=n02084071) (Dog, domestic dog, Canis familiaris):
###Code
# Helper file for importing synsets from imagenet
import imagenet_tool
synset = "n02084071" # synset corresponding to dogs
ids = imagenet_tool.synset_to_dfs_ids(synset)
print("All dog classes ids (%d):" % len(ids))
print(ids)
for dog_id in ids[:10]:
print(imagenet_tool.id_to_words(dog_id))
print('...')
###Output
dalmatian, coach dog, carriage dog
Mexican hairless
Newfoundland, Newfoundland dog
basenji
Leonberg
pug, pug-dog
Great Pyrenees
Rhodesian ridgeback
vizsla, Hungarian pointer
German short-haired pointer
...
###Markdown
Unsupervised heatmap of the class "dog"The following function builds a heatmap from a forward pass. It sums the representation for all ids corresponding to a synset
###Code
def build_heatmap(prediction_map, synset):
class_ids = imagenet_tool.synset_to_dfs_ids(synset)
class_ids = np.array([id_ for id_ in class_ids if id_ is not None])
each_dog_proba_map = prediction_map[0, :, :, class_ids]
    # this style of indexing a tensor by another array has the following shape effect:
# (H, W, 1000) indexed by (118) ==> (118, H, W)
any_dog_proba_map = each_dog_proba_map.sum(axis=0)
print("size of heatmap: " + str(any_dog_proba_map.shape))
return any_dog_proba_map
def display_img_and_heatmap(img_path, heatmap):
dog = imread(img_path)
plt.figure(figsize=(12, 8))
plt.subplot(1, 2, 1)
plt.imshow(dog)
plt.axis('off')
plt.subplot(1, 2, 2)
plt.imshow(heatmap, interpolation='nearest', cmap="viridis")
plt.axis('off')
###Output
_____no_output_____
###Markdown
**Exercise**- What is the size of the heatmap compared to the input image?- Build 3 or 4 dog heatmaps from `"dog.jpg"`, with the following sizes: - `(200, 320)` - `(400, 640)` - `(800, 1280)` - `(1600, 2560)` (optional, requires a lot of memory)- What do you observe? You may plot a heatmap using the above function `display_img_and_heatmap`. You might also want to reuse `forward_pass_resize` to compute the class maps themselves
###Code
# dog synset
s = "n02084071"
# TODO
# %load solutions/build_heatmaps.py
s = "n02084071"
probas_1 = forward_pass_resize("dog.jpg", (200, 320))
heatmap_1 = build_heatmap(probas_1, synset=s)
display_img_and_heatmap("dog.jpg", heatmap_1)
probas_2 = forward_pass_resize("dog.jpg", (400, 640))
heatmap_2 = build_heatmap(probas_2, synset=s)
display_img_and_heatmap("dog.jpg", heatmap_2)
probas_3 = forward_pass_resize("dog.jpg", (800, 1280))
heatmap_3 = build_heatmap(probas_3, synset=s)
display_img_and_heatmap("dog.jpg", heatmap_3)
# We observe that heatmap_1 and heatmap_2 gave coarser
# segmentations than heatmap_3. However, heatmap_3
# has small artifacts outside of the dog area
# heatmap_3 encodes more local, texture level information
# about the dog, while lower resolutions will encode more
# semantic information about the full object
# combining them is probably a good idea!
###Output
Image shape before resizing: (1600, 2560, 3)
Image batch size shape before forward pass: (1, 200, 320, 3)
size of heatmap: (7, 10)
Image shape before resizing: (1600, 2560, 3)
Image batch size shape before forward pass: (1, 400, 640, 3)
size of heatmap: (13, 20)
Image shape before resizing: (1600, 2560, 3)
Image batch size shape before forward pass: (1, 800, 1280, 3)
size of heatmap: (25, 40)
###Markdown
Combining the 3 heatmapsBy combining the heatmaps at different scales, we obtain much better information about the location of the dog.**Bonus**- Combine the three heatmaps by resizing them to a similar shape, and averaging them- A geometric average will work better than a standard (arithmetic) average!
###Code
from skimage.transform import resize
# TODO
# %load solutions/geom_avg.py
from skimage.transform import resize
heatmap_1_r = resize(heatmap_1, (50,80), mode='reflect',
preserve_range=True, anti_aliasing=True)
heatmap_2_r = resize(heatmap_2, (50,80), mode='reflect',
preserve_range=True, anti_aliasing=True)
heatmap_3_r = resize(heatmap_3, (50,80), mode='reflect',
preserve_range=True, anti_aliasing=True)
heatmap_geom_avg = np.power(heatmap_1_r * heatmap_2_r * heatmap_3_r, 0.333)
display_img_and_heatmap("dog.jpg", heatmap_geom_avg)
###Output
_____no_output_____
###Markdown
Fully Convolutional Neural NetworksObjectives:- Load a CNN model pre-trained on ImageNet- Transform the network into a Fully Convolutional Network - Apply the network to perform weak segmentation on images
###Code
%matplotlib inline
import warnings
import numpy as np
# Note that you need a scipy version older than 1.2.0
# in order to use the following two functions
from scipy.misc import imread as scipy_imread, imresize as scipy_imresize
import matplotlib.pyplot as plt
np.random.seed(1)
# Load a pre-trained ResNet50
# We use include_top = False for now,
# as we'll import output Dense Layer later
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50
base_model = ResNet50(include_top=False)
print(base_model.output_shape)
#print(base_model.summary())
res5c = base_model.layers[-1]
type(res5c)
res5c.output_shape
###Output
_____no_output_____
###Markdown
Fully convolutional ResNet- Out of the `res5c` residual block, the resnet outputs a tensor of shape $W \times H \times 2048$. - For the default ImageNet input, $224 \times 224$, the output size is $7 \times 7 \times 2048$ Regular ResNet layers The regular ResNet head after the base model is as follows:
```py
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1000)(x)
x = Softmax()(x)
```
Here is the full definition of the model: https://github.com/keras-team/keras-applications/blob/master/keras_applications/resnet50.py Our Version- We want to retrieve the labels information, which is stored in the Dense layer. We will load these weights afterwards- We will change the Dense Layer to a Convolution2D layer to keep spatial information, to output a $W \times H \times 1000$.- We can use a kernel size of (1, 1) for that new Convolution2D layer to pass the spatial organization of the previous layer unchanged (it's called a *pointwise convolution*).- We want to apply a softmax only on the last dimension so as to preserve the $W \times H$ spatial information. A custom SoftmaxWe build the following Custom Layer to apply a softmax only to the last dimension of a tensor:
###Code
from tensorflow.keras import layers
# A custom layer in Keras must implement the four following methods:
class SoftmaxMap(layers.Layer):
# Init function
def __init__(self, axis=-1, **kwargs):
self.axis = axis
super(SoftmaxMap, self).__init__(**kwargs)
# There's no parameter, so we don't need this one
def build(self, input_shape):
pass
# This is the layer we're interested in:
    # very similar to the regular softmax, but note additionally
# that we accept x.shape == (batch_size, w, h, n_classes)
# which is not the case in Keras by default.
    # Note also that we subtract their maximum from the logits to
# make the softmax numerically stable.
def call(self, x, mask=None):
e = tf.exp(x - tf.math.reduce_max(x, axis=self.axis, keepdims=True))
s = tf.math.reduce_sum(e, axis=self.axis, keepdims=True)
return e / s
# The output shape is the same as the input shape
def get_output_shape_for(self, input_shape):
return input_shape
###Output
_____no_output_____
###Markdown
Let's check that we can use this layer to normalize the classes probabilities of some random spatial predictions:
###Code
n_samples, w, h, n_classes = 10, 3, 4, 5
random_data = np.random.randn(n_samples, w, h, n_classes).astype("float32")
random_data.shape
###Output
_____no_output_____
###Markdown
Because those predictions are random, if we sum across the classes dimension we get random values instead of class probabilities that would need to sum to 1:
###Code
random_data[0].sum(axis=-1)
###Output
_____no_output_____
###Markdown
Let's create a `SoftmaxMap` function from the layer and process our test data:
###Code
softmaxMap = SoftmaxMap()
softmax_mapped_data = softmaxMap(random_data).numpy()
softmax_mapped_data.shape
###Output
_____no_output_____
###Markdown
All the values are now in the [0, 1] range:
###Code
softmax_mapped_data[0]
###Output
_____no_output_____
###Markdown
The last dimension now approximately sums to one, so it can be used as class probabilities (or parameters for a multinoulli distribution):
###Code
softmax_mapped_data[0].sum(axis=-1)
###Output
_____no_output_____
###Markdown
Note that the highest activated channel for each spatial location is still the same before and after the softmax map. The ranking of the activations is preserved as softmax is a monotonic function (when considered element-wise):
###Code
random_data[0].argmax(axis=-1)
softmax_mapped_data[0].argmax(axis=-1)
###Output
_____no_output_____
###Markdown
Exercise- What is the shape of the convolution kernel we want to apply to replace the Dense layer?- Build the fully convolutional model as described above. We want the output to preserve the spatial dimensions but output 1000 channels (one channel per class).- You may introspect the last elements of `base_model.layers` to find which layer to remove- You may use the Keras Convolution2D(output_channels, filter_w, filter_h) layer and our SoftmaxMap to normalize the result as per-class probabilities.- For now, ignore the weights of the new layer(s) (leave them initialized at random): just focus on making the right architecture with the right output shape.
###Code
from tensorflow.keras.layers import Convolution2D
from tensorflow.keras.models import Model
input = base_model.layers[0].input
# TODO: compute per-area class probabilities
output = input
fully_conv_ResNet = Model(inputs=input, outputs=output)
# %load solutions/fully_conv.py
###Output
_____no_output_____
###Markdown
You can use the following random data to check that it's possible to run a forward pass on a random RGB image:
###Code
prediction_maps = fully_conv_ResNet(np.random.randn(1, 200, 300, 3)).numpy()
prediction_maps.shape
###Output
_____no_output_____
###Markdown
How do you explain the resulting output shape?The class probabilities should sum to one in each area of the output map:
###Code
prediction_maps.sum(axis=-1)
###Output
_____no_output_____
###Markdown
Loading Dense weights- We provide the weights and bias of the last Dense layer of ResNet50 in file `weights_dense.h5`- Our last layer is now a 1x1 convolutional layer instead of a fully connected layer
###Code
import h5py
with h5py.File('weights_dense.h5', 'r') as h5f:
w = h5f['w'][:]
b = h5f['b'][:]
last_layer = fully_conv_ResNet.layers[-2]
print("Loaded weight shape:", w.shape)
print("Last conv layer weights shape:", last_layer.get_weights()[0].shape)
# reshape the weights
w_reshaped = w.reshape((1, 1, 2048, 1000))
# set the conv layer weights
last_layer.set_weights([w_reshaped, b])
###Output
_____no_output_____
###Markdown
A forward pass- We define the following function to test our new network. - It resizes the input to a given size, then uses `model.predict` to compute the output
###Code
from tensorflow.keras.applications.imagenet_utils import preprocess_input
from skimage.io import imread
from skimage.transform import resize
def forward_pass_resize(img_path, img_size):
img_raw = imread(img_path)
print("Image shape before resizing: %s" % (img_raw.shape,))
img = resize(img_raw, img_size, mode='reflect', preserve_range=True)
img = preprocess_input(img[np.newaxis])
print("Image batch size shape before forward pass:", img.shape)
prediction_map = fully_conv_ResNet(img).numpy()
return prediction_map
output = forward_pass_resize("dog.jpg", (800, 600))
print("prediction map shape", output.shape)
###Output
_____no_output_____
###Markdown
Finding dog-related classesImageNet uses an ontology of concepts, from which classes are derived. A synset corresponds to a node in the ontology.For example all species of dogs are children of the synset [n02084071](http://image-net.org/synset?wnid=n02084071) (Dog, domestic dog, Canis familiaris):
###Code
# Helper file for importing synsets from imagenet
import imagenet_tool
synset = "n02084071" # synset corresponding to dogs
ids = imagenet_tool.synset_to_dfs_ids(synset)
print("All dog classes ids (%d):" % len(ids))
print(ids)
for dog_id in ids[:10]:
print(imagenet_tool.id_to_words(dog_id))
print('...')
###Output
_____no_output_____
###Markdown
Unsupervised heatmap of the class "dog"The following function builds a heatmap from a forward pass. It sums the representation for all ids corresponding to a synset
###Code
def build_heatmap(prediction_map, synset):
class_ids = imagenet_tool.synset_to_dfs_ids(synset)
class_ids = np.array([id_ for id_ in class_ids if id_ is not None])
each_dog_proba_map = prediction_map[0, :, :, class_ids]
    # this style of indexing a tensor by another array has the following shape effect:
# (H, W, 1000) indexed by (118) ==> (118, H, W)
any_dog_proba_map = each_dog_proba_map.sum(axis=0)
print("size of heatmap: " + str(any_dog_proba_map.shape))
return any_dog_proba_map
def display_img_and_heatmap(img_path, heatmap):
dog = imread(img_path)
plt.figure(figsize=(12, 8))
plt.subplot(1, 2, 1)
plt.imshow(dog)
plt.axis('off')
plt.subplot(1, 2, 2)
plt.imshow(heatmap, interpolation='nearest', cmap="viridis")
plt.axis('off')
###Output
_____no_output_____
###Markdown
**Exercise**- What is the size of the heatmap compared to the input image?- Build 3 dog heatmaps from `"dog.jpg"`, with the following sizes: - `(400, 640)` - `(800, 1280)` - `(1600, 2560)`- What do you observe? You may plot a heatmap using the above function `display_img_and_heatmap`. You might also want to reuse `forward_pass_resize` to compute the class maps themselves
###Code
# dog synset
s = "n02084071"
# TODO
# %load solutions/build_heatmaps.py
###Output
_____no_output_____
###Markdown
Combining the 3 heatmapsBy combining the heatmaps at different scales, we obtain much better information about the location of the dog.**Bonus**- Combine the three heatmaps by resizing them to a similar shape, and averaging them- A geometric average will work better than a standard (arithmetic) average!
###Code
from skimage.transform import resize
# TODO
# %load solutions/geom_avg.py
###Output
_____no_output_____
|
Week2_Rydberg_Atoms/OceanSDK_implmentation.ipynb
|
###Markdown
Quantum annealingQuantum annealing is a variation of the simulated thermal annealing approach for solving NP-hard optimization problems. D-Wave Systems created a full-stack framework ([Leap2](https://www.dwavesys.com/take-leap)) to run quantum annealing algorithms on both simulators and real quantum devices. Access to their systems uses an API mechanism for which registration is required. As part of the CDL, all users should have received a license and can access the real quantum devices. Follow the instructions [here](https://docs.ocean.dwavesys.com/en/stable/docs_cli.html) to set up your API access to the D-Wave systems.For the simulation part, the only requirement is the installation of the package [dwave-ocean-sdk](https://pypi.org/project/dwave-ocean-sdk/).In the following we show how to build a graph for the UD-MIS problem and solve it by means of:- simulated thermal annealing- quantum annealing simulation- quantum annealing on a real device
###Code
import numpy as np
import networkx as nx
from matplotlib import pyplot as plt
# D-Wave packages
import dimod
import dwave_networkx as dnx
from dwave_qbsolv import QBSolv
# pkgs to run the code on the QPU
from dwave.system.samplers import DWaveSampler
from dwave.system.composites import EmbeddingComposite
###Output
_____no_output_____
###Markdown
Define a function to calculate the edges of the UD-MIS problem
###Code
def get_edges(graph):
Nv = len(graph)
edges = np.zeros((Nv, Nv))
for i in range(Nv - 1):
xi, yi = graph[i]
for j in range(i + 1, Nv):
xj, yj = graph[j]
dij = np.sqrt((xi - xj) ** 2. + (yi - yj) ** 2.)
if dij <= 1.0:
edges[i, j] = 1
return np.argwhere(edges == 1)
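# Quick check on a small hypothetical layout (not from the original notebook): only the
# pair of vertices closer than 1.0 should be connected.
# get_edges([(0.0, 0.0), (0.5, 0.5), (3.0, 3.0)]) -> array([[0, 1]])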
###Output
_____no_output_____
###Markdown
Define the graph for the UD-MIS problem. To build the graph, we use the networkx package. This is useful as the D-Wave library provides a wrapper to solve the MIS problem in this specific form.
###Code
graph = [(0.3461717838632017, 1.4984640297338632),
(0.6316400411846113, 2.5754677320579895),
(1.3906262250927481, 2.164978861396621),
(0.66436005100802, 0.6717919819739032),
(0.8663329771713457, 3.3876341010035995),
(1.1643107343501296, 1.0823066243402013)
]
edges = get_edges(graph)
G = nx.Graph()
for edge in edges:
G.add_edge(edge[0], edge[1])
# plot MIS nodes with different color
pos = nx.circular_layout(G) # positions for all nodes
# nodes from MIS
nx.draw_networkx_nodes(G, pos)
# edges
nx.draw_networkx_edges(G, pos)
# labels
nx.draw_networkx_labels(G, pos, font_size=12, font_family="sans-serif")
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
Choose a sampler among:1. simulated annealing sampler2. quantum annealing simulator3. quantum annealing on a quantum device (requires API token to Leap2 services)by uncommenting the corresponding line.
###Code
# 1. the simulated annealing sampler
# sampler = dimod.SimulatedAnnealingSampler() # Simulated annealing
# 2. the quantum simulator
sampler = QBSolv()
# 3. running on the actual D-Wave QPU
# sampler = EmbeddingComposite(DWaveSampler())
###Output
_____no_output_____
###Markdown
The set of maximum independent nodes is then obtained by calling the method maximum_independent_set from the dwave_networkx package:
###Code
indep_nodes = dnx.maximum_independent_set(G, sampler)
print(f'Independent nodes: {indep_nodes}')
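# A quick verification (not in the original notebook): an independent set must not
# contain any pair of nodes joined by an edge.
assert all(not G.has_edge(u, v) for u in indep_nodes for v in indep_nodes if u != v)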
###Output
Independent nodes: [2, 3, 4]
###Markdown
We now plot the solution by coloring in red the independent nodes:
###Code
# plot MIS nodes with different color
pos = nx.circular_layout(G) # positions for all nodes
# nodes from MIS
nx.draw_networkx_nodes(G, pos, nodelist=indep_nodes, node_size=500, node_color='red')
nx.draw_networkx_nodes(G, pos, nodelist=[n for n in list(G.nodes) if n not in indep_nodes], node_size=500)
# edges
nx.draw_networkx_edges(G, pos)
# labels
nx.draw_networkx_labels(G, pos, font_size=12, font_family="sans-serif")
plt.axis("off")
plt.show()
###Output
_____no_output_____
|
docs/tutorials/Getting_started.ipynb
|
###Markdown
Getting started============ This is a quick overview of multiple capabilities of ``pvfactors``:- create a PV array- use the engine to update the PV array- plot the PV array 2D geometry for a given timestamp index- run a timeseries bifacial simulation using the "full mode"- run a timeseries bifacial simulation using the "fast mode" Imports and settings
###Code
# Import external libraries
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
import pandas as pd
import warnings
warnings.filterwarnings("ignore", category=RuntimeWarning)
# Settings
%matplotlib inline
np.set_printoptions(precision=3, linewidth=300)
###Output
_____no_output_____
###Markdown
Get timeseries inputs
###Code
df_inputs = pd.DataFrame(
{'solar_zenith': [20., 50.],
'solar_azimuth': [110., 250.],
'surface_tilt': [10., 20.],
'surface_azimuth': [90., 270.],
'dni': [1000., 900.],
'dhi': [50., 100.],
'albedo': [0.2, 0.2]},
index=[datetime(2017, 8, 31, 11), datetime(2017, 8, 31, 15)]
)
df_inputs
###Output
_____no_output_____
###Markdown
Prepare some PV array parameters
###Code
pvarray_parameters = {
'n_pvrows': 3, # number of pv rows
'pvrow_height': 1, # height of pvrows (measured at center / torque tube)
'pvrow_width': 1, # width of pvrows
'axis_azimuth': 0., # azimuth angle of rotation axis
'gcr': 0.4, # ground coverage ratio
}
###Output
_____no_output_____
###Markdown
Create a PV array and update it with the engine Use the ``PVEngine`` and the ``OrderedPVArray`` to run simulations
###Code
from pvfactors.engine import PVEngine
from pvfactors.geometry import OrderedPVArray
# Create an ordered PV array
pvarray = OrderedPVArray.init_from_dict(pvarray_parameters)
# Create engine using the PV array
engine = PVEngine(pvarray)
# Fit engine to data: which will update the pvarray object as well
engine.fit(df_inputs.index, df_inputs.dni, df_inputs.dhi,
df_inputs.solar_zenith, df_inputs.solar_azimuth,
df_inputs.surface_tilt, df_inputs.surface_azimuth,
df_inputs.albedo)
###Output
_____no_output_____
###Markdown
The user can then plot the PV array 2D geometry for any of the simulation timestamps
###Code
# Plot pvarray shapely geometries
f, ax = plt.subplots(figsize=(10, 3))
pvarray.plot_at_idx(1, ax)
plt.show()
###Output
_____no_output_____
###Markdown
Run simulation using the full mode The "full mode" allows the user to run the irradiance calculations by accounting for the equilibrium of reflections between all the surfaces in the system. So it is more precise than the "fast mode", and it happens to be almost as fast.
###Code
# Create a function that will build a report from the simulation and return the
# incident irradiance on the back surface of the middle PV row
def fn_report(pvarray): return pd.DataFrame({'qinc_back': pvarray.ts_pvrows[1].back.get_param_weighted('qinc')})
# Run full mode simulation
report = engine.run_full_mode(fn_build_report=fn_report)
# Print results (report is defined by report function passed by user)
df_report_full = report.assign(timestamps=df_inputs.index).set_index('timestamps')
print('Incident irradiance on back surface of middle PV row: \n')
df_report_full
###Output
Incident irradiance on back surface of middle PV row:
###Markdown
Run simulation using the fast mode The "fast mode" allows the user to get slightly faster but less accurate results for the incident irradiance on the back surface of a single PV row. It assumes that the incident irradiance values on surfaces other than back surfaces are known (e.g. from the Perez transposition model).
###Code
# Run the fast mode calculation on the middle PV row: use the same report function as previously
df_report_fast = engine.run_fast_mode(fn_build_report=fn_report, pvrow_index=1)
# Print the results
print('Incident irradiance on back surface of middle PV row: \n')
df_report_fast
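# A quick comparison (not in the original tutorial): the fast mode is an approximation,
# so its back-surface irradiance should be close to, but not exactly equal to, the
# full-mode values computed above. Uncomment to inspect the difference:
# df_report_fast['qinc_back'].values - df_report_full['qinc_back'].values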
###Output
Incident irradiance on back surface of middle PV row:
###Markdown
Getting started============ This is a quick overview of multiple capabilities of ``pvfactors``:- create a PV array- use the engine to update the PV array- plot the PV array 2D geometry for a given timestamp index- run a timeseries bifacial simulation using the "fast mode"- run a timeseries bifacial simulation using the "full mode" Imports and settings
###Code
# Import external libraries
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
import pandas as pd
import warnings
warnings.filterwarnings("ignore", category=RuntimeWarning)
# Settings
%matplotlib inline
np.set_printoptions(precision=3, linewidth=300)
###Output
_____no_output_____
###Markdown
Get timeseries inputs
###Code
df_inputs = pd.DataFrame(
{'solar_zenith': [20., 50.],
'solar_azimuth': [110., 250.],
'surface_tilt': [10., 20.],
'surface_azimuth': [90., 270.],
'dni': [1000., 900.],
'dhi': [50., 100.],
'albedo': [0.2, 0.2]},
index=[datetime(2017, 8, 31, 11), datetime(2017, 8, 31, 15)]
)
df_inputs
###Output
_____no_output_____
###Markdown
Prepare some PV array parameters
###Code
pvarray_parameters = {
'n_pvrows': 3, # number of pv rows
'pvrow_height': 1, # height of pvrows (measured at center / torque tube)
'pvrow_width': 1, # width of pvrows
'axis_azimuth': 0., # azimuth angle of rotation axis
'gcr': 0.4, # ground coverage ratio
}
###Output
_____no_output_____
###Markdown
Create a PV array and update it with the engine Use the ``PVEngine`` and the ``OrderedPVArray`` to run simulations
###Code
from pvfactors.engine import PVEngine
from pvfactors.geometry import OrderedPVArray
# Create an ordered PV array
pvarray = OrderedPVArray.init_from_dict(pvarray_parameters)
# Create engine using the PV array
engine = PVEngine(pvarray)
# Fit engine to data: which will update the pvarray object as well
engine.fit(df_inputs.index, df_inputs.dni, df_inputs.dhi,
df_inputs.solar_zenith, df_inputs.solar_azimuth,
df_inputs.surface_tilt, df_inputs.surface_azimuth,
df_inputs.albedo)
###Output
_____no_output_____
###Markdown
The user can then plot the PV array 2D geometry for any of the simulation timestamps
###Code
# Plot pvarray shapely geometries
f, ax = plt.subplots(figsize=(10, 3))
pvarray.plot_at_idx(1, ax)
plt.show()
###Output
_____no_output_____
###Markdown
Run simulation using the fast mode The "fast mode" allows the user to get almost instantaneous calculation results for the incident irradiance on the back surface of the PV rows. It assumes that the incident irradiance values on surfaces other than back surfaces are known (e.g. from the Perez transposition model).
###Code
# Create a function that will build a report from the simulation and return the
# incident irradiance on the back surface of the middle PV row
def fn_report(pvarray): return pd.DataFrame({'qinc_back': pvarray.ts_pvrows[1].back.get_param_weighted('qinc')})
# Run the fast mode calculation on the middle PV row
df_report_fast = engine.run_fast_mode(fn_build_report=fn_report, pvrow_index=1)
# Print the results
print('Incident irradiance on back surface of middle PV row: \n')
df_report_fast
###Output
Incident irradiance on back surface of middle PV row:
###Markdown
Run simulation using the full mode The "full mode" allows the user to run the irradiance calculations by accounting for the equilibrium of reflections between all the surfaces in the system. So it is more precise than the "fast mode", but it takes a little longer to run.
###Code
# Create a function that will build a report: here we use an example that reports irradiance on center PV row
from pvfactors.report import example_fn_build_report
# Run full mode simulation
report = engine.run_full_mode(fn_build_report=example_fn_build_report)
# Print results (report is defined by report function passed by user)
df_report_full = pd.DataFrame(report, index=df_inputs.index)
print('Incident irradiance on back surface of middle PV row: \n')
df_report_full[['qinc_back']]
###Output
Incident irradiance on back surface of middle PV row:
|
demos/location-based-recommendations/06-creating-grafana-dashboard.ipynb
|
###Markdown
Creating the Grafana dashboard Setup grafana service * Go to the services screen* Create Grafana service or use an existing one Get grafana package
###Code
!pip install git+https://github.com/v3io/grafwiz.git
from grafwiz import *
###Output
_____no_output_____
###Markdown
Deploy worldmap datasource
###Code
# Grafana internal cluster address (will be http://{grafana-service-name})
grafana_url = 'http://grafana'
###Output
_____no_output_____
###Markdown
Using the nuclio function created in the previous notebook ([nuclio-worldmap](05-nuclio-worldmap-grafana-integration.ipynb)), addressed via the Kubernetes DNS.
###Code
# frames_url = http://{worldmap-function-name}:8080
worldmap = DataSource(name='worldmap', frames_url='http://worldmap-endpoint:8080')
worldmap.deploy(grafana_url)
###Output
Datasource worldmap already exists
Datasource worldmap created successfully
|
examples/Postgres_Executor_Example.ipynb
|
###Markdown
This demo requires that you have a local PostgreSQL database already set up. If you have not done this yet, you can download the PostgreSQL installer here: https://www.postgresql.org/download/. Follow the instructions to get your database environment set up. Once you have your PostgreSQL environment set up, you can upload the example [car dataset](https://github.com/lux-org/lux-datasets/blob/master/data/car.csv) to your database using the script found [here](https://github.com/thyneb19/lux/blob/Database-Executor/lux/data/upload_car_data.py). To connect Lux to your PostgreSQL database, you will first need to create a psycopg2 connection. After that you will be able to specify this connection in the Lux config, and connect a Lux DataFrame to a table as shown below.
###Code
import lux
import psycopg2
import pandas as pd
connection = psycopg2.connect("host=localhost dbname=postgres user=postgres password=lux")
sql_df = lux.LuxDataFrame()
lux.config.set_SQL_connection(connection)
sql_df.set_SQL_table("cars")
###Output
_____no_output_____
###Markdown
Once the Lux DataFrame has been connected to a database table, the parameters necessary to run Lux's recommendation system will automatically be populated.
###Code
#you can view the variable datatypes here
sql_df.data_type
###Output
_____no_output_____
###Markdown
Now that the connection between your DataFrame and your database has been established, you can leverage all of Lux's visual recommendation tools. For a more in-depth look at Lux's functions, check out the main repository [here](https://github.com/lux-org/lux).
###Code
#call the Lux DataFrame to view general variable distributions and relationships.
#You will see that the DataFrame contains the columns of your database table, but is otherwise empty.
#Data is processed as much as possible on the database end, and is only brought in locally when needed to create visualizations.
sql_df
#you can specify intents just the same as the default Lux system
from lux.vis import Clause
#here we specify that we are interested in a graph containing the variables 'milespergal' and 'cylinders'
#we also specify that we want to apply a filter 'horsepower > 150' to this visualization
sql_df.set_intent(["milespergal", 'cylinders', Clause(attribute ="horsepower", filter_op=">", value=150)])
sql_df
###Output
_____no_output_____
###Markdown
You can also use Lux's Vis package to generate visualizations without having to pull in or process data from your database manually. Instead, you can specify visualization channels and create graphs as shown below.
###Code
from lux.vis.Vis import Vis
from lux.vis.Vis import Clause
#Create a new Lux Clause for each variable you want to use in your graph
#Specify how you want to use the variable in the graph via the channel parameter.
#The channel parameter will specify whether or not a variable is used on the x or y axis, or used to color datapoints
x_clause = Clause(attribute = "acceleration", channel = "x")
y_clause = Clause(attribute = "milespergal", channel = "y")
color_clause = Clause(attribute = 'cylinders', channel = "color")
#you can also create filters on your data using Lux Clauses like so
filter_clause = Clause(attribute ="origin", filter_op="=", value='USA')
#to create the graph, create a Lux Vis object with the list of your Clauses as the parameter
new_vis = Vis([x_clause, y_clause, color_clause, filter_clause])
#to fetch the data necessary for the graph, use the refresh_source function.
#the refresh_source function takes in a Lux DataFrame, in this case you can specify the one connected to your database table
new_vis.refresh_source(sql_df)
new_vis
###Output
_____no_output_____
|
annual_seasonal_notebooks/051820_cp_NY_NYC_Seasonal_IQR_BoxPlot.ipynb
|
###Markdown
Seasonal New York County Air Pollution Analysis: IQR, Box Plot, Outliers **Motivation for Questions & Objective**The team and I set out to explore pollution measurements and dissect data to answer the questions we collectively discussed. Los Angeles has always been surrounded by smog. We were curious about the air quality standards in LA and around the United States. Does the United States population care about, or is it aware of, what is considered good and moderate air quality? Will EV cars help with pollution? Lastly, the recent Covid pandemic led us to realize air pollution has cleared due to quarantine. When the quarantine is lifted, should we continue to wear masks?These topics led me to my questions, which consisted of: * LA is smoggy... would like to know what our standards are for air quality.* How bad is bad?* What percentage of the year do we live in neutral/good air? **Exploration**EPA.gov provided the full sample dataset via their API. The sample dataset consists of the county name searched from the state and county number queried. The EPA.gov API requires that if a specific county is being queried, you must pass the two-digit state number and three-digit county number along with the Sample Data URL. The numbers for county and state can be found in the list data URL. Sample data included site location measurements for nearly every hour of the day, for every day of the year. The amount of information allowed me to separate the data in many ways, including by season. Measurements by the day, and hour, can provide the percentage of time we live in good or neutral air. Lastly, every time frame includes sample measurements. EPA.gov has listed the PM 2.5 health index which informs us of the severity levels in air pollution. Combining measurements with specific thresholds can give us answers on "How bad is bad?" **Data Cleanup**The JSON received from our API call includes a lot of parameters I immediately did not need. The following parameters are kept and used throughout the dataset in various ways: "state_code", "state", "county_code", "county", "site_number", "parameter", "sample_measurement", "units_of_measure", "latitude", "longitude", "date_local", "time_local", "date_gmt", "time_gmt", "method_type", "method_code", "method", "date_of_last_change", "cbsa_code". Pandas, Matplotlib, NumPy, and several other libraries are used in this notebook. The index is set to date_gmt for date time indexing. Date_gmt is removed and replaced with site_number at times for certain plots. One value appeared to be a single outlier, of the outliers, in the California dataset, at a measurement of '995.6'. The value was one of 38400 measurements. It was removed at the beginning. A column named 'color' was added to the main dataset for all states. The value inside represented the hex color number related to the sample measurement. If the value is between 0-12.0, the green (5cb85c) hex number is added in the color column. Sample measurement 12.1-35.4 receives yellow (dbe36b), 35.5-55.4 orange (e8970c) and 55.5+ red (e80c0c). Colors are added for labeling on the scatter plot. **Analysis**Seasonal analysis provided a high-level overview of the air pollution and health standards we lived in throughout 2019. Cases such as Los Angeles, CA showed Los Angeles residents live in bad air quality for a third of the year, more than any of the top 5 most populated cities in the United States. Los Angeles is plagued with self-pollution such as automobiles but is also unfortunate with weather. 
Rain drought has not been able to clear the air, which triggered dangerous fires across the state. The fall season graph represents October fires destroying air quality. Another trend found in all seasonal and annual analyses of the top 5 most populated cities is bad air conditions around New Year's and the Fourth of July. Holidays involving fireworks make air quality dangerous for about a week.Weather and holidays are factors in air pollution but, in some cases, may help air pollution. New York experiences storms around the same time California experiences fires. The fall storms have cleared the air quality in New York, which usually would have higher PM 2.5 measurements. Box plot information displays each site location's measurements for the period (e.g. annual/seasonal). Each location's box plot displays the lowest reading and highest reading by box arm, Inter Quartile Range (IQR), average measurement, and outliers. After researching outliers, outliers tend to be one or two days throughout the year and may represent holidays such as the Fourth of July. **Summary**Annual and seasonal datasets with higher populated areas involve higher air pollution measurements. Of the top 5 most populated cities, 3 have moderate air pollution levels a quarter of the year. Moderate levels emphasize 'people with respiratory or heart disease, the elderly and children should limit prolonged exertion.' In this case, Los Angeles is our focus due to the location of the report. Los Angeles air pollution is negatively impacted by multiple factors. Automobile and weather pollution contribute to the consistently moderate air quality the population of Los Angeles has grown accustomed to.**Implications**I would like to continue to research weather measurements by day and the correlation they may have with air pollution by region. Also, given automobile emissions, we may be able to positively impact city air pollution by thinking about our next big auto purchase. EVs are moving forward and can provide great positive impacts to any region's air quality measurements. Table of Contents* [Retrieving and Clean Data](dataclean)* [Season Sample Measurement Analysis](beginseason) * [Spring](springdata) * [Summer](summerdata) * [Fall](falldata) * [Winter](winterdata) ----
###Code
%matplotlib inline
# Dependencies and Setup
import matplotlib.cbook as cbook
import matplotlib.dates as dates
import matplotlib.ticker as ticker
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import pandas as pd
import numpy as np
import json
import copy
from typing import List
import csv
###Output
_____no_output_____
###Markdown
Import saved csv. CSV includes data generated from 'StateCountySampleDataDF()' function.Clean data.
###Code
# Import from .CSV
ny_nyc_df = pd.read_csv("./csv_data/newyork_ny.csv")
# Verify New York is in the dataframe
assert ny_nyc_df['state'][0] == 'New York'
# Make copy before filtering
ny_filtered_df = ny_nyc_df.copy()
ny_filtered_df.head()
###Output
_____no_output_____
###Markdown
Clean dataset.
###Code
ny_filtered_df['sample_measurement'].max()
# Select certain columns to use for mapping,
ny_filtered_df=ny_filtered_df[['county','site_number','sample_measurement','latitude','longitude','time_local','date_gmt','time_gmt','date_of_last_change']]
# Drop nan values and empty values
ny_filtered_df=ny_filtered_df.dropna()
# Remove all numbers below 0 and above 400
ny_filtered_df=ny_filtered_df[(ny_filtered_df[['sample_measurement']] > 0).all(axis=1) & (ny_filtered_df[['sample_measurement']] < 400).all(axis=1)]
ny_filtered_noIndexChange = ny_filtered_df
# Assign date_gmt a DatetimeIndex. Allows for date searching through the index.
ny_filtered_df['date_gmt']=pd.DatetimeIndex(ny_filtered_df['date_gmt'])
#Set index to date_gmt
ny_filtered_df = ny_filtered_df.set_index('date_gmt')
###Output
_____no_output_____
###Markdown
Clean Copy dataset.
###Code
# Copy of clean data
ny_nyc_clean_data = ny_filtered_df.copy()
###Output
_____no_output_____
###Markdown
Add Column: Add Color to specific sample measurementUsed in scatter plot later.
###Code
# Color thresholds
colors_list = []
for row in ny_nyc_clean_data['sample_measurement']:
# if more than a value,
if row > 55.5:
# Append color value
colors_list.append('#e80c0c')
# else, if more than a value,
elif row > 35.5:
# Append color value
colors_list.append('#e8970c')
elif row > 12.1:
# Append color value
colors_list.append('#dbe36b')
elif row > 0:
# Append color value
colors_list.append('#5cb85c')
else:
# Append a failing grade
colors_list.append('Failed')
#print(f'failed to apply color on row {row}')
# Create a column from the list
ny_nyc_clean_data['color'] = colors_list
ny_nyc_clean_data['color'].value_counts()
# Check for Failed values. No Failed values.
assert any(ny_nyc_clean_data['color'] == 'Failed') == False
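# An equivalent vectorized approach (a sketch, not part of the original workflow):
# pd.cut with these bins reproduces the thresholds of the loop above
# (left-exclusive, right-inclusive intervals match the elif chain).
bins = [0, 12.1, 35.5, 55.5, np.inf]
hex_labels = ['#5cb85c', '#dbe36b', '#e8970c', '#e80c0c']
color_check = pd.cut(ny_nyc_clean_data['sample_measurement'], bins=bins, labels=hex_labels)
# color_check.value_counts() should match ny_nyc_clean_data['color'].value_counts()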
###Output
_____no_output_____
###Markdown
* 40.81976,-73.94825= West Harlem, NY (135)* 40.81976,-73.94825= Lower Manhattan, NY (134)* 40.81976,-73.94825= Hudson Heights, NY (115)* 40.81976,-73.94825= East Village, NY (128)
###Code
###Output
_____no_output_____
###Markdown
Begin Season Sample Measurement Analysis* Spring, Summer, Fall, Winter * Scatter Plot * IQR Results * Box Plots * Outliers Season: Spring
###Code
# Select spring 2019 (March through May) using the date_gmt index
spring_ny_nyc = ny_nyc_clean_data['2019-03-01':'2019-05-31']
###Output
_____no_output_____
###Markdown
**Reset the index and set a new index for the box plot.**
###Code
spring_ny_nyc_box=spring_ny_nyc.reset_index().set_index('site_number').sort_values(by=['site_number','time_gmt'], ascending=True)
###Output
_____no_output_____
###Markdown
Plot: Spring. Input the dataframe name below (this could be turned into a loop later).
###Code
# dataframe to plot
plot_df = spring_ny_nyc.copy()
#ax.plot(date, site_number_1103.sample_measurement)
plot_y_information = plot_df.sample_measurement
color_threshold = plot_df.color
# Assign date as index of dataframe. Dataframe index has dates
date = plot_df.index.astype('O')
# Data for plt
x = date
y = plot_y_information
c = color_threshold.values
# Plot subplots
fig, ax = plt.subplots(figsize=(15,10))
scatter = ax.scatter(x,y, c=c)
# Set month in xaxis by searching date format
ax.xaxis.set_major_locator(dates.MonthLocator())
# 16 is a slight approximation since months differ in number of days.
ax.xaxis.set_minor_locator(dates.MonthLocator(bymonthday=16))
ax.xaxis.set_major_formatter(ticker.NullFormatter())
ax.xaxis.set_minor_formatter(dates.DateFormatter('%b'))
# loop for custom tickers. Assign marker size and center text
for tick in ax.xaxis.get_minor_ticks():
tick.tick1line.set_markersize(0)
tick.tick2line.set_markersize(0)
tick.label1.set_horizontalalignment('center')
imid = len(plot_df) // 2
ax.set_xlabel(str(date[imid].year))
ax.set_ylabel('PM 2.5 Sample Measurements', size=15)
ax.set_title('Spring 2019: New York City County, NY', size=18)
plt.grid(True)
# Labels for legend
green = mpatches.Patch(color='green', label='Good: 0-12.0')
yellow = mpatches.Patch(color='yellow', label='Moderate: 12.1-35.4')
orange = mpatches.Patch(color='orange', label='Sensitive/Unhealthy: 35.5-55.4')
red = mpatches.Patch(color='red', label='Unhealthy: 55.5+')
#Call legend
plt.legend(handles = [green,yellow,orange,red])
# Save an image of the chart and print it to the screen
plt.savefig("./Images/NY_spring_pm25_scatter.png")
plt.show()
###Output
_____no_output_____
###Markdown
Site Number Separation: Spring
###Code
# Separate by site_number
spring_ny_nyc['site_number'].unique()
# Identify each site number and filter out its associated values
spring_site_number_135=spring_ny_nyc[(spring_ny_nyc[['site_number']]==135).all(axis=1)]
spring_site_number_134=spring_ny_nyc[(spring_ny_nyc[['site_number']]==134).all(axis=1)]
spring_site_number_115=spring_ny_nyc[(spring_ny_nyc[['site_number']]==115).all(axis=1)]
spring_site_number_128=spring_ny_nyc[(spring_ny_nyc[['site_number']]==128).all(axis=1)]
spring_site_number_135=spring_site_number_135[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
spring_site_number_134=spring_site_number_134[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
spring_site_number_115=spring_site_number_115[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
spring_site_number_128=spring_site_number_128[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
# Make a list of site_numbers. Used in for loops to grab specific data by site number & used for new df header.
filtered_ny_nyc_box_list = spring_ny_nyc.site_number.sort_values().unique().tolist()
filtered_ny_nyc_box_list
###Output
_____no_output_____
###Markdown
Begin IQR for Box Plots
###Code
# Compute the sample measurement quartiles for each site location.
# Create an empty list to fill with for loop
measurement_quartile_spring=[]
# Search through spring_ny_nyc_box with '.loc[site_number, 'sample_measurement']' and get the quartiles
for i in filtered_ny_nyc_box_list:
location = spring_ny_nyc_box.loc[i, 'sample_measurement'].quantile(q=[.25, .5, .75])
    # append results to the measurement_quartile list before moving to the next site in filtered_ny_nyc_box_list
measurement_quartile_spring.append(location)
measurement_quartile_spring
iqr_all_spring = []
# loop through measurement_quartile_spring (indices 0-3). Find the IQR by selecting one site's quartiles at a time.
for i in range(len(measurement_quartile_spring)):
iqr = (measurement_quartile_spring[i][0.75])-(measurement_quartile_spring[i][0.25])
# Append finding to iqr_all list before moving to next value
iqr_all_spring.append(iqr)
#print(iqr_all_spring)
# Round numbers to 2 decimal places.
round_iqr_all_spring = [round(num, 2) for num in iqr_all_spring]
# Show list. Verify we have correct amount
assert len(iqr_all_spring) == 4
host_site_num=['115', '128', '134', '135']
# Column/Label list
headers_list=['Hudson Heights, NY (115)','East Village, NY (128)','Lower Manhattan, NY (134)','West Harlem, NY (135)']
# Combine both for loop generated list into one.
measurements_iqr_all_spring = [dict(zip(headers_list, round_iqr_all_spring))]
measurements_iqr_all_spring
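# A more compact equivalent (a sketch, not part of the original workflow): per-site IQR
# computed directly with groupby on the site_number index.
site_quartiles = spring_ny_nyc_box.groupby(level=0)['sample_measurement'].quantile([0.25, 0.75]).unstack()
site_iqr_check = (site_quartiles[0.75] - site_quartiles[0.25]).round(2)
# site_iqr_check should agree with round_iqr_all_spring computed above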
###Output
_____no_output_____
###Markdown
Begin Box Plots
###Code
# Values for plotting
box_values_spring = round_iqr_all_spring
# Sort to determine outliers
values_sorted_spring = sorted(box_values_spring)
print(values_sorted_spring)
# Sample measurement values per site location
spring_site_135_measurements = spring_site_number_135['sample_measurement']
spring_site_134_measurements = spring_site_number_134['sample_measurement']
spring_site_115_measurements = spring_site_number_115['sample_measurement']
spring_site_128_measurements = spring_site_number_128['sample_measurement']
# Generate a box plot of the sample measurement of each site location, annually
measurement_plot_info = [spring_site_115_measurements,spring_site_128_measurements,spring_site_134_measurements,spring_site_135_measurements]
fig, ax = plt.subplots(figsize=(10, 10))
pos = np.array(range(len(measurement_plot_info))) + 1
bp = ax.boxplot(measurement_plot_info, sym='k+', showfliers=True)
ax.set_xticklabels(headers_list, rotation=45)
ax.set_xlabel('Site Locations: New York County, NY')
ax.set_ylabel('PM 2.5 Measurements')
ax.set_title('New York County, NY: Seasonal (Spring) Site Location Sample Measurements')
plt.savefig("./Images/NY_spring_pm25_BoxPlot.png")
print(f'Number of Samples Measured: {len(spring_ny_nyc)}')
plt.show()
###Output
Number of Samples Measured: 8098
###Markdown
Potential OutliersFiltering by outlier values and dates may tell us which day/month of the year has larger than normal air pollution measurements.
###Code
# Hudson Heights, NY (115) Outliers
hh_spring_outliers = bp["fliers"][2].get_data()[1]
hh_spring_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(spring_site_115_measurements)}')
print(f'Total amount of outliers: {len(hh_spring_outliers)}')
print(f'Hudson Heights Outlier values: {hh_spring_outliers}')
# East Village, NY (128) Outliers
ev_spring_outliers = bp["fliers"][3].get_data()[1]
ev_spring_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(spring_site_128_measurements)}')
print(f'Total amount of outliers: {len(ev_spring_outliers)}')
print(f'East Village Outlier values: {ev_spring_outliers}')
# Lower Manhattan, NY (134) Outliers
lm_spring_outliers = bp["fliers"][1].get_data()[1]
lm_spring_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(spring_site_134_measurements)}')
print(f'Total amount of outliers: {len(lm_spring_outliers)}')
print(f'Lower Manhattan Outlier values: {lm_spring_outliers}')
# West harlem, NY (135) Outliers
westharlem_spring_outliers = bp["fliers"][0].get_data()[1]
westharlem_spring_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(spring_site_135_measurements)}')
print(f'Total amount of outliers: {len(westharlem_spring_outliers)}')
print(f'West harlem Outlier values: {westharlem_spring_outliers}')
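# Added illustration (a sketch, not part of the original analysis): per the note above,
# outlier readings can be related back to the dates they occurred on. Assuming spring_ny_nyc
# still carries its timestamp index, this tallies how many outlier-valued readings fall in
# each month of the spring window.
spring_outlier_rows = spring_ny_nyc[spring_ny_nyc['sample_measurement'].isin(westharlem_spring_outliers)]
spring_outliers_by_month = spring_outlier_rows.groupby(spring_outlier_rows.index.month)['sample_measurement'].count()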
###Output
Total amount samples in set: 2161
Total amount of outliers: 31
West harlem Outlier values: [17.6 17.7 17.8 18.4 18.5 18.6 18.6 18.7 18.8 19. 19.1 19.2 19.3 19.4
19.5 19.6 20.1 20.1 20.3 20.5 20.9 21.7 21.7 21.9 22.4 23.5 23.7 24.1
24.8 27.2 35.1]
###Markdown
Season: Summer
###Code
# Sort by date_gmt and time_gmt
summer_ny_nyc = ny_nyc_clean_data['2019-06-01':'2019-08-31'].sort_values(["date_gmt", "time_gmt"])
###Output
_____no_output_____
###Markdown
**Reset the index and set site_number as the new index for the box plot.**
###Code
summer_ny_nyc_box=summer_ny_nyc.reset_index().set_index('site_number').sort_values(by=['site_number','time_gmt'], ascending=True)
###Output
_____no_output_____
###Markdown
Plot: Summer. Input the dataframe name below; make this a loop later.
###Code
# dataframe to plot
plot_df = summer_ny_nyc.copy()
#ax.plot(date, site_number_1103.sample_measurement)
plot_y_information = plot_df.sample_measurement
color_threshold = plot_df.color
# The dataframe index holds the dates; convert it to an object array for plotting
date = plot_df.index.astype('O')
# Data for plt
x = date
y = plot_y_information
c = color_threshold.values
# Plot subplots
fig, ax = plt.subplots(figsize=(15,10))
scatter = ax.scatter(x,y, c=c)
# Set month in xaxis by searching date format
ax.xaxis.set_major_locator(dates.MonthLocator())
# 16 is a slight approximation since months differ in number of days.
ax.xaxis.set_minor_locator(dates.MonthLocator(bymonthday=16))
ax.xaxis.set_major_formatter(ticker.NullFormatter())
ax.xaxis.set_minor_formatter(dates.DateFormatter('%b'))
# loop for custom tickers. Assign marker size and center text
for tick in ax.xaxis.get_minor_ticks():
tick.tick1line.set_markersize(0)
tick.tick2line.set_markersize(0)
tick.label1.set_horizontalalignment('center')
imid = len(plot_df) // 2
ax.set_xlabel(str(date[imid].year))
ax.set_ylabel('PM 2.5 Sample Measurements', size=15)
ax.set_title('Summer 2019: New York County, NY', size=18)
plt.grid(True)
# Labels for legend
green = mpatches.Patch(color='green', label='Good: 0-12.0')
yellow = mpatches.Patch(color='yellow', label='Moderate: 12.1-35.4')
orange = mpatches.Patch(color='orange', label='Sensitive/Unhealthy: 35.5-55.4')
red = mpatches.Patch(color='red', label='Unhealthy: 55.5+')
#Call legend
plt.legend(handles = [green,yellow,orange,red])
# Save an image of the chart and print it to the screen
plt.savefig("./Images/NY_summer_pm25_scatter.png")
plt.show()
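# Added illustration (a sketch, not how the notebook builds it): the 'color' column used by the
# scatter above is created earlier in the notebook; one way to derive it from the AQI thresholds
# in the legend would be pandas' cut with the same breakpoints.
import pandas as pd  # already imported earlier in the notebook; repeated so this sketch stands alone
example_color_bins = pd.cut(
    summer_ny_nyc['sample_measurement'],
    bins=[-np.inf, 12.0, 35.4, 55.4, np.inf],
    labels=['green', 'yellow', 'orange', 'red']
)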
# Separate by site_number
summer_ny_nyc['site_number'].unique()
# Filter out the rows affiliated with each site number (115, 128, 134, 135)
summer_site_number_135=summer_ny_nyc[(summer_ny_nyc[['site_number']]==135).all(axis=1)]
summer_site_number_134=summer_ny_nyc[(summer_ny_nyc[['site_number']]==134).all(axis=1)]
summer_site_number_115=summer_ny_nyc[(summer_ny_nyc[['site_number']]==115).all(axis=1)]
summer_site_number_128=summer_ny_nyc[(summer_ny_nyc[['site_number']]==128).all(axis=1)]
summer_site_number_135=summer_site_number_135[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
summer_site_number_134=summer_site_number_134[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
summer_site_number_115=summer_site_number_115[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
summer_site_number_128=summer_site_number_128[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
# Make a list of site_numbers. Used in for loops to grab specific data by site number & used for new df header.
filtered_ny_nyc_box_list = summer_ny_nyc.site_number.sort_values().unique().tolist()
filtered_ny_nyc_box_list
###Output
_____no_output_____
###Markdown
Begin IQR for Box Plots
###Code
# Loop through the box-plot data for each monitoring site and compute the quartiles of its sample measurements.
# Create an empty list to fill with for loop
measurement_quartile_summer=[]
# Search through summer_ny_nyc_box with '.loc[site_number, 'sample_measurement' column]' and get the quartiles
for i in filtered_ny_nyc_box_list:
location = summer_ny_nyc_box.loc[i, 'sample_measurement'].quantile(q=[.25, .5, .75])
# append results to the measurement_quartile list before moving to the next site in filtered_ny_nyc_box_list
measurement_quartile_summer.append(location)
measurement_quartile_summer
iqr_all_summer = []
# loop through measurement_quartile_summer (one entry per site). Find the IQR for each site, one at a time.
for i in range(len(measurement_quartile_summer)):
iqr = (measurement_quartile_summer[i][0.75])-(measurement_quartile_summer[i][0.25])
# Append finding to iqr_all list before moving to next value
iqr_all_summer.append(iqr)
#print(iqr_all_summer)
# Round numbers to 2 decimal places.
round_iqr_all_summer = [round(num, 2) for num in iqr_all_summer]
# Show list. Verify we have correct amount
assert len(iqr_all_summer) == 4
# Combine both for loop generated list into one.
measurements_iqr_all_summer = [dict(zip(headers_list, round_iqr_all_summer))]
measurements_iqr_all_summer
###Output
_____no_output_____
###Markdown
Begin Box Plots
###Code
# Values for plotting
box_values_summer = round_iqr_all_summer
# Sort the per-site IQR values from smallest to largest for easy comparison
values_sorted_summer = sorted(box_values_summer)
print(values_sorted_summer)
# Sample measurement values per site location
summer_site_135_measurements = summer_site_number_135['sample_measurement']
summer_site_134_measurements = summer_site_number_134['sample_measurement']
summer_site_115_measurements = summer_site_number_115['sample_measurement']
summer_site_128_measurements = summer_site_number_128['sample_measurement']
# Generate a box plot of the sample measurement of each site location, annually
measurement_plot_info = [summer_site_115_measurements,summer_site_128_measurements,summer_site_134_measurements,summer_site_135_measurements]
fig, ax = plt.subplots(figsize=(10, 10))
pos = np.array(range(len(measurement_plot_info))) + 1
bp = ax.boxplot(measurement_plot_info, sym='k+', showfliers=True)
ax.set_xticklabels(headers_list, rotation=45)
ax.set_xlabel('Site Locations: New York County, NY')
ax.set_ylabel('PM 2.5 Measurements')
ax.set_title('New York County, NY: Seasonal (Summer) Site Location Sample Measurements')
plt.savefig("./Images/NY_summer_pm25_BoxPlot.png")
print(f'Number of Samples Measured: {len(summer_ny_nyc)}')
plt.show()
###Output
Number of Samples Measured: 6851
###Markdown
Potential Outliers. Filtering by outlier values and dates may tell us which day or month of the year has larger-than-normal air pollution measurements.
###Code
# Hudson Heights, NY (115) Outliers
hh_summer_outliers = bp["fliers"][2].get_data()[1]
hh_summer_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(summer_site_115_measurements)}')
print(f'Total amount of outliers: {len(hh_summer_outliers)}')
print(f'Hudson Heights Outlier values: {hh_summer_outliers}')
# East Village, NY (128) Outliers
ev_summer_outliers = bp["fliers"][3].get_data()[1]
ev_summer_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(summer_site_128_measurements)}')
print(f'Total amount of outliers: {len(ev_summer_outliers)}')
print(f'East Village Outlier values: {ev_summer_outliers}')
# Lower Manhattan, NY (134) Outliers
lm_summer_outliers = bp["fliers"][1].get_data()[1]
lm_summer_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(summer_site_134_measurements)}')
print(f'Total amount of outliers: {len(lm_summer_outliers)}')
print(f'Lower Manhattan Outlier values: {lm_summer_outliers}')
# West harlem, NY (135) Outliers
westharlem_summer_outliers = bp["fliers"][0].get_data()[1]
westharlem_summer_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(summer_site_135_measurements)}')
print(f'Total amount of outliers: {len(westharlem_summer_outliers)}')
print(f'West harlem Outlier values: {westharlem_summer_outliers}')
###Output
Total amount samples in set: 1882
Total amount of outliers: 51
West harlem Outlier values: [20.1 20.2 20.2 20.2 20.3 20.3 20.4 20.5 20.8 20.8 20.8 21.2 21.4 21.6
21.8 21.8 21.8 21.9 21.9 22. 22.1 22.5 22.6 22.7 23.5 23.7 24.3 24.6
24.8 25.1 25.2 26.3 26.5 26.6 26.7 26.8 27.1 27.3 27.5 27.9 28.6 30.4
31.3 31.4 32. 33. 33.8 34.9 42.3 57.6 63.1]
###Markdown
Season: Fall
###Code
# Sort by date_gmt and time_gmt
fall_ny_nyc = ny_nyc_clean_data['2019-09-01':'2019-11-30'].sort_values(["date_gmt", "time_gmt"])
###Output
_____no_output_____
###Markdown
**Reset the index and set site_number as the new index for the box plot.**
###Code
fall_ny_nyc_box=fall_ny_nyc.reset_index().set_index('site_number').sort_values(by=['site_number','time_gmt'], ascending=True)
###Output
_____no_output_____
###Markdown
Plot: Fall. Input the dataframe name below; make this a loop later.
###Code
# dataframe to plot
plot_df = fall_ny_nyc.copy()
#ax.plot(date, site_number_1103.sample_measurement)
plot_y_information = plot_df.sample_measurement
color_threshold = plot_df.color
# The dataframe index holds the dates; convert it to an object array for plotting
date = plot_df.index.astype('O')
# Data for plt
x = date
y = plot_y_information
c = color_threshold.values
# Plot subplots
fig, ax = plt.subplots(figsize=(15,10))
scatter = ax.scatter(x,y, c=c)
# Set month in xaxis by searching date format
ax.xaxis.set_major_locator(dates.MonthLocator())
# 16 is a slight approximation since months differ in number of days.
ax.xaxis.set_minor_locator(dates.MonthLocator(bymonthday=16))
ax.xaxis.set_major_formatter(ticker.NullFormatter())
ax.xaxis.set_minor_formatter(dates.DateFormatter('%b'))
# loop for custom tickers. Assign marker size and center text
for tick in ax.xaxis.get_minor_ticks():
tick.tick1line.set_markersize(0)
tick.tick2line.set_markersize(0)
tick.label1.set_horizontalalignment('center')
imid = len(plot_df) // 2
ax.set_xlabel(str(date[imid].year))
ax.set_ylabel('PM 2.5 Sample Measurements', size=15)
ax.set_title('Fall 2019: New York County, NY', size=18)
plt.grid(True)
# Labels for legend
green = mpatches.Patch(color='green', label='Good: 0-12.0')
yellow = mpatches.Patch(color='yellow', label='Moderate: 12.1-35.4')
orange = mpatches.Patch(color='orange', label='Sensitive/Unhealthy: 35.5-55.4')
red = mpatches.Patch(color='red', label='Unhealthy: 55.5+')
#Call legend
plt.legend(handles = [green,yellow,orange,red])
# Save an image of the chart and print it to the screen
plt.savefig("./Images/NY_fall_pm25_scatter.png")
plt.show()
###Output
_____no_output_____
###Markdown
Site Number Separation: Fall
###Code
# Separate by site_number
fall_ny_nyc['site_number'].unique()
# Filter out the rows affiliated with each site number (115, 128, 134, 135)
fall_site_number_135=fall_ny_nyc[(fall_ny_nyc[['site_number']]==135).all(axis=1)]
fall_site_number_134=fall_ny_nyc[(fall_ny_nyc[['site_number']]==134).all(axis=1)]
fall_site_number_115=fall_ny_nyc[(fall_ny_nyc[['site_number']]==115).all(axis=1)]
fall_site_number_128=fall_ny_nyc[(fall_ny_nyc[['site_number']]==128).all(axis=1)]
fall_site_number_135=fall_site_number_135[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
fall_site_number_134=fall_site_number_134[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
fall_site_number_115=fall_site_number_115[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
fall_site_number_128=fall_site_number_128[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
# Make a list of site_numbers. Used in for loops to grab specific data by site number & used for new df header.
filtered_ny_nyc_box_list = fall_ny_nyc.site_number.sort_values().unique().tolist()
filtered_ny_nyc_box_list
###Output
_____no_output_____
###Markdown
Begin IQR for Box Plots
###Code
# Loop through the box-plot data for each monitoring site and compute the quartiles of its sample measurements.
# Create an empty list to fill with for loop
measurement_quartile_fall=[]
# Search through fall_ny_nyc_box with '.loc[site_number, 'sample_measurement' column]' and get the quartiles
for i in filtered_ny_nyc_box_list:
location = fall_ny_nyc_box.loc[i, 'sample_measurement'].quantile(q=[.25, .5, .75])
# append results to the measurement_quartile list before moving to the next site in filtered_ny_nyc_box_list
measurement_quartile_fall.append(location)
measurement_quartile_fall
iqr_all_fall = []
# loop through measurement_quartile_fall (one entry per site). Find the IQR for each site, one at a time.
for i in range(len(measurement_quartile_fall)):
iqr = (measurement_quartile_fall[i][0.75])-(measurement_quartile_fall[i][0.25])
# Append finding to iqr_all list before moving to next value
iqr_all_fall.append(iqr)
#print(iqr_all_spring)
# Round numbers to 2 decimal places.
round_iqr_all_fall = [round(num, 2) for num in iqr_all_fall]
# Verify we have one IQR per site that reported fall data (East Village has no fall measurements)
assert len(iqr_all_fall) >= 3
# Only three sites report fall data, so use a matching header list for labelling
fall_headers_list = ['Hudson Heights, NY (115)', 'Lower Manhattan, NY (134)', 'West Harlem, NY (135)']
# Combine both for loop generated lists into one.
measurements_iqr_all_fall = [dict(zip(fall_headers_list, round_iqr_all_fall))]
measurements_iqr_all_fall
###Output
_____no_output_____
###Markdown
Begin Box Plots
###Code
# Values for plotting
box_values_fall = round_iqr_all_fall
# Sort the per-site IQR values from smallest to largest for easy comparison
values_sorted_fall = sorted(box_values_fall)
print(values_sorted_fall)
# Sample measurement values per site location
fall_site_135_measurements = fall_site_number_135['sample_measurement']
fall_site_134_measurements = fall_site_number_134['sample_measurement']
fall_site_115_measurements = fall_site_number_115['sample_measurement']
fall_site_128_measurements = fall_site_number_128['sample_measurement']
# Generate a box plot of the sample measurement of each site location, annually
measurement_plot_info = [fall_site_115_measurements,fall_site_134_measurements,fall_site_135_measurements]
fig, ax = plt.subplots(figsize=(10, 10))
pos = np.array(range(len(measurement_plot_info))) + 1
bp = ax.boxplot(measurement_plot_info, sym='k+', showfliers=True)
ax.set_xticklabels(fall_headers_list, rotation=45)
ax.set_xlabel('Site Locations: New York County, NY')
ax.set_ylabel('PM 2.5 Measurements')
ax.set_title('New York County, NY: Seasonal (Fall) Site Location Sample Measurements')
plt.savefig("./Images/NY_fall_pm25_BoxPlot.png")
print(f'Number of Samples Measured: {len(fall_ny_nyc)}. No measurements for East Village.')
plt.show()
###Output
Number of Samples Measured: 6028. No measurements for East Village.
###Markdown
Potential Outliers. Filtering by outlier values and dates may tell us which day or month of the year has larger-than-normal air pollution measurements.
###Code
# Hudson Heights, NY (115) Outliers
hh_fall_outliers = bp["fliers"][2].get_data()[1]
hh_fall_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(fall_site_115_measurements)}')
print(f'Total amount of outliers: {len(hh_fall_outliers)}')
print(f'Hudson Heights Outlier values: {hh_fall_outliers}')
# East Village, NY (128) Outliers
# ev_fall_outliers = bp["fliers"][3].get_data()[1]
# ev_fall_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(fall_site_128_measurements)}')
# print(f'Total amount of outliers: {len(ev_fall_outliers)}')
# print(f'East Village Outlier values: {ev_fall_outliers}')
# Lower Manhattan, NY (134) Outliers
lm_fall_outliers = bp["fliers"][1].get_data()[1]
lm_fall_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(fall_site_134_measurements)}')
print(f'Total amount of outliers: {len(lm_fall_outliers)}')
print(f'Lower Manhattan Outlier values: {lm_fall_outliers}')
# West harlem, NY (135) Outliers
westharlem_fall_outliers = bp["fliers"][0].get_data()[1]
westharlem_fall_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(fall_site_135_measurements)}')
print(f'Total amount of outliers: {len(westharlem_fall_outliers)}')
print(f'West harlem Outlier values: {westharlem_fall_outliers}')
###Output
Total amount samples in set: 2004
Total amount of outliers: 66
West harlem Outlier values: [16.3 16.4 16.6 16.6 16.7 16.7 16.7 16.7 16.8 16.9 16.9 16.9 17. 17.
17. 17.1 17.1 17.1 17.2 17.3 17.3 17.5 17.7 17.8 18.4 18.4 18.5 18.5
18.6 18.6 18.6 18.7 18.9 19. 19.1 19.2 19.2 19.3 19.5 19.7 19.7 19.8
19.9 20.1 20.1 20.2 20.2 20.3 20.3 20.3 20.5 20.5 20.5 20.5 20.9 21.6
21.6 21.6 22.4 22.9 23.4 24. 24.3 24.5 25.3 31.9]
###Markdown
Season: Winter
###Code
# Beginning of the year Winter
# Sort by date_gmt and time_gmt
winter_ny_nyc = ny_nyc_clean_data['2019-01-01':'2019-02'].sort_values(["date_gmt", "time_gmt"])
winter_ny_nyc
###Output
_____no_output_____
###Markdown
**Reset the index and set site_number as the new index for the box plot.**
###Code
winter_ny_nyc_box=winter_ny_nyc.reset_index().set_index('site_number').sort_values(by=['site_number','time_gmt'], ascending=True)
###Output
_____no_output_____
###Markdown
Plot: Winter. Input the dataframe name below; make this a loop later.
###Code
# dataframe to plot
plot_df = winter_ny_nyc.copy()
#ax.plot(date, site_number_1103.sample_measurement)
plot_y_information = plot_df.sample_measurement
color_threshold = plot_df.color
# The dataframe index holds the dates; convert it to an object array for plotting
date = plot_df.index.astype('O')
# Data for plt
x = date
y = plot_y_information
c = color_threshold.values
# Plot subplots
fig, ax = plt.subplots(figsize=(15,10))
scatter = ax.scatter(x,y, c=c)
# Set month in xaxis by searching date format
ax.xaxis.set_major_locator(dates.MonthLocator())
# 16 is a slight approximation since months differ in number of days.
ax.xaxis.set_minor_locator(dates.MonthLocator(bymonthday=16))
ax.xaxis.set_major_formatter(ticker.NullFormatter())
ax.xaxis.set_minor_formatter(dates.DateFormatter('%b'))
# loop for custom tickers. Assign marker size and center text
for tick in ax.xaxis.get_minor_ticks():
tick.tick1line.set_markersize(0)
tick.tick2line.set_markersize(0)
tick.label1.set_horizontalalignment('center')
imid = len(plot_df) // 2
ax.set_xlabel(str(date[imid].year))
ax.set_ylabel('PM 2.5 Sample Measurements', size=15)
ax.set_title('Winter 2019: New York County, NY', size=18)
plt.grid(True)
# Labels for legend
green = mpatches.Patch(color='green', label='Good: 0-12.0')
yellow = mpatches.Patch(color='yellow', label='Moderate: 12.1-35.4')
orange = mpatches.Patch(color='orange', label='Sensitive/Unhealthy: 35.5-55.4')
red = mpatches.Patch(color='red', label='Unhealthy: 55.5+')
#Call legend
plt.legend(handles = [green,yellow,orange,red])
# Save an image of the chart and print it to the screen
plt.savefig("./Images/NY_winter_pm25_scatter.png")
plt.show()
# Separate by site_number
winter_ny_nyc['site_number'].unique()
# Filter out the rows affiliated with each site number (115, 128, 134, 135)
winter_site_number_135=winter_ny_nyc[(winter_ny_nyc[['site_number']]==135).all(axis=1)]
winter_site_number_134=winter_ny_nyc[(winter_ny_nyc[['site_number']]==134).all(axis=1)]
winter_site_number_115=winter_ny_nyc[(winter_ny_nyc[['site_number']]==115).all(axis=1)]
winter_site_number_128=winter_ny_nyc[(winter_ny_nyc[['site_number']]==128).all(axis=1)]
winter_site_number_135=winter_site_number_135[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
winter_site_number_134=winter_site_number_134[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
winter_site_number_115=winter_site_number_115[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
winter_site_number_128=winter_site_number_128[['site_number','sample_measurement','time_gmt']].reset_index().set_index('site_number').sort_values(by=['time_gmt'], ascending=True)
# Make a list of site_numbers. Used in for loops to grab specific data by site number & used for new df header.
filtered_ny_nyc_box_list = winter_ny_nyc.site_number.sort_values().unique().tolist()
filtered_ny_nyc_box_list
###Output
_____no_output_____
###Markdown
Begin IQR for Box Plots
###Code
# Loop through the box-plot data for each monitoring site and compute the quartiles of its sample measurements.
# Create an empty list to fill with for loop
measurement_quartile_winter=[]
# Search through winter_ny_nyc_box with '.loc[site_number, 'sample_measurement' column]' and get the quartiles
for i in filtered_ny_nyc_box_list:
location = winter_ny_nyc_box.loc[i, 'sample_measurement'].quantile(q=[.25, .5, .75])
# append results to the measurement_quartile list before moving to the next site in filtered_ny_nyc_box_list
measurement_quartile_winter.append(location)
measurement_quartile_winter
iqr_all_winter = []
# loop through measurement_quartile_winter (one entry per site). Find the IQR for each site, one at a time.
for i in range(len(measurement_quartile_winter)):
iqr = (measurement_quartile_winter[i][0.75])-(measurement_quartile_winter[i][0.25])
# Append finding to iqr_all list before moving to next value
iqr_all_winter.append(iqr)
#print(iqr_all_spring)
# Round numbers to 2 decimal places.
round_iqr_all_winter = [round(num, 2) for num in iqr_all_winter]
# Show list. Verify we have correct amount
assert len(iqr_all_winter) == 4
# Combine both for loop generated list into one.
measurements_iqr_all_winter = [dict(zip(headers_list, round_iqr_all_winter))]
measurements_iqr_all_winter
###Output
_____no_output_____
###Markdown
Begin Box Plots
###Code
# Values for plotting
box_values_winter = round_iqr_all_winter
# Sort the per-site IQR values from smallest to largest for easy comparison
values_sorted_winter = sorted(box_values_winter)
print(values_sorted_winter)
# Sample measurement values per site location
winter_site_135_measurements = winter_site_number_135['sample_measurement']
winter_site_134_measurements = winter_site_number_134['sample_measurement']
winter_site_115_measurements = winter_site_number_115['sample_measurement']
winter_site_128_measurements = winter_site_number_128['sample_measurement']
# Generate a box plot of the sample measurement of each site location, annually
measurement_plot_info = [winter_site_115_measurements,winter_site_128_measurements,winter_site_134_measurements,winter_site_135_measurements]
fig, ax = plt.subplots(figsize=(10, 10))
pos = np.array(range(len(measurement_plot_info))) + 1
bp = ax.boxplot(measurement_plot_info, sym='k+', showfliers=True)
ax.set_xticklabels(headers_list, rotation=45)
ax.set_xlabel('Site Locations: New York County, NY')
ax.set_ylabel('PM 2.5 Measurements')
ax.set_title('New York County, NY: Seasonal (Winter) Site Location Sample Measurements')
plt.savefig("./Images/NY_winter_pm25_BoxPlot.png")
print(f'Number of Samples Measured: {len(winter_ny_nyc)}')
plt.show()
###Output
Number of Samples Measured: 5503
###Markdown
Potential Outliers. Filtering by outlier values and dates may tell us which day or month of the year has larger-than-normal air pollution measurements.
###Code
# East Village, NY (128) Outliers
ev_winter_outliers = bp["fliers"][3].get_data()[1]
ev_winter_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(winter_site_128_measurements)}')
print(f'Total amount of outliers: {len(ev_winter_outliers)}')
print(f'East Village Outlier values: {ev_winter_outliers}')
# Hudson Heights, NY (115) Outliers
hh_winter_outliers = bp["fliers"][2].get_data()[1]
hh_winter_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(winter_site_115_measurements)}')
print(f'Total amount of outliers: {len(hh_winter_outliers)}')
print(f'Hudson Heights Outlier values: {hh_winter_outliers}')
# Lower Manhattan, NY (134) Outliers
lm_winter_outliers = bp["fliers"][1].get_data()[1]
lm_winter_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(winter_site_134_measurements)}')
print(f'Total amount of outliers: {len(lm_winter_outliers)}')
print(f'Lower Manhattan Outlier values: {lm_winter_outliers}')
# West harlem, NY (135) Outliers
westharlem_winter_outliers = bp["fliers"][0].get_data()[1]
westharlem_winter_outliers.sort(axis=-1, kind='quicksort', order=None)
print(f'Total amount samples in set: {len(winter_site_135_measurements)}')
print(f'Total amount of outliers: {len(westharlem_winter_outliers)}')
print(f'West harlem Outlier values: {westharlem_winter_outliers}')
###Output
Total amount samples in set: 1396
Total amount of outliers: 65
West harlem Outlier values: [20.1 20.2 20.2 20.2 20.4 20.5 20.8 20.8 21. 21.1 21.3 21.3 21.3 21.4
21.4 21.4 21.4 21.4 21.4 21.6 21.6 21.6 21.7 21.9 21.9 21.9 21.9 22.
22. 22. 22.1 22.3 22.4 22.4 22.5 22.6 22.7 22.7 22.7 22.8 22.8 23.
23.1 23.3 23.5 23.6 23.6 23.8 23.8 23.9 23.9 24.6 24.6 24.7 24.9 25.
25.3 25.7 26.7 26.8 28.7 29.7 30. 31.2 31.3]
|
notebooks/benchmark/method/ld_prune/lsh/01-data-gen.ipynb
|
###Markdown
LSH Data Generator. Exports PLINK, parquet, and zarr datasets for a single dataset (currently either simulated or sampled from HapMap)
###Code
import os
import hail as hl
import numpy as np
from gwas_analysis.dask import io
from pysnptools import snpreader
import gwas_analysis.simulation.datasets as gsd
import dask.array as da
%run {os.environ['NB_DIR']}/nb.py
%run $BENCHMARK_METHOD_DIR/common.py
sample_rate = .1
ds_name = DATASET_HM
# sample_rate = 1
# ds_name = DATASET_SIM
ds_config = DATASET_CONFIG[ds_name]
ds_export_path = dataset_path(ds_name, sr=sample_rate)
hail_init()
if ds_name == DATASET_SIM:
# Make sure a single contig is used for comparison to Hail results
# (Hail returns 0s for variants on unequal contigs)
mt = gsd.get_ldsim_dataset(n_variants=256, n_samples=6, n_contigs=1, seed=1)
else:
mt = hl.import_plink(
*plink_files(osp.dirname(ds_config['path']), osp.basename(ds_config['path'])),
skip_invalid_loci=False,
reference_genome=ds_config['reference_genome']
)
mt = mt.filter_rows(mt.locus.contig == '1')
mt = mt.sample_rows(p=sample_rate, seed=1)
print('Shape before removing rows with no variance:', mt.count())
mt = mt.annotate_rows(stdev=hl.agg.stats(mt.GT.n_alt_alleles()).stdev)
mt = mt.filter_rows(mt.stdev > 0)
mt.count()
###Output
2020-02-23 23:51:03 Hail: INFO: Found 165 samples in fam file.
2020-02-23 23:51:03 Hail: INFO: Found 1457897 variants in bim file.
2020-02-23 23:51:11 Hail: INFO: Coerced sorted dataset
2020-02-23 23:51:11 Hail: INFO: reading 1 of 2 data partitions
###Markdown
Export Plink
###Code
def export(mt, path):
hl.export_plink(
mt, path,
fam_id=mt.fam_id,
pat_id=mt.pat_id,
mat_id=mt.mat_id,
is_female=mt.is_female,
pheno=mt.is_case,
varid=mt.rsid
)
export(mt, ds_export_path)
ds_export_path
###Output
2020-02-23 23:51:36 Hail: INFO: Coerced sorted dataset
2020-02-23 23:51:36 Hail: INFO: reading 1 of 2 data partitions
2020-02-23 23:51:41 Hail: INFO: Coerced sorted dataset
2020-02-23 23:51:41 Hail: INFO: reading 1 of 2 data partitions
2020-02-23 23:51:46 Hail: INFO: Coerced sorted dataset
2020-02-23 23:51:46 Hail: INFO: reading 1 of 2 data partitions
2020-02-23 23:51:51 Hail: INFO: Coerced sorted dataset
2020-02-23 23:51:51 Hail: INFO: reading 1 of 2 data partitions
2020-02-23 23:51:56 Hail: INFO: Coerced sorted dataset
2020-02-23 23:51:56 Hail: INFO: reading 1 of 2 data partitions
2020-02-23 23:52:04 Hail: INFO: merging 2 files totalling 428.7K...
2020-02-23 23:52:04 Hail: INFO: while writing:
/home/eczech/data/gwas/benchmark/datasets/hapmap-sr=0.1.bed
merge time: 23.359ms
2020-02-23 23:52:04 Hail: INFO: merging 1 files totalling 303.6K...
2020-02-23 23:52:04 Hail: INFO: while writing:
/home/eczech/data/gwas/benchmark/datasets/hapmap-sr=0.1.bim
merge time: 16.373ms
2020-02-23 23:52:05 Hail: INFO: merging 16 files totalling 4.0K...
2020-02-23 23:52:05 Hail: INFO: while writing:
/home/eczech/data/gwas/benchmark/datasets/hapmap-sr=0.1.fam
merge time: 9.343ms
2020-02-23 23:52:05 Hail: INFO: wrote 10451 variants and 165 samples to '/home/eczech/data/gwas/benchmark/datasets/hapmap-sr=0.1'
###Markdown
Export Parquet
###Code
# Note: mean imputation might be useful here
bm = hl.linalg.BlockMatrix.from_entry_expr(hl.coalesce(mt.GT.n_alt_alleles(), -1))
bt = bm.to_table_row_major()
bt.describe()
path = ds_export_path + '.parquet'
bt.to_spark().write.parquet(path, mode='overwrite')
!du -ch $path
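# Sketch of the mean-imputation idea mentioned in the note above (an assumption about how it
# could be done, not what this notebook actually uses): fill missing calls with the per-variant
# mean dosage instead of -1 before building the BlockMatrix.
mt_imp = mt.annotate_rows(mean_gt=hl.agg.mean(mt.GT.n_alt_alleles()))
bm_imp = hl.linalg.BlockMatrix.from_entry_expr(
    hl.coalesce(hl.float64(mt_imp.GT.n_alt_alleles()), mt_imp.mean_gt))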
###Output
452K /home/eczech/data/gwas/benchmark/datasets/hapmap-sr=0.1.parquet
452K total
###Markdown
Export Zarr
###Code
client = get_dask_client(n_workers=4)
client
gt = da.from_array(io.BedArray(snpreader.Bed(ds_export_path, count_A1=True)), lock=False)
# Convert 0=missing, 1=homo ref, etc to -1=missing, 0=homo ref
gt = gt.astype(np.int8) - 1
gt
np.unique(gt.compute(), return_counts=True)
path = ds_export_path + '.zarr'
gt.to_zarr(path, overwrite=True)
!du -ch $path
###Output
828K /home/eczech/data/gwas/benchmark/datasets/hapmap-sr=0.1.zarr
828K total
|
colab_notebook.ipynb
|
###Markdown
Installing and running libraries
###Code
#installing folium for better visualisations
!pip install folium
#installing neptune client
!pip install neptune-client
#import neptune
import neptune.new as neptune
# Connect your code to Neptune new version
import os
myProject = 'gaelkbertrand/google-stock-prediction'
neptune.init(project=myProject,
api_token='eyJhcGlfYWRkcmVzcyI6Imh0dHBzOi8vYXBwLm5lcHR1bmUuYWkiLCJhcGlfdXJsIjoiaHR0cHM6Ly9hcHAubmVwdHVuZS5haSIsImFwaV9rZXkiOiI4YTc4ZTU1My01NGY3LTQ2YmMtODMxMi1iZWYxMzY4ZjQ2YzMifQ==')
#running neptune to log runs: not necessary if the above line run perfectly
#run = neptune.init(project=project_name, api_token=api_token)
###Output
https://app.neptune.ai/gaelkbertrand/google-stock-prediction/e/GOOG-1
###Markdown
Setting up a Google Drive working directory
###Code
#setting up a working_directory
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
! ls /content/drive/My\ Drive/1.ASP2022
import os
# This helps you setup your working directory to a folder in your Google Drive.
# Your files should be saved in your Google Drive!
# the base Google Drive directory
root_dir = "/content/drive/My Drive/"
# choose where you want your project files to be saved
ML_final_project = "1.ASP2022/iml_final_project"
def create_and_set_working_directory(ML_final_project):
# check if your project folder exists. if not, it will be created.
if os.path.isdir(root_dir + ML_final_project) == False:
os.mkdir(root_dir + ML_final_project)
print(root_dir + ML_final_project + ' did not exist but was created.')
# change the OS to use your project folder as the working directory
os.chdir(root_dir + ML_final_project)
# create a test file to make sure it shows up in the right place
!touch 'new_file_in_working_directory.txt'
print('\nYour working directory was changed to ' + root_dir + ML_final_project + \
"\n\nAn empty text file was created there. You can also run !pwd to confirm the current working directory." )
create_and_set_working_directory(ML_final_project)
###Output
Mounted at /content/drive
20220112_201807.mp4 'Calendars '
'Academic Documents' 'My account info.zip'
'Action plans' new_file_in_working_directory.txt
Assignments 'Online gaming competition Tech Min.pptx'
'Bank complaints' Research
Bullshit Work
/content/drive/My Drive/1.ASP2022/iml_final_project did not exist but was created.
Your working directory was changed to /content/drive/My Drive/1.ASP2022/iml_final_project
An empty text file was created there. You can also run !pwd to confirm the current working directory.
###Markdown
Install other libraries for ML
###Code
#installing other dependencies
import os
import pandas as pd
import numpy as np
# for reproducibility of my prediction results
np.random.seed(42)
from datetime import date
from matplotlib import pyplot as plt
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from keras.models import Sequential, Model
from keras.models import Model
from keras.layers import Dense, Dropout, LSTM, Input, Activation, concatenate
import tensorflow as tf
tf.random.set_seed(42)
import matplotlib.pyplot as plt
import pandas as pd
import datetime as dt
import urllib.request, json
os.chdir('/content/drive/My Drive/1.ASP2022/iml_final_project')
###Output
_____no_output_____
###Markdown
Recheck and verify your working directory
###Code
!pwd
###Output
/content/drive/My Drive/1.ASP2022/iml_final_project
###Markdown
The ML project. Importing Google Inc. stock data from Alpha Vantage
###Code
#import data from 'Alpha Vantage'
data_source= 'alphavantage'
if data_source == 'alphavantage':
api_key = 'eyJhcGlfYWRkcmVzcyI6Imh0dHBzOi8vYXBwLm5lcHR1bmUuYWkiLCJhcGlfdXJsIjoiaHR0cHM6Ly9hcHAubmVwdHVuZS5haSIsImFwaV9rZXkiOiI4YTc4ZTU1My01NGY3LTQ2YmMtODMxMi1iZWYxMzY4ZjQ2YzMifQ=='
# here, write your desired stock ticker symbol
ticker = 'GOOGL'
# This is the JSON file with all the stock prices data
url_string = "https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=%s&outputsize=full&apikey=%s"%(ticker,api_key)
# Save data to this file
fileName = 'stock_market_data-%s.csv'%ticker
### get the low, high, close, and open prices
if not os.path.exists(fileName):
with urllib.request.urlopen(url_string) as url:
data = json.loads(url.read().decode())
# pull the desired stock market data
data = data['Time Series (Daily)']
df = pd.DataFrame(columns=['Date','Low','High','Close','Open'])
for key,val in data.items():
date = dt.datetime.strptime(key, '%Y-%m-%d')
data_row = [date.date(),float(val['3. low']),float(val['2. high']),
float(val['4. close']),float(val['1. open'])]
df.loc[-1,:] = data_row
df.index = df.index + 1
df.to_csv(fileName)
else:
print('Loading data from local')
df = pd.read_csv(fileName)
###Output
_____no_output_____
###Markdown
Data preprocessing
###Code
# Sort this DataFrame by date
stockprices = df.sort_values('Date')
###Output
_____no_output_____
###Markdown
Define functions that help in calculating the performance: RMSE (Root Mean Squared Error) and MAPE (Mean Absolute Percentage Error)
###Code
#### Define helper functions to calculate the metrics RMSE and MAPE ####
def calculate_rmse(y_true, y_pred):
"""
Here, you can calculate the Root Mean Squared Error (RMSE)
"""
rmse = np.sqrt(np.mean((y_true-y_pred)**2))
return rmse
### The effectiveness of prediction method is measured in terms of the Mean Absolute Percentage Error (MAPE) and RMSE
def calculate_mape(y_true, y_pred):
"""
Here, you can calculate the Mean Absolute Percentage Error (MAPE) %
"""
y_pred, y_true = np.array(y_pred), np.array(y_true)
mape = np.mean(np.abs((y_true-y_pred) / y_true))*100
return mape
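# Added sanity check (illustration only): predicting [100, 110, 120] when the truth is
# [100, 100, 100] gives RMSE = sqrt((0**2 + 10**2 + 20**2)/3) ≈ 12.91 and
# MAPE = (0% + 10% + 20%)/3 = 10%.
assert round(calculate_rmse(np.array([100, 100, 100]), np.array([100, 110, 120])), 2) == 12.91
assert round(calculate_mape(np.array([100, 100, 100]), np.array([100, 110, 120])), 2) == 10.0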
###Output
_____no_output_____
###Markdown
Splitting the data into a training and test set
###Code
## Split the time-series data into a training sequence X and output value Y
def extract_seqX_outcomeY(data, N, offset):
"""
Arguments explanation:
data - dataset
N - window size, e.g., 60 for 60 days
offset - position to start the split
"""
X, y = [], []
for i in range(offset, len(data)):
X.append(data[i-N:i])
y.append(data[i])
return np.array(X), np.array(y)
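# Added toy check (illustration only): with a dummy series of 10 points, N=3 and offset=3,
# each row of X holds 3 consecutive values and y holds the value that immediately follows.
_demo_X, _demo_y = extract_seqX_outcomeY(np.arange(10), N=3, offset=3)
assert _demo_X.shape == (7, 3) and _demo_y.shape == (7,)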
#### Train-Test split for time-series ####
test_ratio = 0.2
training_ratio = 1 - test_ratio
train_size = int(training_ratio * len(stockprices))
test_size = int(test_ratio * len(stockprices))
print("train_size: " + str(train_size))
print("test_size: " + str(test_size))
train = stockprices[:train_size][['Date', 'Close']]
test = stockprices[train_size:][['Date', 'Close']]
###Output
train_size: 3514
test_size: 878
###Markdown
The LSTM (Long Short Term Memory) Model. Simple Moving Average calculation
###Code
stockprices = stockprices.set_index('Date')
### For medium-term trading
def plot_stock_trend(var, cur_title, stockprices=stockprices, logNeptune=True, logmodelName='Simple MA'):
ax = stockprices[['Close', var,'200day']].plot(figsize=(20, 10))
plt.grid(False)
plt.title(cur_title)
plt.axis('tight')
plt.ylabel('Stock Price ($)')
if logNeptune:
npt_exp.log_image(f'Plot of Stock Predictions with {logmodelName}', ax.get_figure())
def calculate_perf_metrics(var, logNeptune=True, logmodelName='Simple MA'):
### RMSE
rmse = calculate_rmse(np.array(stockprices[train_size:]['Close']), np.array(stockprices[train_size:][var]))
### MAPE
mape = calculate_mape(np.array(stockprices[train_size:]['Close']), np.array(stockprices[train_size:][var]))
if logNeptune:
npt_exp.send_metric('RMSE', rmse)
npt_exp.log_metric('RMSE', rmse)
npt_exp.send_metric('MAPE (%)', mape)
npt_exp.log_metric('MAPE (%)', mape)
return rmse, mape
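# Note (added assumption, not necessarily how the full notebook does it): the plotting helper
# above expects a '200day' column plus a short-window column passed as `var`; such simple
# moving averages could be computed with pandas rolling means, for example:
stockprices['50day'] = stockprices['Close'].rolling(50).mean()
stockprices['200day'] = stockprices['Close'].rolling(200).mean()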
# Number of past trading days used for each training sequence (a 50-day window here)
window_size = 50
CURRENT_MODEL = 'LSTM'
###Output
_____no_output_____
###Markdown
The model
###Code
#The model is based on the one built by Neptune.AI (https://neptune.ai/); treat this as a formal citation of that reference.
#neptune.create_experiment()
if CURRENT_MODEL == 'LSTM':
layer_units, optimizer = 50, 'adam'
cur_epochs = 15
cur_batch_size = 20
cur_LSTM_pars = {'units': layer_units,
'optimizer': optimizer,
'batch_size': cur_batch_size,
'epochs': cur_epochs
}
# Create an experiment and log the model in Neptune new verison
npt_exp = neptune.init(
api_token="eyJhcGlfYWRkcmVzcyI6Imh0dHBzOi8vYXBwLm5lcHR1bmUuYWkiLCJhcGlfdXJsIjoiaHR0cHM6Ly9hcHAubmVwdHVuZS5haSIsImFwaV9rZXkiOiI4YTc4ZTU1My01NGY3LTQ2YmMtODMxMi1iZWYxMzY4ZjQ2YzMifQ==",
project=myProject,
name='LSTM',
description='Google-stock-prediction-machine-learning',
tags=['stockprediction', 'LSTM','neptune'])
npt_exp['LSTMPars'] = cur_LSTM_pars
## Here, you have to use the past N stock prices for training to predict the N+1th closing price
# scale
scaler = StandardScaler()
scaled_data = scaler.fit_transform(stockprices[['Close']])
scaled_data_train = scaled_data[:train.shape[0]]
X_train, y_train = extract_seqX_outcomeY(scaled_data_train, window_size, window_size)
### Build the LSTM model and log your model summary to Neptune to check on performance ###
def Run_LSTM(X_train, layer_units=50, logNeptune=True, NeptuneProject=None):
inp = Input(shape=(X_train.shape[1], 1))
x = LSTM(units=layer_units, return_sequences=True)(inp)
x = LSTM(units=layer_units)(x)
out = Dense(1, activation='linear')(x)
model = Model(inp, out)
# Compile the LSTM neural network
model.compile(loss = 'mean_squared_error', optimizer = 'adam')
## log into Neptune
if logNeptune:
model.summary(print_fn=lambda x: NeptuneProject['model_summary'].log(x))
return model
#Run the model
model = Run_LSTM(X_train, layer_units=layer_units, logNeptune=True, NeptuneProject=npt_exp)
history = model.fit(X_train, y_train, epochs=cur_epochs, batch_size=cur_batch_size,
verbose=1, validation_split=0.1, shuffle=True)
#Now, you can predict stock prices using the past window_size stock prices
def preprocess_testdat(data=stockprices, scaler=scaler, window_size=window_size, test=test):
raw = data['Close'][len(data) - len(test) - window_size:].values
raw = raw.reshape(-1,1)
raw = scaler.transform(raw)
X_test = []
for i in range(window_size, raw.shape[0]):
X_test.append(raw[i-window_size:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
return X_test
#Run it
X_test = preprocess_testdat()
predicted_price_ = model.predict(X_test)
predicted_price = scaler.inverse_transform(predicted_price_)
# Plot the predicted price versus the actual closing price
test['Predictions_lstm'] = predicted_price
###Output
https://app.neptune.ai/gaelkbertrand/google-stock-prediction/e/GOOG-8
Remember to stop your run once you’ve finished logging your metadata (https://docs.neptune.ai/api-reference/run#.stop). It will be stopped automatically only when the notebook kernel/interactive console is terminated.
Epoch 1/15
156/156 [==============================] - 21s 112ms/step - loss: 0.0114 - val_loss: 0.0041
Epoch 2/15
156/156 [==============================] - 17s 106ms/step - loss: 0.0030 - val_loss: 0.0038
Epoch 3/15
156/156 [==============================] - 18s 113ms/step - loss: 0.0025 - val_loss: 0.0028
Epoch 4/15
156/156 [==============================] - 20s 131ms/step - loss: 0.0022 - val_loss: 0.0031
Epoch 5/15
156/156 [==============================] - 18s 115ms/step - loss: 0.0020 - val_loss: 0.0027
Epoch 6/15
156/156 [==============================] - 19s 125ms/step - loss: 0.0018 - val_loss: 0.0024
Epoch 7/15
156/156 [==============================] - 17s 111ms/step - loss: 0.0016 - val_loss: 0.0035
Epoch 8/15
156/156 [==============================] - 17s 106ms/step - loss: 0.0015 - val_loss: 0.0027
Epoch 9/15
156/156 [==============================] - 17s 110ms/step - loss: 0.0014 - val_loss: 0.0039
Epoch 10/15
156/156 [==============================] - 18s 115ms/step - loss: 0.0013 - val_loss: 0.0019
Epoch 11/15
156/156 [==============================] - 20s 130ms/step - loss: 0.0013 - val_loss: 0.0019
Epoch 12/15
156/156 [==============================] - 17s 112ms/step - loss: 0.0012 - val_loss: 0.0019
Epoch 13/15
156/156 [==============================] - 18s 117ms/step - loss: 0.0012 - val_loss: 0.0017
Epoch 14/15
156/156 [==============================] - 17s 108ms/step - loss: 0.0011 - val_loss: 0.0018
Epoch 15/15
156/156 [==============================] - 17s 108ms/step - loss: 0.0011 - val_loss: 0.0016
###Markdown
Performance Evaluation: evaluating the performance via RMSE and MAPE
###Code
rmse_lstm = calculate_rmse(np.array(test['Close']), np.array(test['Predictions_lstm']))
mape_lstm = calculate_mape(np.array(test['Close']), np.array(test['Predictions_lstm']))
# npt_exp.send_metric('RMSE', rmse_lstm)
# npt_exp.log_metric('RMSE', rmse_lstm)
npt_exp['RMSE'].log(rmse_lstm) ## 12-18
# npt_exp.send_metric('MAPE (%)', mape_lstm)
# npt_exp.log_metric('MAPE (%)', mape_lstm)
npt_exp['MAPE (%)'].log(mape_lstm)
print('RMSE=', rmse_lstm, ':An error difference of 114. Pretty low compared to how Google stock changed in the last 18 years')
print('MAPE % =', mape_lstm, ':Good performance. An error of 3% difference between the actual price vs the predicted price.')
###Output
RMSE= 114.38063218855103 :An error difference of 114. Pretty low compared to how Google stock changed in the last 18 years
MAPE % = 3.354019131687904 :Good performance. An error of 3% difference between the actual price vs the predicted price.
###Markdown
Plotting Function Definition. Note: if you have color blindness, specify your preferred colors in the code below.
###Code
### Plot prediction and true trends and log to Neptune
def plot_stock_trend_lstm(train, test, logNeptune=True):
fig = plt.figure(figsize = (20,10))
plt.plot(train['Date'], train['Close'], label = 'TRAIN Closing Price')
plt.plot(test['Date'], test['Close'], label = 'TEST Closing Price')
plt.plot(test['Date'], test['Predictions_lstm'], label = 'Predicted Closing Price')
plt.title('Google inc stock price prediction using the LSTM Model: 2004-2022')
plt.xlabel('Date')
plt.ylabel('Stock Price ($)')
plt.legend(loc="upper left")
## Log image to Neptune new version
if logNeptune:
## npt_exp.log_image('Plot of Stock Predictions with LSTM', fig)
npt_exp['Plot of Stock Predictions with LSTM'].upload(neptune.types.File.as_image(fig))
###Output
_____no_output_____
###Markdown
Plotting
###Code
plot_stock_trend_lstm(train, test)
print('RMSE=', rmse_lstm, ':An error difference of 114. Pretty low compared to how Google stock changed in the last 18 years')
print('MAPE % =', mape_lstm, ':Good performance. An error of 3% difference between the actual price vs the predicted price.')
###Output
RMSE= 114.38063218855103 :An error difference of 114. Pretty low compared to how Google stock changed in the last 18 years
MAPE % = 3.354019131687904 :Good performance. An error of 3% difference between the actual price vs the predicted price.
###Markdown
Stop the experiment
###Code
npt_exp.stop()
###Output
Shutting down background jobs, please wait a moment...
Done!
###Markdown
(Do Not) Do-It-Yourself COVID-19 "Data Scientist" Kit. This notebook offers a drag-and-drop interface for polynomial and exponential curve fitting that will allow you to predict COVID-19 cases in Thailand like a "data scientist" with just some hyperparameter tuning, and explains why you should or should not do it. Imports
###Code
%matplotlib inline
import pandas as pd
import numpy as np
from scipy.optimize import curve_fit
from functools import partial
import matplotlib.pyplot as plt
from plotnine import *
from mizani.formatters import *
def poly_func(x, *coefs):
return sum([coefs[i] * x**i for i in range(len(coefs))])
def exp_func(x, b0, b1, b2):
return b0 * np.exp(b1 * x) + b2
def get_prediction(func, cutting_point=33, nb_coefs = 3, valid_interval=5, show_plot=True):
x_train = np.array(th.index[:cutting_point+1])
y_train = np.array(th.total_cases[:cutting_point+1])
x_valid = np.array(th.index[cutting_point:])
y_valid = np.array(th.total_cases[cutting_point:])
coefs, _ = curve_fit(func, x_train, y_train, p0=np.ones(nb_coefs))
y_pred = np.array([func(i,*coefs) for i in x_valid])
y_fit = np.array([func(i,*coefs) for i in x_train])
if show_plot:
print(f'Total MSE: {np.mean((y_pred-y_valid)**2):.2f}; Total MAPE: {100*np.mean(abs(y_pred-y_valid)/y_valid):.2f}%')
print(f'{valid_interval}-day MSE: {np.mean((y_pred[:valid_interval]-y_valid[:valid_interval])**2):.2f}; {valid_interval}-day MAPE: {np.mean(abs(y_pred[:valid_interval]-y_valid[:valid_interval])/y_valid[:valid_interval]):.2f}%')
plot_df = pd.DataFrame({'day':np.concatenate([x_train,x_valid]),
'prediction':np.concatenate([y_fit,y_pred]),
'ground_truth':np.concatenate([y_train,y_valid])}).melt(id_vars='day')
g = (ggplot(plot_df,aes(x='day',y='value',color='variable')) + geom_line() +
geom_vline(xintercept=cutting_point,linetype="dashed") + theme_minimal()+
annotate(geom="text", x=cutting_point-10,y=500, label='Training Period') +
annotate(geom="text", x=cutting_point+10,y=500, label='Prediction Period') +
coord_cartesian(xlim=(0,50))+
ylab('Total Cases') + xlab('Days Since First Case'))
g.draw()
# plt.plot(x_train,y_train)
# plt.plot(x_valid,y_valid)
# plt.plot(x_train,y_fit)
# plt.plot(x_valid,y_pred)
return np.mean((y_pred-y_valid)**2), np.mean(abs(y_pred-y_valid)/y_valid), \
np.mean((y_pred[:valid_interval]-y_valid[:valid_interval])**2), \
np.mean(abs(y_pred[:valid_interval]-y_valid[:valid_interval])/y_valid[:valid_interval])
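# Small self-contained check (added illustration) of the variable-length trick referenced in the
# shout-out below: curve_fit infers how many polynomial coefficients to fit from the length of p0.
_x_demo = np.arange(10, dtype=float)
_y_demo = 2.0 + 3.0 * _x_demo  # an exact straight line
_coefs_demo, _ = curve_fit(poly_func, _x_demo, _y_demo, p0=np.ones(2))
assert np.allclose(_coefs_demo, [2.0, 3.0], atol=1e-6)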
###Output
_____no_output_____
###Markdown
Data. We use Thailand data from [Our World In Data](https://ourworldindata.org/coronavirus-source-data), which contains the number of cases and deaths from COVID-19.
###Code
df = pd.read_csv('https://covid.ourworldindata.org/data/ecdc/full_data.csv')
th = df[df.location=='Thailand'].tail(40).reset_index(drop=True).reset_index()
th.tail()
###Output
_____no_output_____
###Markdown
You Can Fit ANY Curve. Use the widgets below to finetune how many days you want to include in your model training (and effectively how many days you want to predict) and some hyperparameters, then *※magic※* you are now making predictions like a "data scientist". Shoutout to this [stackoverflow thread](https://stackoverflow.com/questions/50727723/curve-fit-with-polynomials-of-variable-length) that shows how to do `scipy.optimize` for functions with variable length of input. Polynomial Curves
###Code
#@title {run: "auto"}
cutting_point = 33 #@param {type:"slider", min:25, max:35, step:1}
nb_coefs = 4 #@param {type:"slider", min:1, max:7, step:1}
get_prediction(poly_func,cutting_point=cutting_point,nb_coefs=nb_coefs);
###Output
Total MSE: 50090.79; Total MAPE: 24.75%
5-day MSE: 53118.40; 5-day MAPE: 0.28%
###Markdown
Exponential Curves
###Code
#@title {run: "auto"}
cutting_point = 27 #@param {type:"slider", min:25, max:35, step:1}
get_prediction(exp_func,cutting_point=cutting_point,nb_coefs=3);
###Output
Total MSE: 122366.30; Total MAPE: 23.97%
5-day MSE: 1199.51; 5-day MAPE: 0.16%
###Markdown
Why This Is A Terrible Idea To Predict From 30 Data Points. It is great we have such tools and data that allow us to do some fun predictions, but let us see why curve fitting with very few data points (25-35 to be exact) is a terrible idea. Let us take the period where the number of total cases starts to look like an exponential curve:
###Code
g = (ggplot(th.iloc[25:,[0,5]], aes(x='index',y='total_cases',group=1))+
geom_line() + theme_minimal() +
ggtitle('Time Period When Things Get "Exponential"') +
scale_x_continuous(breaks=[i for i in range(40)])+
theme(legend_title = element_blank())+
xlab('Days Since First Case') + ylab('Total Cases'))
g
###Output
_____no_output_____
###Markdown
Now remember we have 2 main hyperparameters to tweak: (1) models: polynomial (at what number of coefficients) or exponential; (2) number of days used to train. Let us say we use mean absolute percentage error (MAPE) for predictions in the next 5 days as our model error rates; we arrive at the following plot:
###Code
results = []
for cp in range(25,35):
for nb in range(1,7):
result = get_prediction(poly_func,cp,nb,5,False)
results.append({'cutting_point':cp, 'nb_coefs':nb, 'mse': result[2], 'mape': result[3]})
exp_result = get_prediction(exp_func,cp,3,5,False)
results.append({'cutting_point':cp, 'nb_coefs':'exp', 'mse': exp_result[2], 'mape': exp_result[3]})
result_df = pd.DataFrame(results)
g = (ggplot(result_df, aes(x='cutting_point',y='mape',
group='nb_coefs',color='factor(nb_coefs)')) +
geom_line() + theme_minimal() +
ggtitle('Different Hyperparameters and Prediction Errors Over Time') +
scale_y_continuous(labels=percent_format(), breaks=[i/10 for i in range(10)])+
scale_x_continuous(breaks=[i for i in range(40)])+
theme(legend_title = element_blank())+
xlab('Days Used to Train') + ylab('5-day Mean Absolute Percentage Error'))
g
###Output
_____no_output_____
###Markdown
Clearly, polynomial curves with one coefficient (aka a flat line) or two coefficients (aka a straight line) are reliably bad with MAPE of over 50% over all number of days used to trained. But more importantly, even if you look at models which do reasonably well such as polynomial with 4, 5 and 6 coefficients, and the exponential curve, you can see that these models have quite large variance in their performance with range from 10% up to 40% MAPE.At no point in time is one model "the best" for this prediction. This is to be expected since:1. We have very few training data points2. What we are trying to predict has a very dynamic nature3. We do not have any means to obtain the features that might describe that dynamic natureNow then, if you are now asking yourself:> Why are we trying to apply statistical modeling (machine learning) to the ONE situation where we are taught in every single textbook that it is not suitable for? Then I have accomplished my goal with this notebook.
###Code
###Output
_____no_output_____
###Markdown
Bootstrap2.5d Colab. Before running the script, go to the toolbar above and select: Runtime → Change Runtime type → Hardware accelerator → GPU. After the runtime has been changed, the necessary dependencies need to be downloaded.
###Code
# Install dependencies
!pip install snakemake tqdm torch simpleitk albumentations -U -q
#reload modules whenever they are updated
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import numpy as np
import SimpleITK as sitk
from glob import glob
from matplotlib import pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
###Output
_____no_output_____
###Markdown
Next, the bootstrap2.5d directory will be downloaded and the directories for the model's inputs and outputs will be created.
###Code
# Designate path of bootstrap2.5d directory
# Change to /content/drive/My Drive/bootstrap2.5d if downloading to Google Drive
boot_dir = "/content/bootstrap2.5d"
# Make the boot_dir and clone the GitHub repo
os.mkdir(boot_dir)
!git clone https://github.com/volume-em/bootstrap2.5d.git {boot_dir}
# Create data directories
os.mkdir(os.path.join(boot_dir, 'models'))
os.mkdir(os.path.join(boot_dir, 'data'))
os.mkdir(os.path.join(boot_dir, 'data/train'))
os.mkdir(os.path.join(boot_dir, 'data/train/images'))
os.mkdir(os.path.join(boot_dir, 'data/train/masks'))
os.mkdir(os.path.join(boot_dir, 'data/target'))
os.mkdir(os.path.join(boot_dir, 'data/target/images'))
#this is optional, only applies if there is a ground truth mask for the target images
os.mkdir(os.path.join(boot_dir, 'data/target/masks'))
###Output
_____no_output_____
###Markdown
The repository has been cloned and the necessary directories have been created; now datasets can be added. To upload data, click the folder icon to the left. From there, files can be added by: (1) dragging files from your local file browser into Colab; (2) right-clicking on a folder in Colab and pressing upload; (3) pressing "Mount Drive" at the top of the files tab, which will allow you to access files from your Google Drive in Colab. Files uploaded to Colab runtimes are deleted when sessions end; for permanent storage the bootstrap2.5d repository should be downloaded directly to your Google Drive. Alternatively, if the repo is downloaded to the runtime's temporary storage, one can click and drag files into Drive. Example. In this example, we are downloading data from the paper [Automatic segmentation of mitochondria and endolysosomes in volumetric electron microscopy data](https://www.sciencedirect.com/science/article/abs/pii/S0010482520300792?via%3Dihub). This dataset contains training volumes with segmented mitochondria and lysosomes. Because the organelles are segmented in separate files, we have to combine them in the cell below.
###Code
# Run the setup script, save data in the data folder
data_dir = os.path.join(boot_dir, 'data')
setup_script = os.path.join(boot_dir, 'example_data/setup_data.py')
!python {setup_script} {data_dir}
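# Added illustration (a sketch only; the real merging is handled by setup_data.py above):
# combining two binary organelle masks into a single multi-class label volume, with one
# possible label assignment of background=0, lysosomes=1, mitochondria=2. Synthetic arrays
# are used so the example stands on its own.
_mito = np.zeros((4, 8, 8), dtype=np.uint8); _mito[:, :4, :4] = 1
_lyso = np.zeros((4, 8, 8), dtype=np.uint8); _lyso[:, 4:, 4:] = 1
_combined = np.zeros_like(_mito)
_combined[_lyso > 0] = 1
_combined[_mito > 0] = 2
assert set(np.unique(_combined)) == {0, 1, 2}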
###Output
_____no_output_____
###Markdown
Now that the dataset has been downloaded, we can visualize what our training volumes and masks look like.
###Code
# Create a custom colormap so that background is transparent
# get colormap
ncolors = 256
color_array = plt.get_cmap('viridis')(range(ncolors))
# change alpha values
#color_array[:,-1] = np.linspace(0.0, 1.0, ncolors)
color_array[0, -1] = 0.0
# create a colormap object
map_object = LinearSegmentedColormap.from_list(name='viridis_alpha',colors=color_array)
# register this new colormap with matplotlib
plt.register_cmap(cmap=map_object)
# Take some 2D slices of a 3D volume
vol_index = 0 #index of file in the train/images directory
#get a list of the train image volumes
impaths = np.sort(glob(os.path.join(data_dir, 'train/images/*.nii.gz')))
impath = impaths[vol_index]
#load the image and the mask and convert them to arrays
train_image = sitk.GetArrayFromImage(sitk.ReadImage(impath))
train_mask = sitk.GetArrayFromImage(sitk.ReadImage(impath.replace('/images/', '/masks/')))
#sample 12 evenly spaced slices from the xy plane
slice_indices = np.linspace(0, train_image.shape[0] - 1, num=12, dtype='int')
#create subplots
f, ax = plt.subplots(2, 6, figsize=(16, 8))
#plot the images
c = 0
for y in range(2):
for x in range(6):
slice_index = slice_indices[c]
ax[y, x].set_title(f'Slice {slice_index}')
ax[y, x].imshow(train_image[slice_index], cmap='gray')
ax[y, x].imshow(train_mask[slice_index], alpha=0.3, cmap='viridis_alpha')
ax[y, x].set_yticks([])
ax[y, x].set_xticks([])
c += 1
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now we're ready to run the snakemake. We need to set 3 configuration parameters: the data directory, model directory, and the number of segmentation classes. In this case we have 3: background, lysosomes, and mitochondria. For binary segmentation, we would set n_classes=1.Other hyperparameters or file paths can be changed by directly editing the Snakefile. Text files can be edited directly in Colab by double clicking them in the file browser.
###Code
snakefile = os.path.join(boot_dir, 'Snakefile')
model_dir = os.path.join(boot_dir, 'models')
!snakemake -s {snakefile} --cores all --config data_dir={data_dir} model_dir={model_dir} n_classes=3
###Output
_____no_output_____
###Markdown
Now that predictions have been made, we can visualize how the two algorithms performed.
###Code
# Take some 2D slices of a 3D volume
vol_index = 0 #index of file in the target/images directory
#get a list of the train image volumes
impaths = np.sort(glob(os.path.join(data_dir, 'target/images/*.nii.gz')))
impath = impaths[vol_index]
#load the image and mask and predictions and convert them to arrays
target_image = sitk.GetArrayFromImage(sitk.ReadImage(impath))
target_mask = sitk.GetArrayFromImage(sitk.ReadImage(impath.replace('/images/', '/masks/')))
target_super_preds = sitk.GetArrayFromImage(sitk.ReadImage(impath.replace('/images/', '/super_preds/')))
target_weaksuper_preds = sitk.GetArrayFromImage(sitk.ReadImage(impath.replace('/images/', '/weaksuper_preds/')))
#sample 6 evenly spaced slices from the xy plane
slice_indices = np.linspace(0, train_image.shape[0] - 1, num=6, dtype='int')
column_names = ['Ground Truth', 'Step 1. Super Prediction', 'Step 2. Weak Super Prediction']
#create subplots
f, ax = plt.subplots(6, 3, figsize=(16, 20))
#plot the images
c = 0
for y in range(6):
for x, overlay in enumerate([target_mask, target_super_preds, target_weaksuper_preds]):
slice_index = slice_indices[c]
ax[y, x].set_title(column_names[x])
ax[y, x].set_ylabel(f'Slice {slice_index}')
ax[y, x].imshow(target_image[slice_index], cmap='gray')
ax[y, x].imshow(overlay[slice_index], alpha=0.3, cmap='viridis_alpha')
ax[y, x].set_yticks([])
ax[y, x].set_xticks([])
c += 1
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Clone the repo
###Code
import os
os.chdir('/content/')
!if [ ! -e ~/work ]; then git clone https://github.com/ericpts/CycleGAN-TensorFlow ~/work; fi
os.chdir('/content/work')
! git fetch origin master
! git reset --hard origin/master
!./init.py
!python3 ./build_data.py --X_input_dir data/comp/trainA --X_output_file data/tfrecords/black.tfrecords --Y_input_dir data/comp/trainB --Y_output_file data/tfrecords/colored.tfrecords
###Output
_____no_output_____
###Markdown
Install a Drive FUSE wrapper, authenticate, and mount a Drive filesystem
###Code
# Install a Drive FUSE wrapper.
# https://github.com/astrada/google-drive-ocamlfuse
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
# Generate auth tokens for Colab
from google.colab import auth
auth.authenticate_user()
# Generate creds for the Drive FUSE library.
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
# Create a directory and mount Google Drive using that directory.
!mkdir -p ~/drive
!google-drive-ocamlfuse ~/drive
!mkdir -p /content/drive/checkpoints
!mkdir -p /content/drive/samples
!if [ -e /content/work/checkpoints ]; then unlink /content/work/checkpoints; fi
!if [ -e /content/work/samples ]; then unlink /content/work/samples; fi
!ln -sf /content/drive/checkpoints /content/work/checkpoints
!ln -sf /content/drive/samples /content/work/samples
import subprocess
import os
datetime=subprocess.check_output(['ls', 'checkpoints']).decode('utf-8').strip()
os.environ['DATETIME']=datetime
!ls checkpoints/$DATETIME
###Output
_____no_output_____
###Markdown
Train the network
###Code
!python3 train.py --X data/tfrecords/black.tfrecords --Y data/tfrecords/colored.tfrecords --load_model $DATETIME
###Output
_____no_output_____
###Markdown
Export the models
###Code
!python3 export_graph.py --checkpoint_dir checkpoints/$DATETIME \
--XtoY_model bw2color.pb \
--YtoX_model color2bw.pb \
--image_size 256
###Output
_____no_output_____
###Markdown
Generate samples
###Code
!python3 generate_samples.py --nsamples 100
!ls
###Output
_____no_output_____
|
Health_Insurance_Charge.ipynb
|
###Markdown
Health Insurance Charge Prediction Model - Using Linear Regression 📤 Import Libraries
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import datasets, linear_model
###Output
_____no_output_____
###Markdown
Check out the Data
###Code
health_data = pd.read_csv("/content/insurance.csv")
print(health_data.head(10))
health_data.shape
###Output
age sex bmi children smoker region charges
0 19 female 27.900 0 yes southwest 16884.92400
1 18 male 33.770 1 no southeast 1725.55230
2 28 male 33.000 3 no southeast 4449.46200
3 33 male 22.705 0 no northwest 21984.47061
4 32 male 28.880 0 no northwest 3866.85520
5 31 female 25.740 0 no southeast 3756.62160
6 46 female 33.440 1 no southeast 8240.58960
7 37 female 27.740 3 no northwest 7281.50560
8 37 male 29.830 2 no northeast 6406.41070
9 60 female 25.840 0 no northwest 28923.13692
###Markdown
Check null entries
###Code
health_data.isnull().sum()
###Output
_____no_output_____
###Markdown
Converting categorical data into integer values.
###Code
gender_mapping={'male':1,'female':2}
smoker_mapping={'yes':1,'no':0}
region_mapping={'northeast':1,'southeast':2,'southwest':3,'northwest':4}
health_data['sex']=health_data['sex'].map(gender_mapping)
health_data['smoker']=health_data['smoker'].map(smoker_mapping)
health_data['region']=health_data['region'].map(region_mapping)
health_data.head(10)
###Output
_____no_output_____
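###Markdown
 Side note (added for illustration, not part of the original analysis): mapping `region` to the integers 1-4 imposes an artificial ordering on an unordered category. A common alternative is one-hot encoding, sketched below with `pd.get_dummies`; the `region_onehot` frame is only illustrative and is not used in the rest of the notebook.
###Code
# hypothetical alternative: one-hot encode the (already integer-mapped) region column
region_onehot = pd.get_dummies(health_data['region'], prefix='region')
pd.concat([health_data.drop(columns=['region']), region_onehot], axis=1).head()
###Output
_____no_output_____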
###Markdown
PairPlot A seaborn pairplot is used when you want to visualize the pairwise relationships between the variables in a dataset.
###Code
sns.pairplot(health_data)
###Output
_____no_output_____
###Markdown
Observing distplots and applying log transformation DistPlot: a seaborn distplot shows a histogram with a density line on it. Log Transformation: the log transformation reduces or removes the skewness of the original data.
###Code
# health_data['bmi_log'] = np.log(health_data['bmi']+1)
sns.distplot(health_data["bmi"])
# health_data['age'] = np.log(health_data['age']+1)
sns.distplot(health_data["age"])
health_data['charges_log'] = np.log(health_data['charges']+1)
sns.distplot(health_data["charges_log"])
###Output
/usr/local/lib/python3.7/dist-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
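###Markdown
 As a quick check (added for illustration; it assumes `scipy` is available, which it is in most Colab/Anaconda environments), we can compare the skewness of `charges` before and after the log transform to confirm that the transformation reduces skew.
###Code
from scipy.stats import skew
# skewness should drop noticeably after the log transform
print("skew(charges):    ", skew(health_data['charges']))
print("skew(charges_log):", skew(health_data['charges_log']))
###Output
_____no_output_____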
###Markdown
Printing the correlation between charges and the other variables
###Code
correlation=health_data.corr()
print(correlation['charges'].sort_values(ascending=False))
###Output
charges 1.000000
charges_log 0.892996
smoker 0.787251
age 0.299008
bmi 0.198341
children 0.067998
region -0.050226
sex -0.057292
Name: charges, dtype: float64
###Markdown
HeatMap Purpose of a seaborn heatmap: seaborn heatmaps are grid heatmaps that can take various types of data and visualize them. The primary purpose here is to display the correlation matrix visually. **The lighter the color, the higher the value and the stronger the correlation between the variables.**
###Code
sns.heatmap(health_data.corr(), annot=True)
###Output
_____no_output_____
###Markdown
Training a Linear Regression Model Let's now begin to train our regression model! We will need to first split up our data into an X array that contains the features to train on, and a y array with the target variable.
###Code
dataX,dataY=health_data[['age','bmi','sex','region','children','smoker']], health_data['charges_log']
# dataX,dataY=health_data[['age','bmi','sex','region','children','smoker']], health_data['charges']
###Output
_____no_output_____
###Markdown
Train Test Split Now let's split the data into a training set and a testing set. We will train our model on the training set and then use the test set to evaluate the model.
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(dataX, dataY, test_size=0.3, random_state=42)
###Output
_____no_output_____
###Markdown
Performance Evaluation sklearn.metrics lets you implement scores, losses, and utility functions for evaluating model performance. **Here are the key steps involved:** load the data; split it into a train set and a test set; build the training model; make predictions or forecasts on the test data; evaluate the machine learning model with a particular metric. Here are three common evaluation metrics for regression problems: **Mean Absolute Error (MAE)** is the mean of the absolute values of the errors. **Mean Squared Error (MSE)** is the mean of the squared errors. **Root Mean Squared Error (RMSE)** is the square root of the mean of the squared errors. Comparing these metrics: **MAE** is the easiest to understand, because it is the average error. **MSE** is more popular than MAE, because MSE "punishes" larger errors, which tends to be useful in the real world. **RMSE** is even more popular than MSE, because RMSE is interpretable in the units of "y". All of these are loss functions, because we want to minimize them. Cross Validation: **cross-validation** is a statistical method used to estimate the performance (or accuracy) of machine learning models. It is used to protect against overfitting in a predictive model, particularly when the amount of data is limited. In **cross-validation**, you make a fixed number of folds (or partitions) of the data, run the analysis on each fold, and then average the overall error estimate.
###Code
from sklearn import metrics
from sklearn.model_selection import cross_val_score
def cross_val(model):
pred = cross_val_score(model, dataX, dataY, cv=10)
return pred.mean()
def print_evaluate(true, predicted):
mae = metrics.mean_absolute_error(true, predicted)
mse = metrics.mean_squared_error(true, predicted)
rmse = np.sqrt(metrics.mean_squared_error(true, predicted))
r2_square = metrics.r2_score(true, predicted)
print('MAE:', mae)
print('MSE:', mse)
print('RMSE:', rmse)
print('R2 Square', r2_square)
print('__________________________________')
def evaluate(true, predicted):
mae = metrics.mean_absolute_error(true, predicted)
mse = metrics.mean_squared_error(true, predicted)
rmse = np.sqrt(metrics.mean_squared_error(true, predicted))
r2_square = metrics.r2_score(true, predicted)
return mae, mse, rmse, r2_square
###Output
_____no_output_____
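###Markdown
 One detail worth noting (added here as an illustration): `cross_val_score` uses the estimator's default scorer, which for regressors is R-squared. To cross-validate on an error metric such as RMSE instead, a `scoring` argument can be passed, as sketched below.
###Code
# sketch: 10-fold cross-validated RMSE (sklearn reports negative MSE by convention)
neg_mse = cross_val_score(linear_model.LinearRegression(), dataX, dataY, cv=10, scoring='neg_mean_squared_error')
print('CV RMSE:', np.sqrt(-neg_mse).mean())
###Output
_____no_output_____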
###Markdown
Linear Regression
###Code
from sklearn.linear_model import LinearRegression
lin_reg = linear_model.LinearRegression(normalize=True)
lin_reg.fit(X_train,y_train)
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_base.py:145: FutureWarning: 'normalize' was deprecated in version 1.0 and will be removed in 1.2.
If you wish to scale the data, use Pipeline with a StandardScaler in a preprocessing stage. To reproduce the previous behavior:
from sklearn.pipeline import make_pipeline
model = make_pipeline(StandardScaler(with_mean=False), LinearRegression())
If you wish to pass a sample_weight parameter, you need to pass it as a fit parameter to each step of the pipeline as follows:
kwargs = {s[0] + '__sample_weight': sample_weight for s in model.steps}
model.fit(X, y, **kwargs)
FutureWarning,
###Markdown
Model Evaluation Let's evaluate the model by checking out its coefficients and how we can interpret them.
###Code
print("Intercept of Linear fit: ",lin_reg.intercept_)
coeff_df = pd.DataFrame(lin_reg.coef_, dataX.columns, columns=['Coefficient'])
coeff_df
###Output
_____no_output_____
###Markdown
Prediction from our model Let's grab predictions off our test set and see how well it did!
###Code
pred = lin_reg.predict(X_test)
chart=pd.DataFrame({'True Values': y_test, 'Predicted Values': pred})
x,y=chart['True Values'],chart['Predicted Values']
plt.scatter(x, y)
plt.xlabel("Actual Values")
plt.ylabel("Predicted values")
plt.show()
###Output
_____no_output_____
###Markdown
Residual Histogram
###Code
pd.DataFrame({'Error Values': (y_test - pred)}).plot.kde()
###Output
_____no_output_____
###Markdown
Evaluation Results **MAE**: Mean Absolute Error \ **MSE**: Mean Squared Error \ **RMSE**: Root Mean Squared Error
###Code
test_pred = lin_reg.predict(X_test)
train_pred = lin_reg.predict(X_train)
print('Test set evaluation:\n_____________________________________')
print_evaluate(y_test, test_pred)
print('\nTrain set evaluation:\n_____________________________________')
print_evaluate(y_train, train_pred)
###Output
Test set evaluation:
_____________________________________
MAE: 0.2698840894032185
MSE: 0.1836685659747899
RMSE: 0.42856570788478854
R2 Square 0.7805996611518236
__________________________________
Train set evaluation:
_____________________________________
MAE: 0.28638450209193445
MSE: 0.20594453635462198
RMSE: 0.453811124097484
R2 Square 0.7569934784816497
__________________________________
###Markdown
Printing Results
###Code
results_df = pd.DataFrame(data=[["Linear Regression", *evaluate(y_test, test_pred) , cross_val(linear_model.LinearRegression())]],
columns=['Model', 'MAE', 'MSE', 'RMSE', 'R2 Square', "Cross Validation"])
results_df
###Output
_____no_output_____
###Markdown
Accuracy Score
###Code
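# Note: for regressors, .score() returns the R^2 coefficient of determination,
# not a classification-style accuracy; "accuracy" below should be read as R^2 (in %).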
acc= lin_reg.score(X_train,y_train)*100
print("The model accuracy (Linear Regression): ",acc)
print("Weights: ",lin_reg.coef_)
print("Intercepts: ",lin_reg.intercept_)
wt=list(lin_reg.coef_)
print(wt)
wt=[lin_reg.intercept_]+wt
print(wt)
wt=np.array([wt])
print(wt)
print(wt.shape)
###Output
Weights: [ 0.03464997 0.01116519 0.07261456 -0.01674442 0.09648312 1.54821199]
Intercepts: 6.91378499193158
[0.03464996540481971, 0.011165190465997224, 0.07261456211154073, -0.016744423237586927, 0.0964831171385445, 1.5482119925931843]
[6.91378499193158, 0.03464996540481971, 0.011165190465997224, 0.07261456211154073, -0.016744423237586927, 0.0964831171385445, 1.5482119925931843]
[[ 6.91378499 0.03464997 0.01116519 0.07261456 -0.01674442 0.09648312
1.54821199]]
(1, 7)
###Markdown
Health Insurance prediction The applicant inputs their data and the model predicts the insurance charge.
###Code
age=int(input("Enter age: "))
sex=int(input("Enter sex ('male'->1,'female'->2): "))
bmi=float(input("Enter bmi: "))
children=int(input("Enter no. of children: "))
smoker=int(input("Does the person smoke('yes'->1,'no'->0): "))
region=int(input("Enter region number('northeast'->1,'southeast'->2,'southwest'->3,'northwest'->4): "))
# order must match the columns used to fit the model (age, bmi, sex, region, children, smoker),
# with a leading 1 for the intercept term
var=np.array([1,age,bmi,sex,region,children,smoker])
print(var.shape)
print(var)
print("The health Insurance Charge is:",np.exp(np.matmul(wt,var))[0])
price=np.matmul(wt,var)
###Output
The health Insurance Charge is: 2523.519615573321
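###Markdown
 A minimal alternative sketch (added for illustration, not part of the original notebook): the same prediction can be obtained directly from the fitted model with `lin_reg.predict`, which avoids building the weight vector by hand. The applicant values below are hypothetical. Note that the target was `charges_log = log(charges + 1)`, so strictly the charge is `exp(prediction) - 1`.
###Code
# hypothetical applicant: age, bmi, sex, region, children, smoker
applicant = pd.DataFrame([[30, 28.0, 1, 2, 0, 0]], columns=dataX.columns)
log_pred = lin_reg.predict(applicant)[0]
print("Predicted health insurance charge:", np.exp(log_pred) - 1)
###Output
_____no_output_____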
|
4.2 Pandas for Panel Data .ipynb
|
###Markdown
Pandas for panel data 1 Overview 2 Slicing and reshaping data
###Code
import pandas as pd
# pd.set_option(option, value) controls pandas display options
pd.set_option('display.max_columns', 6) # show at most 6 columns
# format floats with two decimal places
pd.options.display.float_format = '{:,.2f}'.format
realwage = pd.read_csv('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/pandas_panel/realwage.csv')
#realwage.tail()
realwage.head()
realwage = realwage.pivot_table(values='value',
index='Time',
columns=['Country', 'Series', 'Pay period'])
realwage.head()
# to_datetime parses the index into datetime format
realwage.index=pd.to_datetime(realwage.index)
type(realwage.index)
type(realwage.columns)
realwage.columns.names
# select the data for the United States
realwage['United States'].head()
# .stack() moves the innermost column level to the rows; .unstack() moves the innermost row level to the columns
realwage.stack().head()
# specify which level to stack
realwage.stack(level='Country').head()
realwage['2015'].stack(level=(1,2)).head() # move the 2nd and 3rd column levels to rows
# select one year and specific levels, then transpose
realwage['2015'].stack(level=(1,2)).transpose().head() # levels 2 and 3 moved to rows, then transposed
# hourly minimum wage by country and year, in 2015 constant prices at 2015 USD exchange rates; .xs selects rows/columns by label
realwage_f = realwage.xs(('Hourly', 'In 2015 constant prices at 2015 USD exchange rates'),
level=('Pay period', 'Series'), axis=1)
realwage_f.head()
###Output
_____no_output_____
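###Markdown
 To make the stack/unstack behaviour above more concrete, here is a tiny self-contained example (added for illustration; the numbers are made up) on a small frame with a two-level column index.
###Code
# small demo DataFrame with a MultiIndex on the columns
cols = pd.MultiIndex.from_product([['US', 'UK'], ['Hourly', 'Annual']], names=['Country', 'Pay period'])
demo = pd.DataFrame([[7.25, 15080, 8.0, 16640], [7.25, 15080, 8.2, 17056]],
                    index=['2015', '2016'], columns=cols)
print(demo)
print(demo.stack())                 # innermost column level ('Pay period') becomes a row level
print(demo.stack(level='Country'))  # stack a named level instead
###Output
_____no_output_____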
###Markdown
3 Merging dataframes and filling NaNs
###Code
worlddata = pd.read_csv('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/pandas_panel/countries.csv', sep=';')
worlddata.head()
# keep only the country and continent columns
worlddata=worlddata[['Country (en)','Continent']]
worlddata=worlddata.rename(columns={'Country (en)':'Country'})
worlddata.head()
# merge the worlddata and realwage_f DataFrames
realwage_f.transpose().head()
# merge has four forms: left join, right join, outer join and inner join; the default is an inner join
merged = pd.merge(realwage_f.transpose(), worlddata,
how='left', left_index=True, right_on='Country')
merged.head()
# check for missing values with .isnull()
merged[merged['Continent'].isnull()]
# create a dictionary mapping the missing countries to their continents
missing_continents = {'Korea': 'Asia',
'Russian Federation': 'Europe',
'Slovak Republic': 'Europe'}
merged['Country'].map(missing_continents)
# .fillna() fills in the missing values
merged['Continent'] = merged['Continent'].fillna(merged['Country'].map(missing_continents))
# Check for whether continents were correctly mapped
merged[merged['Country'] == 'Korea']
# group the countries of the Americas together
replace = ['Central America', 'North America', 'South America']
for country in replace:
merged['Continent'].replace(to_replace=country,
value='America',
inplace=True)
#.sort_index()
merged = merged.set_index(['Continent', 'Country']).sort_index()
merged.head()
merged.columns
#to_datetime()
merged.columns = pd.to_datetime(merged.columns)
merged.columns = merged.columns.rename('Time')
merged.columns
merged = merged.transpose()
merged.head()
###Output
_____no_output_____
###Markdown
4 Grouping and summarizing data
###Code
# column means with .mean()
merged.mean().head(10)
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('seaborn')
# inline plotting
%matplotlib inline
merged.mean().sort_values(ascending=False).plot(kind='bar', title="Average real minimum wage 2006 - 2016")
#Set country labels
country_labels = merged.mean().sort_values(ascending=False).index.get_level_values('Country').tolist()
plt.xticks(range(0, len(country_labels)), country_labels)
plt.xlabel('Country')
plt.show()
merged.mean(axis=1).head()
merged.mean(axis=1).plot()
plt.title('Average real minimum wage 2006 - 2016')
plt.ylabel('2015 USD')
plt.xlabel('Year')
plt.show()
merged.mean(level='Continent', axis=1).head()
merged.mean(level='Continent', axis=1).plot()
plt.title('Average real minimum wage')
plt.ylabel('2015 USD')
plt.xlabel('Year')
plt.show()
merged = merged.drop('Australia', level='Continent', axis=1)
merged.mean(level='Continent', axis=1).plot()
plt.title('Average real minimum wage')
plt.ylabel('2015 USD')
plt.xlabel('Year')
plt.show()
merged
merged.stack().head()
merged.stack().describe()
# groupby returns a GroupBy object
grouped = merged.groupby(level='Continent', axis=1)
grouped
# .size() returns a Series with the size of each group
grouped.size()
import seaborn as sns
continents = grouped.groups.keys()
# sns.kdeplot performs kernel density estimation; shade=True fills the area under the curve
for continent in continents:
sns.kdeplot(grouped.get_group(continent)['2015'].unstack(), label=continent, shade=True)
plt.title('Real minimum wages in 2015')
plt.xlabel('US dollars')
plt.show()
grouped
###Output
_____no_output_____
###Markdown
Exercises
###Code
employ = pd.read_csv('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/pandas_panel/employ.csv')
employ = employ.pivot_table(values='Value',
index=['DATE'],
columns=['UNIT','AGE', 'SEX', 'INDIC_EM', 'GEO'])
employ.index = pd.to_datetime(employ.index) # ensure that dates are datetime format
employ.head()
employ.columns.names
# .unique() removes duplicates and returns the distinct values
for name in employ.columns.names:
print(name, employ.columns.get_level_values(name).unique())
# swaplevel reorders the column index levels; .sort_index() sorts along an axis
employ.columns = employ.columns.swaplevel(0,-1)
employ = employ.sort_index(axis=1)
# .tolist() converts the values to a plain Python list
geo_list = employ.columns.get_level_values('GEO').unique().tolist()
countries = [x for x in geo_list if not x.startswith('Euro')]
employ = employ[countries]
employ.columns.get_level_values('GEO').unique()
employ_f = employ.xs(('Percentage of total population', 'Active population'),
level=('UNIT', 'INDIC_EM'),
axis=1)
employ_f.head()
employ_f = employ_f.drop('Total', level='SEX', axis=1)
box = employ_f['2015'].unstack().reset_index()
# box plot; showfliers controls whether outliers are shown, x="AGE" sets the grouping variable
sns.boxplot(x="AGE", y=0, hue="SEX", data=box, palette=("husl"), showfliers=False)
plt.xlabel('')
plt.xticks(rotation=35)
plt.ylabel('Percentage of population (%)')
plt.title('Employment in Europe (2015)')
# bbox_to_anchor sets the legend position: the first value is horizontal, the second vertical
plt.legend(bbox_to_anchor=(1,0.5))
plt.show()
###Output
_____no_output_____
|
Entrega 3/ejercicios.ipynb
|
###Markdown
Advanced Methods in Statistics. Assignment 3: Regression. José Antonio Álvarez Ocete
###Code
options(warn=-1)
shhh <- suppressPackageStartupMessages # It's a library, so shhh!
shhh(library(glmnet))
shhh(library(tidyverse))
library(gapminder)
library(comprehenr)
library(ggplot2)
library(dplyr)
library(ggpubr)
shhh(library(KernSmooth))
theme_set(theme_bw())
options(warn=0)
###Output
_____no_output_____
###Markdown
Exercise 1. The data in the file *Datos-geyser.txt* correspond to the day of the observation (first column), the time measured in minutes (second column, $Y$) and the time until the next eruption (third column, $X$) of the *Old Faithful* geyser in Yellowstone National Park (USA). **a)** Plot the data together with the Nadaraya-Watson estimator of the regression function of $Y$ on $X$. **b)** Plot the data together with the locally linear estimator of the regression function of $Y$ on $X$. We will plot both estimators in the same figure in order to better appreciate the differences between them.
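For reference (this formula is added here for clarity and was not part of the original write-up), the Nadaraya-Watson estimator of the regression function at a point $x$ is the kernel-weighted average
$$\hat m_h(x) = \frac{\sum_{i=1}^{n} K\left(\frac{x - X_i}{h}\right) Y_i}{\sum_{i=1}^{n} K\left(\frac{x - X_i}{h}\right)},$$
where $K$ is a kernel and $h$ the bandwidth; the locally linear estimator instead fits a weighted least-squares straight line around each $x$, which typically behaves better near the boundary of the data.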
###Code
geyser_df <- read.table('datos/Datos-geyser.txt', header=TRUE, sep=' ', )
head(geyser_df)
n <- length(geyser_df$X)
loc_lineal <- locpoly(geyser_df$X, geyser_df$Y, degree = 1, gridsize=n,
bandwidth = dpill(geyser_df$X, geyser_df$Y))
ggplot(geyser_df, aes(X, Y)) +
geom_point() +
geom_smooth(formula=y~x, method = 'loess', se = FALSE, span = 0.25, method.args = list(degree=0), aes(col='Nadaraya-Watson')) +
geom_smooth(formula=y~x, method = 'loess', se = FALSE, span = 0.25, method.args = list(degree=1), aes(col='Localmente lineal')) +
scale_colour_manual("", breaks = c('Nadaraya-Watson', 'Localmente lineal'), values = c('red', 'blue')) +
labs(x = 'Duración de erupción (minutos)', y = 'Tiempo entre erupciones') +
theme(legend.justification=c(1,0), legend.position=c(1,0))
###Output
_____no_output_____
###Markdown
Exercise 4. Consider the following multiple linear regression model:$$\begin{equation} \label{eq:model} \tag{1} Y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \beta_3 x_{i3} + \epsilon_i, \quad \epsilon_i \sim N(0, \sigma^2), \quad i\in\{1,\ldots,n\}\end{equation}$$We have $n=20$ observations with which every possible submodel of model \eqref{eq:model} is fitted, obtaining for each of them the following residual sums of squares, SCR (all submodels include an intercept):

| Variables included in the model | Regression coefficients | SCR |
|:-------------------------------:|:-----------------------:|:--------:|
| Intercept only | $\beta_0$ | 42644.00 |
| $x_1$ | $\beta_0$ and $\beta_1$ | 8352.28 |
| $x_2$ | $\beta_0$ and $\beta_2$ | 36253.69 |
| $x_3$ | $\beta_0$ and $\beta_3$ | 36606.19 |
| $x_1$ and $x_2$ | $\beta_0$, $\beta_1$ and $\beta_2$ | 7713.13 |
| $x_1$ and $x_3$ | $\beta_0$, $\beta_1$ and $\beta_3$ | 762.55 |
| $x_2$ and $x_3$ | $\beta_0$, $\beta_2$ and $\beta_3$ | 32700.17 |
| $x_1$, $x_2$ and $x_3$ | $\beta_0$, $\beta_1$, $\beta_2$ and $\beta_3$ | 761.41 |

**a)** Compute the analysis of variance table for model \eqref{eq:model} and test, at level $\alpha = 0.05$, the null hypothesis $H_0: \beta_1 = \beta_2 = \beta_3 = 0$. To carry out this kind of test we use the statistic$$ F = \frac{\mbox{SCE}/p}{\mbox{SCR}/(n-p-1)}$$Since under $H_0$, $F$ follows an $F_{p, n-p-1}$ distribution, the *p-value* for this test is $\mathbb P[F_{p, n-p-1} > F]$. The problem is that we are given neither $\mbox{SCE}$ (the explained sum of squares) nor $\mbox{SCT}$ (the total sum of squares). The key observation is that the $\mbox{SCT}$ of the full model equals the residual sum of squares of the model that uses only the intercept:$$\begin{align} \mbox{SCR}_0 & = \sum_{i=1}^n (\hat y_i - y_i)^2 \\ & = \sum_{i=1}^n (\hat \beta_0 - y_i)^2 \\ & = \sum_{i=1}^n (\bar y - y_i)^2 = \mbox{SCT}\end{align}$$We can then obtain the explained sum of squares from the orthogonality of the residuals with the regressors:$$ \mbox{SCT} = \mbox{SCE} + \mbox{SCR} \rightarrow \mbox{SCE} = \mbox{SCT} - \mbox{SCR} = \mbox{SCR}_0 - \mbox{SCR}$$With this in mind it is straightforward to compute the analysis of variance table for the full model.
###Code
n <- 20
p <- 3
all_scrs <- c(42644, 8352.28, 36253.69, 36606.19, 7713.13, 762.55, 32700.17, 761.41)
scr <- all_scrs[8]
sce <- all_scrs[1] - scr
source <- c("Complete model", "Residuals")
df <- c(p, n - p - 1)
sum_sq <- c(sce, scr)
mean_sq <- sum_sq/df
f <- c(mean_sq[1]/mean_sq[2], NA)
pr_f <- c(pf(f[1], df1=p, df2=n-p-1, lower.tail=FALSE), NA)
variance_table <- data.frame(Source=source, Df=df, Sum_Sq=sum_sq, Mean_Sq=mean_sq, F_value=f, p_value=pr_f)
variance_table
###Output
_____no_output_____
###Markdown
We obtain a *p-value* of $\approx 3.42 \cdot 10^{-14} \ll 0.05$, so we have enough evidence to reject the null hypothesis. **b)** In model $(1)$, test at level $\alpha = 0.05$ the following two null hypotheses:
- $H_0: \beta_2 = 0$
- $H_0: \beta_1 = \beta_3 = 0$

In this case we use the statistic$$ F = \frac{(\mbox{SCR}_0 - \mbox{SCR})/k}{\mbox{SCR}/(n-p-1)}$$where $\mbox{SCR}_0$ is the residual sum of squares of the reduced model, $\mbox{SCR}$ that of the full model, and $k$ is the number of coefficients set to zero under the null hypothesis. Since under $H_0$, $F$ follows an $F_{k, n-p-1}$ distribution, the *p-value* for this test is $\mathbb P[F_{k, n-p-1} > F]$. We write a function that, given the $\mbox{SCR}$ values of the full and reduced models, computes the p-value:
###Code
reduced_model_p_value <- function(total_scr, reduced_scr, n, p, k) {
numerator <- (reduced_scr - total_scr) / k
denominator <- total_scr / (n - p - 1)
pf(numerator/denominator, df1=k, df2=n-p-1, lower.tail=FALSE)
}
###Output
_____no_output_____
###Markdown
**Case $H_0: \beta_2=0$.**
###Code
k <- 1
cat('p-value: ', reduced_model_p_value(all_scrs[8], all_scrs[6], n, p, k))
###Output
p-value: 0.8789337
###Markdown
Since we obtain a *p-value* of $\approx 0.8789 > 0.05$, we do not have enough evidence to reject the null hypothesis. That is, **the variable $x_2$ may not be significant for predicting $y$**. **b) Case $H_0: \beta_1 = \beta_3 = 0$.**
###Code
k <- 2
cat('p-value: ', reduced_model_p_value(all_scrs[8], all_scrs[3], n, p, k))
###Output
p-value: 3.785566e-14
###Markdown
Since we obtain a *p-value* of $\approx 3.79 \cdot 10^{-14} \ll 0.05$, we reject the null hypothesis. That is, **the variables $x_1$ and $x_3$ are significant for predicting $y$**. Exercise 6. Let $Y_1$, $Y_2$ and $Y_3$ be three independent random variables, normally distributed with variance $\sigma^2$. Suppose that $\mu$ is the mean of $Y_1$, $\lambda$ is the mean of $Y_2$ and $\lambda + \mu$ is the mean of $Y_3$, where $\lambda, \mu \in \mathbb R$. **a)** Show that the vector $Y = (Y_1, Y_2, Y_3)'$ satisfies the multiple regression model $Y = X\beta + \epsilon$. To do so, determine the design matrix $X$, the parameter vector $\beta$ and the distribution of the error variables $\epsilon$. We can write the vector of (independent) response variables as follows:$$\begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \\\end{pmatrix} =\underbrace{ \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1\end{pmatrix} }_{X}\underbrace{ \begin{pmatrix} \mu \\ \lambda\end{pmatrix} }_{\beta} +\begin{pmatrix} \epsilon_1 \\ \epsilon_2 \\ \epsilon_3\end{pmatrix} \quad\epsilon_i \sim N(0, \sigma^2)$$which gives the multiple regression model$$ Y \sim N(X\beta, \sigma^2 I)$$ **b)** Compute the maximum likelihood (equivalently, least squares) estimators of $\lambda$ and $\mu$. We know that the least squares estimator can be computed with the expression$$ \hat \beta = (X'X)^{-1}X'Y$$However, the proof of this expression seen in class relied on the design matrix having a column of ones (the model had an intercept $\beta_0$). Is the expression still valid when the model has no intercept? The answer is yes. To obtain it without that assumption we can simply find the value that minimizes the squared error, differentiating and setting the derivative equal to $0$ in the expression$$ L(\sigma, \beta) = \; \parallel Y - X\beta \;\parallel^2_2$$The calculations are not complicated; the full derivation can be found on [Wikipedia](https://en.wikipedia.org/wiki/Least_squaresLinear_least_squares). Let us compute these quantities for our particular case:$$X'X =\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\\end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1\end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 2\end{pmatrix}$$$$(X'X)^{-1} = \frac{1}{3}\begin{pmatrix} 2 & -1 \\ -1 & 2\end{pmatrix}$$$$(X'X)^{-1}X' = \frac{1}{3}\begin{pmatrix} 2 & -1 \\ -1 & 2\end{pmatrix}\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\\end{pmatrix} = \frac{1}{3}\begin{pmatrix} 2 & -1 & 1 \\ -1 & 2 & 1 \\\end{pmatrix}$$$$\hat \beta = (X'X)^{-1}X'Y = \frac{1}{3}\begin{pmatrix} 2 & -1 & 1 \\ -1 & 2 & 1 \\\end{pmatrix}\begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \\\end{pmatrix} = \frac{1}{3}\begin{pmatrix} 2 Y_1 - Y_2 + Y_3 \\ -Y_1 + 2 Y_2 + Y_3\end{pmatrix}$$ **c)** Compute the distribution of the vector $(\hat \lambda, \hat \mu)'$ formed by the estimators obtained in the previous part. Since the vector $Y$ of independent responses satisfies$$ Y \sim N(X\beta, \sigma^2 I)$$and $\hat \beta = (X'X)^{-1}X'Y$, it follows that $\hat \beta$ has a multivariate normal distribution:$$ \hat \beta \sim N( ((X'X)^{-1}X') \cdot X\beta, ((X'X)^{-1}X') \cdot \sigma^2 I \cdot ((X'X)^{-1}X')')$$For the mean:$$ \mathbb E [\hat\beta] = (X'X)^{-1}X' \cdot X\beta = (X'X)^{-1} \cdot (X'X) \beta = \beta$$so the estimator is unbiased.
On the other hand, for the variance we note that $X'X$ is symmetric (and therefore so is its inverse):$$ (X'X)' = X'X'' = X'X$$Hence the variance satisfies$$\begin{align} \text{Var}[\hat\beta] & = ((X'X)^{-1}X') \cdot \sigma^2 I \cdot ((X'X)^{-1}X')' \\ & = \sigma^2 (X'X)^{-1} (X'X) (X'X)^{-1} \\ & = \sigma^2 (X'X)^{-1} \\\end{align}$$We thus obtain the compact form of the distribution of the estimators:$$ \hat \beta \sim N( \beta, \sigma^2(X'X)^{-1})$$ Exercise 10. The *fuel2001* data in the file *combustible.RData* correspond to fuel consumption (and other related variables) in the states of the USA. We want to explain the variable ***FuelC*** in terms of the rest of the information.
###Code
datos <- 'http://verso.mat.uam.es/~joser.berrendero/datos/combustible.RData'
load(url(datos))
head(fuel2001)
###Output
_____no_output_____
###Markdown
**a)** Plot the first two principal components of the standardized data (see the help for *prcomp*). Are these two components enough to explain a high percentage of the variance? First of all, since our final goal is to predict the variable *FuelC*, we remove it from the data set before running PCA. If we kept it, the principal components could not be used to predict that variable, because they would already contain it. If we simply wanted a PCA of the data for, say, clustering, we would keep it. I made this choice because the statement does not say whether it should be removed, but it makes sense for our final application. We use the R function *prcomp* suggested in the statement with the arguments *center* and *scale.* to normalize the variables before computing the analysis. Below we plot the first two principal components:
###Code
fuel2001_without_FuelC <- data.frame(fuel2001)
fuel2001_without_FuelC$FuelC <- NULL
pca <- prcomp(fuel2001_without_FuelC, center=TRUE, scale.=TRUE)
ggplot(as.data.frame(pca$x), aes(PC1, PC2)) +
geom_point(color="steelblue") +
labs(x='PC1', y='PC2') +
theme_minimal()
###Output
_____no_output_____
###Markdown
As the plot shows, the points are fairly spread out. This is the main goal of PCA: to maximize the distance between points (the variance) using a minimal set of variables, each of which is a linear combination of the original ones. We use *summary* to see the numerical results of the PCA.
###Code
summary(pca)
pov <- summary(pca)$importance[2,]
names <- c(paste('PC', 1:6))
proportions_df <- data.frame(PC=names, Proportion_of_variance=pov)
ggplot(proportions_df, aes(x=PC, y=Proportion_of_variance)) +
geom_bar(stat="identity", fill="steelblue") +
theme_minimal()
###Output
_____no_output_____
###Markdown
We thus obtain $71.90\%$ of explained variance using only 2 principal components. If we repeat this analysis including the variable *FuelC* we obtain 7 principal components instead of 6, and the first two explain $75.47\%$ of the variance. **b)** Fit the full model with all the variables. In this full model, test the null hypothesis that the coefficients of the variables **Income**, **MPC** and **Tax** are simultaneously equal to zero. As we saw in Exercise 4, to test this kind of hypothesis we must fit both models (full and reduced) and use the statistic$$ F = \frac{ (\mbox{SCR}_0 - \mbox{SCR}) / k }{ \mbox{SCR} / (n - p - 1) }$$Our *p-value* is$$ \mathbb P [F_{k;n-p-1} \ge F]$$
###Code
complete_model <- lm(FuelC ~ ., data=fuel2001)
summary(complete_model)
reduced_model <- lm(FuelC ~ Drivers + Miles + Pop, data=fuel2001)
summary(reduced_model)
scr_complete <- summary(complete_model)$sigma
scr_reduced <- summary(reduced_model)$sigma
anova(reduced_model, complete_model)
###Output
_____no_output_____
###Markdown
Since we obtain a *p-value* of $0.1557 \gg 0.05$, we do not have enough evidence to reject the null hypothesis. **c)** According to the forward stepwise method and the BIC criterion, which model is optimal?
###Code
iterative_model <- leaps::regsubsets(FuelC ~ ., data=fuel2001, method='forward')
summary(iterative_model)
plot(summary(iterative_model)$bic,
     xlab="Number of variables",
     ylab="BIC", type="l")
###Output
_____no_output_____
###Markdown
The model that minimizes the BIC criterion is the one with three variables. As we can see in the summary, the best three-variable model in the forward-selection procedure is the one that uses only ***Drivers***, ***Miles*** and ***Tax***. **d)** Fit the model using lasso, with the regularization parameter selected by cross-validation.
###Code
set.seed(123)
x <- as.matrix(fuel2001_without_FuelC)
y <- fuel2001$FuelC
lasso_model_cv <- cv.glmnet(x, y, alpha=1) # alpha=1 is lasso
lasso_model_cv
###Output
_____no_output_____
###Markdown
Here we see the values of $\lambda$ obtained by cross-validation, where *Measure* is the default error measure reported by `cv.glmnet`. $\lambda_{min}$ is the value of $\lambda$ that minimizes the mean cross-validated error. $\lambda_{1se}$ is the largest $\lambda$ whose error is within one standard error of that minimum. We will use $\lambda_{min}$, since it minimizes the cross-validated error ($\lambda_{1se}$ applies more shrinkage in exchange for a simpler model). It is worth pointing out that we fixed the random seed: when applying cross-validation we obtained quite variable values for both lambdas, and this way the results are fully reproducible.
###Code
lambda_lasso <- lasso_model_cv$lambda.min
final_lasso_model <- glmnet(x, y, alpha=1, lambda=lambda_lasso) # alpha=1 is lasso
coef(final_lasso_model)
###Output
_____no_output_____
###Markdown
As we can see, fitting lasso this way keeps the variables ***Drivers***, ***Miles***, ***MPC*** and ***Tax***. **e)** Fit the model using **ridge**, with the regularization parameter selected by cross-validation.
###Code
set.seed(123)
ridge_model_cv <- cv.glmnet(x, y, alpha=0) # alpha=0 is ridge
ridge_model_cv
lambda_ridge <- ridge_model_cv$lambda.min # lambda selected by cross-validation for ridge
final_ridge_model <- glmnet(x, y, alpha=0, lambda=lambda_ridge) # alpha=0 is ridge
coef(final_ridge_model)
###Output
_____no_output_____
|
Honors_Theses_Scraping_Tutorial.ipynb
|
###Markdown
Scraping Wellesley College Honors Theses. Notebook created by Marisa Papagelis as part of the Wellesley Data Collective January 2021 Project. In this notebook, we will scrape the [Wellesley College Honors Theses Repository](https://repository.wellesley.edu/collections/thesiscollection?display=grid) for senior theses titles, years, and departments using Selenium, BeautifulSoup, and pandas. The product of this notebook is a JSON file of our data, which we hope will be useful in Wellesley data-focused projects. We also hope this code can be reused in the future to scrape the most recent Senior Honors Theses. Install Packages. First, we need to install Selenium and import the packages we will use in this notebook. We will use Selenium's webdriver to navigate the theses pages, BeautifulSoup to scrape them, pandas (together with NumPy) to build our dataset, and json to save and export it.
###Code
!pip install selenium
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np  # used for np.nan in the helper functions below
import json
###Output
_____no_output_____
###Markdown
Navigate to Honors Theses Repository We use chromedriver to open a chrome browser and navigate to the first theses page we want to scrape.
###Code
driver = webdriver.Chrome(executable_path='Downloads/chromedriver3')
driver.get("https://repository.wellesley.edu/collections/thesiscollection?display=grid")
###Output
_____no_output_____
###Markdown
Create Helper Functions Next, we use BeautifulSoup along with some helper functions to parse through the theses webpage. We inspected the page in our chrome browser to find the class names for the year, department, and title of each thesis on the page. We then used BeautifulSoup's .find function to pull the appropriate content from each thesis.
###Code
soup= BeautifulSoup(driver.page_source, "html.parser")
def getThesisYear(content):
"""A helper function to determine the year of the thesis"""
post_el = content.find("div",
class_="d-inline-block solr-value mods-origininfo-type-displaydate-dateother-ms")
if post_el:
return post_el.text
else:
return np.nan
def getThesisDepartment(content):
"""A helper function to determine the departmnet of the thesis"""
post_el = content.find("div",
class_="d-inline-block solr-value mods-name-corporate-department-namepart-ms")
if post_el:
return post_el.text
else:
return np.nan
def getThesisTitle(content):
"""A helper function to determine the title of the thesis"""
post_el = content.find("div",
class_="d-inline-block solr-value fgs-label-s")
if post_el:
return post_el.text
else:
return np.nan
###Output
_____no_output_____
###Markdown
Scrape the First Page of Theses! Now we loop through each thesis on the given page and collect the year, department, and title. We do this by finding the class containing all of the theses and looping through it. We append these data to a list which we will refer back to after all of the pages are scraped.
###Code
thesis_data = []
for thesis in soup.find_all("div", class_="solr-fields islandora-inline-metadata col-xs-12 col-sm-8 col-md-9"):
try:
thesis_year = getThesisYear(thesis)
thesis_department = getThesisDepartment(thesis)
thesis_title = getThesisTitle(thesis)
thesis_data.append((thesis_year, thesis_department, thesis_title))
except AttributeError:
pass
###Output
_____no_output_____
###Markdown
Scraping Additional Pages In order to scrape the rest of the pages, we need to use our driver to navigate to each page and use our helper functions to scrape the thesis year, department, and title. We append these data to our list to use at the very end of the tutorial. First, we create a list to hold all of our future URLs. Since there are currently 34 pages of theses, and page 1 has a URL without a number, we will have 33 URLs to navigate. We create all 33 new URLs using a loop, and we append these URLs to a list for our scraping.
###Code
URLs = []
page = 0
for article in range(33):
page += 1
URL = "https://repository.wellesley.edu/collections/thesiscollection?page=" + str(page) + "&display=grid"
URLs.append(URL)
###Output
_____no_output_____
###Markdown
Next, we loop through our URLs, navigate the driver to each page, re-parse it, and scrape the appropriate information from each page of Honors Theses.
###Code
for URL in URLs:
    driver.get(URL)  # navigate the browser to the next page of results
    soup = BeautifulSoup(driver.page_source, "html.parser")  # re-parse the newly loaded page
    for thesis in soup.find_all("div", class_="solr-fields islandora-inline-metadata col-xs-12 col-sm-8 col-md-9"):
        try:
            thesis_year = getThesisYear(thesis)
            thesis_department = getThesisDepartment(thesis)
            thesis_title = getThesisTitle(thesis)
            thesis_data.append((thesis_year, thesis_department, thesis_title))
        except AttributeError:
            pass
###Output
_____no_output_____
###Markdown
Create a Data Frame for Results Finally, we create a data frame using the pandas package and save our data to it.
###Code
theses_df = pd.DataFrame(thesis_data, columns=["Year", "Department", "Title"])
theses_df #view data frame
###Output
_____no_output_____
###Markdown
Save to JSON file Now that we have our data, we export the scraped records to a JSON file so they can be used in further exploration.
###Code
json.dump(thesis_data, open('WellesleyHonorsTheses.json', 'w'))
###Output
_____no_output_____
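###Markdown
 As a quick sanity check (added for illustration), the exported file can be read back into a data frame; the column names below mirror the ones used earlier in this notebook.
###Code
# reload the exported records and rebuild the data frame
reloaded = pd.DataFrame(json.load(open('WellesleyHonorsTheses.json')),
                        columns=["Year", "Department", "Title"])
reloaded.head()
###Output
_____no_output_____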
|
docs/notebooks/utils/uploading_brains.ipynb
|
###Markdown
Uploading Brain Images from data in the Octree format.This notebook demonstrates uploading the 2 lowest-resolution brain volumes and a `.swc` segment file.The upload destination could easily be set to a url of a cloud data server such as s3. 1) Define variables. - `source` and `source_segments` are the root directories of the octree-formatted data and swc files. - `p` is the prefix string. `file://` indicates a filepath, while `s3://` or `gc://` indicate URLs. - `dest` and `dest_segments` are the destinations for the uploads (in this case, filepaths). - `num_res` denotes the number of resolutions to upload. The below paths lead to sample data in the NDD Repo. Alter the below path definitions to point to your own local file locations.
###Code
source = (Path().resolve().parents[2] / "data" / "data_octree").as_posix()
dest = (Path().resolve().parents[2] / "data" / "upload").as_uri()
dest_segments = (Path().resolve().parents[2] / "data" / "upload_segments").as_uri()
dest_annotation = (Path().resolve().parents[2] / "data" / "upload_annotation").as_uri()
num_res = 2
###Output
_____no_output_____
###Markdown
2) Upload the image data (.tif) If the upload fails with the error: `timed out on a chunk on layer index 0. moving on to the next step of pipeline`, re-run the `upload_volumes` function with the `continue_upload` parameter, which takes `layer index` (the layer index given in the error message) and `image index` (the last image that uploaded successfully). For example, if the output failed after image 19, then run `upload.upload_volumes(source, dest, num_res, continue_upload = (0, 19))`. Repeat this until all of the upload is complete.
###Code
upload.upload_volumes(source, dest, num_res)
###Output
_____no_output_____
###Markdown
3) Upload the segmentation data (.swc) If uploading a `.swc` file associated with a brain volume, then use upload.py. Otherwise if uploading `swc` files with different name formats, use upload_benchmarking.py
###Code
upload.upload_segments(source, dest_segments, num_res)
###Output
_____no_output_____
###Markdown
Download the data with NeuroglancerSession and generate labels.
###Code
%%capture
sess = session.NeuroglancerSession(url=dest, url_segments=dest_segments, mip=0) # create session object object
img, bounds, vertices = sess.pull_vertex_list(2, range(250, 350), expand=True) # get image containing some data
labels = sess.create_tubes(2, bounds, radius=1) # generate labels via tube segmentation
###Output
_____no_output_____
###Markdown
4) Visualize the data with napari
###Code
with napari.gui_qt():
viewer = napari.Viewer(ndisplay=3)
viewer.add_image(img)
viewer.add_labels(labels)
###Output
_____no_output_____
###Markdown
Uploading Brain Images from data in the Octree format.This notebook demonstrates uploading the 2 lowest-resolution brain volumes, as well as a `.swc` segment file.The upload destination could easily be set to a url of a cloud data server such as s3. 1) Define variables. - `source` and `source_segments` are the root directories of the octree-formatted data and swc files. - `p` is the prefix string. `file://` indicates a filepath, while `s3://` or `gc://` indicate URLs. - `dest` and `dest_segments` are the destinations for the uploads (in this case, filepaths). - `num_res` denotes the number of resolutions to upload.
###Code
source = (Path().resolve().parents[2] / "data" / "data_octree").as_posix()
dest = (Path().resolve().parents[2] / "data" / "upload").as_uri()
dest_segments = (Path().resolve().parents[2] / "data" / "upload_segments").as_uri()
dest_annotation = (Path().resolve().parents[2] / "data" / "upload_annotation").as_uri()
num_res = 2
###Output
_____no_output_____
###Markdown
2) Upload the image data.
###Code
upload.upload_volumes(source, dest, num_res)
###Output
_____no_output_____
###Markdown
3) Upload the segmentation data.
###Code
upload.upload_segments(source, dest_segments, num_res)
###Output
_____no_output_____
###Markdown
Download the data with NeuroglancerSession and generate labels.
###Code
%%capture
sess = session.NeuroglancerSession(url=dest, url_segments=dest_segments, mip=0) # create session object object
img, bounds, vertices = sess.pull_vertex_list(2, range(250, 350), expand=True) # get image containing some data
labels = sess.create_tubes(2, bounds, radius=1) # generate labels via tube segmentation
###Output
_____no_output_____
###Markdown
4) Visualize the data with napari
###Code
with napari.gui_qt():
viewer = napari.Viewer(ndisplay=3)
viewer.add_image(img)
viewer.add_labels(labels)
###Output
_____no_output_____
###Markdown
Uploading Brain Images from data in the Octree format.This notebook demonstrates uploading the 2 lowest-resolution brain volumes and a `.swc` segment file.The upload destination could easily be set to a url of a cloud data server such as s3. 1) Define variables. - `source` and `source_segments` are the root directories of the octree-formatted data and swc files. - `p` is the prefix string. `file://` indicates a filepath, while `s3://` or `gc://` indicate URLs. - `dest` and `dest_segments` are the destinations for the uploads (in this case, filepaths). - `num_res` denotes the number of resolutions to upload. The below paths lead to sample data in the NDD Repo. Alter the below path definitions to point to your own local file locations.
###Code
# source = (Path().resolve().parents[2] / "data" / "data_octree").as_posix()
# dest = (Path().resolve().parents[2] / "data" / "upload").as_uri()
# dest_segments = (Path().resolve().parents[2] / "data" / "upload_segments").as_uri()
# dest_annotation = (Path().resolve().parents[2] / "data" / "upload_annotation").as_uri()
# num_res = 2
###Output
_____no_output_____
###Markdown
2) Upload the image data (.tif) If the upload fails with the error: `timed out on a chunk on layer index 0. moving on to the next step of pipeline`, re-run the `upload_volumes` function with the `continue_upload` parameter, which takes `layer index` (the layer index given in the error message) and `image index` (the last image that uploaded successfully). For example, if the output failed after image 19, then run `upload.upload_volumes(source, dest, num_res, continue_upload = (0, 19))`. Repeat this until all of the upload is complete.
###Code
# upload.upload_volumes(source, dest, num_res)
###Output
Creating precomputed volume at layer index 0: 100%|██████████| 1/1 [00:04<00:00, 4.27s/it]
Creating precomputed volume at layer index 1: 0%| | 0/8 [00:00<?, ?it/s]
Finished layer index 0, took 4.266659259796143 seconds
Creating precomputed volume at layer index 1: 100%|██████████| 8/8 [00:09<00:00, 1.19s/it]
Finished layer index 1, took 9.484457015991211 seconds
###Markdown
3) Upload the segmentation data (.swc) If uploading a `.swc` file associated with a brain volume, then use upload.py. Otherwise if uploading `swc` files with different name formats, use upload_benchmarking.py
###Code
# upload.upload_segments(source, dest_segments, num_res)
###Output
converting swcs to neuroglancer format...: 100%|██████████| 1/1 [00:00<00:00, 43.32it/s]
Uploading: 100%|██████████| 1/1 [00:00<00:00, 194.74it/s]
###Markdown
Download the data with NeuroglancerSession and generate labels.
###Code
# %%capture
# sess = session.NeuroglancerSession(url=dest, url_segments=dest_segments, mip=0) # create session object object
# img, bounds, vertices = sess.pull_vertex_list(2, range(250, 350), expand=True) # get image containing some data
# labels = sess.create_tubes(2, bounds, radius=1) # generate labels via tube segmentation
###Output
_____no_output_____
###Markdown
4) Visualize the data with napari
###Code
# viewer = napari.Viewer(ndisplay=3)
# viewer.add_image(img)
# viewer.add_labels(labels)
# nbscreenshot(viewer)
###Output
_____no_output_____
|
Deep.Learning/3.Convulutional-Networks/7.Dog-Breed-Classifier/dog_app.ipynb
|
###Markdown
Artificial Intelligence Nanodegree. Convolutional Neural Networks. Project: Write an Algorithm for a Dog Identification App
---
In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'(IMPLEMENTATION)'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
The rubric contains _optional_ "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this IPython notebook.
--- Why We're Here
In this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that it most resembles. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!). In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!
The Road Ahead
We break the notebook into separate steps.
Feel free to use the links below to navigate the notebook.
* [Step 0](step0): Import Datasets
* [Step 1](step1): Detect Humans
* [Step 2](step2): Detect Dogs
* [Step 3](step3): Create a CNN to Classify Dog Breeds (from Scratch)
* [Step 4](step4): Use a CNN to Classify Dog Breeds (using Transfer Learning)
* [Step 5](step5): Create a CNN to Classify Dog Breeds (using Transfer Learning)
* [Step 6](step6): Write your Algorithm
* [Step 7](step7): Test Your Algorithm

--- Step 0: Import Datasets
Import Dog Dataset
In the code cell below, we import a dataset of dog images. We populate a few variables through the use of the `load_files` function from the scikit-learn library:
- `train_files`, `valid_files`, `test_files` - numpy arrays containing file paths to images
- `train_targets`, `valid_targets`, `test_targets` - numpy arrays containing onehot-encoded classification labels
- `dog_names` - list of string-valued dog breed names for translating labels
###Code
run_questions = True
force_retrain = False
skip_cells = False
from sklearn.datasets import load_files
from keras.utils import np_utils
import numpy as np
from glob import glob
# define function to load train, test, and validation datasets
def load_dataset(path):
data = load_files(path)
dog_files = np.array(data['filenames'])
dog_targets = np_utils.to_categorical(np.array(data['target']), 133)
return dog_files, dog_targets
# load train, test, and validation datasets
train_files, train_targets = load_dataset('dogImages/train')
valid_files, valid_targets = load_dataset('dogImages/valid')
test_files, test_targets = load_dataset('dogImages/test')
# load list of dog names
dog_names = [item[20:-1] for item in sorted(glob("dogImages/train/*/"))]
# print statistics about the dataset
print('There are %d total dog categories.' % len(dog_names))
print('There are %s total dog images.\n' % len(np.hstack([train_files, valid_files, test_files])))
print('There are %d training dog images.' % len(train_files))
print('There are %d validation dog images.' % len(valid_files))
print('There are %d test dog images.'% len(test_files))
###Output
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
###Markdown
Import Human Dataset. In the code cell below, we import a dataset of human images, where the file paths are stored in the numpy array `human_files`.
###Code
import random
random.seed(8675309)
# load filenames in shuffled human dataset
human_files = np.array(glob("lfw/*/*"))
random.shuffle(human_files)
# print statistics about the dataset
print('There are %d total human images.' % len(human_files))
###Output
There are 13233 total human images.
###Markdown
--- Step 1: Detect Humans. We use OpenCV's implementation of [Haar feature-based cascade classifiers](http://docs.opencv.org/trunk/d7/d8b/tutorial_py_face_detection.html) to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on [github](https://github.com/opencv/opencv/tree/master/data/haarcascades). We have downloaded one of these detectors and stored it in the `haarcascades` directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')
# load color (BGR) image
img = cv2.imread(human_files[3])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find faces in image
faces = face_cascade.detectMultiScale(gray)
# print number of faces detected in the image
print('Number of faces detected:', len(faces))
# get bounding box for each detected face
for (x,y,w,h) in faces:
# add bounding box to color image
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
###Output
Number of faces detected: 1
###Markdown
Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The `detectMultiScale` function executes the classifier stored in `face_cascade` and takes the grayscale image as a parameter. In the above code, `faces` is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as `x` and `y`) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as `w` and `h`) specify the width and height of the box. Write a Human Face DetectorWe can use this procedure to write a function that returns `True` if a human face is detected in an image and `False` otherwise. This function, aptly named `face_detector`, takes a string-valued file path to an image as input and appears in the code block below.
###Code
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
img = cv2.imread(img_path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray)
return len(faces) > 0
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Assess the Human Face Detector__Question 1:__ Use the code cell below to test the performance of the `face_detector` function. - What percentage of the first 100 images in `human_files` have a detected human face? - What percentage of the first 100 images in `dog_files` have a detected human face? Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays `human_files_short` and `dog_files_short`.__Answer:__ * Percentage humans classified as human faces 99.0 %. * Percentage dogs classified as human faces 12.0 %.
###Code
human_files_short = human_files[:100]
dog_files_short = train_files[:100]
# Do NOT modify the code above this line.
## TODO: Test the performance of the face_detector algorithm
## on the images in human_files_short and dog_files_short.
if run_questions:
no_of_humans = 0
no_of_dogs = 0
total = 0
for human, dog in zip(human_files_short,dog_files_short):
if face_detector(human):
no_of_humans += 1
if face_detector(dog):
no_of_dogs += 1
total += 1
print("Percentage humans classified as human faces ", (no_of_humans / total) * 100, "%.")
print("Percentage dogs classified as human faces ", (no_of_dogs / total) * 100, "%.")
###Output
Percentage humans classified as human faces 99.0 %.
Percentage dogs classified as human faces 12.0 %.
###Markdown
__Question 2:__ This algorithmic choice necessitates that we communicate to the user that we accept human images only when they provide a clear view of a face (otherwise, we risk having unnecessarily frustrated users!). In your opinion, is this a reasonable expectation to pose on the user? If not, can you think of a way to detect humans in images that does not necessitate an image with a clearly presented face?__Answer:__ I expect that if you have an algorithm that detects human faces it should detect human faces. It should not be a requirement for the user to supply a clear human face to begin with; rather, it should be taken into account that anything could be used as input. As a developer I always go by the saying _Never trust the client_, and this is very much true here: if you tell people that they are expected to input a human image, at least a percentage of them will throw anything but that at the program.We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this _optional_ task, report performance on each of the datasets.
###Code
## (Optional) TODO: Report the performance of another
## face detection algorithm on the LFW dataset
### Feel free to use as many code cells as needed.
img = cv2.imread(human_files[0])
height, width, channels = img.shape
print(height)
print(width)
print(channels)
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing import image
import numpy as np
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
def path_to_tensor2(img_path):
# loads RGB image as PIL.Image.Image type
img = image.load_img(img_path, target_size=(250, 250))
    # convert PIL.Image.Image type to 3D tensor with shape (250, 250, 3)
x = image.img_to_array(img)
    # convert 3D tensor to 4D tensor with shape (1, 250, 250, 3) and return 4D tensor
return np.expand_dims(x, axis=0)
def paths_to_tensor2(img_paths):
list_of_tensors = [path_to_tensor2(img_path) for img_path in img_paths]
return np.vstack(list_of_tensors)
print(len(human_files))
print(len(train_files))
(x_train_indices, x_test_indices, x_valid_indices) = human_files[:6000],human_files[6000:6300],human_files[6300:6600]
(x_train_indices_d, x_test_indices_d, x_valid_indices_d) = train_files[:6000], train_files[6000:6300], train_files[6300:6600]
lengthT, lengthV, lengthE = len(x_train_indices), len(x_valid_indices), len(x_test_indices)
# stack the human and dog file paths for each split (humans first, then dogs)
x_train_indices = np.append(x_train_indices, x_train_indices_d)
x_valid_indices = np.append(x_valid_indices, x_valid_indices_d)
x_test_indices = np.append(x_test_indices, x_test_indices_d)
y_train = np.ones(shape=[len(x_train_indices), 2])
y_valid = np.ones(shape=[len(x_valid_indices), 2])
y_test = np.ones(shape=[len(x_test_indices), 2])
#y_train = np.full(lengthT, 1)
#y_train = np.append(y_train, np.full(lengthT, 0))
#y_valid = np.full(lenghtV, 1)
#y_valid = np.append(y_valid, np.full(lenghtV, 0))
#from sklearn import preprocessing
#lb = preprocessing.LabelBinarizer(neg_label=0, pos_label=1, sparse_output=False)
#lb.fit(y_train)
#print(lb.classes_)
#y_train = lb.fit_transform(y_train)
#y_valid = lb.fit_transform(y_valid)
y_train[:lengthT,1] = 0
y_train[lengthT:,0] = 0
y_valid[:lengthV,1] = 0
y_valid[lengthV:,0] = 0
y_test[:lengthE,1] = 0
y_test[lengthE:,0] = 0
from sklearn.utils import shuffle
x_train_indices, y_train = shuffle(x_train_indices, y_train, random_state=0)
print(x_train_indices.shape, " == ", y_train.shape)
print(y_train)
print(x_valid_indices.shape, " == ",y_valid.shape)
print(y_valid)
print(x_test_indices.shape, " == ",y_test.shape)
print(y_test)
for i, idx in enumerate(np.random.choice(x_train_indices.shape[0], size=5, replace=False)):
print('train: ', y_train[idx], '=>', x_train_indices[idx])
for i, idx in enumerate(np.random.choice(x_valid_indices.shape[0], size=5, replace=False)):
print('valid: ', y_valid[idx], '=>', x_valid_indices[idx])
for i, idx in enumerate(np.random.choice(x_test_indices.shape[0], size=5, replace=False)):
print('test: ', y_test[idx], '=>', x_test_indices[idx])
x_train = preprocess_input(paths_to_tensor2(x_train_indices))
x_valid = preprocess_input(paths_to_tensor2(x_valid_indices))
x_test = preprocess_input(paths_to_tensor2(x_test_indices))
print(x_train.shape)
print(x_valid.shape)
print(x_test.shape)
print(y_train.shape)
print(y_valid.shape)
print(y_test.shape)
print(x_train[0].shape)
print(x_valid[0].shape)
print(y_train[0].shape)
print(y_valid[0].shape)
if skip_cells:
from keras.models import Model
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, Input, BatchNormalization
input_shape = Input(shape=(250, 250, 3))
x = Conv2D(filters=16, kernel_size=7, padding='same', activation='relu')(input_shape)
x = BatchNormalization()(x)
x = Conv2D(filters=16, kernel_size=7, padding='same', activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(filters=16, kernel_size=7, padding='same', activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
# x = Dropout(0.5)(x)
x = Conv2D(filters=32, kernel_size=5, padding='same', activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(filters=32, kernel_size=5, padding='same', activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(filters=32, kernel_size=5, padding='same', activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
# x = Dropout(0.5)(x)
x = Conv2D(filters=64, kernel_size=3, padding='same', activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(filters=64, kernel_size=3, padding='same', activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(filters=64, kernel_size=3, padding='same', activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
# x = Dropout(0.5)(x)
x = Conv2D(filters=128, kernel_size=2, padding='same', activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(filters=128, kernel_size=2, padding='same', activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(filters=128, kernel_size=2, padding='same', activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
# x = Dropout(0.5)(x)
x = Conv2D(filters=256, kernel_size=2, padding='same', activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(filters=256, kernel_size=2, padding='same', activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(filters=256, kernel_size=2, padding='same', activation='relu')(x)
x = Dropout(0.2)(x)
x = Flatten()(x)
x = Dense(500, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(2, activation='softmax')(x)
old_face_model = Model(inputs=input_shape, outputs=x)
old_face_model.summary()
from keras.models import Model
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, Input, BatchNormalization
input_shape = Input(shape=(250, 250, 3))
x = Conv2D(filters=32, kernel_size=3, padding='same', activation='relu')(input_shape)
x = Conv2D(filters=32, kernel_size=3, activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.25)(x)
x = Conv2D(filters=64, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=64, kernel_size=3, activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.25)(x)
x = Conv2D(filters=64, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=64, kernel_size=3, activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.25)(x)
x = Conv2D(filters=128, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=128, kernel_size=3, activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.25)(x)
x = Conv2D(filters=128, kernel_size=3, padding='same', activation='relu')(x)
x = Conv2D(filters=128, kernel_size=3, activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.25)(x)
x = Conv2D(filters=256, kernel_size=2, padding='same', activation='relu')(x)
x = Conv2D(filters=256, kernel_size=2, activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.25)(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(2, activation='softmax')(x)
face_model = Model(inputs=input_shape, outputs=x)
face_model.summary()
face_model.compile(loss='categorical_crossentropy', optimizer='adam',
metrics=['accuracy'])
face_model_file = 'saved_models/faces.model.weights.best.hdf5'
import os.path
if force_retrain or not os.path.isfile(face_model_file):
from keras.callbacks import ModelCheckpoint
# train the model
checkpointer = ModelCheckpoint(filepath=face_model_file, verbose=1,
save_best_only=True)
hist = face_model.fit(x_train, y_train, batch_size=32, epochs=10,
validation_data=(x_valid, y_valid), callbacks=[checkpointer],
verbose=2, shuffle=True)
# load the weights that yielded the best validation accuracy
face_model.load_weights(face_model_file)
# evaluate and print test accuracy
score = face_model.evaluate(x_test, y_test, verbose=0)
print('\n', 'Test accuracy:', score[1])
def face_detector_own(img_path):
    # apply the same preprocessing that was used for the training tensors
    tensor = preprocess_input(path_to_tensor2(img_path))
    return face_model.predict(tensor)[0,0] > 0.5
for i, idx in enumerate(np.random.choice(x_train_indices.shape[0], size=5, replace=False)):
print('detect: ', x_train_indices[idx], '=>', face_detector_own(x_train_indices[idx]))
###Output
detect: dogImages/train/063.English_springer_spaniel/English_springer_spaniel_04495.jpg => False
detect: lfw/Laura_Linney/Laura_Linney_0004.jpg => True
detect: lfw/Janine_Pietsch/Janine_Pietsch_0001.jpg => True
detect: dogImages/train/050.Chinese_shar-pei/Chinese_shar-pei_03547.jpg => False
detect: lfw/Choi_Sung-hong/Choi_Sung-hong_0005.jpg => True
###Markdown
--- Step 2: Detect DogsIn this section, we use a pre-trained [ResNet-50](http://ethereon.github.io/netscope//gist/db945b393d40bfa26006) model to detect dogs in images. Our first line of code downloads the ResNet-50 model, along with weights that have been trained on [ImageNet](http://www.image-net.org/), a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of [1000 categories](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a). Given an image, this pre-trained ResNet-50 model returns a prediction (derived from the available categories in ImageNet) for the object that is contained in the image.
###Code
from keras.applications.resnet50 import ResNet50
# define ResNet50 model
ResNet50_model = ResNet50(weights='imagenet')
###Output
_____no_output_____
###Markdown
Pre-process the DataWhen using TensorFlow as backend, Keras CNNs require a 4D array (which we'll also refer to as a 4D tensor) as input, with shape$$(\text{nb_samples}, \text{rows}, \text{columns}, \text{channels}),$$where `nb_samples` corresponds to the total number of images (or samples), and `rows`, `columns`, and `channels` correspond to the number of rows, columns, and channels for each image, respectively. The `path_to_tensor` function below takes a string-valued file path to a color image as input and returns a 4D tensor suitable for supplying to a Keras CNN. The function first loads the image and resizes it to a square image that is $224 \times 224$ pixels. Next, the image is converted to an array, which is then resized to a 4D tensor. In this case, since we are working with color images, each image has three channels. Likewise, since we are processing a single image (or sample), the returned tensor will always have shape$$(1, 224, 224, 3).$$The `paths_to_tensor` function takes a numpy array of string-valued image paths as input and returns a 4D tensor with shape $$(\text{nb_samples}, 224, 224, 3).$$Here, `nb_samples` is the number of samples, or number of images, in the supplied array of image paths. It is best to think of `nb_samples` as the number of 3D tensors (where each 3D tensor corresponds to a different image) in your dataset!
###Code
from keras.preprocessing import image
from tqdm import tqdm
def path_to_tensor(img_path):
# loads RGB image as PIL.Image.Image type
img = image.load_img(img_path, target_size=(224, 224))
# convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)
x = image.img_to_array(img)
# convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor
return np.expand_dims(x, axis=0)
def paths_to_tensor(img_paths):
list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]
return np.vstack(list_of_tensors)
###Output
_____no_output_____
###Markdown
Making Predictions with ResNet-50Getting the 4D tensor ready for ResNet-50, and for any other pre-trained model in Keras, requires some additional processing. First, the RGB image is converted to BGR by reordering the channels. All pre-trained models have the additional normalization step that the mean pixel (expressed in BGR channel order as $[103.939, 116.779, 123.68]$ and calculated from all pixels in all images in ImageNet) must be subtracted from every pixel in each image. This is implemented in the imported function `preprocess_input`. If you're curious, you can check the code for `preprocess_input` [here](https://github.com/fchollet/keras/blob/master/keras/applications/imagenet_utils.py).Now that we have a way to format our image for supplying to ResNet-50, we are ready to use the model to extract the predictions. This is accomplished with the `predict` method, which returns an array whose $i$-th entry is the model's predicted probability that the image belongs to the $i$-th ImageNet category. This is implemented in the `ResNet50_predict_labels` function below.By taking the argmax of the predicted probability vector, we obtain an integer corresponding to the model's predicted object class, which we can identify with an object category through the use of this [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a).
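To make that preprocessing concrete, here is a minimal sketch (an approximation for illustration only, not the library source) of the channel reordering and mean subtraction that `preprocess_input` applies, plus how the `decode_predictions` helper imported in the next cell can turn the prediction vector into readable labels:

```python
import numpy as np

def preprocess_input_sketch(x):
    # reorder the channels from RGB to BGR
    x = x[..., ::-1].copy()
    # subtract the ImageNet mean pixel, channel by channel (BGR order)
    x[..., 0] -= 103.939
    x[..., 1] -= 116.779
    x[..., 2] -= 123.68
    return x

# decode_predictions maps the 1000-way output back to human-readable labels, e.g.:
# preds = ResNet50_model.predict(preprocess_input(path_to_tensor(img_path)))
# print(decode_predictions(preds, top=3))
```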
###Code
from keras.applications.resnet50 import preprocess_input, decode_predictions
def ResNet50_predict_labels(img_path):
# returns prediction vector for image located at img_path
img = preprocess_input(path_to_tensor(img_path))
return np.argmax(ResNet50_model.predict(img))
###Output
_____no_output_____
###Markdown
Write a Dog DetectorWhile looking at the [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a), you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from `'Chihuahua'` to `'Mexican hairless'`. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained ResNet-50 model, we need only check if the `ResNet50_predict_labels` function above returns a value between 151 and 268 (inclusive).We use these ideas to complete the `dog_detector` function below, which returns `True` if a dog is detected in an image (and `False` if not).
###Code
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
prediction = ResNet50_predict_labels(img_path)
return ((prediction <= 268) & (prediction >= 151))
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Assess the Dog Detector__Question 3:__ Use the code cell below to test the performance of your `dog_detector` function. - What percentage of the images in `human_files_short` have a detected dog? - What percentage of the images in `dog_files_short` have a detected dog?__Answer:__ * Percentage humans classified as dogs 2.0 %. * Percentage dogs classified as dogs 100.0 %.
###Code
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
if run_questions:
no_of_humans = 0
no_of_dogs = 0
total = 0
for human, dog in zip(human_files_short,dog_files_short):
if dog_detector(human):
no_of_humans += 1
if dog_detector(dog):
no_of_dogs += 1
total += 1
print("Percentage humans classified as dogs ", (no_of_humans / total) * 100, "%.")
print("Percentage dogs classified as dogs ", (no_of_dogs / total) * 100, "%.")
###Output
Percentage humans classified as dogs 2.0 %.
Percentage dogs classified as dogs 100.0 %.
###Markdown
--- Step 3: Create a CNN to Classify Dog Breeds (from Scratch)Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN _from scratch_ (so, you can't use transfer learning _yet_!), and you must attain a test accuracy of at least 1%. In Step 5 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.Be careful with adding too many trainable layers! More parameters means longer training, which means you are more likely to need a GPU to accelerate the training process. Thankfully, Keras provides a handy estimate of the time that each epoch is likely to take; you can extrapolate this estimate to figure out how long it will take for your algorithm to train. We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that *even a human* would have great difficulty in distinguishing between a Brittany and a Welsh Springer Spaniel (side-by-side images of the two breeds omitted here). It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels; side-by-side images omitted). Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed (images of yellow, chocolate, and black Labradors omitted). We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%. Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun! Pre-process the DataWe rescale the images by dividing every pixel in every image by 255.
###Code
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
# pre-process the data for Keras
train_tensors = paths_to_tensor(train_files).astype('float32')/255
valid_tensors = paths_to_tensor(valid_files).astype('float32')/255
test_tensors = paths_to_tensor(test_files).astype('float32')/255
###Output
100%|██████████| 6680/6680 [00:36<00:00, 183.39it/s]
100%|██████████| 835/835 [00:06<00:00, 125.58it/s]
100%|██████████| 836/836 [00:05<00:00, 141.89it/s]
###Markdown
(IMPLEMENTATION) Model ArchitectureCreate a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line: model.summary()We have imported some Python modules to get you started, but feel free to import as many modules as you need. If you end up getting stuck, here's a hint that specifies a model that trains relatively fast on CPU and attains >1% test accuracy in 5 epochs: __Question 4:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. If you chose to use the hinted architecture above, describe why you think that CNN architecture should work well for the image classification task.__Answer:__ I started out with the model that I came up with for the face-recognition classifier above. This comes from me googling around and reading https://www.learnopencv.com/image-classification-using-convolutional-neural-networks-in-keras/ and using that as a baseline. I added layers to increase the number of parameters. 1. First training gave me _Test accuracy: 1.1962%_ after 5 epochs. 1. Training a second time with 10 epochs did nothing for the result, i.e. _Test accuracy: 1.1962%_ 1. Divide all filters by 2 to make a smaller network, 5 epochs: _Test accuracy: 1.1962%_ 1. Multiply all filters by 4 (original times 2) to make a larger network, 5 epochs: stopped during the first iteration as the validation loss increased. 1. Original network, increasing dropout from 0.25 to 0.5 in the hidden layers: _Test accuracy: 1.0766%_ 1. Change batch size to 30: _Test accuracy: 1.1962%_ 1. Test adding one additional filter layer in each of the hidden layers: _Test accuracy: 1.1962%_ 1. Add batch normalization between the added layers: _Test accuracy: 2.5120%_ Finally something is improving. Let's go from here. 1. Change the kernel size to 2 for each layer: _Test accuracy: 4.4258%_ 1. Change the dense layer from 512 to 1024: _Test accuracy: 3.1100%_ 1. Add another dense layer of size 512 in the final layers: _Test accuracy: 2.3923%_ 1. Decrease the fully connected layer to half (512 -> 256): _Test accuracy: 2.1531%_ 1. Back to the single fully connected layer from step 9 and increase epochs from 5 to 10: _Test accuracy: 5.6220%_ 1. Increase the dense layer to 1024 again: _Test accuracy: 3.7081%_ 1. Go back to a dense layer of 512 and train for 20 epochs: _Test accuracy: 5.7416%_ It feels like the model at step 13 is the way to go. I'm pretty happy with the network that I evolved by reading around on the internet and seeing how other people add layers to improve their networks. Training for more than 10 epochs seems to give little back for the time invested, and over 5% accuracy feels like a good first step.
###Code
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Dropout, Flatten, Dense, BatchNormalization
from keras.models import Sequential
input_shape = Input(shape=(224, 224, 3))
x = Conv2D(filters=32, kernel_size=2, padding='same', activation='relu')(input_shape)
x = Conv2D(filters=32, kernel_size=2, activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.25)(x)
x = Conv2D(filters=64, kernel_size=2, padding='same', activation='relu')(x)
x = Conv2D(filters=64, kernel_size=2, activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(filters=64, kernel_size=2, activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.25)(x)
x = Conv2D(filters=64, kernel_size=2, padding='same', activation='relu')(x)
x = Conv2D(filters=64, kernel_size=2, activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(filters=64, kernel_size=2, activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.25)(x)
x = Conv2D(filters=128, kernel_size=2, padding='same', activation='relu')(x)
x = Conv2D(filters=128, kernel_size=2, activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(filters=128, kernel_size=2, activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.25)(x)
x = Conv2D(filters=128, kernel_size=2, padding='same', activation='relu')(x)
x = Conv2D(filters=128, kernel_size=2, activation='relu')(x)
x = BatchNormalization()(x)
x = Conv2D(filters=128, kernel_size=2, activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.25)(x)
x = Conv2D(filters=256, kernel_size=2, padding='same', activation='relu')(x)
x = Conv2D(filters=256, kernel_size=2, activation='relu')(x)
x = MaxPooling2D(pool_size=2)(x)
x = Dropout(0.5)(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(133, activation='softmax')(x)
model = Model(inputs=input_shape, outputs=x)
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
conv2d_13 (Conv2D) (None, 224, 224, 32) 416
_________________________________________________________________
conv2d_14 (Conv2D) (None, 223, 223, 32) 4128
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 111, 111, 32) 0
_________________________________________________________________
dropout_8 (Dropout) (None, 111, 111, 32) 0
_________________________________________________________________
conv2d_15 (Conv2D) (None, 111, 111, 64) 8256
_________________________________________________________________
conv2d_16 (Conv2D) (None, 110, 110, 64) 16448
_________________________________________________________________
batch_normalization_1 (Batch (None, 110, 110, 64) 256
_________________________________________________________________
conv2d_17 (Conv2D) (None, 109, 109, 64) 16448
_________________________________________________________________
max_pooling2d_9 (MaxPooling2 (None, 54, 54, 64) 0
_________________________________________________________________
dropout_9 (Dropout) (None, 54, 54, 64) 0
_________________________________________________________________
conv2d_18 (Conv2D) (None, 54, 54, 64) 16448
_________________________________________________________________
conv2d_19 (Conv2D) (None, 53, 53, 64) 16448
_________________________________________________________________
batch_normalization_2 (Batch (None, 53, 53, 64) 256
_________________________________________________________________
conv2d_20 (Conv2D) (None, 52, 52, 64) 16448
_________________________________________________________________
max_pooling2d_10 (MaxPooling (None, 26, 26, 64) 0
_________________________________________________________________
dropout_10 (Dropout) (None, 26, 26, 64) 0
_________________________________________________________________
conv2d_21 (Conv2D) (None, 26, 26, 128) 32896
_________________________________________________________________
conv2d_22 (Conv2D) (None, 25, 25, 128) 65664
_________________________________________________________________
batch_normalization_3 (Batch (None, 25, 25, 128) 512
_________________________________________________________________
conv2d_23 (Conv2D) (None, 24, 24, 128) 65664
_________________________________________________________________
max_pooling2d_11 (MaxPooling (None, 12, 12, 128) 0
_________________________________________________________________
dropout_11 (Dropout) (None, 12, 12, 128) 0
_________________________________________________________________
conv2d_24 (Conv2D) (None, 12, 12, 128) 65664
_________________________________________________________________
conv2d_25 (Conv2D) (None, 11, 11, 128) 65664
_________________________________________________________________
batch_normalization_4 (Batch (None, 11, 11, 128) 512
_________________________________________________________________
conv2d_26 (Conv2D) (None, 10, 10, 128) 65664
_________________________________________________________________
max_pooling2d_12 (MaxPooling (None, 5, 5, 128) 0
_________________________________________________________________
dropout_12 (Dropout) (None, 5, 5, 128) 0
_________________________________________________________________
conv2d_27 (Conv2D) (None, 5, 5, 256) 131328
_________________________________________________________________
conv2d_28 (Conv2D) (None, 4, 4, 256) 262400
_________________________________________________________________
max_pooling2d_13 (MaxPooling (None, 2, 2, 256) 0
_________________________________________________________________
dropout_13 (Dropout) (None, 2, 2, 256) 0
_________________________________________________________________
flatten_3 (Flatten) (None, 1024) 0
_________________________________________________________________
dense_3 (Dense) (None, 512) 524800
_________________________________________________________________
dropout_14 (Dropout) (None, 512) 0
_________________________________________________________________
dense_4 (Dense) (None, 133) 68229
=================================================================
Total params: 1,444,549
Trainable params: 1,443,781
Non-trainable params: 768
_________________________________________________________________
###Markdown
Compile the Model
###Code
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model_file = 'saved_models/weights.best.from_scratch.hdf5'
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Train the ModelTrain your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.You are welcome to [augment the training data](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html), but this is not a requirement.
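As a hedged sketch of the optional data augmentation mentioned above, one could swap the plain `fit` call in the next cell for Keras' `ImageDataGenerator`; the augmentation parameters, batch size, and epoch count below are illustrative choices, not settings used elsewhere in this notebook:

```python
from keras.preprocessing.image import ImageDataGenerator

# illustrative augmentation settings (assumed values, not tuned here)
datagen = ImageDataGenerator(rotation_range=20,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)

# draw randomly augmented batches from the training tensors instead of the raw arrays
model.fit_generator(datagen.flow(train_tensors, train_targets, batch_size=20),
                    steps_per_epoch=len(train_tensors) // 20,
                    validation_data=(valid_tensors, valid_targets),
                    epochs=5, verbose=1)
```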
###Code
if force_retrain or not os.path.isfile(model_file):
from keras.callbacks import ModelCheckpoint
### TODO: specify the number of epochs that you would like to use to train the model.
epochs = 10
### Do NOT modify the code below this line.
checkpointer = ModelCheckpoint(filepath=model_file,
verbose=1, save_best_only=True)
model.fit(train_tensors, train_targets,
validation_data=(valid_tensors, valid_targets),
epochs=epochs, batch_size=20, callbacks=[checkpointer], verbose=1)
###Output
_____no_output_____
###Markdown
Load the Model with the Best Validation Loss
###Code
model.load_weights(model_file)
###Output
_____no_output_____
###Markdown
Test the ModelTry out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 1%.
###Code
# get index of predicted dog breed for each image in test set
dog_breed_predictions = [np.argmax(model.predict(np.expand_dims(tensor, axis=0))) for tensor in test_tensors]
# report test accuracy
test_accuracy = 100*np.sum(np.array(dog_breed_predictions)==np.argmax(test_targets, axis=1))/len(dog_breed_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
###Output
Test accuracy: 6.4593%
###Markdown
--- Step 4: Use a CNN to Classify Dog BreedsTo reduce training time without sacrificing accuracy, we show you how to train a CNN using transfer learning. In the following step, you will get a chance to use transfer learning to train your own CNN. Obtain Bottleneck Features
###Code
bottleneck_features = np.load('bottleneck_features/DogVGG16Data.npz')
train_VGG16 = bottleneck_features['train']
valid_VGG16 = bottleneck_features['valid']
test_VGG16 = bottleneck_features['test']
###Output
_____no_output_____
###Markdown
Model ArchitectureThe model uses the pre-trained VGG-16 model as a fixed feature extractor, where the last convolutional output of VGG-16 is fed as input to our model. We only add a global average pooling layer and a fully connected layer, where the latter contains one node for each dog category and is equipped with a softmax.
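For context, here is a minimal sketch of how bottleneck features like those in `DogVGG16Data.npz` could be computed directly from the image tensors (an illustration of the idea, not necessarily how the provided file was generated, and memory-hungry at this scale):

```python
from keras.applications.vgg16 import VGG16, preprocess_input as vgg16_preprocess

# include_top=False drops VGG-16's fully connected head, leaving the last
# convolutional block as a fixed feature extractor
vgg16_base = VGG16(weights='imagenet', include_top=False)

# for 224x224 inputs this yields features of shape (nb_samples, 7, 7, 512)
train_VGG16_sketch = vgg16_base.predict(vgg16_preprocess(paths_to_tensor(train_files)), verbose=1)
```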
###Code
VGG16_model = Sequential()
VGG16_model.add(GlobalAveragePooling2D(input_shape=train_VGG16.shape[1:]))
VGG16_model.add(Dense(133, activation='softmax'))
VGG16_model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
global_average_pooling2d_1 ( (None, 512) 0
_________________________________________________________________
dense_5 (Dense) (None, 133) 68229
=================================================================
Total params: 68,229
Trainable params: 68,229
Non-trainable params: 0
_________________________________________________________________
###Markdown
Compile the Model
###Code
VGG16_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
vgg_16_file='saved_models/weights.best.VGG16.hdf5'
###Output
_____no_output_____
###Markdown
Train the Model
###Code
if force_retrain or not os.path.isfile(vgg_16_file):
from keras.callbacks import ModelCheckpoint
checkpointer = ModelCheckpoint(filepath=vgg_16_file,
verbose=1, save_best_only=True)
VGG16_model.fit(train_VGG16, train_targets,
validation_data=(valid_VGG16, valid_targets),
epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)
###Output
_____no_output_____
###Markdown
Load the Model with the Best Validation Loss
###Code
VGG16_model.load_weights(vgg_16_file)
###Output
_____no_output_____
###Markdown
Test the ModelNow, we can use the CNN to test how well it identifies breed within our test dataset of dog images. We print the test accuracy below.
###Code
# get index of predicted dog breed for each image in test set
VGG16_predictions = [np.argmax(VGG16_model.predict(np.expand_dims(feature, axis=0))) for feature in test_VGG16]
# report test accuracy
test_accuracy = 100*np.sum(np.array(VGG16_predictions)==np.argmax(test_targets, axis=1))/len(VGG16_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
###Output
Test accuracy: 47.4880%
###Markdown
Predict Dog Breed with the Model
###Code
from extract_bottleneck_features import *
def VGG16_predict_breed(img_path):
# extract bottleneck features
bottleneck_feature = extract_VGG16(path_to_tensor(img_path))
# obtain predicted vector
predicted_vector = VGG16_model.predict(bottleneck_feature)
# return dog breed that is predicted by the model
return dog_names[np.argmax(predicted_vector)]
###Output
_____no_output_____
###Markdown
--- Step 5: Create a CNN to Classify Dog Breeds (using Transfer Learning)You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.In Step 4, we used transfer learning to create a CNN using VGG-16 bottleneck features. In this section, you must use the bottleneck features from a different pre-trained model. To make things easier for you, we have pre-computed the features for all of the networks that are currently available in Keras:- [VGG-19](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogVGG19Data.npz) bottleneck features- [ResNet-50](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogResnet50Data.npz) bottleneck features- [Inception](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogInceptionV3Data.npz) bottleneck features- [Xception](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogXceptionData.npz) bottleneck featuresThe files are encoded as such: Dog{network}Data.npz where `{network}`, in the above filename, can be one of `VGG19`, `Resnet50`, `InceptionV3`, or `Xception`. Pick one of the above architectures, download the corresponding bottleneck features, and store the downloaded file in the `bottleneck_features/` folder in the repository. (IMPLEMENTATION) Obtain Bottleneck FeaturesIn the code block below, extract the bottleneck features corresponding to the train, test, and validation sets by running the following: bottleneck_features = np.load('bottleneck_features/Dog{network}Data.npz') train_{network} = bottleneck_features['train'] valid_{network} = bottleneck_features['valid'] test_{network} = bottleneck_features['test']
###Code
### TODO: Obtain bottleneck features from another pre-trained CNN.
bottleneck_features = np.load('bottleneck_features/DogVGG19Data.npz')
train_VGG19 = bottleneck_features['train']
valid_VGG19 = bottleneck_features['valid']
test_VGG19 = bottleneck_features['test']
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Model ArchitectureCreate a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line: .summary() __Question 5:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.__Answer:__ I started testing around with the code from Transfer Learning in TensorFlow to see some performance:

```python
input_shape = Input(shape=(train_VGG19.shape[1:]))
xx = Flatten()(input_shape)
xx = Dense(1024, activation='relu', name="fc1")(xx)
xx = Dropout(0.5)(xx)
xx = Dense(133, activation='softmax')(xx)
VGG19_model = Model(inputs=input_shape, outputs=xx)
```

Regardless of the dense layer size it did not seem to do anything for the model. I tried removing and fiddling with the Dropout without success, so I took to Google. I saw a lot of suggestions of the architecture above without improvement, so I tried adding double layers with a batch normalization between fc2 and fc3. Lo and behold, 61%, but after 3 epochs nothing happens with the improvements; still, I am onto something at least.

```python
input_shape = Input(shape=(train_VGG19.shape[1:]))
xx = Flatten()(input_shape)
xx = Dense(1024, activation='relu', name="fc1")(xx)
xx = Dense(1024, activation='relu', name="fc2")(xx)
xx = BatchNormalization()(xx)
xx = Dense(512, activation='relu', name="fc3")(xx)
xx = Dense(512, activation='relu', name="fc4")(xx)
xx = Dense(133, activation='softmax')(xx)
```

 1. Dropping the weights to half (1024 -> 512 and 512 -> 256) did not do much. 1. Dropping fc4 did not do much; the validation loss seems to stay put. As it seems we overtrain the network, let's add some dropout after fc2 and fc4. 1. Still not much. Move the dropout from after fc2 and fc4 to fc1 and fc3. 1. _Test accuracy: 68.6603%_, which says that the dropouts help. I see clearly that the loss is going down but still not much improvement. Let's add an additional double layer with batch normalization of width 256, called fc5,6. 1. Deepening the network did not help; let's go back to the step 4 model and increase dropout from 0.25 to 0.5. 1. _Test accuracy: 63.7560%_. Let's double the number of nodes in fc1-4. 1. _Test accuracy: 67.5837%_. It seems the number of nodes is happy. Let's add dropout after fc4. 1. _Test accuracy: 70.3349%_. Still, it seems that it is chance that drives the performance now; the validation loss does not improve after epoch 4 in any iteration. 1. _Test accuracy: 68.6603%_. Changing dropout back to 0.25 gives a shorter improvement. Gut feeling is that this is better as it can slowly improve with more epochs. Let's try to exchange the BatchNormalization after fc2 for a dropout, to see what that does. 1. _Test accuracy: 68.6603%_. It seems that this is as far as I can go now. Changing the number of nodes doesn't help. Increasing the depth of the network doesn't help. Let's go back to the step 9 model and train for 20 epochs. 1. Result finalizing in _Test accuracy: 69.1388%_. Train the model one additional time to make sure. It took quite a while before I got past the one-layer setup that most examples on the net suggest. I cannot say why this did not work for me, but the feeling is that the number of outputs is so much larger than in the classification examples I have found on the net and in previous lectures. Dropout helped with the training of the model and avoided overfitting. I did not experiment with the batch size during training, but that is something I would like to try out in the future.
###Code
### TODO: Define your architecture.
# 61%
input_shape = Input(shape=(train_VGG19.shape[1:]))
print(train_VGG19.shape[1:])
xx = Flatten()(input_shape)
xx = Dense(2048, activation='relu', name="fc1")(xx)
xx = Dropout(0.5)(xx)
xx = Dense(2048, activation='relu', name="fc2")(xx)
xx = BatchNormalization()(xx)
xx = Dense(1024, activation='relu', name="fc3")(xx)
xx = Dropout(0.5)(xx)
xx = Dense(1024, activation='relu', name="fc4")(xx)
xx = Dropout(0.5)(xx)
xx = Dense(133, activation='softmax')(xx)
VGG19_model = Model(inputs=input_shape, outputs=xx)
VGG19_model.summary()
###Output
(7, 7, 512)
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_4 (InputLayer) (None, 7, 7, 512) 0
_________________________________________________________________
flatten_4 (Flatten) (None, 25088) 0
_________________________________________________________________
fc1 (Dense) (None, 2048) 51382272
_________________________________________________________________
dropout_15 (Dropout) (None, 2048) 0
_________________________________________________________________
fc2 (Dense) (None, 2048) 4196352
_________________________________________________________________
batch_normalization_5 (Batch (None, 2048) 8192
_________________________________________________________________
fc3 (Dense) (None, 1024) 2098176
_________________________________________________________________
dropout_16 (Dropout) (None, 1024) 0
_________________________________________________________________
fc4 (Dense) (None, 1024) 1049600
_________________________________________________________________
dropout_17 (Dropout) (None, 1024) 0
_________________________________________________________________
dense_6 (Dense) (None, 133) 136325
=================================================================
Total params: 58,870,917
Trainable params: 58,866,821
Non-trainable params: 4,096
_________________________________________________________________
###Markdown
(IMPLEMENTATION) Compile the Model
###Code
### TODO: Compile the model.
VGG19_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
vgg_19_file='saved_models/weights.best.VGG19.hdf5'
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Train the ModelTrain your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss. You are welcome to [augment the training data](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html), but this is not a requirement.
###Code
if force_retrain or not os.path.isfile(vgg_19_file):
from keras.callbacks import ModelCheckpoint
### TODO: Train the model.
epochs = 20
checkpointer = ModelCheckpoint(filepath=vgg_19_file,
verbose=1, save_best_only=True)
VGG19_model.fit(train_VGG19, train_targets,
validation_data=(valid_VGG19, valid_targets),
epochs=epochs, batch_size=20, callbacks=[checkpointer], verbose=1)
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Load the Model with the Best Validation Loss
###Code
### TODO: Load the model weights with the best validation loss.
VGG19_model.load_weights(vgg_19_file)
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Test the ModelTry out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 60%.
###Code
### TODO: Calculate classification accuracy on the test dataset.
VGG19_predictions = [np.argmax(VGG19_model.predict(np.expand_dims(feature, axis=0))) for feature in test_VGG19]
# report test accuracy
test_accuracy = 100*np.sum(np.array(VGG19_predictions)==np.argmax(test_targets, axis=1))/len(VGG19_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
###Output
Test accuracy: 70.5742%
###Markdown
(IMPLEMENTATION) Predict Dog Breed with the ModelWrite a function that takes an image path as input and returns the dog breed (`Affenpinscher`, `Afghan_hound`, etc) that is predicted by your model. Similar to the analogous function in Step 5, your function should have three steps:1. Extract the bottleneck features corresponding to the chosen CNN model.2. Supply the bottleneck features as input to the model to return the predicted vector. Note that the argmax of this prediction vector gives the index of the predicted dog breed.3. Use the `dog_names` array defined in Step 0 of this notebook to return the corresponding breed.The functions to extract the bottleneck features can be found in `extract_bottleneck_features.py`, and they have been imported in an earlier code cell. To obtain the bottleneck features corresponding to your chosen CNN architecture, you need to use the function extract_{network} where `{network}`, in the above filename, should be one of `VGG19`, `Resnet50`, `InceptionV3`, or `Xception`.
###Code
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.
from extract_bottleneck_features import *
def VGG19_predict_breed(img_path):
# extract bottleneck features
bottleneck_feature = extract_VGG19(path_to_tensor(img_path))
# obtain predicted vector
predicted_vector = VGG19_model.predict(bottleneck_feature)
# return dog breed that is predicted by the model
return dog_names[np.argmax(predicted_vector)]
for i, idx in enumerate(np.random.choice(x_train_indices.shape[0], size=5, replace=False)):
print(x_train_indices[idx])
print(' predicted to => ', VGG19_predict_breed(x_train_indices[idx]), ' (VGG16 model=', VGG16_predict_breed(x_train_indices[idx]), ')')
###Output
lfw/Lindsay_Benko/Lindsay_Benko_0001.jpg
predicted to => Xoloitzcuintli (VGG16 model= Welsh_springer_spaniel )
dogImages/train/122.Pointer/Pointer_07804.jpg
predicted to => Pointer (VGG16 model= Pointer )
dogImages/train/118.Pembroke_welsh_corgi/Pembroke_welsh_corgi_07641.jpg
predicted to => Pembroke_welsh_corgi (VGG16 model= Canaan_dog )
lfw/James_Brown/James_Brown_0001.jpg
predicted to => Giant_schnauzer (VGG16 model= Leonberger )
lfw/Cherie_Blair/Cherie_Blair_0002.jpg
predicted to => Kerry_blue_terrier (VGG16 model= Poodle )
###Markdown
--- Step 6: Write your AlgorithmWrite an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,- if a __dog__ is detected in the image, return the predicted breed.- if a __human__ is detected in the image, return the resembling dog breed.- if __neither__ is detected in the image, provide output that indicates an error.You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the `face_detector` and `dog_detector` functions developed above. You are __required__ to use your CNN from Step 5 to predict dog breed. Some sample output for our algorithm is provided below, but feel free to design your own user experience! (IMPLEMENTATION) Write your Algorithm
###Code
### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.
def detector(img_path):
if dog_detector(img_path):
return 'Predicted breed: ' + VGG19_predict_breed(img_path)
elif face_detector_own(img_path):
return 'Human resembling breed: ' + VGG19_predict_breed(img_path)
else:
return 'Did not detect a dog or human in the picture.'
###Output
_____no_output_____
###Markdown
--- Step 7: Test Your AlgorithmIn this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that __you__ look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog? (IMPLEMENTATION) Test Your Algorithm on Sample Images!Test your algorithm on at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images. __Question 6:__ Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.__Answer:__ Improvements: * It feels like the networks are larger than they need to be; I should be able to slim them down. * The human face detector worked really badly with my set of random humans, even though both were front facing and upscaled. Happy that the truck was safe. I would probably need to review the images in the human training data set and see how many are front facing or whether all are in perspective. Cocker Spaniel is not that far-fetched from a Golden Retriever, and once again, it is a picture with a child that only shows the dog's head, so it might not match what the network was trained on. I need to evaluate the input here to see how it limits the wide variety of images people will put into the detector. * I would like to experiment with the classifier some more, I'm just not sure how to proceed here. All the "examples" I see have less deep networks than my solution, but the number of classes they predict is also a lot smaller. My base idea was to increase the number of neurons in one layer, but that did not seem to help, so a deeper network helped with that. It did, however, stagnate quite quickly in the training process.
###Code
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.
step7_test = [human_files[index] for index in np.random.choice(human_files.shape[0], size=5, replace=False)]
step7_test = np.append(step7_test, [train_files[index] for index in np.random.choice(train_files.shape[0], size=5, replace=False)])
step7_test = np.append(step7_test, np.array(glob("ownImages/*")))
for step7_test_image in step7_test:
print(step7_test_image)
print(' => ', detector(step7_test_image))
###Output
lfw/Kim_Clijsters/Kim_Clijsters_0010.jpg
=> Human resembling breed:Plott
lfw/Ronaldo_Luis_Nazario_de_Lima/Ronaldo_Luis_Nazario_de_Lima_0004.jpg
=> Human resembling breed:Smooth_fox_terrier
lfw/Brigitte_Boisselier/Brigitte_Boisselier_0001.jpg
=> Human resembling breed:Xoloitzcuintli
lfw/Fidel_Castro/Fidel_Castro_0001.jpg
=> Human resembling breed:German_wirehaired_pointer
lfw/Raul_Rivero/Raul_Rivero_0001.jpg
=> Did not detect a dog or human in the picture.
dogImages/train/057.Dalmatian/Dalmatian_04054.jpg
=> Predicted breed: Dalmatian
dogImages/train/049.Chinese_crested/Chinese_crested_03496.jpg
=> Predicted breed: Chinese_crested
dogImages/train/052.Clumber_spaniel/Clumber_spaniel_03691.jpg
=> Predicted breed: Clumber_spaniel
dogImages/train/039.Bull_terrier/Bull_terrier_02776.jpg
=> Predicted breed: Bull_terrier
dogImages/train/074.Giant_schnauzer/Giant_schnauzer_05115.jpg
=> Predicted breed: Giant_schnauzer
ownImages/golden_retriever_and_human.jpeg
=> Predicted breed: Cocker_spaniel
ownImages/norwegian_elkhound.jpg
=> Predicted breed: Norwegian_elkhound
ownImages/random_human_2.jpeg
=> Did not detect a dog or human in the picture.
ownImages/abandon_truck.jpg
=> Did not detect a dog or human in the picture.
ownImages/random_human_1.jpeg
=> Did not detect a dog or human in the picture.
|
Kaggle/Titanic Machine Learning from Disaster/Titanic machine learning insights from disaster.ipynb
|
###Markdown
Titanic: machine learning insights from disaster This notebook is based on the awesome course series brought by [Dan](https://www.kaggle.com/dansbecker); some of the code and explanations were taken from there. If you like what you see here, I highly recommend checking out these links:* [Permutation Importance](https://www.kaggle.com/dansbecker/permutation-importance)* [Partial Dependence Plots](https://www.kaggle.com/dansbecker/partial-plots)* [SHAP Values](https://www.kaggle.com/dansbecker/shap-values)* [Advanced Uses of SHAP Values](https://www.kaggle.com/dansbecker/advanced-uses-of-shap-values?utm_medium=email&utm_source=mailchimp&utm_campaign=ml4insights) Here I'll try to put into practice some of what I learned from the course on another data set, and hopefully get more people's attention on these very useful techniques for getting more from your machine learning models. Machine learning insights are meant to give you more information about how and what your model is doing; this way you get better interpretability of what is going on, can explain the model and its predictions to other people, and can validate and improve it. What we'll do:* Load data and do some basic data engineering.* Use [XGBoost](https://xgboost.readthedocs.io/en/latest/index.html) to model our data.* Take insights from our model. * Permutation importance. * Partial plots. * SHAP values. * Advanced use of SHAP values. Dependencies
###Code
import re
import numpy as np
import pandas as pd
import xgboost as xgb
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score, precision_score
import eli5
from eli5.sklearn import PermutationImportance
from pdpbox import pdp
import shap
shap.initjs()
###Output
_____no_output_____
###Markdown
Load data
###Code
train = pd.read_csv('../input/train.csv')
test = pd.read_csv('../input/test.csv')
###Output
_____no_output_____
###Markdown
Feature engineering
###Code
def pre_process_data(df):
# Add "title" feature
df['title'] = df.apply(lambda row: re.split('[,.]+', row['Name'])[1], axis=1)
# Add "family" feature
df['family'] = df['SibSp'] + df['Parch'] + 1
# One-hot encode categorical values.
df = pd.get_dummies(df, columns=['Sex', 'Embarked', 'title'])
    # Drop unwanted columns
# I'm dropping "Cabin" because it has too much missing data.
df = df.drop(['PassengerId', 'Name', 'Ticket', 'Cabin'], axis=1)
return df
# Get labels
# labels = train['Survived'].values
labels = train['Survived']
train.drop(['Survived'], axis=1, inplace=True)
# Get test ids
test_ids = test['PassengerId'].values
train = pre_process_data(train)
test = pre_process_data(test)
# align both data sets (by outer join), to make sure they have the same set of features;
# this is required because of the mismatched categorical values in train and test sets.
train, test = train.align(test, join='outer', axis=1)
# replace the NaN values added by align with 0
train.replace(to_replace=np.nan, value=0, inplace=True)
test.replace(to_replace=np.nan, value=0, inplace=True)
###Output
_____no_output_____
###Markdown
In case you wanna take a look at the data
###Code
train.head()
train.describe()
###Output
_____no_output_____
###Markdown
Split data
###Code
X_train, X_val, Y_train, Y_val = train_test_split(train, labels, test_size=0.2, random_state=1)
###Output
_____no_output_____
###Markdown
Model* The model really isn't the focus here, so I won't try to build something super good, but I think it's good to leave clear and organized code anyway. Train
###Code
model = xgb.XGBClassifier(objective ='binary:logistic', colsample_bytree=0.3, learning_rate=0.1,max_depth=8, n_estimators=50)
model.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
Validation
###Code
predictions = model.predict(X_val)
# Basic metrics from the model
accuracy = accuracy_score(predictions, Y_val)
recall = recall_score(predictions, Y_val)
precision = precision_score(predictions, Y_val)
print('Model metrics')
print('Accuracy: %.2f' % accuracy)
print('Recall: %.2f' % recall)
print('Precision: %.2f' % precision)
###Output
Model metrics
Accuracy: 0.79
Recall: 0.85
Precision: 0.60
###Markdown
Now we can finally take some insights from our model. Let's begin with permutation importance; but first, XGBoost already gives us a nice way to visualize feature importance based on F score, so let's take a look.
###Code
plt.rcParams["figure.figsize"] = (15, 6)
xgb.plot_importance(model)
plt.show()
###Output
_____no_output_____
###Markdown
Based on this graph we can see that age and fare were the most important features (older people and people who paid more had a higher chance to survive), and females had a higher chance to survive than males. Now we can try permutation importance.* Permutation importance is a cool technique that gives us another angle for evaluating feature importance.* It's calculated after a model has been fitted.* Basically we ask the following question: If I randomly shuffle a single column of the validation data, leaving the target and all other columns in place, how would that affect the accuracy of predictions in that now-shuffled data?* This way, if a feature has high importance, shuffling it should lower our model metrics; on the other hand, if shuffling doesn't impact the metrics too much, that feature has low importance (it doesn't really matter to the model). Important note here for anyone trying to use eli5's PermutationImportance on XGBoost estimators: currently you need to train your models using ".as_matrix()" (or ".values", as below) on your input data (X and Y), otherwise PermutationImportance won't work.
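Just for intuition, here is a hand-rolled sketch of that shuffling idea (the eli5 `PermutationImportance` used below does this properly, with configurable scoring and variance estimates); the function name and repeat count are illustrative:

```python
def manual_permutation_importance(fitted_model, X, y, n_repeats=3):
    baseline = accuracy_score(y, fitted_model.predict(X.values))
    importances = {}
    for col in X.columns:
        drops = []
        for _ in range(n_repeats):
            X_shuffled = X.copy()
            # shuffle a single column, leaving the target and all other columns in place
            X_shuffled[col] = np.random.permutation(X_shuffled[col].values)
            drops.append(baseline - accuracy_score(y, fitted_model.predict(X_shuffled.values)))
        importances[col] = np.mean(drops)  # bigger accuracy drop => more important feature
    return importances

# e.g.: manual_permutation_importance(model, X_val, Y_val)
```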
###Code
# Fitting the model again so it can work with PermutationImportance.
model_PI = xgb.XGBClassifier(objective ='binary:logistic', colsample_bytree=0.3, learning_rate=0.1,max_depth=8, n_estimators=50)
model_PI.fit(X_train.values, Y_train.values)
perm = PermutationImportance(model_PI, random_state=1).fit(X_train, Y_train)
eli5.show_weights(perm, feature_names=X_val.columns.tolist())
###Output
_____no_output_____
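###Markdown
To make the shuffling idea concrete, here is a minimal hand-rolled sketch of permutation importance on the validation split, assuming the fitted `model`, `X_val` and `Y_val` from above; it should broadly agree with the eli5 ranking, up to shuffle noise.
###Code
# Hand-rolled permutation importance: average drop in accuracy after shuffling one column at a time.
rng = np.random.RandomState(1)
baseline_acc = accuracy_score(Y_val, model.predict(X_val))
perm_scores = {}
for col in X_val.columns:
    drops = []
    for _ in range(5):  # a few shuffles to average out randomness
        X_shuffled = X_val.copy()
        X_shuffled[col] = rng.permutation(X_shuffled[col].values)
        drops.append(baseline_acc - accuracy_score(Y_val, model.predict(X_shuffled)))
    perm_scores[col] = np.mean(drops)
pd.Series(perm_scores).sort_values(ascending=False).head(10)
###Output
_____no_output_____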
###Markdown
The values go from green to red: the greener, the more important the feature (top features); the whiter, the less important the feature (bottom features). The first number in each row shows how much model performance (here using "accuracy" as the metric) decreased with a random shuffling, along with a confidence interval (the variance across the multiple shuffles). Negative values (red) mean that the shuffling actually improved the model performance, so those features don't really matter and the shuffling (fake data) happened to help. In this case the top 2 features (the 2 most important) were the same ones we got from the XGBoost feature importance method, but here "Pclass" seems to be as important as "Age". Another interesting thing is that "Sex_female" and "Sex_male" have the same importance; this may seem strange at first, but remember that they are true/false features and each is the opposite of the other, so this actually makes sense. Next let's do some partial plots.* The aim of partial plots is to show how a feature affects predictions.* They are calculated after a model has been fitted.* To calculate the data for our plot, we could take one row of data, make a prediction on that row, and repeatedly alter the value of one variable to make a series of predictions, then trace out the predicted outcomes (on the vertical axis). For a more robust representation, we repeat this process for multiple rows of data and plot the average predicted outcome.
###Code
feature_names = train.columns.tolist()
pdp_fare = pdp.pdp_isolate(model=model, dataset=X_val, model_features=feature_names, feature='Fare')
pdp.pdp_plot(pdp_fare, 'Fare')
plt.show()
###Output
_____no_output_____
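###Markdown
The same mechanics described above can be sketched by hand: copy the validation rows, overwrite one feature with a grid of values, and average the predicted probabilities at each grid point (a minimal sketch assuming `model` and `X_val` from above).
###Code
# Hand-rolled one-way partial dependence for 'Fare'
grid = np.linspace(X_val['Fare'].min(), X_val['Fare'].max(), num=20)
avg_preds = []
for value in grid:
    X_tmp = X_val.copy()
    X_tmp['Fare'] = value  # force every row to the same fare value
    avg_preds.append(model.predict_proba(X_tmp)[:, 1].mean())  # average predicted survival probability
plt.plot(grid, avg_preds)
plt.xlabel('Fare')
plt.ylabel('Average predicted probability of survival')
plt.show()
###Output
_____no_output_____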
###Markdown
About the plot* The y axis is interpreted as the change in the prediction relative to what would be predicted at the baseline or leftmost value.* A blue shaded area indicates the level of confidence.Here we see some strange behavior in the first few points; this may be due to the small dataset and some noisy data, but if we smooth that line we can see that the higher the fare, the more it contributes to higher predictions. This holds up to around fare = 60; from that point on, increasing the fare seems to have no impact on predictions. In any case the impact seems to be small. Let's try "Pclass".
###Code
pdp_fare = pdp.pdp_isolate(model=model, dataset=X_val, model_features=feature_names, feature='Pclass')
pdp.pdp_plot(pdp_fare, 'Pclass')
plt.show()
###Output
_____no_output_____
###Markdown
Here we have a clearer plot of how this feature affects predictions: as "Pclass" increases, the predicted value decreases. This makes sense, because the higher classes had people with more money, who probably had priority leaving the ship compared to people from the lower classes. 2D Partial Dependence Plots* We can also use partial plots to evaluate interactions between features and the label, with 2D dependence plots.
###Code
features_to_plot1 = ['family', 'Pclass']
pdp_inter = pdp.pdp_interact(model=model, dataset=X_val, model_features=feature_names, features=features_to_plot1)
pdp.pdp_interact_plot(pdp_interact_out=pdp_inter, feature_names=features_to_plot1, plot_type='contour')
plt.show()
###Output
_____no_output_____
###Markdown
Here we have a couple of plots of the interaction between some of our features; on the side of each plot there is a scale of the predicted value. The first one shows how "Pclass" and "family" interact in the prediction: as we can see, people with smaller families (at most 5 people) and from a higher class (1st class) had a higher chance to survive, while people who had more than 9 family members and were from 3rd class had the lowest chance.
###Code
features_to_plot2 = ['Sex_female', 'Pclass']
pdp_inter = pdp.pdp_interact(model=model, dataset=X_val, model_features=feature_names, features=features_to_plot2)
pdp.pdp_interact_plot(pdp_interact_out=pdp_inter, feature_names=features_to_plot2, plot_type='contour')
plt.show()
###Output
_____no_output_____
###Markdown
The second plot shows the interaction between "Pclass" and "Sex_female". This one is pretty clear as well: the people with the highest chance to survive were female (Sex_female = 1) and from 1st class (Pclass = 1), and the ones with the lowest chance were the opposite, males from 3rd class. This kind of plot gives us a good understanding of what our model learned and how features behave according to it; a good idea would be to validate these insights with a domain expert and see if everything matches expectations. SHAP values* SHAP values (an acronym for SHapley Additive exPlanations) break down a prediction to show the impact of each feature.* They can be used to help explain how the model got to one prediction, e.g. a model says a bank shouldn't loan someone money, and the bank is legally required to explain the basis for each loan rejection.* SHAP values interpret the impact of having a certain value for a given feature in comparison to the prediction we'd make if that feature took some baseline value.
###Code
row_to_show = 14
data_for_prediction = X_val.iloc[[row_to_show]]
explainer = shap.TreeExplainer(model)
shap_values_single = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values_single, data_for_prediction)
###Output
_____no_output_____
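###Markdown
The force plot above relies on the additivity of SHAP values: the base (expected) value plus the per-feature contributions reconstructs the model's raw output for that row. Here is a quick check of that, assuming the `explainer`, `shap_values_single` and `data_for_prediction` objects above; note that for XGBoost tree explainers these values are typically in log-odds (margin) space rather than probabilities, and the exact behavior can vary with the shap/xgboost versions.
###Code
# Base value plus the sum of per-feature contributions should match the raw (margin) prediction.
reconstructed = explainer.expected_value + shap_values_single.sum()
raw_margin = model.predict(data_for_prediction, output_margin=True)  # raw log-odds output
print('base value + SHAP contributions:', reconstructed)
print('raw margin prediction:          ', raw_margin)
###Output
_____no_output_____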
###Markdown
Feature values causing increased predictions are in pink, and their visual size shows the magnitude of the feature's effect. Feature values decreasing the prediction are in blue. The features that increased the prediction the most were "Sex_female" and "Sex_male" (the person here was a female); it seems clear that having both "Sex_female" and "Sex_male" is redundant, as they carry the same information. The feature that decreased the prediction the most was "Pclass", which in this case was 3. If you subtract the length of the blue bars from the length of the pink bars, it equals the distance from the base value to the output. Advanced Uses of SHAP Values SHAP summary plots* These plots will give us a more detailed view of feature importance and what is driving it.
###Code
shap_values = explainer.shap_values(X_val)
shap.summary_plot(shap_values, X_val)
###Output
_____no_output_____
###Markdown
* Vertical location shows which feature is being depicted.* Color shows whether that feature was high or low for that row of the dataset.* Horizontal location shows whether the effect of that value caused a higher or lower prediction.For example, pink dots in the "Fare" row mean high fare values; blue dots in the "Sex_female" row mean low values, in this case 0, meaning that point is a male. The pink values in the "title_Master" row mean that the few people who had the title Master had an increased chance to survive.Some insights we can get from this plot:* The model ignored the bottom 5 features.* Binary features only have 2 possible colors.* High values of "Pclass" caused lower predictions, and low values caused higher predictions. SHAP Dependence Contribution Plots* These are reminiscent of the partial dependence plots we did before, but with a lot more detail.
###Code
shap.dependence_plot('Fare', shap_values, X_val, interaction_index="Pclass")
###Output
_____no_output_____
###Markdown
* Each dot represents a row of the data.* The horizontal location is the actual value from the dataset.* The vertical location shows what having that value did to the prediction.* Color shows the values of another feature, to help us see how they interact.Here we can see how "Pclass" and "Fare" interact in affecting the predictions; it's clear that higher fares correspond to lower "Pclass" values (1st class), and in general people who paid more had a better chance to survive.Let's try another one.
###Code
shap.dependence_plot('Pclass', shap_values, X_val, interaction_index="Sex_female")
###Output
_____no_output_____
###Markdown
Here we have something interesting: again, people from higher classes had a better chance to survive, but among 3rd-class passengers females seem to have had a lower chance to survive. What may be the cause? And finally, because this is a kernel from a competition, let's make our predictions and output the results.
###Code
test_predictions = model.predict(test)
submission = pd.DataFrame({"PassengerId":test_ids})
submission["Survived"] = test_predictions
submission.to_csv("submission.csv", index=False)
###Output
_____no_output_____
|
DL_P1_first-neural-network/.ipynb_checkpoints/Your_first_neural_network-checkpoint.ipynb
|
###Markdown
Your first neural network. In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but the implementation of the neural network (most of it) is left to you. After submitting this project, feel free to explore the data and the model further.
###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load and prepare the data. A critical step in building a neural network is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the right weights. We've provided the code to load and prepare the data below; you'll learn more about it soon!
###Code
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
###Output
_____no_output_____
###Markdown
A brief look at the data. This dataset contains the hourly count of bike riders for every day from January 1, 2011 to December 31, 2012. Riders are split into casual and registered users, and the cnt column is the total rider count. You can see the first few rows of the data above. The plot below shows the number of riders over roughly the first 10 days of the dataset (some days don't have exactly 24 entries, so it isn't exactly 10 days); you can see the hourly rentals there. This data is complicated! Ridership is lower on weekends, and weekday commuting hours are the peak periods. The data above also includes temperature, humidity and wind speed, all of which affect the number of riders. Your model will need to capture all of this.
###Code
rides[:24*10].plot(x='dteday', y='cnt')
###Output
_____no_output_____
###Markdown
Dummy variables. Some of the variables below are categorical, such as season, weather and month. To include these in our model we need to create binary dummy variables, which is easy to do with the Pandas function `get_dummies()`.
###Code
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
###Output
_____no_output_____
###Markdown
Scaling the target variables. To make training the network easier, we'll standardize each of the continuous variables, i.e. shift and scale them so that they have zero mean and a standard deviation of 1. We'll save the scaling factors so that we can convert the data back when using the network for predictions.
###Code
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
###Output
_____no_output_____
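###Markdown
As a quick sanity check, the stored mean and standard deviation can be used to convert a standardized column back to its original units (a minimal sketch using the `scaled_features` dictionary built above).
###Code
# Recover the original 'cnt' values from the standardized column
mean, std = scaled_features['cnt']
cnt_original = data['cnt']*std + mean
cnt_original.head()
###Output
_____no_output_____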
###Markdown
Splitting the data into training, testing and validation sets. We'll save roughly the last 21 days of data as a test set, to be used after the network is trained. We'll use this set to make predictions and compare them with the actual number of riders.
###Code
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
###Output
_____no_output_____
###Markdown
We split the remaining data into two sets: one for training and one for validating the network after it has been trained. Since the data has a time-series nature, we train on historical data and then try to predict future data (the validation set).
###Code
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
###Output
_____no_output_____
###Markdown
Build the network. Below you'll build your own network. We've already built the structure and the backward pass; you'll implement the forward pass through the network. You'll also need to set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. The network has two layers: a hidden layer and an output layer. The hidden layer uses the sigmoid function as its activation function. The output layer has a single node and is used for regression: the output of that node is the same as its input, i.e. its activation function is $f(x)=x$. A function that takes an input signal and generates an output signal, taking a threshold into account, is called an activation function. We work through each layer of the network, computing the output of every neuron; all the outputs of one layer become the inputs to the neurons of the next layer. This process is called forward propagation. We use weights to propagate the signal from the input layer to the output layer, and we also use weights to propagate the error from the output layer back through the network so the weights can be updated. This is called backpropagation.> **Hint**: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.You need to complete the following: 1. Implement the sigmoid activation function. Set `self.activation_function` in `__init__` to your sigmoid function. 2. Implement the forward pass in the `train` method. 3. Implement the backpropagation algorithm in the `train` method, including computing the output error. 4. Implement the forward pass in the `run` method.
###Code
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1/(1+np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
# print("X:", X)
# print("y:", y)
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error
# print("error:", error)
# print("output_error_term:", output_error_term)
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# print("hidden_error", hidden_error)
# print("hidden_error_term:", hidden_error_term)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# print("delta_weights_i_h:", delta_weights_i_h)
# print("delta_weights_h_o:", delta_weights_h_o)
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += delta_weights_h_o * self.lr / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += delta_weights_i_h * self.lr / n_records # update input-to-hidden weights with gradient descent step
# print("self.weights_hidden_to_output:", self.weights_hidden_to_output)
# print("self.weights_input_to_hidden:", self.weights_input_to_hidden)
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
###Output
_____no_output_____
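###Markdown
Before running the unit tests, here is a small standalone check (a sketch, not part of the project template) that the sigmoid used above and its derivative $\sigma'(x) = \sigma(x)\,(1-\sigma(x))$, which the hidden-layer error term relies on, behave as expected.
###Code
def sigmoid(x):
    return 1/(1 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s*(1 - s)

x0 = 0.5
eps = 1e-6
numeric = (sigmoid(x0 + eps) - sigmoid(x0 - eps)) / (2*eps)  # finite-difference estimate
print(sigmoid_prime(x0), numeric)  # the two values should agree closely
###Output
_____no_output_____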
###Markdown
Unit tests. Run these unit tests to check that your network implementation is correct. This helps you make sure the network was implemented correctly before you start training it. These tests must pass for the project to be accepted.
###Code
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
###Output
.....
----------------------------------------------------------------------
Ran 5 tests in 0.004s
OK
###Markdown
Training the network. Now you'll set the hyperparameters for the network. The strategy is to choose hyperparameters that give a small error on the training set without overfitting the data. If you train the network too long, or use too many hidden nodes, it may become too specific to the training set and fail to generalize to the validation set; that is, as the training loss keeps decreasing, the validation loss will start to increase. You'll also train the network using stochastic gradient descent (SGD): for each training pass, you grab a random sample of the data instead of the whole dataset. Compared with ordinary gradient descent you need more passes, but each one is faster, so the network trains more efficiently. You'll learn more about SGD later. Choosing the number of iterations. This is the number of batches sampled from the training data during training. The more iterations, the better the model fits the data; however, with too many iterations the model won't generalize well to other data, which is called overfitting. Choose a number that keeps the training loss low while the validation loss stays moderate. When you start to overfit, you'll see the training loss keep dropping while the validation loss starts to rise. Choosing the learning rate. The learning rate scales the size of the weight updates. If it's too large, the weights blow up and the network fails to fit the data; a good starting point is 0.1. If the network has trouble fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the weight-update steps and the longer the network takes to converge. Choosing the number of hidden nodes. The more hidden nodes, the more accurate the model's predictions can be. Try different numbers of hidden nodes and see how it affects performance; you can look at the losses dictionary to gauge network performance. If there are too few hidden units, the model doesn't have enough capacity to learn; if there are too many, there are too many directions the learning can take. The trick is to find the right balance.
###Code
import sys
### Set the hyperparameters here ###
iterations = 2000
learning_rate = 0.72
hidden_nodes = 14
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
###Output
_____no_output_____
###Markdown
Check out your predictions. Use the test data to see how well your network models the data. If it's completely wrong, make sure every step of the network is implemented correctly.
###Code
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
###Output
_____no_output_____
|
Projects/wine_quality_prediction/Wine_Quality_Prediction.ipynb
|
###Markdown
Read the dataset
###Code
#import libraries
import numpy as np
import pandas as pd
#specify the data
data = pd.read_csv('winequality.csv')
data.info()
data.describe()
data.head()
#check for null values
data.isnull().sum()
print(data['fixed acidity'].nunique())
print(data['volatile acidity'].nunique())
print(data['citric acid'].nunique())
print(data['residual sugar'].nunique())
print(data['chlorides'].nunique())
print(data['pH'].nunique())
print(data['sulphates'].nunique())
data.shape
#fill null values
data['fixed acidity'] = data['fixed acidity'].fillna(data['fixed acidity'].mean())
data['volatile acidity'] = data['volatile acidity'].fillna(data['volatile acidity'].mean())
data['citric acid'] = data['citric acid'].fillna(data['citric acid'].mode()[0])
data['residual sugar'] = data['residual sugar'].fillna(data['residual sugar'].mode()[0])
data['chlorides'] = data['chlorides'].fillna(data['chlorides'].mean())
data['pH'] = data['pH'].fillna(data['pH'].mode()[0])
data['sulphates'] = data['sulphates'].fillna(data['sulphates'].mode()[0])
data.isnull().sum()
###Output
_____no_output_____
###Markdown
Data Visualization
###Code
#import necessary libraries
import seaborn as sns
import matplotlib.pyplot as plt
data.hist(figsize=(20,20), bins = 25)
###Output
_____no_output_____
###Markdown
The histograms show that most of the features are right-skewed
###Code
#check the outliers
data.plot(kind = 'box', subplots = True, figsize = (20,20), layout = (4,3))
data.loc[data['volatile acidity']>12,'volatile acidity'] = np.mean(data["volatile acidity"])
data.loc[data['citric acid']>1.1,'citric acid'] = np.mean(data["citric acid"])
data.loc[data['residual sugar']>30,'residual sugar'] = np.mean(data["residual sugar"])
data.loc[data['chlorides']>0.5,'chlorides'] = np.mean(data["chlorides"])
data.loc[data['free sulfur dioxide']>200,'free sulfur dioxide'] = np.mean(data["free sulfur dioxide"])
data.loc[data['total sulfur dioxide']>350,'total sulfur dioxide'] = np.mean(data["total sulfur dioxide"])
data.loc[data['density']>1.02,'density'] = np.mean(data["density"])
#check corealtion
sns.heatmap(data.corr(), annot =True)
sns.set(rc = {'figure.figsize':(20,20)})
###Output
_____no_output_____
###Markdown
Prepare data
###Code
from imblearn.combine import SMOTEENN
#fix categorical data
data = pd.get_dummies(data,drop_first=True)
data
###Output
_____no_output_____
###Markdown
Wines of quality 5 and 6 have different colours; this needs to be fixed
###Code
#let's say, wine of quality 7 is best
data['Best quality'] = [ 1 if x>=7 else 0 for x in data.quality]
data.head()
#set variables for features(X) and target(y)
X = data.drop(['quality','Best quality'], axis =1)
y = data['Best quality']
#data is imbalanced, hence we need to balance our data
sn = SMOTEENN(random_state=0)
sn.fit(X, y)
X, y=sn.fit_resample(X, y)
###Output
_____no_output_____
###Markdown
Model
###Code
#import necessary libraries
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn import svm
#divide the data in training and test set
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.3, random_state =5)
# Fitting Polynomial Regression to the dataset
poly=PolynomialFeatures(degree=2)
lm_poly=LinearRegression()
X_train_poly=poly.fit_transform(X_train)
X_test_poly=poly.fit_transform(X_test)
lm_poly.fit(X_train_poly,y_train)
y_pred_regg = lm_poly.predict(X_test_poly)
# Fitting KNN to the dataset
knn = KNeighborsClassifier(n_neighbors = 6)
knn.fit(X_train, y_train)
y_pred_knn = knn.predict(X_test)
#Fitting Decision Tree Classifier model to the dataset
dtc = DecisionTreeClassifier(criterion='entropy', random_state=6)
dtc.fit(X_train,y_train)
y_pred_dtc = dtc.predict(X_test)
#Fitting Random Forest Classifier model to the dataset
rff = RandomForestClassifier(random_state =10)
rff.fit(X_train, y_train)
y_pred_rff = rff.predict(X_test)
#Fitting svm(Support Vector Machine) model to the dataset
sv = svm.SVR()
sv.fit(X_train, y_train)
y_pred_svm = sv.predict(X_test)
###Output
_____no_output_____
###Markdown
Model Selection
###Code
#import necessary libraries
from sklearn.metrics import r2_score, mean_squared_error
#polynomial regression model
rmse = np.sqrt(mean_squared_error(y_test, y_pred_regg))
print("Root Mean Squared Error of polynomial regression model is :", rmse)
r2 = r2_score(y_test, y_pred_regg)
print("Accuracy (R2 score) of polynomial regression model is :", r2)
#knn model
rmse = np.sqrt(mean_squared_error(y_test, y_pred_knn))
print("Root Mean Squared Error of knn model is :", rmse)
r2 = r2_score(y_test, y_pred_knn)
print("Accuracy (R2 score) of knn model is :", r2)
#decision tree classifier model
rmse = np.sqrt(mean_squared_error(y_test, y_pred_dtc))
print("Root Mean Squared Error of decision tree classifier model is :", rmse)
r2 = r2_score(y_test, y_pred_dtc)
print("Accuracy (R2 score) of decision tree classifier model is :", r2)
#random forest classifier model
rmse = np.sqrt(mean_squared_error(y_test, y_pred_rff))
print("Root Mean Squared Error of random forest classifier model is :", rmse)
r2 = r2_score(y_test, y_pred_rff)
print("Accuracy (R2 score) of random forest classifier model is :", r2)
#svm model
rmse = np.sqrt(mean_squared_error(y_test, y_pred_svm))
print("Root Mean Squared Error of svm model is :", rmse)
r2 = r2_score(y_test, y_pred_svm)
print("Accuracy (R2 score) of svm model is :", r2)
###Output
Root Mean Squared Error of svm model is : 0.5576788686205644
Accuracy (R2 score) of svmr model is : 0.36302259531257
|
module4-classification-metrics/Assign20_LS_DS_224_assignment.ipynb
|
###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 4*--- Classification Metrics Assignment- [ ] If you haven't yet, [review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2), then submit your dataset.- [ ] Plot a confusion matrix for your Tanzania Waterpumps model.- [ ] Continue to participate in our Kaggle challenge. Every student should have made at least one submission that scores at least 70% accuracy (well above the majority class baseline).- [ ] Submit your final predictions to our Kaggle competition. Optionally, go to **My Submissions**, and _"you may select up to 1 submission to be used to count towards your final leaderboard score."_- [ ] Commit your notebook to your fork of the GitHub repo.- [ ] Read [Maximizing Scarce Maintenance Resources with Data: Applying predictive modeling, precision at k, and clustering to optimize impact](http://archive.is/DelgE), by Lambda DS3 student Michael Brady. His blog post extends the Tanzania Waterpumps scenario, far beyond what's in the lecture notebook. Stretch Goals Reading- [Attacking discrimination with smarter machine learning](https://research.google.com/bigpicture/attacking-discrimination-in-ml/), by Google Research, with interactive visualizations. _"A threshold classifier essentially makes a yes/no decision, putting things in one category or another. We look at how these classifiers work, ways they can potentially be unfair, and how you might turn an unfair classifier into a fairer one. As an illustrative example, we focus on loan granting scenarios where a bank may grant or deny a loan based on a single, automatically computed number such as a credit score."_- [Notebook about how to calculate expected value from a confusion matrix by treating it as a cost-benefit matrix](https://github.com/podopie/DAT18NYC/blob/master/classes/13-expected_value_cost_benefit_analysis.ipynb)- [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415) Doing- [ ] Share visualizations in our Slack channel!- [ ] RandomizedSearchCV / GridSearchCV, for model selection. (See module 3 assignment notebook)- [ ] Stacking Ensemble. (See module 3 assignment notebook)- [ ] More Categorical Encoding. (See module 2 assignment notebook)
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
import pandas as pd
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv',
parse_dates=['date_recorded'],
na_values=[0, -2.000000e-08]),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')).set_index('id')
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv',
parse_dates=['date_recorded'],
na_values=[0, -2.000000e-08])
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from category_encoders import OrdinalEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.metrics import accuracy_score, classification_report, plot_confusion_matrix
###Output
_____no_output_____
###Markdown
WRANGLE
###Code
def wrangle(X):
#Make a copy
X = X.copy()
#Create yr_recorded column
X['yr_recorded'] = X['date_recorded'].dt.year
## Feature Engineering
X['pump_age'] = X['yr_recorded'] - X['construction_year']
X['tsh_per_person'] = X['amount_tsh'] / X['population']
X['tsh_by_height'] = X['amount_tsh'] / X['gps_height']
#Column list to remove
columns=['date_recorded', 'recorded_by', 'quality_group', 'extraction_type_group']
X = X.drop(columns=columns)
return X
train = wrangle(train)
test = wrangle(test)
###Output
_____no_output_____
###Markdown
SPLIT
###Code
target = 'status_group'
y = train[target]
X = train.drop(columns=target)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
###Output
_____no_output_____
###Markdown
BASELINE
###Code
print('Baseline Accuracy:', y.value_counts(normalize=True).max())
###Output
Baseline Accuracy: 0.5430899510092763
###Markdown
BUILD MODEL
###Code
rf = make_pipeline(
OrdinalEncoder(),
SimpleImputer(),
RandomForestClassifier(random_state=42,
n_jobs=-1)
)
rf.fit(X_train, y_train);
###Output
_____no_output_____
###Markdown
CHECK METRICS
###Code
print("Training Accuracy:", rf.score(X_train, y_train))
print("Validation Accuracy:", rf.score(X_val, y_val))
plot_confusion_matrix(rf, X_val, y_val, values_format='.0f')
print(classification_report(y_val, rf.predict(X_val)))
###Output
precision recall f1-score support
functional 0.81 0.89 0.85 6482
functional needs repair 0.55 0.34 0.42 842
non functional 0.85 0.79 0.82 4556
accuracy 0.81 11880
macro avg 0.74 0.67 0.70 11880
weighted avg 0.81 0.81 0.81 11880
###Markdown
TUNE
###Code
rf_gs = GridSearchCV(rf,
param_grid={
'randomforestclassifier__n_estimators': range(220, 231, 2),
'randomforestclassifier__max_depth': range(45, 56, 2)
},
cv=5,
n_jobs=-1,
verbose=1)
rf_gs.fit(X_train, y_train)
rf_gs.best_params_
rf_tuned = make_pipeline(
OrdinalEncoder(),
SimpleImputer(),
RandomForestClassifier(n_estimators=222,
max_depth=47,
random_state=42,
n_jobs=-1)
)
rf_tuned.fit(X_train, y_train);
print("Training Accuracy:", rf_tuned.score(X_train, y_train))
print("Validation Accuracy:", rf_tuned.score(X_val, y_val))
###Output
Training Accuracy: 1.0
Validation Accuracy: 0.8132154882154882
|
clase_12_DataWrangling/2_checkpoint.ipynb
|
###Markdown
IntroductionData wrangling is the process of cleaning and unifying messy, complex data sets to make them easier to access, explore, analyze, or model later on.The tasks it involves are* Data cleaning* Removing duplicate records* Data transformation* Discretizing variables* Detecting and filtering outliers* Building dummy variablesPandas provides methods to carry out these tasks, and in this practice we will review some of them. DatasetIn this class we will use a dataset with movie info made available by MovieLens (https://movielens.org/).https://grouplens.org/datasets/movielens/http://files.grouplens.org/datasets/movielens/ml-latest-small.zipThis data set is made up of several files:* **movies**: idPelicula, title, and genre; each record contains the data for one movie* **ratings**: idUsuario, idPelicula, rating, date; each record contains the rating a user gave to a movie* **tags**: idUsuario, idPelicula, tag, date; each record contains the tag a user assigned to a movie Imports
###Code
import pandas as pd
import numpy as np
import re
###Output
_____no_output_____
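###Markdown
As a quick preview of the pandas methods mentioned above, here is a minimal sketch on a toy DataFrame, independent of the MovieLens data: `drop_duplicates` for duplicate records, `cut` for discretizing a variable, and `get_dummies` for dummy variables.
###Code
toy = pd.DataFrame({
    'genre': ['Comedy', 'Drama', 'Comedy', 'Comedy'],
    'rating': [3.5, 4.0, 3.5, 5.0]
})
toy = toy.drop_duplicates()                                    # remove duplicate records
toy['rating_band'] = pd.cut(toy['rating'], bins=[0, 3, 4, 5],
                            labels=['low', 'medium', 'high'])  # discretize a variable
pd.get_dummies(toy, columns=['genre'])                         # build dummy variables
###Output
_____no_output_____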
|
feature_importance_models.ipynb
|
###Markdown
lasso regression
###Code
# Lasso regression
pipe_lasso = Pipeline([('StandardScaler', StandardScaler()), ('LassoRegression', Lasso(random_state=0))])
pipe_lasso.fit(X_reg, y_reg)
features_lasso = X_reg.columns
coefs_lasso = pipe_lasso.named_steps['LassoRegression'].coef_
important_features_lasso = []
for i in range(0, len(features_lasso)):
important_features_lasso.append([features_lasso[i], np.absolute(coefs_lasso[i])])
important_features_lasso = sorted(important_features_lasso, key=lambda x: x[1], reverse=True)
important_features_lasso = pd.DataFrame(important_features_lasso)[0:25]
ax=plt.subplot()
important_features_lasso.plot(x=0, y=1, kind='bar', legend=False, figsize=(14,6), color='salmon', fontsize=14, rot=90, ax=ax)
ax.set_xlabel('Feature', fontsize=20)
ax.set_ylabel('Absolute Value of Coefficient', fontsize=20)
plt.show()
# Hyperparameter tune lasso regression
clfs = {'lr': Lasso(random_state=0)}
pipe_clfs = {}
for name, clf in clfs.items():
pipe_clfs[name] = Pipeline([('StandardScaler', StandardScaler()), ('clf', clf)])
param_grids = {}
param_grid = [{'clf__alpha': [10 ** i for i in range(-5, 1)]}]
param_grids['lr'] = param_grid
best_score_param_estimators = []
for name in pipe_clfs.keys():
gs = GridSearchCV(estimator=pipe_clfs[name],
param_grid=param_grids[name],
scoring='neg_mean_squared_error',
n_jobs=-1,
cv=StratifiedKFold(n_splits=10,
shuffle=True,
random_state=0))
gs = gs.fit(X_reg, y_reg)
best_score_param_estimators.append([gs.best_score_, gs.best_params_, gs.best_estimator_])
best_score_param_estimators = sorted(best_score_param_estimators, key=lambda x : x[0], reverse=True)
for best_score_param_estimator in best_score_param_estimators:
print([best_score_param_estimator[0], best_score_param_estimator[1], type(best_score_param_estimator[2].named_steps['clf'])], end='\n\n')
# Plot feature importance based on hyperparameter tuned lasso regression
pipe_hyp_lasso = Pipeline([('StandardScaler', StandardScaler()), ('LassoRegression', Lasso(alpha=1, random_state=0))])
pipe_hyp_lasso.fit(X_reg, y_reg)
features_hyp_lasso = X_reg.columns
coefs_hyp_lasso = pipe_hyp_lasso.named_steps['LassoRegression'].coef_
important_features_hyp_lasso = []
for i in range(0, len(features_hyp_lasso)):
important_features_hyp_lasso.append([features_hyp_lasso[i], np.absolute(coefs_hyp_lasso[i])])
important_features_hyp_lasso = sorted(important_features_hyp_lasso, key=lambda x: x[1], reverse=True)
important_features_hyp_lasso = pd.DataFrame(important_features_hyp_lasso)[0:25]
ax=plt.subplot()
important_features_hyp_lasso.plot(x=0, y=1, kind='bar', legend=False, figsize=(14,6), color='darkred', fontsize=14, rot=90, ax=ax)
ax.set_xlabel('Feature', fontsize=20)
ax.set_ylabel('Absolute Value of Coefficient', fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
random forest regressor
###Code
# Random forest regressor
pipe_rfr = Pipeline([('StandardScaler', StandardScaler()), ('RandomForestRegressor', RandomForestRegressor(random_state=0))])
pipe_rfr.fit(X_reg, y_reg)
important_features_rfr = pd.Series(pipe_rfr.named_steps['RandomForestRegressor'].feature_importances_, features_reg)
important_features_rfr = important_features_rfr.sort_values(ascending=False)[0:25]
ax=plt.subplot()
important_features_rfr.plot(x='Features', y='Importance', kind='bar', figsize=(14,6), color='darkturquoise', fontsize=14, rot=90, ax=ax)
ax.set_xlabel('Features', fontsize=20)
ax.set_ylabel('Importance', fontsize=20)
plt.show()
# Hyperparameter tune random forest regressor
clfs = {'rfr': RandomForestRegressor(random_state=0)}
pipe_clfs = {}
for name, clf in clfs.items():
pipe_clfs[name] = Pipeline([('StandardScaler', StandardScaler()), ('clf', clf)])
param_grids = {}
param_grid = [{'clf__n_estimators': [2, 10, 30],
'clf__min_samples_split': [2, 10, 30],
'clf__min_samples_leaf': [1, 10, 30]}]
param_grids['rfr'] = param_grid
best_score_param_estimators = []
for name in pipe_clfs.keys():
gs = GridSearchCV(estimator=pipe_clfs[name],
param_grid=param_grids[name],
scoring='neg_mean_squared_error',
n_jobs=-1,
cv=StratifiedKFold(n_splits=10,
shuffle=True,
random_state=0))
gs = gs.fit(X_reg, y_reg)
best_score_param_estimators.append([gs.best_score_, gs.best_params_, gs.best_estimator_])
best_score_param_estimators = sorted(best_score_param_estimators, key=lambda x : x[0], reverse=True)
for best_score_param_estimator in best_score_param_estimators:
print([best_score_param_estimator[0], best_score_param_estimator[1], type(best_score_param_estimator[2].named_steps['clf'])], end='\n\n')
# Plot feature importance based on hyperparameter tuned random forest regressor
pipe_hyp_rfr = Pipeline([('StandardScaler', StandardScaler()),
('RandomForestRegressor', RandomForestRegressor(min_samples_leaf=1,
min_samples_split=2,
n_estimators=10,
random_state=0))])
pipe_hyp_rfr.fit(X_reg, y_reg)
important_features_hyp_rfr = pd.Series(pipe_hyp_rfr.named_steps['RandomForestRegressor'].feature_importances_, features_reg)
important_features_hyp_rfr = important_features_hyp_rfr.sort_values(ascending=False)[0:25]
ax=plt.subplot()
important_features_hyp_rfr.plot(x='Feature', y='Importance', kind='bar', figsize=(14,6), color='navy', fontsize=14, rot=90, ax=ax)
ax.set_xlabel('Features', fontsize=20)
ax.set_ylabel('Importance', fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
random forest classifier
###Code
# Random over sampler
ros = RandomOverSampler(random_state=0)
X_class, y_class = ros.fit_sample(X_class, y_class)
pd.DataFrame(data=y_class, columns=[target])[target].value_counts()
# Random forest classifier
pipe_rfc = Pipeline([('StandardScaler', StandardScaler()), ('RandomForestClassifier', RandomForestClassifier(random_state=0))])
pipe_rfc.fit(X_class, y_class)
important_features_rfc = pd.Series(pipe_rfc.named_steps['RandomForestClassifier'].feature_importances_, features_class)
important_features_rfc = important_features_rfc.sort_values(ascending=False)[0:25]
ax=plt.subplot()
important_features_rfc.plot(x='Feature', y='Importance', kind='bar', figsize=(14,6), color='limegreen', fontsize=14, rot=90, ax=ax)
ax.set_xlabel('Features', fontsize=20)
ax.set_ylabel('Importance', fontsize=20)
plt.show()
# Hyperparameter tune random forest classifier
clfs = {'rfc': RandomForestClassifier(random_state=0)}
pipe_clfs = {}
for name, clf in clfs.items():
pipe_clfs[name] = Pipeline([('StandardScaler', StandardScaler()), ('clf', clf)])
param_grids = {}
param_grid = [{'clf__n_estimators': [2, 10, 30],
'clf__min_samples_split': [2, 10, 30],
'clf__min_samples_leaf': [1, 10, 30]}]
param_grids['rfc'] = param_grid
best_score_param_estimators = []
for name in pipe_clfs.keys():
gs = GridSearchCV(estimator=pipe_clfs[name],
param_grid=param_grids[name],
scoring='accuracy',
n_jobs=-1,
cv=StratifiedKFold(n_splits=10,
shuffle=True,
random_state=0))
gs = gs.fit(X_class, y_class)
best_score_param_estimators.append([gs.best_score_, gs.best_params_, gs.best_estimator_])
best_score_param_estimators = sorted(best_score_param_estimators, key=lambda x : x[0], reverse=True)
for best_score_param_estimator in best_score_param_estimators:
print([best_score_param_estimator[0], best_score_param_estimator[1], type(best_score_param_estimator[2].named_steps['clf'])], end='\n\n')
# Plot feature importance based on hyperparameter tuned random forest classifier
pipe_hyp_rfc = Pipeline([('StandardScaler', StandardScaler()),
('RandomForestClassifier', RandomForestClassifier(min_samples_leaf=1,
min_samples_split=2,
n_estimators=30,
random_state=0))])
pipe_hyp_rfc.fit(X_class, y_class)
important_features_hyp_rfc = pd.Series(pipe_hyp_rfc.named_steps['RandomForestClassifier'].feature_importances_, features_class)
important_features_hyp_rfc = important_features_hyp_rfc.sort_values(ascending=False)[0:25]
ax=plt.subplot()
important_features_hyp_rfc.plot(x='Feature', y='Importance', kind='bar', figsize=(14,6), color='darkgreen', fontsize=14, rot=90, ax=ax)
ax.set_xlabel('Features', fontsize=20)
ax.set_ylabel('Importance', fontsize=20)
plt.show()
###Output
_____no_output_____
|
dev_utils/NIRCam_Darks/NIRCam_darks_485_sim.ipynb
|
###Markdown
Initialize SCA Dark
###Code
datadir='/Users/jarron/NIRCam/Data/CV3_Darks/'
outdir='/Users/jarron/NIRCam/dark_analysis/CV3/'
dark_data = nircam_dark(485, datadir, outdir)
# Dark ramp/slope info
# Get Super dark ramp (cube)
dark_data.get_super_dark_ramp()
# Calculate dark slope image
dark_data.get_dark_slope_image()
dark_data.get_super_bias_update()
# Calculate pixel slope averages
dark_data.get_pixel_slope_averages()
# Delete super dark ramp to save memory
del dark_data._super_dark_ramp
dark_data._super_dark_ramp = None
# Calculate CDS Noise for various component
# white noise, 1/f noise (correlated and independent), temporal and spatial
dark_data.get_cds_dict()
# Effective Noise
dark_data.get_effective_noise()
# Get kTC reset noise, IPC, and PPC values
dark_data.get_ktc_noise()
# Get the power spectrum information
# Saved to pow_spec_dict['freq', 'ps_all', 'ps_corr', 'ps_ucorr']
dark_data.get_power_spectrum(include_oh=False, calc_cds=True, mn_func=np.median, per_pixel=False)
# Calculate IPC/PPC kernels
dark_data.get_ipc(calc_ppc=True)
# Deconvolve the super dark and super bias images
dark_data.deconvolve_supers()
# Get column variations
dark_data.get_column_variations()
# Create dictionary of reference pixel behavior
dark_data.get_ref_pixel_noise()
###Output
[ pynrc:INFO] Determining column variations (RTN)
[ pynrc:INFO] Determining reference pixel behavior
###Markdown
Simulate Ramps
###Code
from pynrc.simul.ngNRC import sim_dark_ramp, sim_image_ramp, sim_noise_data
from pynrc.simul.ngNRC import gen_ramp_biases, gen_col_noise
from pynrc.simul.ngNRC import add_ipc, add_ppc
from pynrc.reduce.calib import broken_pink_powspec, ramp_resample
from pynrc.simul.ngNRC import slope_to_ramps, simulate_detector_ramp
det = pynrc.DetectorOps(detector=485, ngroup=103, nint=1)#, ypix=256, xpix=256, wind_mode='WINDOW')
import datetime
pynrc.setup_logging('WARN')
dir_out = '/Users/jarron/NIRCam/Data/Sim_Darks/485/'
nfiles = 2
for i in trange(nfiles):
now = datetime.datetime.now().isoformat()[:-7]
file_out = dir_out + f'NRCNRCALONG-DARK-485_SE_{now}.fits'
file_out = file_out.replace(':', 'h', 1)
file_out = file_out.replace(':', 'm', 1)
slope_to_ramps(det, dark_data, DMS=False, return_results=False, file_out=file_out)
data = simulate_detector_ramp(det, dark_data, im_slope=None, out_ADU=False)
# Detector setup and info
det = self.det
nchan = det.nout
ny, nx = (det.ypix, det.xpix)
# Super bias and darks
super_bias = self.super_bias_deconv # DN
super_dark = self.super_dark_deconv # DN/sec
# Scan direction info
ssd = self.det.same_scan_direction
rsd = self.det.reverse_scan_direction
# IPC/PPC kernel information
k_ipc = self.kernel_ipc
k_ppc = self.kernel_ppc
# Noise info
cds_dict = self.cds_act_dict
keys = ['spat_det', 'spat_pink_corr', 'spat_pink_uncorr']
cds_vals = [np.sqrt(np.mean(cds_dict[k]**2, axis=0)) for k in keys]
# CDS Noise values
rd_noise_cds, c_pink_cds, u_pink_cds = cds_vals
# Noise per frame
rn, cp, up = cds_vals / np.sqrt(2)
# kTC Reset Noise
ktc_noise = self.ktc_noise
# Detector Gain
gain = det.gain
# Power spectrum for correlated noise
freq = self.pow_spec_dict['freq']
scales = self._pow_spec_dict['ps_corr_scale']
# pcorr_fit = broken_pink_powspec(freq, scales)
# Reference info
ref_info = self.det.ref_info
ref_ratio = np.mean(self.cds_ref_dict['spat_det'] / self.cds_act_dict['spat_det'])
det_test = pynrc.DetectorOps(xpix=100,ypix=100,x0=10,y0=450,wind_mode='WINDOW')
det_test.reverse_scan_direction
det_test.ref_info
ref_dict
plt.plot(test[1])
plt.plot(test[1,1:])
plt.ylim([0,20])
res = np.random.poisson(10, (3,50,50))#.astype('float')
%time res = np.cumsum(res, axis=0, out=res)
plt.plot(data[:,100,100])
data /= gain
col_noise.shape
plt.imshow(data_noise[50,0:100,0:100])
super_bias = self.super_bias_deconv
super_dark = self.super_dark_deconv
k_ipc = self.kernel_ipc
k_ppc = self.kernel_ppc
# Scan direction info
ssd = self.det.same_scan_direction
rsd = self.det.reverse_scan_direction
# Average shape of ramp
ramp_avg_ch = self.dark_ramp_dict['ramp_avg_ch']
gain = self.det.gain
ktc_noise = self.ktc_noise # Units of DN
# Reference info
ref_info = self.det.ref_info
ref_ratio = np.mean(self.cds_ref_dict['spat_det'] / self.cds_act_dict['spat_det'])
from pynrc.simul.ngNRC import gen_col_noise, add_col_noise, gen_ramp_biases
from pynrc.simul.ngNRC import pink_noise, fft_noise, sim_noise_data, gen_dark_ramp, sim_dark_ramp
pbar = tqdm(total=10, leave=False)
# Initialize data with dark current
pbar.set_description("Dark Current")
data = sim_dark_ramp(det, super_dark, gain=gain, ramp_avg_ch=ramp_avg_ch, ref_info=ref_info)
pbar.update(1)
# Add super bias
pbar.set_description("Super Bias")
data += super_bias
pbar.update(1)
# Add kTC noise:
pbar.set_description("kTC Noise")
ktc_offset = np.random.normal(scale=ktc_noise, size=(ny,nx))
data += ktc_offset
pbar.update(1)
# Apply IPC
pbar.set_description("Include IPC")
data = add_ipc(data, kernel=k_ipc)
pbar.update(1)
pbar.set_description("Detector Noise")
data += sim_noise_data(det, rd_noise=rn, u_pink=up, c_pink=cp*1.2,
acn=1, pow_spec_corr=pcorr_fit, ref_ratio=ref_ratio,
same_scan_direction=ssd, reverse_scan_direction=rsd)
pbar.update(1)
# Add reference offsets
pbar.set_description("Ref Pixel Instability")
ref_dict = self._ref_pixel_dict
data += gen_ramp_biases(ref_dict, data_shape=data.shape, ref_border=det.ref_info)
pbar.update(1)
# Add column noise
pbar.set_description("Column Noise")
col_noise = gen_col_noise(self.column_variations, self.column_prob_bad, nz=nz, nx=nx)
data += col_noise
pbar.update(1)
# Apply PPC
pbar.set_description("Include PPC")
data = add_ppc(data, nchans=nchan, kernel=k_ppc, in_place=True,
same_scan_direction=ssd, reverse_scan_direction=rsd)
pbar.update(1)
# Convert to 16-bit int
data[data < 0] = 0
data[data >= 2**16] = 2**16 - 1
data = data.astype('uint16')
# Then back to float
data = data.astype(np.float)
# Ref pixel correction
pbar.set_description("Ref Pixel Correction")
data -= super_bias
data = reffix_hxrg(data, **kw_reffix)
pbar.update(1)
pbar.set_description("Calc Power Spectrum")
ps, _, _ = get_power_spec(data, nchan=nchan, calc_cds=True, kw_powspec=kw_powspec)
pbar.update(1)
ps_arr.append(ps)
pbar.close()
from pynrc import nrc_utils, robust
from pynrc.detops import create_detops
from pynrc.reduce.ref_pixels import reffix_hxrg, channel_smooth_savgol, channel_averaging
from pynrc.nrc_utils import jl_poly_fit, jl_poly, hist_indices
from pynrc.simul.ngNRC import gen_col_noise, add_col_noise, gen_ramp_biases
from pynrc.simul.ngNRC import pink_noise, fft_noise, sim_noise_data, gen_dark_ramp, sim_dark_ramp
from pynrc.simul.ngNRC import add_ipc, add_ppc
from pynrc.reduce.calib import get_ipc_kernel, ipc_deconvolve, ppc_deconvolve
from pynrc.reduce.calib import get_fits_data, gen_super_bias, gen_super_dark
from pynrc.reduce.calib import chisqr_red, ramp_derivative, gen_col_variations
from pynrc.reduce.calib import gen_ref_dict#, get_bias_offsets, get_oddeven_offsets, get_ref_instability
from pynrc.reduce.calib import nircam_dark, plot_dark_histogram
from pynrc.reduce.calib import pow_spec_ramp, fit_corr_powspec, broken_pink_powspec
from pynrc.reduce.calib import get_power_spec, get_freq_array
import os, gzip, json
from copy import deepcopy
from astropy.io import fits
from scipy import ndimage
# Initialize
datadir='/Users/jarron/NIRCam/Data/CV3_Darks/'
outdir='/Users/jarron/NIRCam/dark_analysis/CV3/'
dark_data = nircam_dark(485, datadir, outdir)
###Output
[ pynrc:INFO] Initializing SCA 485/A5
|
class/logistic.ipynb
|
###Markdown
Hello Machine Learning: train a simple logistic regression model to predict tumor vs. normal from gene expression data
###Code
import os
import requests
import pandas as pd
%%time
X = pd.read_hdf("/scratch/tcga_target_gtex.h5", "expression")
X.head()
Y = pd.read_hdf("/scratch/tcga_target_gtex.h5", "labels")
Y.head()
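# (Sketch, not from the original notebook) The promised training step could look like the
# following, assuming Y holds a single label column distinguishing tumor from normal samples;
# the column selection below is illustrative, not the dataset's actual schema.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

labels = Y.iloc[:, 0]  # assumption: first column of Y is the tumor/normal label
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, random_state=42)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))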
###Output
_____no_output_____
|
LinkedIn/Ex_Files_Python_Data_Science_EssT_Pt2/Exercise Files/Ch3_LogisticRegression_Titanic.ipynb
|
###Markdown
From LinkedIn Learning: https://www.linkedin.com/learning/python-for-data-science-essential-training-part-2/logistic-regression-treat-missing-values?u=36492188
###Code
import numpy as np
import pandas as pd
import seaborn as sb
import matplotlib.pyplot as plt
from pandas import Series, DataFrame
from pylab import rcParams
from sklearn import preprocessing
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_predict
from sklearn import metrics # why need this line?
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score, recall_score
%matplotlib inline
rcParams['figure.figsize'] = 5,4
sb.set_style('whitegrid')
###Output
_____no_output_____
###Markdown
Logistic Regression on the Titanic Dataset
###Code
address = "data/titanic-training-data.csv"
titanic_training = pd.read_csv(address)
titanic_training.head()
# don't really need the following, it is already in the csv header file
# titanic_training.columns = ['PassengerId', 'Survived', 'Pclass', 'Name', 'Sex', 'Age', 'SibSp', 'Parch', 'Ticket', 'Fare', 'Cabin', 'Embarked']
titanic_training.columns
titanic_training.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 PassengerId 891 non-null int64
1 Survived 891 non-null int64
2 Pclass 891 non-null int64
3 Name 891 non-null object
4 Sex 891 non-null object
5 Age 714 non-null float64
6 SibSp 891 non-null int64
7 Parch 891 non-null int64
8 Ticket 891 non-null object
9 Fare 891 non-null float64
10 Cabin 204 non-null object
11 Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.7+ KB
###Markdown
VARIABLE DESCRIPTIONS
- Survived - Survival (0 = No; 1 = Yes)
- Pclass - Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
- Name - Name
- Sex - Sex
- Age - Age
- SibSp - Number of Siblings/Spouses Aboard
- Parch - Number of Parents/Children Aboard
- Ticket - Ticket Number
- Fare - Passenger Fare (British pound)
- Cabin - Cabin
- Embarked - Port of Embarkation (C = Cherbourg, France; Q = Queenstown (Cobh), Ireland; S = Southampton, UK)

Checking that your target variable is binary
###Code
# target is the Survived column
sb.countplot(x='Survived', data=titanic_training)
###Output
_____no_output_____
###Markdown
Checking for missing values
###Code
titanic_training.isnull().sum()
# missing values are also visible in the info() output above
# using describe() to check for missing records. Note Age has a count of 714. Only numerical columns are shown, so Embarked does not appear.
titanic_training.describe()
###Output
_____no_output_____
###Markdown
Taking care of missing values: dropping missing values

So let's just go ahead and drop all the variables that aren't relevant for predicting survival. We should at least keep the following:
- Survived - This variable is obviously relevant.
- Pclass - Does a passenger's class on the boat affect their survivability?
- Sex - Could a passenger's gender impact their survival rate?
- Age - Does a person's age impact their survival rate?
- SibSp - Does the number of relatives on the boat (that are siblings or a spouse) affect a person's survivability? Probably.
- Parch - Does the number of relatives on the boat (that are children or parents) affect a person's survivability? Probably.
- Fare - Does the fare a person paid affect their survivability? Maybe - let's keep it.
- Embarked - Does a person's point of embarkation matter? It depends on how the boat was filled... Let's keep it.

What about a person's name, ticket number, and passenger ID number? They're irrelevant for predicting survivability. And as you recall, the Cabin variable is almost all missing values, so we can just drop all of these.
###Code
titanic_data = titanic_training.drop(['Name', 'Ticket', 'Cabin'], axis=1)
titanic_data.head()
###Output
_____no_output_____
###Markdown
Imputing missing values
###Code
# Age has missing data: 891 total rows vs 714 non-null (see count above)
# Look at the Parch and Age relationship
sb.boxplot(x='Parch', y='Age', data=titanic_data)
###Output
_____no_output_____
###Markdown
Here we can see that for younger ages, Parch is 1 or 2: children are likely to have one or more parents onboard. For Parch 4, the ages range from 39 to 50, which indicates parents travelling with their children onboard. We will use this relationship to impute the missing values in the Age column.
###Code
Parch_groups = titanic_data.groupby(titanic_data['Parch'])
Parch_groups.mean()
# going to use data in Parch to impute data for Age
def age_approx(cols):
Age = cols[0]
Parch = cols[1]
if pd.isnull(Age):
if Parch == 0:
return 32 # mean for the Parch group
elif Parch == 1:
return 24
elif Parch == 2:
return 17
elif Parch == 3:
return 33
elif Parch == 4:
return 45
else:
return 30 # mean of the whole ship
else:
return Age
titanic_data['Age'] = titanic_data[['Age', 'Parch']].apply(age_approx, axis=1)
titanic_data.isnull().sum()
# still missing values in Embarked column
# only 2 records, dropping it should not affect much, so drop the 2 records.
titanic_data.dropna(inplace=True)
titanic_data.reset_index(inplace=True, drop=True)
titanic_data.info() # 889 records left
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 889 entries, 0 to 888
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 PassengerId 889 non-null int64
1 Survived 889 non-null int64
2 Pclass 889 non-null int64
3 Sex 889 non-null object
4 Age 889 non-null float64
5 SibSp 889 non-null int64
6 Parch 889 non-null int64
7 Fare 889 non-null float64
8 Embarked 889 non-null object
dtypes: float64(2), int64(5), object(2)
memory usage: 62.6+ KB
###Markdown
Converting categorical variables to dummy indicators
###Code
# reformat Sex, Embark columns
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
gender_cat = titanic_data['Sex']
gender_encoded = label_encoder.fit_transform(gender_cat)
gender_encoded[0:5]
# what is 1 or 0? Check the dataframe
titanic_data.head()
# 1 = male, 0 = female
gender_DF = pd.DataFrame(gender_encoded, columns=['male_gender'])
gender_DF.head()
# now work on Embark
embarked_cat = titanic_data['Embarked']
embarked_encoded = label_encoder.fit_transform(embarked_cat)
embarked_encoded[0:100]
from sklearn.preprocessing import OneHotEncoder
binary_encoder = OneHotEncoder(categories='auto')
embarked_1hot = binary_encoder.fit_transform(embarked_encoded.reshape(-1,1))
embarked_1hot_mat = embarked_1hot.toarray()
embarked_DF = pd.DataFrame(embarked_1hot_mat, columns=['C','Q','S'])
embarked_DF.head()
titanic_data.drop(['Sex','Embarked'], axis=1, inplace=True)
titanic_data.head()
# concatenate the generated dataframes as columns
titanic_dmy = pd.concat([titanic_data, gender_DF, embarked_DF], axis=1, verify_integrity=1)
titanic_dmy[0:5]
###Output
_____no_output_____
###Markdown
Checking for independence between features
###Code
# use a heatmap to check for correlations
sb.heatmap(titanic_dmy.corr());
# if the value is close to 1 or -1, it means there is a strong correlation between the variables
###Output
_____no_output_____
###Markdown
Logistic Regression assumption: independent variables/features are independent of each other.

Look for values close to 1 (light color) or -1 (dark color) in the heatmap. Pclass vs Fare is correlated, so we need to drop one of them.
###Code
# Why drop both? Shouldn't we just drop one of them?
titanic_dmy = titanic_dmy.drop(['Pclass','Fare'], axis=1)
titanic_dmy.head()
###Output
_____no_output_____
###Markdown
Checking that your dataset size is sufficient

A good rule of thumb is that we should have at least 50 records per predictive feature. How many predictive features do we have? Six: C/Q/S count as one, since they are dummy variables for Embarked. Thus, we need at least 50 x 6 = 300 records for this model.
###Code
titanic_dmy.info()
# we have 889 records. Thus we have enough to run the regression.
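# Quick arithmetic check of the rule of thumb: 6 predictive features x 50 records = 300,
# and titanic_dmy has 889 rows, so the dataset size is sufficient.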
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 889 entries, 0 to 888
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 PassengerId 889 non-null int64
1 Survived 889 non-null int64
2 Age 889 non-null float64
3 SibSp 889 non-null int64
4 Parch 889 non-null int64
5 male_gender 889 non-null int32
6 C 889 non-null float64
7 Q 889 non-null float64
8 S 889 non-null float64
dtypes: float64(4), int32(1), int64(4)
memory usage: 59.2 KB
###Markdown
Split the Data into Training and Test for Logistic Regression
###Code
# Split training and test set
X_train, X_test, y_train, y_test = train_test_split(titanic_dmy.drop('Survived',axis=1), titanic_dmy['Survived'], test_size=0.2, random_state=200)
print(X_train.shape)
print(y_train.shape)
# print out to check the data and see the columns are ok
X_train[0:5]
###Output
_____no_output_____
###Markdown
Deploying and evaluating the model
###Code
LogReg = LogisticRegression(solver='liblinear')
LogReg.fit(X_train, y_train)
y_pred = LogReg.predict(X_test)
###Output
_____no_output_____
###Markdown
Model Evaluation Classification report without cross-validation
###Code
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
0 0.83 0.88 0.85 109
1 0.79 0.71 0.75 69
accuracy 0.81 178
macro avg 0.81 0.80 0.80 178
weighted avg 0.81 0.81 0.81 178
###Markdown
We are getting 0.81 (81%) accuracy, not bad.

K-fold cross-validation & confusion matrices
###Code
y_train_pred = cross_val_predict(LogReg, X_train, y_train, cv=5)
confusion_matrix(y_train, y_train_pred)
# 377 and 180 on the diagonal are the correct predictions (true negatives and true positives)
precision_score(y_train, y_train_pred)
###Output
_____no_output_____
###Markdown
Make a test prediction
###Code
# Create a fake record for testing
# Look at one existing record, use as source
titanic_dmy[863:864]
# set up the fake record based on the record above
test_passenger = np.array([866, 40, 0, 0, 0, 0, 0, 1]).reshape(1,-1)
print(LogReg.predict(test_passenger))
print(LogReg.predict_proba(test_passenger))
###Output
[1]
[[0.26351831 0.73648169]]
|
My notebooks/T8 - 2 - SVM - Model.ipynb
|
###Markdown
Support Vector Machines
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import seaborn as sns; sns.set()
from sklearn.datasets import make_blobs  # note: the old sklearn.datasets.samples_generator path was removed in newer scikit-learn
X, Y = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.6)
X, Y
###Output
_____no_output_____
###Markdown
We split the points into 2 clusters (centers = 2)
* X is a set of 50 points
* Y is an array with the index of the cluster to which each point of X belongs
###Code
plt.scatter(X[:,0], X[:,1], c = Y, s = 50, cmap ="autumn")
xx = np.linspace(-1, 3.5)
plt.scatter(X[:,0], X[:,1], c = Y, s = 50, cmap ="autumn")
plt.plot([0.5], [2.1], "x", color="blue", markeredgewidth=2, markersize=10)
for a, b in [(1,0.65),(0.5,1.6),(-0.2,2.9)]: # slope and y-intercept of each of the three candidate lines
    yy = a * xx + b # build the equation of the line
    plt.plot(xx, yy, "-k") # plot the line
plt.xlim(-1, 3.5)
###Output
_____no_output_____
###Markdown
Maximizing the margin
###Code
xx = np.linspace(-1, 3.5)
plt.scatter(X[:,0], X[:,1], c = Y, s = 50, cmap ="autumn")
plt.plot([0.5], [2.1], "x", color="blue", markeredgewidth=2, markersize=10)
for a, b, d in [(1,0.65,0.33),(0.5,1.6,0.55),(-0.2,2.9,0.2)]: # slope, y-intercept and margin half-width of each line
    yy = a * xx + b # build the equation of the line
    plt.plot(xx, yy, "-k") # plot the line
    plt.fill_between(xx, yy-d, yy+d, edgecolor=None, color ="grey", alpha=0.4) # d is the distance from the line to the nearest point
plt.xlim(-1, 3.5)
###Output
_____no_output_____
###Markdown
Creating the SVM model
###Code
from sklearn.svm import SVC
model = SVC(kernel="linear", C=1E10) # the kernel defines the separating function
model.fit(X,Y)
def plt_svc(model, ax=None, plot_support=True):
"""Plot de la función de decisión para una clasificación en 2D con SVC"""
if ax is None:
ax = plt.gca()
xlim = ax.get_xlim() # [xlim[0], xlim[1]] -> Límite inferior y superior del eje X
ylim = ax.get_ylim() # [ylim[0], ylim[1]] -> Límite inferior y superior del eje Y
#Generamos la parrilla de puntos para evaluar el modelo
xx = np.linspace(xlim[0], xlim[1], 30)
yy = np.linspace(ylim[0], ylim[1], 30)
Y, X = np.meshgrid(yy,xx)
#Evaluar el modelo
xy = np.vstack([X.ravel(), Y.ravel()]).T #Tupla
P = model.decision_function(xy).reshape(X.shape)
#Representamos las fronteras y los márgenes del SVC
ax.contour(X, Y, P, colors = "k", levels = [-1,0,1], alpha = 0.5, linestyles=["--","-","--"])
print(model.support_vectors_)
if plot_support:
ax.scatter(model.support_vectors_[:,0],
model.support_vectors_[:,1],
s = 300, linewidth = 1, facecolors="black")
ax.set_xlim(xlim)
ax.set_ylim(ylim)
plt.scatter(X[:,0],X[:,1], c=Y, s=50, cmap="autumn")
plt_svc(model) # location of the support vectors
def plt_svm(N=10, ax=None):
X, Y = make_blobs(n_samples=200, centers=2, random_state=0, cluster_std=0.6)
X = X[:N]
Y = Y[:N]
model = SVC(kernel="linear", C=1E10)
model.fit(X,Y)
ax = ax or plt.gca()
ax.scatter(X[:,0], X[:,1], c=Y, s=50, cmap="autumn")
ax.set_xlim(-1,4)
ax.set_ylim(-1,6)
plt_svc(model, ax)
fig, ax = plt.subplots(1, 2, figsize=(16,6))
fig.subplots_adjust(left=0.026, right=0.95, wspace=0.1)
for ax_i, N in zip(ax, [60,120]):
plt_svm(N, ax_i)
ax_i.set_title("N = {}".format(N))
from ipywidgets import interact, fixed
# Interactive plot
interact(plt_svm, N=[100,200], ax=fixed(None))
###Output
_____no_output_____
|
2.-Virtual_Screening.ipynb
|
###Markdown
Virtual Screening

This notebook aims to demonstrate how to use AutoDock Vina (via Smina) and LeDock to dock multiple molecules in the same protein target and binding site.

Content of this notebook
1. Fetching system and cleanup
2. System Visualization
3. Docking with Smina
    - Receptor preparation
    - Ligand preparation
    - Docking box definition
    - Docking
    - 3D visualization of docking results
4. Docking with LeDock
    - Receptor preparation
    - Ligand preparation
    - Docking box definition
    - Docking
    - DOK results file conversion to SDF
    - 3D visualization of docking results
###Code
from pymol import cmd
import py3Dmol
from openbabel import pybel
from rdkit import Chem
from rdkit.Chem import AllChem
import sys, os, random
sys.path.insert(1, 'utilities/')
from utils import getbox, generate_ledock_file, dok_to_sdf
import warnings
warnings.filterwarnings("ignore")
%config Completer.use_jedi = False
os.chdir('test/Virtual_Screening/')
###Output
_____no_output_____
###Markdown
1. Fetching system and cleanup

Implementing PyMOL is a simple way to download PDB structures. The user can also launch this or any other Jupyter Dock protocol by providing his or her own files.
###Code
cmd.fetch(code='1X1R',type='pdb1')
cmd.select(name='Prot',selection='polymer.protein')
cmd.select(name='GDP',selection='organic')
cmd.save(filename='1X1R_clean.pdb',format='pdb',selection='Prot')
cmd.save(filename='1X1R_GDP.mol2',format='mol2',selection='GDP')
cmd.delete('all')
###Output
_____no_output_____
###Markdown
2. System Visualization

A cool feature of Jupyter Dock is the possibility of visualizing ligand-protein complexes and docking results inside the notebook, thanks to the powerful py3Dmol. Now that the protein and ligand have been sanitized, it is recommended to visualize the ligand-protein reference system.
###Code
view = py3Dmol.view()
view.removeAllModels()
view.setViewStyle({'style':'outline','color':'black','width':0.1})
view.addModel(open('1X1R_clean.pdb','r').read(),format='pdb')
Prot=view.getModel()
Prot.setStyle({'cartoon':{'arrows':True, 'tubes':True, 'style':'oval', 'color':'white'}})
view.addSurface(py3Dmol.VDW,{'opacity':0.6,'color':'white'})
view.addModel(open('1X1R_GDP.mol2','r').read(),format='mol2')
ref_m = view.getModel()
ref_m.setStyle({},{'stick':{'colorscheme':'greenCarbon','radius':0.2}})
view.zoomTo()
view.show()
###Output
_____no_output_____
###Markdown
3. Docking with Smina

Despite the presence of Python bindings in AutoDock Vina 1.2.0, other tools that incorporate AutoDock Vina allow for cool features such as custom scoring functions (smina), fast execution (qvina), and the use of wider boxes (qvina-w). Jupyter Dock can run such binaries in a notebook, giving users more options.

Smina is a fork of AutoDock Vina that is customized to better support scoring function development and high-performance energy minimization. Smina is maintained by David Koes at the University of Pittsburgh and is not directly affiliated with the AutoDock project.

>**Info:** The following cell contains an example of using Smina to run the current docking example. However, the executable files for qvina and qvina-w are available in the Jupyter Dock repo's bin directory. As a result, the user can use such a tool by adding the necessary cells or replacing the current docking engine.

3.1. Receptor preparation

Despite the fact that Smina is a modified version of AutoDock Vina, the input file for a receptor in Smina can be either a PDBQT file or a PDB file with explicit hydrogens in all residues. At this point, we can make use of a protein structure from Jupyter Dock's _**fix_protein()**_ function or implement LePro (for more information, see section 2.1 of the Molecular Docking notebook).
###Code
!../../bin/lepro_linux_x86 {'1X1R_clean.pdb'}
os.rename('pro.pdb','1X1R_clean_H.pdb') # Output from lepro is pro.pdb, this line will change the name to '1X1R_clean_H.pdb'
###Output
_____no_output_____
###Markdown
3.2. Ligand preparation

The ligand molecules in Virtual Screening protocols can come from a variety of sources (e.g. [ZINC15](https://zinc15.docking.org/), [PubChem](https://pubchem.ncbi.nlm.nih.gov/), [DrugBank](https://go.drugbank.com/), etc.) and diverse formats (e.g. SDF, PDB, MOL, MOL2, SMILES, etc.). The cell below depicts one of the simplest approaches: using molecules from SMILES codes. Users, however, can use any known chemical format for their molecules thanks to the use of PyBel and RDKit.

Regardless of the format of the molecules or the differences between docking algorithms, ligand preparation must achieve at least the following objectives:
- Set the proper protonation and tautomeric state of the molecules.
- Provide a valid 3D structure to initialize the conformational search.

Jupyter Dock generates protonated neutral molecules and performs energy minimization under the MMFF94s force field using the pybel functions _make3D_ and _localopt_, respectively.

It is recommended that users tune the pybel settings or use other preparation methods before running the molecular docking for special ligand needs.
###Code
smiles=['C1=NC(=C2C(=N1)N(C=N2)CCOCP(=O)(O)O)N',
'C1=NC(=C2C(=N1)N(C=N2)[C@H]3[C@@H]([C@@H]([C@H](O3)COP(=O)(O)O)O)O)N',
'C[C@@H](C1=CC2=C(C(=CC=C2)Cl)C(=O)N1C3=CC=CC=C3)NC4=NC=NC5=C4NC=N5',
'C1=NC(=C2C(=N1)N(C=N2)C3C(C(C(O3)CO)O)O)N',
'C[C@H](CN1C=NC2=C(N=CN=C21)N)OC[P@@](=O)(N[C@@H]',
'C1=NC(=C2C(=N1)N(C=N2)[C@H]3[C@@H]([C@@H]([C@H](O3)C[C@H](CC[C@@H](C(=O)O)N)N)O)O)N',
'C[S+](CC[C@@H](C(=O)O)N)C[C@@H]1[C@H]([C@H]([C@@H](O1)N2C=NC3=C(N=CN=C32)N)O)O',
'CN1C=C(C(=N1)OC)NC2=C3C(=NC(=N2)N4C[C@H]([C@@H](C4)F)NC(=O)C=C)N(C=N3)C',
'C1COC[C@@H]1NC2=C3C(=NC=N2)N(C=N3)[C@H]4[C@@H]([C@@H]([C@H](O4)CO)O)O']
out=pybel.Outputfile(filename='InputMols.mol2',format='mol2',overwrite=True)
for index,smi in enumerate(smiles):
mol=pybel.readstring(string=smi,format='smiles')
mol.title='mol_'+str(index)
mol.make3D('mmff94s')
mol.localopt(forcefield='mmff94s', steps=500)
out.write(mol)
out.close()
###Output
_____no_output_____
###Markdown
3.3. Docking box definition

In this example, the natural ligand GDP of 1X1R will be used as a reference for box definition (Section 4.3 of the Molecular Docking notebook contains more information about the _**getbox()**_ function).
###Code
cmd.load(filename='1X1R_clean_H.pdb',format='pdb',object='prot') #Not needed but as reference of the system
cmd.load(filename='1X1R_GDP.mol2',format='mol2',object='lig')
center,size=getbox(selection='lig',extending=6.0,software='vina')
cmd.delete('all')
print(center)
print(size)
###Output
{'center_x': 3.4204999953508377, 'center_y': 9.91599988937378, 'center_z': 11.27299976348877}
{'size_x': 19.56700000166893, 'size_y': 18.30399990081787, 'size_z': 23.20599937438965}
###Markdown
3.4. Docking

Jupyter Dock comes with the smina executable for Linux and Mac OS. By running the binary file, the parameters can be accessed.
###Code
!../../bin/smina -r {'1X1R_clean_H.pdb'} -l {'InputMols.mol2'} -o {'1X1R_lig_smina_out.sdf'} --center_x {center['center_x']} --center_y {center['center_y']} --center_z {center['center_z']} --size_x {size['size_x']} --size_y {size['size_y']} --size_z {size['size_z']} --exhaustiveness 8 --num_modes 5
###Output
_______ _______ _________ _ _______
( ____ \( )\__ __/( ( /|( ___ )
| ( \/| () () | ) ( | \ ( || ( ) |
| (_____ | || || | | | | \ | || (___) |
(_____ )| |(_)| | | | | (\ \) || ___ |
) || | | | | | | | \ || ( ) |
/\____) || ) ( |___) (___| ) \ || ) ( |
\_______)|/ \|\_______/|/ )_)|/ \|
smina is based off AutoDock Vina. Please cite appropriately.
Weights Terms
-0.035579 gauss(o=0,_w=0.5,_c=8)
-0.005156 gauss(o=3,_w=2,_c=8)
0.840245 repulsion(o=0,_c=8)
-0.035069 hydrophobic(g=0.5,_b=1.5,_c=8)
-0.587439 non_dir_h_bond(g=-0.7,_b=0,_c=8)
1.923 num_tors_div
Using random seed: 1547183094
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
mode | affinity | dist from best mode
| (kcal/mol) | rmsd l.b.| rmsd u.b.
-----+------------+----------+----------
1 -7.4 0.000 0.000
2 -7.2 5.931 8.141
3 -7.2 0.991 2.114
4 -7.2 1.225 2.457
5 -7.1 2.492 3.743
Refine time 7.328
Using random seed: 1547183094
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
mode | affinity | dist from best mode
| (kcal/mol) | rmsd l.b.| rmsd u.b.
-----+------------+----------+----------
1 -9.8 0.000 0.000
2 -9.6 1.378 1.901
3 -8.9 4.481 7.249
4 -8.5 4.613 6.525
5 -8.5 1.185 1.981
Refine time 13.324
Using random seed: 1547183094
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
mode | affinity | dist from best mode
| (kcal/mol) | rmsd l.b.| rmsd u.b.
-----+------------+----------+----------
1 -8.5 0.000 0.000
2 -8.2 0.853 1.775
3 -8.1 0.776 2.139
4 -7.7 2.444 4.612
5 -7.7 2.447 4.596
Refine time 20.204
Using random seed: 1547183094
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
mode | affinity | dist from best mode
| (kcal/mol) | rmsd l.b.| rmsd u.b.
-----+------------+----------+----------
1 -8.0 0.000 0.000
2 -7.7 3.176 6.236
3 -7.6 1.999 2.878
4 -7.5 2.102 2.896
5 -7.4 3.194 6.200
Refine time 6.841
Using random seed: 1547183094
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
mode | affinity | dist from best mode
| (kcal/mol) | rmsd l.b.| rmsd u.b.
-----+------------+----------+----------
1 -7.3 0.000 0.000
2 -7.2 3.644 4.681
3 -7.2 3.319 7.653
4 -7.1 3.224 7.730
5 -6.9 3.459 7.991
Refine time 11.145
Using random seed: 1547183094
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
mode | affinity | dist from best mode
| (kcal/mol) | rmsd l.b.| rmsd u.b.
-----+------------+----------+----------
1 -10.0 0.000 0.000
2 -9.3 2.259 7.983
3 -9.2 1.286 2.043
4 -8.7 1.720 2.554
5 -8.3 2.364 3.178
Refine time 27.483
Using random seed: 1547183094
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
mode | affinity | dist from best mode
| (kcal/mol) | rmsd l.b.| rmsd u.b.
-----+------------+----------+----------
1 -10.0 0.000 0.000
2 -9.1 0.872 1.848
3 -8.7 2.896 8.412
4 -8.7 1.526 2.409
5 -8.5 2.944 8.144
Refine time 25.409
Using random seed: 1547183094
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
mode | affinity | dist from best mode
| (kcal/mol) | rmsd l.b.| rmsd u.b.
-----+------------+----------+----------
1 -7.9 0.000 0.000
2 -7.8 1.827 2.321
3 -7.6 2.471 7.414
4 -7.2 2.670 7.150
5 -7.2 3.254 7.681
Refine time 28.091
Using random seed: 1547183094
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
mode | affinity | dist from best mode
| (kcal/mol) | rmsd l.b.| rmsd u.b.
-----+------------+----------+----------
1 -8.9 0.000 0.000
2 -8.1 4.194 8.941
3 -8.0 2.854 7.379
4 -7.9 2.276 7.389
5 -7.6 2.401 3.729
Refine time 12.801
Loop time 163.883
###Markdown
3.5. 3D visualization of docking results

As with the system visualization (section 2), the docking results can be inspected and compared to the reference structure (if one exists). Smina saves the "minimizedAffinity" information corresponding to the docking score as an attribute of each molecule.
###Code
view = py3Dmol.view()
view.removeAllModels()
view.setViewStyle({'style':'outline','color':'black','width':0.1})
view.addModel(open('1X1R_clean_H.pdb','r').read(),'pdb')
Prot=view.getModel()
Prot.setStyle({'cartoon':{'arrows':True, 'tubes':True, 'style':'oval', 'color':'white'}})
view.addSurface(py3Dmol.VDW,{'opacity':0.8,'color':'white'})
view.addModel(open('1X1R_GDP.mol2','r').read(),'mol2')
ref_m = view.getModel()
ref_m.setStyle({},{'stick':{'colorscheme':'greenCarbon','radius':0.2}})
poses=Chem.SDMolSupplier('1X1R_lig_smina_out.sdf',True)
for p in list(poses)[::5]:
pose_1=Chem.MolToMolBlock(p)
print(p.GetProp('_Name'),'Score: {}'.format(p.GetProp('minimizedAffinity')))
color = ["#"+''.join([random.choice('0123456789ABCDEF') for j in range(6)])]
view.addModel(pose_1,'mol')
z= view.getModel()
z.setStyle({},{'stick':{'color':color[0],'radius':0.05,'opacity':0.6}})
view.zoomTo()
view.show()
###Output
mol_0 Score: -7.35685
mol_1 Score: -9.78292
mol_2 Score: -8.50774
mol_3 Score: -7.99066
mol_4 Score: -7.26349
mol_5 Score: -10.04724
mol_6 Score: -9.98977
mol_7 Score: -7.86083
mol_8 Score: -8.90514
###Markdown
4. Docking with LeDock

LeDock is designed for fast and accurate flexible docking of small molecules into a protein. It achieves a pose-prediction accuracy of greater than 90% on the Astex diversity set and takes about 3 seconds per run for a drug-like molecule. It has led to the discovery of novel kinase inhibitors and bromodomain antagonists from high-throughput virtual screening campaigns. It directly uses the SYBYL Mol2 format as input for small molecules.

4.1. Receptor preparation

In LeDock, the input file for a receptor is a PDB file with explicit hydrogens in all residues. LePro was created as a tool for preparing protein structures for docking with LeDock. Thus, at this stage, we can use the file from the sanitization steps, a protein structure from Jupyter Dock's _**fix_protein()**_ function, or the output of LePro (for more information, see section 2.1 of the Molecular Docking notebook).

4.2. Ligand preparation

MOL2 is the ligand input format in LeDock. LeDock, however, cannot accept a multimodel MOL2 file as input. To use LeDock, we can run the preparation in the same way as for Smina (section 3.2), but generate one MOL2 file per ligand.
###Code
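# (Sketch) Receptor preparation for LeDock, mirroring the LePro call from section 3.1.
# Since 1X1R_clean_H.pdb was already generated there, re-running this step is optional.
# !../../bin/lepro_linux_x86 {'1X1R_clean.pdb'}
# os.rename('pro.pdb', '1X1R_clean_H.pdb')  # LePro always writes its output to pro.pdb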
smiles=['C1=NC(=C2C(=N1)N(C=N2)CCOCP(=O)(O)O)N',
'C1=NC(=C2C(=N1)N(C=N2)[C@H]3[C@@H]([C@@H]([C@H](O3)COP(=O)(O)O)O)O)N',
'C[C@@H](C1=CC2=C(C(=CC=C2)Cl)C(=O)N1C3=CC=CC=C3)NC4=NC=NC5=C4NC=N5',
'C1=NC(=C2C(=N1)N(C=N2)C3C(C(C(O3)CO)O)O)N',
'C[C@H](CN1C=NC2=C(N=CN=C21)N)OC[P@@](=O)(N[C@@H]',
'C1=NC(=C2C(=N1)N(C=N2)[C@H]3[C@@H]([C@@H]([C@H](O3)C[C@H](CC[C@@H](C(=O)O)N)N)O)O)N',
'C[S+](CC[C@@H](C(=O)O)N)C[C@@H]1[C@H]([C@H]([C@@H](O1)N2C=NC3=C(N=CN=C32)N)O)O',
'CN1C=C(C(=N1)OC)NC2=C3C(=NC(=N2)N4C[C@H]([C@@H](C4)F)NC(=O)C=C)N(C=N3)C',
'C1COC[C@@H]1NC2=C3C(=NC=N2)N(C=N3)[C@H]4[C@@H]([C@@H]([C@H](O4)CO)O)O']
for index,smi in enumerate(smiles):
mol=pybel.readstring(string=smi,format='smiles')
mol.title='mol_'+str(index)
mol.make3D('mmff94s')
mol.localopt(forcefield='mmff94s', steps=500)
out=pybel.Outputfile(filename='ledock_inputfiles/'+'mol_'+str(index)+'.mol2',format='mol2',overwrite=True)
out.write(mol)
out.close()
###Output
_____no_output_____
###Markdown
4.3. Docking box definition

This step can be completed in the same manner as the Smina box definition (section 3.3). To obtain the identical box from the Smina docking but in LeDock format, the user only needs to change the parameter "software" from "vina" to "ledock".

>**Info:** The implementation of the _**getbox()**_ function allows for the easy replication of binding sites between AutoDock Vina and LeDock, with the goal of replicating and comparing results between both programs.
###Code
cmd.load(filename='1X1R_clean_H.pdb',format='pdb',object='prot')
cmd.load(filename='1X1R_GDP.mol2',format='mol2',object='lig')
X,Y,Z=getbox(selection='lig',extending=6.0,software='ledock')
cmd.delete('all')
print(X)
print(Y)
print(Z)
###Output
{'minX': -6.363000005483627, 'maxX': 13.203999996185303}
{'minY': 0.7639999389648438, 'maxY': 19.067999839782715}
{'minZ': -0.3299999237060547, 'maxZ': 22.875999450683594}
###Markdown
4.4. Docking

Aside from one MOL2 file per ligand, LeDock requires the user to create a file containing the paths to the ligand files (ligand.list in Jupyter Dock). This file can be easily created by using the following cell and the _**generate_ledock_file()**_ function.
###Code
l_list=[]
for file in os.listdir('ledock_inputfiles/'):
if 'mol2' in file:
l_list.append('ledock_inputfiles/'+file+'\n')
l_list
generate_ledock_file(receptor='1X1R_clean_H.pdb',l_list=l_list,
l_list_outfile='ligand.list',
x=[X['minX'],X['maxX']],
y=[Y['minY'],Y['maxY']],
z=[Z['minZ'],Z['maxZ']],
n_poses=10,
rmsd=1.0,
out='dock.in')
###Output
_____no_output_____
###Markdown
Once all of the docking parameters have been entered into a configuration file (dock.in), everything is ready to go.
###Code
!../../bin/ledock_linux_x86 {'dock.in'}
###Output
_____no_output_____
###Markdown
4.5. DOK results file conversion to SDF

LeDock produces a file with the dok extension that contains docking properties in the same way that a pdb file does. Despite this, the dok file is not widely used for representing chemical structures. As a result, Jupyter Dock is capable of converting dok files to the widely used sdf format. Jupyter Dock will save the "Pose" and "Score" results as molecule attributes while preserving the chemical features (more information in section 6.5 of the Molecular Docking notebook).
###Code
for file in os.listdir('ledock_inputfiles/'):
if '.dok' in file:
os.rename('ledock_inputfiles/'+file,'ledock_outfiles/'+file)
dok_to_sdf(dok_file='ledock_outfiles/'+file,output='ledock_outfiles/'+file.replace('dok','sdf'))
###Output
_____no_output_____
###Markdown
4.6. 3D visualization of docking results

As with the system visualization (section 2), the docking results can be inspected and compared to the reference structure (if one exists).
###Code
view = py3Dmol.view()
view.removeAllModels()
view.setViewStyle({'style':'outline','color':'black','width':0.1})
view.addModel(open('1X1R_clean_H.pdb','r').read(),'pdb')
Prot=view.getModel()
Prot.setStyle({'cartoon':{'arrows':True, 'tubes':True, 'style':'oval', 'color':'white'}})
view.addSurface(py3Dmol.VDW,{'opacity':0.8,'color':'white'})
view.addModel(open('1X1R_GDP.mol2','r').read(),'mol2')
ref_m = view.getModel()
ref_m.setStyle({},{'stick':{'colorscheme':'greenCarbon','radius':0.2}})
for file in os.listdir('ledock_outfiles/'):
if 'sdf' in file:
pose_1=Chem.SDMolSupplier('ledock_outfiles/'+file,False)[0]
p=Chem.MolToMolBlock(pose_1)
print('Name: {} | Pose: {} | Score: {}'.format(file.split('.')[0],pose_1.GetProp('Pose'),pose_1.GetProp('Score')))
color = ["#"+''.join([random.choice('0123456789ABCDEF') for j in range(6)])]
view.addModel(p,'mol')
z= view.getModel()
z.setStyle({},{'stick':{'color':color[0],'radius':0.05,'opacity':0.6}})
view.zoomTo()
view.show()
###Output
Name mol_0 | Pose: 1 | Score: -8.28
Name mol_1 | Pose: 5 | Score: -9.59
Name mol_2 | Pose: 4 | Score: -7.53
Name mol_3 | Pose: 4 | Score: -7.52
Name mol_4 | Pose: 2 | Score: -7.40
Name mol_5 | Pose: 1 | Score: -9.93
Name mol_6 | Pose: 2 | Score: -10.00
Name mol_7 | Pose: 1 | Score: -7.75
Name mol_8 | Pose: 1 | Score: -7.75
|
blogs/fugue-transform/Untitled.ipynb
|
###Markdown
This is a tutorial of how to bring Pandas and Python code to Spark. We'll compare the traditional way with Fugue.
###Code
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
X = pd.DataFrame({"x_1": [1, 1, 2, 2], "x_2":[1, 2, 2, 3]})
y = np.dot(X, np.array([1, 2])) + 3
reg = LinearRegression().fit(X, y)
###Output
_____no_output_____
###Markdown
Create the prediction function and test it
###Code
def predict(df: pd.DataFrame, model: LinearRegression) -> pd.DataFrame:
return df.assign(predicted=model.predict(df))
input_df = pd.DataFrame({"x_1": [3, 4, 6, 6], "x_2":[3, 3, 6, 6]})
predict(input_df.copy(), reg)
from fugue import transform
from fugue_spark import SparkExecutionEngine
result = transform(
input_df,
predict,
schema="*,predicted:double",
params={"model": reg},
engine=SparkExecutionEngine()
)
result.show()
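# Note (assumption based on Fugue's transform API): schema="*,predicted:double" keeps all
# input columns and appends a new double column named "predicted"; params passes the fitted
# model into predict() so the same pandas function runs unchanged on Spark.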
from typing import Iterator, Any, Union
from pyspark.sql.types import StructType, StructField, DoubleType
from pyspark.sql import DataFrame, SparkSession
spark_session = SparkSession.builder.getOrCreate()
def predict_wrapper(dfs: Iterator[pd.DataFrame], model):
for df in dfs:
yield predict(df, model)
def run_predict(input_df: Union[DataFrame, pd.DataFrame], model):
# conversion
if isinstance(input_df, pd.DataFrame):
sdf = spark_session.createDataFrame(input_df.copy())
else:
sdf = input_df.copy()
schema = StructType(list(sdf.schema.fields))
schema.add(StructField("predicted", DoubleType()))
return sdf.mapInPandas(lambda dfs: predict_wrapper(dfs, model),
schema=schema)
result = run_predict(input_df.copy(), reg)
result.show()
###Output
+---+---+---------+
|x_1|x_2|predicted|
+---+---+---------+
| 3| 3| 12.0|
| 4| 3| 13.0|
| 6| 6| 21.0|
| 6| 6| 21.0|
+---+---+---------+
|
examples/2. Property calculations from SAFT-VR-Mie EoS (pure fluid).ipynb
|
###Markdown
Property calculation with SAFT-VR-Mie EoS

First, it is necessary to import the ``component`` class and the SAFT-VR-Mie equation of state (``saftvrmie``).
###Code
import numpy as np
from sgtpy import component, saftvrmie
###Output
_____no_output_____
###Markdown
First, a component is defined with the ``component`` class function, then the ``eos`` object is created with the component and the ``saftvrmie`` function. The ``eos`` object includes the methods to evaluate properties from the equation of state, such as densities, pressure, fugacity coefficients, chemical potential and some thermal derived properties (residual entropy, residual enthalpy, residual heat capacities and speed of sound).

**warning:** thermal derived properties are computed with numerical derivatives using an $O(h^4)$ approximation.

In the case of coarse-grained non-associating fluids, the ``eos`` object allows computing the [influence parameter for SGT](https://aiche.onlinelibrary.wiley.com/doi/full/10.1002/aic.15190). This is done with the following correlation:

$$ \sqrt{\frac{c_{ii}}{N_{av}^2 \epsilon_{ii} \sigma_{ii}^5}} = 0.12008 + 2.21979 \alpha_i $$

$$ \alpha_i = \left[ \frac{\lambda_r}{\lambda_r - \lambda_a} \left( \frac{\lambda_r}{\lambda_a}\right)^{\frac{\lambda_a}{\lambda_r - \lambda_a}}\right] \left[\frac{1}{\lambda_a - 3} - \frac{1}{\lambda_r - 3} \right]$$

The ``eos.cii_correlation`` method is shown below for methane.
###Code
methane = component('methane', ms = 1.0, sigma = 3.752 , eps = 170.75,
lambda_r = 16.39, lambda_a = 6.)
eos = saftvrmie(methane)
eos.cii_correlation(overwrite=True)
###Output
_____no_output_____
###Markdown
In the following examples, self-associating 4C water will be used alongside SAFT-VR-Mie EoS.
###Code
water = component('water', ms = 1.7311, sigma = 2.4539 , eps = 110.85,
lambda_r = 8.308, lambda_a = 6., eAB = 1991.07, rcAB = 0.5624,
rdAB = 0.4, sites = [0,2,2], Mw = 18.01528, cii = 1.5371939421515458e-20)
eos = saftvrmie(water)
###Output
_____no_output_____
###Markdown
When you run ``saftvrmie(component)``, SGTPy will attempt to compute the critical point of the fluid. If the procedure is successful you can check the computed values with the ``eos.Tc``, ``eos.Pc`` and ``eos.rhoc`` attributes. The bool ``eos.critical`` indicates if the computation was successful.
###Code
print('Critical point calculation success: ', eos.critical)
if eos.critical:
print('Critical temperature:', eos.Tc, 'K')
print('Critical pressure:', eos.Pc, 'Pa')
print('Critical density:', eos.rhoc, 'mol/m^3')
###Output
Critical point calculation success: True
Critical temperature: 694.7496645102847 K
Critical pressure: 33146202.40733257 Pa
Critical density: 17706.262071347497 mol/m^3
###Markdown
If the critical point calculation is not successful, you can try to compute the critical point of the fluid using the ``eos.get_critical`` method. The method returns the critical temperature [K], critical pressure [Pa] and critical density [mol/m^3]. You can supply better initial values to start the non-linear solver.If you set the parameter ``overwrite=True``, the results from this method will overwrite the previous computed critical point at initialization.
###Code
# get_critical providing initial guesses
Tc0 = 690. # K
rhoc0 = 17500. # mol/m3
eos.get_critical(Tc0, rhoc0, overwrite=True)
###Output
_____no_output_____
###Markdown
The density of the fluid is computed with the ``eos.density`` method. It requires the temperature (K), pressure (Pa) and the aggregation state (``'L'`` for liquid phase or ``'V'`` for vapor phase).

When no initial guess has been provided, Topliss's method is used to initialize the density calculation.
###Code
T = 300. # K
P = 1e5 # Pa
# computed density in mol/m3
eos.density(T, P, 'L'), eos.density(T, P, 'V')
###Output
_____no_output_____
###Markdown
Optionally, you can provide an initial guess to start computing the density. This is done with the ``rho0`` parameter. In this case, Newton's method is used to solve for the density.
###Code
T = 300. # K
P = 1e5 # Pa
# computed density in mol/m3
rho0 = 0.95*56938.97048119255 # mol/m3
eos.density(T, P, 'L', rho0=rho0)
###Output
_____no_output_____
###Markdown
Similarly, the pressure of the fluid can be computed at given molar density (mol/m3) and temperature (K) using the ``eos.pressure`` method.
###Code
T = 300. # K
rhov = 44.962772347754836 # mol/m3
rhol = 56938.970481192526 # mol/m3
#computed pressure in Pa
eos.pressure(rhov, T), eos.pressure(rhol, T)
###Output
_____no_output_____
###Markdown
For pure fluids, the ``eos.psat`` method allows computing the saturation pressure at a given temperature. This method returns the equilibrium pressure and the molar volumes of the liquid and vapor phases. Similarly, the ``eos.tsat`` method allows computing the saturation temperature at a given pressure.

The phase equilibrium can be verified through fugacity coefficients using the ``eos.logfug`` method or by using chemical potentials with the ``eos.muad`` method. The chemical potentials require the dimensionless density and temperature.
###Code
T = 300. # K
P0 = 1e3 # Pa
# equilibrium pressure (Pa), liquid volume (m3/mol), vapor volume (m3/mol)
eos.psat(T, P0=P0)
P = 1e3 # Pa
T0 = 300. # K
# equilibrium temperature (K), liquid volume (m3/mol), vapor volume (m3/mol)
eos.tsat(P, T0=T0)
###Output
_____no_output_____
###Markdown
Alternatively, if you don't supply an initial guess for the pressure and the critical point of the fluid was correctly computed at initialization, the pressure will be automatically initialized using the [zero-pressure or Pmin/Pmax algorithm](https://www.sciencedirect.com/science/article/pii/S0098135497000161).
###Code
# computing VLE without initial guess
# equilibrium pressure (Pa), liquid volume (m3/mol), vapor volume (m3/mol)
eos.psat(T)
###Output
_____no_output_____
###Markdown
You can check that the phase equilibrium was computed correctly by verifying either the chemical potentials or the fugacity coefficients of the phases.
###Code
Psat = 3640.841209122654 # Pa
vl = 1.756342718267362e-05 # m3/mol
vv = 0.6824190076059896 # m3/mol
# checking chemical potentials
np.allclose(eos.muad(1/vl, T) , eos.muad(1/vv, T))
# checking fugacity coefficients
np.allclose(eos.logfug(T, Psat, 'L', v0=vl)[0], eos.logfug(T, Psat, 'V', v0=vv)[0])
###Output
_____no_output_____
###Markdown
The ``eos`` object also includes the calculation of some thermal derived properties such as residual entropy (``eos.EntropyR``), residual enthalpy (``eos.EnthalpyR``), residual isochoric heat capacity (``eos.CvR``) and residual isobaric heat capacity (``eos.CpR``).

For the speed of sound calculation (``eos.speed_sound``) the ideal gas heat capacities are required; in the example the isochoric and isobaric ideal gas contributions are set to $3R/2$ and $5R/2$, respectively. Better values of the ideal gas heat capacity contributions can be found from empirical correlations, such as those provided by DIPPR 801.
###Code
# vaporization entropy in J/mol K
Sl = eos.EntropyR(T, Psat, 'L', v0=vl)
Sv = eos.EntropyR(T, Psat, 'V', v0=vv)
Svap = Sv - Sl
# vaporization enthalpy in J/mol
Hl = eos.EnthalpyR(T, Psat, 'L')
Hv = eos.EnthalpyR(T, Psat, 'V')
Hvap = Hv - Hl
# isochoric and isobaric residual heats capacities in J / mol K
cvr = eos.CvR(1/vl, T)
cpr = eos.CpR(T, Psat, 'L')
# ideal gas heat capacities, better values can be obtained with DIPPR 801 correlations
r = 8.314 # J / mol K
CvId = 3*r/2
CpId = 5*r/2
w = eos.speed_sound(T, Psat, 'V', v0=vl, CvId=CvId, CpId=CpId)
print('Vaporization Entropy : ', Svap, 'J / mol K')
print('Vaporization Enthalpy : ', Hvap, 'J / mol')
print('Residual isochoric heat capacity : ', cvr, 'J / mol K')
print('Residual isobaric heat capacity : ', cpr, 'J / mol K')
print('Speed of sound : ', w, 'm / s')
###Output
Vaporization Entropy : 142.8199794928126 J / mol K
Vaporization Enthalpy : 42845.993847885235 J / mol
Residual isochoric heat capacity : 30.835590938521406 J / mol K
Residual isobaric heat capacity : 28.20967671923448 J / mol K
Speed of sound : 1563.6575125389477 m / s
###Markdown
To get better values of the speed of sound you can provide correlated values for the ideal gas heat capacities.
###Code
# ideal heat capacity from DIPPR 801.
k1=33363
k2=26790
k3=2610.5
k4=8896
k5=1169
CpId = k1 + k2 * ((k3/T) /np.sinh(k3/T))**2
CpId += k4 * ((k5/T) /np.cosh(k5/T))**2
CpId /= 1000.
CvId = CpId - r
# better value for speed of sound (m/s)
eos.speed_sound(T, Psat, 'L', v0=vl, CvId=CvId, CpId=CpId)
###Output
_____no_output_____
|
lab3/lab3.ipynb
|
###Markdown
lab3 - Levenshtein distance and spelling corrections

The task introduces the Levenshtein distance, a measure that is useful in tasks such as approximate string matching.
###Code
from elasticsearch import Elasticsearch
import matplotlib.pyplot as plt
import numpy as np
from pathlib import Path
from itertools import chain
import Levenshtein as lev
from typing import Dict
es = Elasticsearch()
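# For intuition: the Levenshtein (edit) distance is the minimum number of single-character
# insertions, deletions and substitutions needed to turn one string into another, e.g.
# lev.distance("kitten", "sitting") == 3 (substitute k->s, substitute e->i, append g).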
###Output
_____no_output_____
###Markdown
Tasks

1. Use the ElasticSearch term vectors API to retrieve and store for each document the following data:
    1. The terms (tokens) that are present in the document.
    2. The number of times a given term is present in the document.
###Code
indices = [h["_id"] for h in es.search(
index="przewie",
doc_type="act",
body={
"query": {
"match_all": {}
}
},
size=2000
)["hits"]["hits"]]
result = {
i: {
token: metadata["term_freq"]
for token, metadata in es.termvectors("przewie", "act", i, term_statistics=True, fields=["text"])["term_vectors"]["text"]["terms"].items()
}
for i in indices
}
result;
###Output
_____no_output_____
###Markdown
2. Aggregate the result to obtain one global **frequency list**.
###Code
global_result = {}
for r in result.values():
for token, freq in r.items():
global_result[token] = global_result.get(token, 0) + freq
global_result;
###Output
_____no_output_____
###Markdown
3. Filter the list to keep terms that contain only letters and have at least 2 of them.
###Code
filtered_result = {
k: v
for k, v in global_result.items()
    if len(k) >= 2 and k.isalpha()  # at least 2 letters, as stated in the task
}
filtered_result;
###Output
_____no_output_____
###Markdown
4. Make a plot in a logarithmic scale:
    1. X-axis should contain the **rank** of a term, meaning the first rank belongs to the term with the highest number of occurrences; the terms with the same number of occurrences should be ordered by their name,
    2. Y-axis should contain the **number of occurrences** of the term with a given rank.
###Code
sorted_result = sorted(filtered_result.items(), key = lambda r: -r[1])
plt.plot(list(range(len(sorted_result))), [np.log(r[1]) for r in sorted_result])
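# A log-log variant (both axes logarithmic) is the usual way to show the Zipf-like shape, e.g.:
# plt.loglog(range(1, len(sorted_result) + 1), [r[1] for r in sorted_result])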
###Output
_____no_output_____
###Markdown
5. Download [polimorfologik.zip](https://github.com/morfologik/polimorfologik/releases/download/2.1/polimorfologik-2.1.zip) dictionary and use it to find all words that do not appear in that dictionary.
###Code
with Path("../data/polimorfologik/polimorfologik-2.1.txt").open() as f:
polimorf = [l.split(";")[0:2] for l in f.readlines()]
words = set(chain.from_iterable(polimorf))
unknowns = {k:v for k,v in filtered_result.items() if k not in words}
unknowns.keys();
###Output
_____no_output_____
###Markdown
6. Find 30 words with the highest ranks that do not belong to the dictionary.
###Code
[r for r in sorted_result if r[0] not in words][:30]
###Output
_____no_output_____
###Markdown
7. Find 30 words with 3 occurrences that do not belong to the dictionary.
###Code
unknowns_with_three_occurences = [r for r in sorted_result if r[0] not in words and r[1] == 3 ][:30]
unknowns_with_three_occurences;
###Output
_____no_output_____
###Markdown
8. Use the Levenshtein distance and the frequency list to determine the most probable correction of the words from the second list.
###Code
knowns = {k:v for k,v in filtered_result.items() if k in words}
def probable_correction(word: str, frequency_dict: Dict[str, int]) -> str:
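    # Score each dictionary word by log(frequency) / edit distance and return the word with
    # the highest score (the sort key is the negated score, so ascending order works).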
return sorted(frequency_dict.keys(), key=lambda key: -np.log(frequency_dict[key]) / lev.distance(word, key))[0]
{
w[0] : probable_correction(w[0], knowns)
for w in unknowns_with_three_occurences
}
###Output
_____no_output_____
###Markdown
lab3
###Code
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import execute
import numpy as np
from qiskit import BasicAer
backend = BasicAer.get_backend('statevector_simulator')
def calculate(circuit):
return execute(circuit, backend).result().get_statevector(circuit)
###Output
_____no_output_____
###Markdown
Deutsch problem
###Code
u0 = QuantumRegister(1)
u01 = QuantumRegister(1)
c0 = QuantumCircuit(u0, u01)
c0.x(u0)
c0.x(u01)
c0.h(u0)
c0.h(u01)
c0.h(u0)
c0.h(u01)
c0.draw()
calculate(c0)
u1 = QuantumRegister(2)
c1 = QuantumCircuit(u1)
# c1.initialize([0, 0 ,0 ,1], u1)
c1.x(u1[0])
c1.x(u1[1])
c1.h(u1[1])
c1.h(u1[0])
c1.cx(u1[0], u1[1])
c1.h(u1[1])
c1.h(u1[0])
c1.draw()
calculate(c1)
###Output
_____no_output_____
###Markdown
LinearRegression
###Code
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X_train,y_train)
print(lin_reg.intercept_, lin_reg.coef_, "\n")
lin_test_y = lin_reg.predict(X_test)
lin_train_y = lin_reg.predict(X_train)
# clear plot if any occurs
plt.clf()
# plot initial points
plt.plot(X_test, y_test,'.')
# plot linear regression result
plt.plot(X_test, lin_test_y)
# clear plot if any occurs
plt.clf()
# plot initial points
plt.plot(X_train, y_train,'.')
# plot linear regression result
plt.plot(X_train, lin_train_y)
###Output
_____no_output_____
###Markdown
KNN
###Code
from sklearn.neighbors import KNeighborsRegressor
# k = 3
knn_3_reg = KNeighborsRegressor(n_neighbors=3)
knn_3_reg.fit(X_train,y_train)
knn_3_test_y = knn_3_reg.predict(X_test)
knn_3_train_y = knn_3_reg.predict(X_train)
# clear plot if any occurs
plt.clf()
# plot initial points
plt.plot(X_test, y_test,'.')
# plot KNN result
plt.plot(X_test, knn_3_test_y, 'r.')
# k = 5
knn_5_reg = KNeighborsRegressor(n_neighbors=5)
knn_5_reg.fit(X_train,y_train)
knn_5_test_y = knn_5_reg.predict(X_test)
# clear plot if any occurs
plt.clf()
# plot initial points
plt.plot(X_test, y_test,'.')
# plot KNN result
plt.plot(X_test, knn_5_test_y, 'r.')
###Output
_____no_output_____
###Markdown
Polynomials
###Code
from sklearn.preprocessing import PolynomialFeatures
def plot_polynomial(X, y, predY):
# clear plot
plt.clf()
# plot initial points
plt.plot(X, y,'.')
# plot predictions
plt.plot(X, predY,'r.')
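# Reminder: PolynomialFeatures(degree=d, include_bias=False) expands a single feature x into
# [x, x**2, ..., x**d], so fitting a plain LinearRegression on the expanded features
# gives a degree-d polynomial fit.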
###Output
_____no_output_____
###Markdown
2ND DEGREE POLYNOMIAL
###Code
# 2
poly_feature_2 = PolynomialFeatures(degree=2, include_bias=False)
X_poly2 = poly_feature_2.fit_transform(X_train)
poly_2_reg = LinearRegression()
poly_2_reg.fit(X_poly2, y_train)
print("Coefficients: ", poly_2_reg.coef_, "\n", "Intercept: ", poly_2_reg.intercept_,"\n")
lin2_predY = poly_2_reg.predict(poly_feature_2.fit_transform(X_test))
print(poly_2_reg.coef_[1], " x**2 + ", poly_2_reg.coef_[0], " x + ", poly_2_reg.intercept_)
# print(poly_2_reg.coef_[0][1] * 2**2 + poly_2_reg.coef_[0][0] * 2 + poly_2_reg.intercept_[0])
# plot test set
plot_polynomial(X_test, y_test, lin2_predY)
# plot all points
plot_polynomial(X, y, poly_2_reg.predict(poly_feature_2.fit_transform(X)))
###Output
_____no_output_____
###Markdown
3RD DEGREE POLYNOMIAL
###Code
poly_feature_3 = PolynomialFeatures(degree=3, include_bias=False)
X_poly3 = poly_feature_3.fit_transform(X_train)
poly_3_reg = LinearRegression()
poly_3_reg.fit(X_poly3, y_train)
print("Coefficients: ", poly_3_reg.coef_, "\n", "Intercept: ", poly_3_reg.intercept_,"\n")
lin3_predY = poly_3_reg.predict(poly_feature_3.fit_transform(X_test))
print(poly_3_reg.coef_[2], "x**3 +", poly_3_reg.coef_[1], "x**2 +", poly_3_reg.coef_[0], "x + ", poly_3_reg.intercept_)
# plot test set
plot_polynomial(X_test, y_test, lin3_predY)
# plot all points
plot_polynomial(X, y, poly_3_reg.predict(poly_feature_3.fit_transform(X)))
###Output
_____no_output_____
###Markdown
4TH DEGREE POLYNOMIAL
###Code
poly_feature_4 = PolynomialFeatures(degree=4, include_bias=False)
X_poly4 = poly_feature_4.fit_transform(X_train)
poly_4_reg = LinearRegression()
poly_4_reg.fit(X_poly4, y_train)
print("Coefficients: ", poly_4_reg.coef_, "\n", "Intercept: ", poly_4_reg.intercept_,"\n")
lin4_predY = poly_4_reg.predict(poly_feature_4.fit_transform(X_test))
print(poly_4_reg.coef_[3], "x**4 +", poly_4_reg.coef_[2], "x**3 +", poly_4_reg.coef_[1], "x**2 +", poly_4_reg.coef_[0], "x + ", poly_4_reg.intercept_)
# plot test set
plot_polynomial(X_test, y_test, lin4_predY)
# plot all points
plot_polynomial(X, y, poly_4_reg.predict(poly_feature_4.fit_transform(X)))
###Output
_____no_output_____
###Markdown
5TH DEGREE POLYNOMIAL
###Code
poly_feature_5 = PolynomialFeatures(degree=5, include_bias=False)
X_poly5 = poly_feature_5.fit_transform(X_train)
poly_5_reg = LinearRegression()
poly_5_reg.fit(X_poly5, y_train)
print("Coefficients: ", poly_5_reg.coef_, "\n", "Intercept: ", poly_5_reg.intercept_,"\n")
lin5_predY = poly_5_reg.predict(poly_feature_5.fit_transform(X_test))
print(poly_5_reg.coef_[4], "x**5 +", poly_5_reg.coef_[3], "x**4 +", poly_5_reg.coef_[2], "x**3 +", poly_5_reg.coef_[1], "x**2 +", poly_5_reg.coef_[0], "x + ", poly_5_reg.intercept_)
# plot test set
plot_polynomial(X_test, y_test, lin5_predY)
# plot all points
plot_polynomial(X, y, poly_5_reg.predict(poly_feature_5.fit_transform(X)))
###Output
_____no_output_____
###Markdown
Mean Squared Error (MSE)
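For reference, the metric computed below is $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$, which is what `mean_squared_error` returns.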
###Code
from sklearn.metrics import mean_squared_error
###Output
_____no_output_____
###Markdown
lin_reg MSE
###Code
# lin_reg MSE
lin_reg_pred_test = lin_reg.predict(X_test)
lin_reg_pred_train = lin_reg.predict(X_train)
lin_test_mse = mean_squared_error(y_test, lin_reg_pred_test)
lin_train_mse = mean_squared_error(y_train, lin_reg_pred_train)
print("lin_test_mse: ", lin_test_mse)
print("lin_train_mse: ", lin_train_mse)
###Output
lin_test_mse: 219.47838931188318
lin_train_mse: 242.34054882516992
###Markdown
KNN_reg MSE
###Code
knn_3_reg_pred_test = knn_3_reg.predict(X_test)
knn_3_reg_pred_train = knn_3_reg.predict(X_train)
knn_3_test_mse = mean_squared_error(y_test, knn_3_reg_pred_test)
knn_3_train_mse = mean_squared_error(y_train, knn_3_reg_pred_train)
print("knn_3_test_mse: ", knn_3_test_mse)
print("knn_3_train_mse: ", knn_3_train_mse)
knn_5_reg_pred_test = knn_5_reg.predict(X_test)
knn_5_reg_pred_train = knn_5_reg.predict(X_train)
knn_5_test_mse = mean_squared_error(y_test, knn_5_reg_pred_test)
knn_5_train_mse = mean_squared_error(y_train, knn_5_reg_pred_train)
print("knn_5_test_mse: ", knn_5_test_mse)
print("knn_5_train_mse: ", knn_5_train_mse)
###Output
knn_5_test_mse: 50.24383743403386
knn_5_train_mse: 46.106491794808825
###Markdown
Polynomial MSE
###Code
poly_2_reg_pred_test = poly_2_reg.predict(poly_feature_2.fit_transform(X_test))
poly_2_reg_pred_train = poly_2_reg.predict(poly_feature_2.fit_transform(X_train))
poly_2_test_mse = mean_squared_error(y_test, poly_2_reg_pred_test)
poly_2_train_mse = mean_squared_error(y_train, poly_2_reg_pred_train)
print("poly_2_test_mse: ", poly_2_test_mse)
print("poly_2_train_mse: ", poly_2_train_mse)
poly_3_reg_pred_test = poly_3_reg.predict(poly_feature_3.fit_transform(X_test))
poly_3_reg_pred_train = poly_3_reg.predict(poly_feature_3.fit_transform(X_train))
poly_3_test_mse = mean_squared_error(y_test, poly_3_reg_pred_test)
poly_3_train_mse = mean_squared_error(y_train, poly_3_reg_pred_train)
print("poly_3_test_mse: ", poly_3_test_mse)
print("poly_3_train_mse: ", poly_3_train_mse)
poly_4_reg_pred_test = poly_4_reg.predict(poly_feature_4.fit_transform(X_test))
poly_4_reg_pred_train = poly_4_reg.predict(poly_feature_4.fit_transform(X_train))
poly_4_test_mse = mean_squared_error(y_test, poly_4_reg_pred_test)
poly_4_train_mse = mean_squared_error(y_train, poly_4_reg_pred_train)
print("poly_4_test_mse: ", poly_4_test_mse)
print("poly_4_train_mse: ", poly_4_train_mse)
poly_5_reg_pred_test = poly_5_reg.predict(poly_feature_5.fit_transform(X_test))
poly_5_reg_pred_train = poly_5_reg.predict(poly_feature_5.fit_transform(X_train))
poly_5_test_mse = mean_squared_error(y_test, poly_5_reg_pred_test)
poly_5_train_mse = mean_squared_error(y_train, poly_5_reg_pred_train)
print("poly_5_test_mse: ", poly_5_test_mse)
print("poly_5_train_mse: ", poly_5_train_mse)
###Output
poly_5_test_mse: 39.03481336258604
poly_5_train_mse: 58.02378001008437
###Markdown
MSE DataFrame
###Code
mse_df = pd.DataFrame({
"regressors": ['lin_reg', 'knn_3_reg', 'knn_5_reg', 'poly_2_reg', 'poly_3_reg', 'poly_4_reg', 'poly_5_reg'],
"train_mse": [lin_train_mse, knn_3_train_mse, knn_5_train_mse, poly_2_train_mse, poly_3_train_mse, poly_4_train_mse, poly_5_train_mse],
"test_mse": [lin_test_mse, knn_3_test_mse, knn_5_test_mse, poly_2_test_mse, poly_3_test_mse, poly_4_test_mse, poly_5_test_mse]
}).set_index("regressors")
mse_df
print("Columns:", mse_df.columns)
print("Index: ", mse_df.index)
# save mse_df as pickle
mse_df.to_pickle('mse.pkl')
###Output
_____no_output_____
###Markdown
Regression objects
###Code
reg_objects = [(lin_reg, None), (knn_3_reg, None), (knn_5_reg, None), (poly_2_reg, poly_feature_2),
(poly_3_reg, poly_feature_3), (poly_4_reg, poly_feature_4), (poly_5_reg, poly_feature_5)]
print(reg_objects)
# save reg_objects as pickle
filename = "reg.pkl"
with open(filename, 'wb') as file:
pickle.dump(reg_objects, file, pickle.HIGHEST_PROTOCOL)
###Output
[(LinearRegression(), None), (KNeighborsRegressor(n_neighbors=3), None), (KNeighborsRegressor(), None), (LinearRegression(), PolynomialFeatures(include_bias=False)), (LinearRegression(), PolynomialFeatures(degree=3, include_bias=False)), (LinearRegression(), PolynomialFeatures(degree=4, include_bias=False)), (LinearRegression(), PolynomialFeatures(degree=5, include_bias=False))]
###Markdown
Check saved Pickles contents
###Code
# check if pickles' contents are saved correctly
print("acc_list\n", pd.read_pickle("mse.pkl"), "\n")
print("reg_objects\n", pd.read_pickle("reg.pkl"))
###Output
acc_list
train_mse test_mse
regressors
lin_reg 242.340549 219.478389
knn_3_reg 38.901071 48.559748
knn_5_reg 46.106492 50.243837
poly_2_reg 90.584374 77.035997
poly_3_reg 70.646404 46.978378
poly_4_reg 58.027983 38.932580
poly_5_reg 58.023780 39.034813
reg_objects
[(LinearRegression(), None), (KNeighborsRegressor(n_neighbors=3), None), (KNeighborsRegressor(), None), (LinearRegression(), PolynomialFeatures(include_bias=False)), (LinearRegression(), PolynomialFeatures(degree=3, include_bias=False)), (LinearRegression(), PolynomialFeatures(degree=4, include_bias=False)), (LinearRegression(), PolynomialFeatures(degree=5, include_bias=False))]
###Markdown
Lab 3: Bayes Classifier and Boosting

Jupyter notebooks

In this lab, you can use Jupyter to get a nice layout of your code and plots in one document. However, you may also use Python as usual, without Jupyter. If you have Python and pip, you can install Jupyter with `sudo pip install jupyter`; otherwise, follow the official installation instructions. And that is everything you need!

Now use a terminal to go into the folder with the provided lab files. Then run `jupyter notebook` to start a session in that folder. Click `lab3.ipynb` in the browser window that appears to start this very notebook. Click on the cells in order and press either `ctrl+enter` or `run cell` in the toolbar above to evaluate all the expressions. Be sure to put `%matplotlib inline` at the top of every code cell where you call plotting functions, so the resulting plots appear inside the document.

Import the libraries

In Jupyter, select the cell below and press `ctrl + enter` to import the needed libraries. Check out `labfuns.py` if you are interested in the details.
###Code
import zipfile
with zipfile.ZipFile("lab3.zip", 'r') as zip_ref:
zip_ref.extractall("")
%matplotlib inline
import numpy as np
from scipy import misc
from imp import reload
from labfuns import *
import random
###Output
_____no_output_____
###Markdown
Bayes classifier functions to implement

The lab descriptions state what each function should do.
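For reference (our own summary, not part of the lab hand-out), the quantity computed in `classifyBayes` below is the class log-posterior of a Gaussian classifier with a per-class diagonal covariance, up to an additive constant:

$$\ln p(C_k \mid \mathbf{x}) = -\tfrac{1}{2}\ln|\Sigma_k| - \tfrac{1}{2}(\mathbf{x}-\mu_k)^\top \Sigma_k^{-1}(\mathbf{x}-\mu_k) + \ln p(C_k) + \text{const},$$

which corresponds to the `first_part`, `second_part` and `third_part` terms in the code.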
###Code
# NOTE: you do not need to handle the W argument for this part!
# in: labels - N vector of class labels
# out: prior - C x 1 vector of class priors
def computePrior(labels, W=None):
Npts = labels.shape[0]
if W is None:
W = np.ones((Npts,1))/Npts
else:
assert(W.shape[0] == Npts)
classes = np.unique(labels)
Nclasses = np.size(classes)
prior = np.zeros((Nclasses,1))
# TODO: compute the values of prior for each class!
# equation 12
for idx, k in enumerate(classes):
k_point_indexes = np.where(labels == k)
current_w = np.array(W[k_point_indexes, :])
prior[idx] = np.sum(current_w)
return prior
# NOTE: you do not need to handle the W argument for this part!
# in: X - N x d matrix of N data points
# labels - N vector of class labels
# out: mu - C x d matrix of class means (mu[i] - class i mean)
# sigma - C x d x d matrix of class covariances (sigma[i] - class i sigma)
def mlParams(X, labels, W=None):
assert(X.shape[0]==labels.shape[0])
Npts, Ndims = np.shape(X)
classes = np.unique(labels)
Nclasses = np.size(classes)
if W is None:
W = np.ones((Npts,1))/float(Npts)
mu = np.zeros((Nclasses,Ndims))
sigma = np.zeros((Nclasses,Ndims,Ndims))
# TODO: fill in the code to compute mu and sigma!
# ==========================
for idx, k in enumerate(classes):
k_point_indexes = np.where(labels == k)[0]
k_points = np.array(X[k_point_indexes, :])
current_w = np.array(W[k_point_indexes, :])
k_sum = np.sum(np.multiply(current_w, k_points), axis=0)
k_size = np.size(k_points, 0)
# mu
current_mu = k_sum / np.sum(current_w)
mu[idx] = current_mu
# sigma
eq_square = np.multiply(current_w, np.square(k_points - current_mu))
sum = eq_square.sum(axis=0)
current_sigma = (1./np.sum(current_w)) * sum
# creates a matrix having the values in its diagonal
diag = np.diag(current_sigma)
sigma[idx] = diag
# ==========================
return mu, sigma
# in: X - N x d matrix of M data points
# prior - C x 1 matrix of class priors
# mu - C x d matrix of class means (mu[i] - class i mean)
# sigma - C x d x d matrix of class covariances (sigma[i] - class i sigma)
# out: h - N vector of class predictions for test points
def classifyBayes(X, prior, mu, sigma):
Npts = X.shape[0]
Nclasses,Ndims = np.shape(mu)
logProb = np.zeros((Nclasses, Npts))
# TODO: fill in the code to compute the log posterior logProb!
# equation 11
# ==========================
for idx in range(Nclasses):
determinant = np.linalg.det(sigma[idx])
first_part = -0.5*np.log(determinant)
third_part = np.log(prior[idx])[0]
x_mu_sub = np.subtract(X, mu[idx]) # Npts * Ndim
result = np.dot(x_mu_sub, np.linalg.inv(sigma[idx])) # Ndim * Ndim
result = np.dot(result, np.transpose(x_mu_sub)) # Npts * Npts
second_part = -0.5*result.diagonal() # Npts * 1
logProb[idx] = first_part + second_part + third_part
# ==========================
# one possible way of finding max a-posteriori once
# you have computed the log posterior
h = np.argmax(logProb,axis=0)
return h
X, labels = genBlobs(centers=2)
mu, sigma = mlParams(X, labels)
print(f"ML-Mean: {mu}\n")
print(f"ML-Covariance: {sigma}\n")
# plot the Gaussian
plotGaussian(X, labels, mu, sigma)
prior = computePrior(labels)
print(prior)
###Output
[[0.5]
[0.5]]
###Markdown
The implemented functions can now be summarized in the `BayesClassifier` class, which we will use later to test the classifier; no need to add anything else here:
###Code
classifyBayes(X, prior, mu, sigma)
# NOTE: no need to touch this
class BayesClassifier(object):
def __init__(self):
self.trained = False
def trainClassifier(self, X, labels, W=None):
rtn = BayesClassifier()
rtn.prior = computePrior(labels, W)
rtn.mu, rtn.sigma = mlParams(X, labels, W)
rtn.trained = True
return rtn
def classify(self, X):
return classifyBayes(X, self.prior, self.mu, self.sigma)
###Output
_____no_output_____
###Markdown
Test the Maximum Likelihood estimates

Call `genBlobs` and `plotGaussian` to verify your estimates.
###Code
%matplotlib inline
X, labels = genBlobs(centers=5)
mu, sigma = mlParams(X,labels)
plotGaussian(X,labels,mu,sigma)
###Output
_____no_output_____
###Markdown
Assignment 3

Call the `testClassifier` and `plotBoundary` functions for this part.
###Code
testClassifier(BayesClassifier(), dataset='iris', split=0.7)
testClassifier(BayesClassifier(), dataset='vowel', split=0.7)
%matplotlib inline
plotBoundary(BayesClassifier(), dataset='iris',split=0.7)
###Output
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
###Markdown
Answer the following questions:

> (1) When can a feature independence assumption be reasonable and when not?

Two events are independent if their joint distribution can be factorized into the product of the marginals, which means that B happening tells us nothing about A happening. The assumption is reasonable when the features carry largely unrelated information about the class, and less so when the features are strongly correlated given the class.

> (2) How does the decision boundary look for the Iris dataset? How could one improve the classification results for this scenario by changing the classifier or, alternatively, manipulating the data?

This dataset has 3 different classes. Looking at the plot above, we can see that one of the classes is easily separable from the others, even with a line, while the remaining two are not. So working with classes that are separable improves the classification result, and choosing attributes that differ clearly between class 1 and class 2 can also improve our model. Because this is not always possible, we can also change the classifier to a more complex one that fits the data less smoothly. For example, we could use a single decision tree, which can distinguish more than 2 classes at the same time and by nature has high variance.

Boosting functions to implement

The lab descriptions state what each function should do.
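For reference (our own summary, in our own notation rather than the lab PDF's numbered equations), the boosting loop below follows the usual AdaBoost-style updates. The weighted error and vote weight of round $t$ are

$$\epsilon_t = \sum_{n=1}^N w_n^{(t)}\,\bigl(1-\delta(h_t(\mathbf{x}_n), y_n)\bigr), \qquad \alpha_t = \tfrac{1}{2}\bigl(\ln(1-\epsilon_t) - \ln \epsilon_t\bigr),$$

and the point weights are then scaled by $e^{-\alpha_t}$ for correctly classified points and by $e^{\alpha_t}$ for misclassified ones, followed by a renormalization so that they sum to one.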
###Code
X, labels = genBlobs(centers=2)
weights = np.ones((len(labels),1))/float(len(labels))
mu, sigma = mlParams(X, labels, weights)
print(mu)
print(sigma)
# in: base_classifier - a classifier of the type that we will boost, e.g. BayesClassifier
# X - N x d matrix of N data points
# labels - N vector of class labels
# T - number of boosting iterations
# out: classifiers - (maximum) length T Python list of trained classifiers
# alphas - (maximum) length T Python list of vote weights
def trainBoost(base_classifier, X, labels, T=10):
# these will come in handy later on
Npts, Ndims = np.shape(X)
classifiers = [] # append new classifiers to this list
alphas = [] # append the vote weight of the classifiers to this list
# The weights for the first iteration
wCur = np.ones((Npts,1))/float(Npts)
for i_iter in range(0, T):
# a new classifier can be trained like this, given the current weights
classifiers.append(base_classifier.trainClassifier(X, labels, wCur))
# do classification for each point
vote = classifiers[-1].classify(X) # vote = result of function h
# TODO: Fill in the rest, construct the alphas etc.
# ==========================
# delta function
# true if the vote is correct, false otherwise
vote_correct = np.reshape((vote == labels), (Npts, 1))
# 1 if the vote is correct, 0 otherwise
vote_sign = np.where(vote_correct == True, 1, 0)
# weighted error
error = np.multiply(wCur, 1-vote_sign)
# add an epsilon to avoid division by 0
sum_error = np.sum(error) + 1e-20
# calculate alpha
alpha = 0.5 * (np.log(1 - sum_error) - np.log(sum_error))
alphas.append(alpha) # you will need to append the new alpha
# -1 if the vote is correct, 1 otherwise
exp_sign = np.where(vote_correct == True, -1.0, 1.0)
norm_factor = np.sum(wCur)
# update weights
wCur = wCur * np.exp(exp_sign * alpha)
wCur /= norm_factor
# ==========================
return classifiers, alphas
# in: X - N x d matrix of N data points
# classifiers - (maximum) length T Python list of trained classifiers as above
# alphas - (maximum) length T Python list of vote weights
# Nclasses - the number of different classes
# out: yPred - N vector of class predictions for test points
def classifyBoost(X, classifiers, alphas, Nclasses):
Npts = X.shape[0]
Ncomps = len(classifiers)
# if we only have one classifier, we may just classify directly
if Ncomps == 1:
return classifiers[0].classify(X)
else:
votes = np.zeros((Npts,Nclasses))
num_classifiers = len(alphas)
classifications = np.zeros((num_classifiers, Npts))
for t in range(num_classifiers):
alpha = alphas[t]
classification = classifiers[t].classify(X)
classifications[t] = classification
for i in range(len(X)):
for t in range(num_classifiers):
pred_class = int(classifications[t][i])
votes[i][pred_class] += alphas[t]
# one way to compute yPred after accumulating the votes
# equation 15
return np.argmax(votes, axis=1)
"""classifiers, alphas = trainBoost(BayesClassifier, X, labels)
print(" --- ALPHAS ---\n")
print(alphas)
print("\n\n --- CLASSIFY BOOST ---\n")
classifyBoost(X, classifiers, alphas, 2)"""
###Output
_____no_output_____
###Markdown
The implemented functions can now be summarized in another classifier, the `BoostClassifier` class. This class enables boosting different types of classifiers by initializing it with the `base_classifier` argument. No need to add anything here.
###Code
# NOTE: no need to touch this
class BoostClassifier(object):
def __init__(self, base_classifier, T=10):
self.base_classifier = base_classifier
self.T = T
self.trained = False
def trainClassifier(self, X, labels):
rtn = BoostClassifier(self.base_classifier, self.T)
rtn.nbr_classes = np.size(np.unique(labels))
rtn.classifiers, rtn.alphas = trainBoost(self.base_classifier, X, labels, self.T)
rtn.trained = True
return rtn
def classify(self, X):
return classifyBoost(X, self.classifiers, self.alphas, self.nbr_classes)
###Output
_____no_output_____
###Markdown
Assignment 5: Run some experiments

Call the `testClassifier` and `plotBoundary` functions for this part.
###Code
testClassifier(BoostClassifier(BayesClassifier(), T=10), dataset='iris',split=0.7)
testClassifier(BoostClassifier(BayesClassifier(), T=10), dataset='vowel',split=0.7)
%matplotlib inline
print("Naive Bayes:")
plotBoundary(BayesClassifier(), dataset='iris',split=0.7)
print("Boosted Bayes:")
plotBoundary(BoostClassifier(BayesClassifier()), dataset='iris',split=0.7)
###Output
Naive Bayes:
###Markdown
Bayes classifiers

> (1) Is there any improvement in classification accuracy? Why/why not?

Yes, there is!

| Dataset | NBayes | With Boost |
| ------- | ------ | ---------- |
| iris    | 89%    | 94.6%      |
| vowel   | 64.7%  | 79.8%      |

The boosted models are better because we create multiple models and use weights derived from the weaker models to compute better ones next. For each model, we calculate the error (which depends on the points that were misclassified) and give the misclassified points a higher weight, so that the following model is forced to fit those points better.

> (2) Plot the decision boundary of the boosted classifier on Iris and compare it with that of the basic. What differences do you notice? Is the boundary of the boosted version more complex?

We notice that the first model is much smoother, and therefore much more general. Separating the purple class from the others can be done easily with a line, so even the first model is able to do that. But the green and the red classes require a more complex boundary to separate. The naive solution is not able to model that (due to its high-bias / low-variance nature). The boosted version reduces the bias by incorporating different models that take the previous models' errors into consideration. This forces the models to fit the data in different ways, increasing the quality of the overall fit.

> (3) Can we make up for not using a more advanced model in the basic classifier (e.g. independent features) by using boosting?

In our opinion, yes. Even when using the same predictors, the boosting algorithm forces each model to differ from the others by modelling the erroneous points of its predecessors more closely. This by itself pushes the successive models apart from each other.

Assignment 6

Now repeat the steps with a decision tree classifier.
###Code
testClassifier(DecisionTreeClassifier(), dataset='iris', split=0.7)
testClassifier(BoostClassifier(DecisionTreeClassifier(), T=10), dataset='iris',split=0.7)
testClassifier(DecisionTreeClassifier(), dataset='vowel',split=0.7)
testClassifier(BoostClassifier(DecisionTreeClassifier(), T=10), dataset='vowel',split=0.7)
%matplotlib inline
plotBoundary(DecisionTreeClassifier(), dataset='iris',split=0.7)
%matplotlib inline
plotBoundary(BoostClassifier(DecisionTreeClassifier(), T=10), dataset='iris',split=0.7)
###Output
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
###Markdown
Decision trees

> (1) Is there any improvement in classification accuracy? Why/why not?

Yes, there is. Below are the accuracies of the decision trees.

| Dataset | DTree | With Boost |
| ------- | ----- | ---------- |
| iris    | 92.4% | 94.6%      |
| vowel   | 64.1% | 86.8%      |

For a fair comparison, we also include the accuracies of the Bayes models.

| Dataset | NBayes | With Boost |
| ------- | ------ | ---------- |
| iris    | 89%    | 94.6%      |
| vowel   | 64.7%  | 79.8%      |

It looks like decision trees by nature cope better with the provided Iris data than the Bayes classifiers do. In our opinion, this is because decision trees generally have higher variance, which makes it easier to fit the data. A single, really simple decision tree can model the Iris dataset relatively well; boosting leads to an additional `2.2%` accuracy improvement, which is not that much, as boosting primarily helps with high-bias models. In the case of the Bayes classifiers (higher bias), boosting improves the base performance by `5.6%`.

The Vowel dataset has more instances (more than 500 records compared to 150 in Iris) and more predictors (10 compared to 4 in Iris). This is harder to model with a simple decision tree, because there are more correlations between the attributes, which are harder to capture correctly with a single high-variance algorithm. This is why the naive Bayes model outperforms the simple decision tree here. It is also why the boosted tree improves by `22.7%` compared to the `15.1%` improvement of Bayes (despite the fact that boosting generally helps more with high-bias models).

> (2) Plot the decision boundary of the boosted classifier on Iris and compare it with that of the basic. What differences do you notice? Is the boundary of the boosted version more complex?

The first tree basically models the Iris data with two vertical lines. The boosted version fits the dataset much more closely (as expected). It is worth mentioning that while both the boosted and the simple Bayes classifiers try to find a curved boundary, decision trees use more elongated, axis-aligned lines.

> (3) Can we make up for not using a more advanced model in the basic classifier (e.g. independent features) by using boosting?

In our opinion, yes. Even when using the same predictors, the boosting algorithm forces each model to differ from the others by modelling the erroneous points of its predecessors more closely. This by itself pushes the successive models apart from each other.

Assignment 7

> If you had to pick a classifier, naive Bayes or a decision tree or the boosted versions of these, which one would you pick? Motivate from the following criteria.

- outliers: the Bayes model should be better because its variance is lower than that of a decision tree (which typically tends to overfit). So while a decision tree would try to fit all of the points, the Bayes model simply wouldn't be able to do so.
- irrelevant inputs: in a Bayes model, for an irrelevant attribute, each value of the attribute will be roughly equally distributed over all of the classes (assuming that we have a good dataset), due to the independence assumption. In a decision tree, this might also be the case since the algorithm tries to choose the attributes with the lowest entropy (highest information gain). However, depending on the feature selection, a decision tree might end up choosing an irrelevant feature as a split due to dependencies inside a branch, thereby modelling accidental correlations.
- predictive power: it seems that for a small number of attributes, the predictive power of the decision tree is stronger. When the number of predictors increases, the Bayesian approach seems to be more accurate.
- mixed types of data: binary vs categorical shouldn't be a problem (both can handle it). Continuous data also works for both, since the input is just a collection of points.
- scalability:
  - dimension of the data (D): we can see from the tables above (and also from the bonus assignment) that when the number of attributes increases, the performance of the decision tree compared to the Bayes model becomes worse and worse (even though it is better when the number of attributes is low). From this we conclude that decision trees are more affected by the curse of dimensionality.
  - number of instances (N): Bayes classifiers perform well with smaller amounts of training data because of the feature-independence assumption. This assumption often does not hold, but Bayes models can still work quite well on many datasets.

Conclusion:

Finally, the boosted versions of these models seem to always perform at least as well as the individual models, so they are the obvious choice. To conclude, we would choose a boosted version of the Bayes model in general, because it seems to perform better and to handle input features effectively (especially when the number of attributes is large). However, if it is possible to look at the data beforehand, we can make a better decision about which kind of algorithm to use based on all of these criteria.

Bonus: Visualize faces classified using boosted decision trees

Note that this part of the assignment is completely voluntary! First, let's check how a boosted decision tree classifier performs on the Olivetti data. Note that we need to reduce the dimension a bit using PCA, as the original dimension of the image vectors is `64 x 64 = 4096` elements.
###Code
testClassifier(BayesClassifier(), dataset='olivetti',split=0.7, dim=20)
testClassifier(BoostClassifier(BayesClassifier(), T=10), dataset='olivetti',split=0.7, dim=20)
testClassifier(DecisionTreeClassifier(), dataset='olivetti',split=0.7, dim=20)
testClassifier(BoostClassifier(DecisionTreeClassifier(), T=10), dataset='olivetti',split=0.7, dim=20)
###Output
Trial: 0 Accuracy 73.3
Trial: 10 Accuracy 74.2
Trial: 20 Accuracy 77.5
Trial: 30 Accuracy 67.5
Trial: 40 Accuracy 70.8
Trial: 50 Accuracy 66.7
Trial: 60 Accuracy 79.2
Trial: 70 Accuracy 75.8
Trial: 80 Accuracy 71.7
Trial: 90 Accuracy 69.2
Final mean classification accuracy 72 with standard deviation 3.93
###Markdown
You should get an accuracy of around 70%. If you wish, you can compare this with using pure decision trees or a boosted Bayes classifier. Not too bad! Now let's try to classify a face as belonging to one of 40 persons.
###Code
%matplotlib inline
X,y,pcadim = fetchDataset('olivetti') # fetch the olivetti data
xTr,yTr,xTe,yTe,trIdx,teIdx = trteSplitEven(X,y,0.7) # split into training and testing
pca = decomposition.PCA(n_components=20) # use PCA to reduce the dimension to 20
pca.fit(xTr) # use training data to fit the transform
xTrpca = pca.transform(xTr) # apply on training data
xTepca = pca.transform(xTe) # apply on test data
# use our pre-defined decision tree classifier together with the implemented
# boosting to classify data points in the training data
classifier = BayesClassifier().trainClassifier(xTrpca, yTr)
yPr = classifier.classify(xTepca)
# choose a test point to visualize
testind = random.randint(0, xTe.shape[0]-1)
# visualize the test point together with the training points used to train
# the class that the test point was classified to belong to
visualizeOlivettiVectors(xTr[yTr == yPr[testind],:], xTe[testind,:])
###Output
_____no_output_____
###Markdown
Lab 3: Expectation Maximization and Variational Autoencoder

Machine Learning 2 (2017/2018)

* The lab exercises should be made in groups of two or three people.
* The deadline is Friday, 01.06.
* Assignments should be submitted through BlackBoard! Make sure to include your and your teammates' names with the submission.
* Attach the .IPYNB (IPython Notebook) file containing your code and answers. The file should be named "studentid1\_studentid2\_lab", for example "12345\_12346\_lab1.ipynb". Only use underscores ("\_") to connect ids, otherwise the files cannot be parsed.

Notes on implementation:

* You should write your code and answers in an IPython Notebook: http://ipython.org/notebook.html. If you have problems, please ask.
* Use __one cell__ for code and markdown answers only!
  * Put all code in the cell with the ``` YOUR CODE HERE``` comment and overwrite the ```raise NotImplementedError()``` line.
  * For theoretical questions, put your solution using LaTeX-style formatting in the YOUR ANSWER HERE cell.
* Among the first lines of your notebook should be "%pylab inline". This imports all required modules, and your plots will appear inline.
* Large parts of your notebook will be graded automatically. Therefore it is important that your notebook can be run completely without errors and within a reasonable time limit. To test your notebook before submission, select Kernel -> Restart \& Run All.

$\newcommand{\bx}{\mathbf{x}} \newcommand{\bpi}{\mathbf{\pi}} \newcommand{\bmu}{\mathbf{\mu}} \newcommand{\bX}{\mathbf{X}} \newcommand{\bZ}{\mathbf{Z}} \newcommand{\bz}{\mathbf{z}}$

Installing PyTorch

In this lab we will use PyTorch. PyTorch is an open-source deep learning framework primarily developed by Facebook's artificial-intelligence research group. In order to install PyTorch in your conda environment, go to https://pytorch.org and select your operating system, conda, Python 3.6, no CUDA. Copy the text from the "Run this command:" box. Now open a terminal and activate your 'ml2labs' conda environment. Paste the text and run. After the installation is done you should restart Jupyter.

MNIST data

In this lab we will use several methods for unsupervised learning on the MNIST dataset of written digits. The dataset contains digital images of handwritten numbers $0$ through $9$. Each image has 28x28 pixels that each take 256 values in a range from white ($= 0$) to black ($= 1$). The labels belonging to the images are also included. Fortunately, PyTorch comes with a MNIST data loader. The first time you run the box below it will download the MNIST data set. That can take a couple of minutes.

The main data types in PyTorch are tensors. For Part 1, we will convert those tensors to numpy arrays. In Part 2, we will use the torch module to directly work with PyTorch tensors.
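As a tiny illustration (ours, not part of the assignment) of moving between PyTorch tensors and numpy arrays:

```python
# minimal sketch: tensors vs numpy arrays and checking shapes
import torch

t = torch.zeros(2, 28, 28)           # two MNIST-sized images
a = t.numpy()                        # convert to a numpy array
print(type(a), a.shape)              # <class 'numpy.ndarray'> (2, 28, 28)
print(torch.from_numpy(a).size())    # back to a tensor: torch.Size([2, 28, 28])
print(t.view(2, -1).size())          # flattened, as used in Part 1: torch.Size([2, 784])
```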
###Code
%pylab inline
import torch
from torchvision import datasets, transforms
train_dataset = datasets.MNIST('../data', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
train_labels = train_dataset.train_labels.numpy()
train_data = train_dataset.train_data.numpy()
# For EM we will use flattened data
train_data = train_data.reshape(train_data.shape[0], -1)
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Part 1: Expectation Maximization

We will use the Expectation Maximization (EM) algorithm for the recognition of handwritten digits in the MNIST dataset. The images are modelled as a Bernoulli mixture model (see Bishop $\S9.3.3$):

$$p(\bx|\bmu, \bpi) = \sum_{k=1}^K \pi_k \prod_{i=1}^D \mu_{ki}^{x_i}(1-\mu_{ki})^{(1-x_i)}$$

where $x_i$ is the value of pixel $i$ in an image, $\mu_{ki}$ represents the probability that pixel $i$ in class $k$ is black, and $\{\pi_1, \ldots, \pi_K\}$ are the mixing coefficients of the classes in the data. We want to use this data set to classify new images of handwritten numbers.

1.1 Binary data (5 points)

Since we want to apply our Bernoulli mixture model, write a function `binarize` that converts the (flattened) MNIST data to binary images, where each pixel $x_i \in \{0,1\}$, by thresholding at an appropriate level.
###Code
def binarize(X):
return 1. * (X > 128)
# Test test test!
bin_train_data = binarize(train_data)
assert bin_train_data.dtype == np.float
assert bin_train_data.shape == train_data.shape
###Output
_____no_output_____
###Markdown
Sample a few images of digits $2$, $3$ and $4$; and show both the original and the binarized image together with their label.
###Code
for digit in [2,3,4]:
fig = plt.figure(figsize=(15,10))
index = np.where(train_labels == digit)[0][:2]
plt.subplot(141)
plt.imshow(train_data[index[0]].reshape(28,28),cmap='gray')
plt.title("Label:" + str(train_labels[index[0]]) + ", Non Binarized")
plt.subplot(142)
plt.imshow(bin_train_data[index[0]].reshape(28,28),cmap='gray')
plt.title("Label:" + str(train_labels[index[0]])+ ", Binarized")
plt.subplot(143)
plt.imshow(train_data[index[1]].reshape(28,28),cmap='gray')
plt.title("Label:" + str(train_labels[index[1]]) + ", Non Binarized")
plt.subplot(144)
plt.imshow(bin_train_data[index[1]].reshape(28,28),cmap='gray')
plt.title("Label:" + str(train_labels[index[1]])+ ", Binarized")
plt.show()
###Output
_____no_output_____
###Markdown
1.2 Implementation (40 points)

You are going to write a function ```EM(X, K, max_iter)``` that implements the EM algorithm on the Bernoulli mixture model. The only parameters the function has are:

* ```X``` :: (NxD) array of input training images
* ```K``` :: size of the latent space
* ```max_iter``` :: maximum number of iterations, i.e. one E-step and one M-step

You are free to specify your return statement. Make sure you use a sensible way of terminating the iteration process early to prevent unnecessarily running through all epochs. Vectorize computations using ```numpy``` as much as possible.

You should implement `E_step(X, mu, pi)` and `M_step(X, gamma)` separately in the functions defined below. These you can then use in your function `EM(X, K, max_iter)`.
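For reference (our own summary, following Bishop $\S9.3.3$), the two steps implemented below are the responsibilities

$$\gamma(z_{nk}) = \frac{\pi_k \prod_{i=1}^D \mu_{ki}^{x_{ni}}(1-\mu_{ki})^{1-x_{ni}}}{\sum_{j=1}^K \pi_j \prod_{i=1}^D \mu_{ji}^{x_{ni}}(1-\mu_{ji})^{1-x_{ni}}}$$

(E-step), and the parameter updates

$$N_k = \sum_{n=1}^N \gamma(z_{nk}), \qquad \bmu_k = \frac{1}{N_k}\sum_{n=1}^N \gamma(z_{nk})\,\bx_n, \qquad \pi_k = \frac{N_k}{N}$$

(M-step).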
###Code
def E_step(X, mu, pi):
# YOUR CODE HERE
N, D = shape(X)
K = shape(pi)[0]
gamma = np.zeros((N,K))
for n in range(N):
gamma_n = np.zeros(K)
for k in range(K):
p1_array = mu[k,:]**X[n,:]
p0_array = (1-mu[k,:])**(1-X[n,:])
p_array = p1_array * p0_array
gamma_n[k] = pi[k]*np.prod(p_array)
normalizing_denom_n = np.sum(gamma_n)
gamma_n /= normalizing_denom_n
# print(np.sum(gamma_n)) # SHOULD BE 1
gamma[n] = gamma_n
return gamma
# Let's test on 5 datapoints
n_test = 5
X_test = bin_train_data[:n_test]
D_test, K_test = X_test.shape[1], 10
np.random.seed(2018)
mu_test = np.random.uniform(low=.25, high=.75, size=(K_test,D_test))
pi_test = np.ones(K_test) / K_test
gamma_test = E_step(X_test, mu_test, pi_test)
assert gamma_test.shape == (n_test, K_test)
def M_step(X, gamma):
# YOUR CODE HERE
N, D = shape(X)
K = shape(gamma)[1]
Nk = np.sum(gamma, axis=0)
pi = Nk/N
mu = np.zeros((K, D))
for k in range(K):
mu[k] = (np.transpose(gamma[:,k]) @ X) / Nk[k]
return mu, pi
# Oh, let's test again
mu_test, pi_test = M_step(X_test, gamma_test)
assert mu_test.shape == (K_test,D_test)
assert pi_test.shape == (K_test, )
def EM(X, K, max_iter, mu=None, pi=None):
N, D = shape(X)
convergence_value = 1e-5
gamma = np.zeros((N, K))
if mu is None:
mu = np.random.uniform(low=.25, high=.75, size=(K, D))
if pi is None:
pi = np.ones(K) / K
for i in range(1, max_iter):
mu_prev, pi_prev = mu, pi
gamma = E_step(X, mu, pi)
mu, pi = M_step(X, gamma)
mu_update = np.linalg.norm(mu-mu_prev)
pi_update = np.linalg.norm(pi-pi_prev)
if i%10 == 0:
print("Iteration ", i, "| delta_mu = %.7f" % mu_update, "| delta_pi = %.7f" % pi_update, "|")
if mu_update < convergence_value and pi_update < convergence_value:
print("Convergence reached at iteration ", i, ", early stopping!")
return gamma, mu, pi
return gamma, mu, pi
# EXPERIMENT with 10 classes, TO BE REMOVED
# gamma, mu, pi = EM(bin_train_data[:10000, :], 10, 100) #Subset
# fig = plt.figure(figsize=(15,10))
# for k in range(9):
# plt.subplot(191+k)
# plt.imshow(mu[k].reshape(28,28),cmap='gray') #Greys swaps colors
# plt.title("Weights for Class:"+str(k))
# plt.show()
###Output
_____no_output_____
###Markdown
1.3 Three digits experiment (10 points)

In analogy with Bishop $\S9.3.3$, sample a training set consisting of only __binary__ images of the written digits $2$, $3$, and $4$. Run your EM algorithm and show the reconstructed digits.
###Code
# YOUR CODE HERE
mask = np.isin(train_labels, [2,3,4])
reduced_train_labels = train_labels[mask]
reduced_train_data = bin_train_data[mask]
# print(shape(reduced_train_data))
gamma3, mu3, pi3 = EM(reduced_train_data, 3, 100)
fig = plt.figure(figsize=(15,10))
for k in range(3):
plt.subplot(131+k)
plt.imshow(mu3[k].reshape(28,28),cmap='gray') #Greys swaps colors
plt.title("Weights for Class:"+str(k))
plt.show()
from scipy.stats import itemfreq
frequencies = itemfreq(reduced_train_labels)[:, 1].flatten()
print('Mixture Coefficients (pi): {}'.format(pi3))
print('Class-frequencies in the dataset:')
print('; '.join(['{}: {}'.format(d, frequencies[i]/len(reduced_train_labels)) for i, d in enumerate([2, 3, 4])]))
###Output
Iteration 10 | delta_mu = 0.0598935 | delta_pi = 0.0055431 |
Iteration 20 | delta_mu = 0.0041073 | delta_pi = 0.0003765 |
Iteration 30 | delta_mu = 0.0003832 | delta_pi = 0.0000362 |
Iteration 40 | delta_mu = 0.0000223 | delta_pi = 0.0000020 |
Iteration 50 | delta_mu = 0.0003400 | delta_pi = 0.0000329 |
Iteration 60 | delta_mu = 0.0000129 | delta_pi = 0.0000012 |
Convergence reached at iteration 61 , early stopping!
###Markdown
Can you identify which element in the latent space corresponds to which digit? What are the identified mixing coefficients for digits $2$, $3$ and $4$, and how do these compare to the true ones?

It is evident which element in the latent space corresponds to which digit class. In the pictures above we show how, for the different digits, the regions with high weight values correspond to the regions representing the shape of that particular digit. The mixing coefficients ($\pi$) follow a roughly uniform distribution over the 3 classes and resemble the distribution of the classes in the dataset.

1.4 Experiments (20 points)

Perform the follow-up experiments listed below using your implementation of the EM algorithm. For each of these, describe/comment on the obtained results and give an explanation. You may still use your dataset with only digits 2, 3 and 4, as otherwise computations can take very long.

1.4.1 Size of the latent space (5 points)

Run EM with $K$ larger or smaller than the true number of classes. Describe your results.
###Code
K = 2
_, mu2, _ = EM(reduced_train_data, K, 100)
fig = plt.figure(figsize=(15,10))
for k in range(K):
plt.subplot(121+k)
plt.imshow(mu2[k].reshape(28,28),cmap='gray') #Greys swaps colors
plt.title("Weights for Class:"+str(k))
plt.show()
K = 8
_, mu8, _ = EM(reduced_train_data, K, 100)
fig2 = plt.figure(figsize=(15,10))
for k in range(K):
plt.subplot(181+k)
plt.imshow(mu8[k].reshape(28,28),cmap='gray') #Greys swaps colors
plt.title("Weights for Class:"+str(k))
plt.show()
###Output
Iteration 10 | delta_mu = 0.0837468 | delta_pi = 0.0064696 |
Iteration 20 | delta_mu = 0.0490827 | delta_pi = 0.0055646 |
Iteration 30 | delta_mu = 0.0631869 | delta_pi = 0.0060857 |
Iteration 40 | delta_mu = 0.0242119 | delta_pi = 0.0021445 |
Iteration 50 | delta_mu = 0.0003025 | delta_pi = 0.0000236 |
Iteration 60 | delta_mu = 0.0000489 | delta_pi = 0.0000062 |
Convergence reached at iteration 64 , early stopping!
###Markdown
When the number of latent classes K is lower than the true number of classes, some of the classes (the most similar ones) appear to be clustered together. For example, using K=2 for classes [2,3,4], the components for digits 2 and 3 (which are more similar) are merged together, resulting in a blurry weight matrix. On the other hand, when there are more latent classes than true classes, the components represent digits from the same class with different shapes. That is, subclusters of the larger cluster corresponding to a digit are created and represented with different weights.

1.4.2 Identify misclassifications (10 points)

How can you use the data labels to assign a label to each of the clusters/latent variables? Use this to identify images that are 'misclassified' and try to understand why they are. Report your findings.
###Code
from collections import defaultdict
def test_classification(data, labels, classes, assigned_classes, gamma):
predictions = np.zeros(len(data), dtype=int)
for i in range(len(data)):
predictions[i] = assigned_classes[np.argmax(gamma[i, :])]
misclassified_indexes = predictions != labels
number_of_errors = sum(misclassified_indexes)
print('{} misclassified images out of {}'.format(number_of_errors, len(data)))
sampled_indexes = np.where(misclassified_indexes)[0]
np.random.shuffle(sampled_indexes)
rows = 5
columns = 5
sampled_indexes = sampled_indexes[:rows*columns]
fig, axes = plt.subplots(rows, columns, figsize=(15, rows*4))
for i in range(len(sampled_indexes)):
ax = axes[i//columns, i % columns]
ax.imshow(data[sampled_indexes[i]].reshape(28,28),cmap='gray') #Greys swaps colors
ax.set_title("{} misclassified as {}".format(labels[sampled_indexes[i]], predictions[sampled_indexes[i]]))
plt.show()
classes = [2, 3, 4]
scores = [defaultdict(float) for i in range(len(classes))]
for x, l in zip(reduced_train_data, reduced_train_labels):
for k in range(len(classes)):
#log_prob = sum(np.log(mu[k][x == 1])) + sum(np.log(1 - mu[k][x != 1]))
#scores[k][l] += log_prob
scores[k][l] += -sum((mu3[k] - x)**2)
scores = [dict(scores[i]) for i in range(len(classes))]
assigned_classes = [max(scores[i], key=scores[i].get) for i in range(len(classes))]
# print(scores)
# print(assigned_classes)
test_classification(reduced_train_data, reduced_train_labels, classes, assigned_classes, gamma3)
###Output
1732 misclassified images out of 17931
###Markdown
From the plots we show, we see how recurring characteristics and shapes of different digits cause them to be classified wrongly. For example, we notice that "2" digits which are shifted upwards are often misclassified as "4". This probably happens because the number "4" has a characteristic horizontal line in the middle, while a "2" usually has its horizontal line in the bottom part. Similarly, "3" digits that appear sheared/rotated, taking on an elongated shape from top-right to bottom-left, are misclassified as "2", whose shape specifically contains a main diagonal line spanning top-right to bottom-left, with a rounded part on top which it has in common with the digit "3" and which therefore cannot be used to discriminate between the two classes.

On the other hand, we notice that few "4" digits are misclassified. Why is it more common, for example, to have a "2" classified as a "4" than a "4" as a "2"? One explanation could be that the digit "4" contains characteristic vertical lines which are never found in "2" and "3", in particular the straight line at the top left.

1.4.3 Initialize with true values (5 points)

Initialize the three classes with the true values of the parameters and see what happens. Report your results.
###Code
frequencies = itemfreq(reduced_train_labels)
pi = frequencies[:, 1].astype(float) / len(reduced_train_labels)
K = len(classes)
mu = np.zeros((K, 28*28))
classes_map = {}
for i, c in enumerate(classes):
classes_map[c] = i
classes_count = np.zeros(K)
for x, l in zip(reduced_train_data, reduced_train_labels):
mu[classes_map[l], :] += x
classes_count[classes_map[l]] += 1
mu = mu / classes_count.reshape(-1, 1)
gamma_i, mu_i, pi_i = EM(reduced_train_data, K, 100, mu=mu, pi=pi)
test_classification(reduced_train_data, reduced_train_labels, classes, classes, gamma_i)
###Output
Iteration 10 | delta_mu = 0.0006934 | delta_pi = 0.0000136 |
Iteration 20 | delta_mu = 0.0000062 | delta_pi = 0.0000004 |
Convergence reached at iteration 20 , early stopping!
1596 misclassified images out of 17931
###Markdown
By initializing the three classes with the true parameters we see a small improvement in accuracy, of between 5 and 10 percent. This is probably because random initialization increases the chance of getting stuck in a local optimum, compared with initialization at the true parameters. Regarding the types of misclassification errors, we see the same patterns described in the previous experiment.

In addition, initializing the classes this way significantly speeds up convergence. Using a convergence threshold of $1e-5$ for the $\mu$ and $\pi$ updates, the random-initialization approach usually takes between 30 and 80 EM iterations, while initialization with the true parameters results in convergence after 10 to 30 iterations.

Part 2: Variational Auto-Encoder

A Variational Auto-Encoder (VAE) is a probabilistic model $p(\bx, \bz)$ over observed variables $\bx$ and latent variables and/or parameters $\bz$. Here we distinguish the decoder part, $p(\bx | \bz) p(\bz)$, and an encoder part $p(\bz | \bx)$, which are both specified with a neural network. A lower bound on the log marginal likelihood $\log p(\bx)$ can be obtained by approximately inferring the latent variables $\bz$ from the observed data $\bx$ using an encoder distribution $q(\bz| \bx)$ that is also specified as a neural network. This lower bound is then optimized to fit the model to the data. The model was introduced by Diederik Kingma (during his PhD at the UvA) and Max Welling in 2013, https://arxiv.org/abs/1312.6114. Since it is such an important model, there are plenty of well-written tutorials that should help you with the assignment, e.g. https://jaan.io/what-is-variational-autoencoder-vae-tutorial/.

In the following, we will make heavy use of the torch module, https://pytorch.org/docs/stable/index.html. Most of the time, replacing `np.` with `torch.` will do the trick, e.g. `np.sum` becomes `torch.sum` and `np.log` becomes `torch.log`. In addition, we will use `torch.FloatTensor()` as an equivalent to `np.array()`. In order to train our VAE efficiently we will make use of batching. The number of data points in a batch will become the first dimension of our data tensor, e.g. a batch of 128 MNIST images has the dimensions [128, 1, 28, 28]. To check the dimensions of a tensor you can call `.size()`.

2.1 Loss function

The objective function (variational lower bound) that we will use to train the VAE consists of two terms: a log Bernoulli loss (reconstruction loss) and a Kullback–Leibler divergence. We implement the two terms separately and combine them in the end. As seen in Part 1: Expectation Maximization, we can use a multivariate Bernoulli distribution to model the likelihood $p(\bx | \bz)$ of black-and-white images. Formally, the variational lower bound is maximized, but in PyTorch we always minimize; therefore we need to calculate the negative log Bernoulli loss and the Kullback–Leibler divergence.

2.1.1 Negative Log Bernoulli loss (5 points)

The negative log Bernoulli loss is defined as

\begin{align}
loss = - \sum_i^D \left( \bx_i \log \hat{\bx_i} + (1 - \bx_i) \log(1 - \hat{\bx_i}) \right).
\end{align}

Write a function `log_bernoulli_loss` that takes a D-dimensional vector `x` and its reconstruction `x_hat` and returns the negative log Bernoulli loss. Make sure that your function works for batches of arbitrary size.
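As a quick sanity check (ours, not part of the assignment), the loss above is just binary cross-entropy summed over all elements, so a manual implementation should agree with `F.binary_cross_entropy` using a sum reduction (the older `size_average=False` argument used below behaves the same way):

```python
# minimal sketch: manual negative log Bernoulli loss vs. summed binary cross-entropy
import torch
from torch.nn import functional as F

x = torch.tensor([[0., 1., 1.], [1., 0., 1.]])
x_hat = torch.tensor([[0.1, 0.8, 0.6], [0.7, 0.2, 0.9]])

manual = -(x * torch.log(x_hat) + (1 - x) * torch.log(1 - x_hat)).sum()
builtin = F.binary_cross_entropy(x_hat, x, reduction='sum')
print(torch.allclose(manual, builtin))  # True
```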
###Code
def log_bernoulli_loss(x_hat, x):
# loss = -x * torch.log(x_hat) - (1 - x)*torch.log(1 - x_hat)
# loss = loss.sum() #loss.sum(2) ??
loss = F.binary_cross_entropy(x_hat, x, size_average=False)
return loss
### Test test test
x_test = torch.FloatTensor([[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8], [0.9, 0.9, 0.9, 0.9]])
x_hat_test = torch.FloatTensor([[0.11, 0.22, 0.33, 0.44], [0.55, 0.66, 0.77, 0.88], [0.99, 0.99, 0.99, 0.99]])
assert log_bernoulli_loss(x_hat_test, x_test) > 0.0
assert log_bernoulli_loss(x_hat_test, x_test) < 10.0
###Output
_____no_output_____
###Markdown
2.1.2 Negative Kullback–Leibler divergence (10 points)

The variational lower bound (the objective to be maximized) contains a KL term $D_{KL}(q(\bz)||p(\bz))$ that can often be calculated analytically. In the VAE we assume $q = N(\bz, \mu, \sigma^2I)$ and $p = N(\bz, 0, I)$. Solve analytically!

$\mathcal{KL}(q || p) = \int q (\log q - \log p ) dx \\
= \frac{1}{2} \int q \left(-\log |\sigma^2 I | + z^T z - (z-\mu)^T (\sigma^2 I )^{-1}(z-\mu) \right) dx \\
= \frac{1}{2} \left(-\log |\sigma^2 I | + \mu^T \mu + \mathrm{Tr}(\sigma^2 I ) - \mathrm{Tr}(I^{-1}I) \right) \\
= - \frac{1}{2}\sum_i^D \log \sigma_i^2 + \frac{1}{2} \mu^T \mu + \frac{1}{2} \sum_i^D \sigma_i^2 - \frac{D}{2}$

Since $\sigma^2$ in the definition of $q$ given above appears to be a scalar (so the variance is equal in every dimension), we also provide the solution for the KL divergence in that case:

$\mathcal{KL}(q || p) = -\frac{1}{2} D \ln \sigma^2 + \frac{1}{2} \mu^T\mu - \frac{1}{2} D + \frac{1}{2} D \sigma^2$

Write a function `KL_loss` that takes two J-dimensional vectors `mu` and `logvar` and returns the negative Kullback–Leibler divergence, where `logvar` is $\log(\sigma^2)$. Make sure that your function works for batches of arbitrary size.
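As a quick numerical check (ours, not part of the assignment), the closed form above can be compared against a Monte Carlo estimate of $\mathcal{KL}(q\,\|\,p) = \mathbb{E}_q[\log q(\bz) - \log p(\bz)]$; the constant $-\tfrac{1}{2}\log 2\pi$ terms cancel in the difference and are omitted:

```python
# minimal sketch: closed-form KL vs. a Monte Carlo estimate
import torch
torch.manual_seed(0)

mu = torch.tensor([0.5, -1.0])
logvar = torch.tensor([0.2, -0.3])
std = torch.exp(0.5 * logvar)

closed_form = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

z = mu + std * torch.randn(200000, 2)                       # samples from q
log_q = (-0.5 * ((z - mu) / std) ** 2 - torch.log(std)).sum(dim=1)
log_p = (-0.5 * z ** 2).sum(dim=1)
print(closed_form.item(), (log_q - log_p).mean().item())    # values should be close
```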
###Code
def KL_loss(mu, logvar):
# N, D = mu.size()
# norm = mu.pow(2)
# loss = N*(-0.5*D) -0.5*torch.sum(logvar) + 0.5*torch.sum(norm) + 0.5*torch.sum(torch.exp(logvar))
loss = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
return loss
### Test test test
mu_test = torch.FloatTensor([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])
logvar_test = torch.FloatTensor([[0.01, 0.02], [0.03, 0.04], [0.05, 0.06]])
assert KL_loss(mu_test, logvar_test) > 0.0
assert KL_loss(mu_test, logvar_test) < 10.0
###Output
_____no_output_____
###Markdown
2.1.3 Putting the losses together (5 points)Write a function `loss_function` that takes a D dimensional vector `x`, its reconstruction `x_hat`, two J dimensional vectors `mu` and `logvar` and returns the final loss. Make sure that your function works for batches of arbitrary size.
###Code
def loss_function(x_hat, x, mu, logvar):
return log_bernoulli_loss(x_hat, x) + KL_loss(mu, logvar)
x_test = torch.FloatTensor([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9]])
x_hat_test = torch.FloatTensor([[0.11, 0.22, 0.33], [0.44, 0.55, 0.66], [0.77, 0.88, 0.99]])
mu_test = torch.FloatTensor([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])
logvar_test = torch.FloatTensor([[0.01, 0.02], [0.03, 0.04], [0.05, 0.06]])
assert loss_function(x_hat_test, x_test, mu_test, logvar_test) > 0.0
assert loss_function(x_hat_test, x_test, mu_test, logvar_test) < 10.0
###Output
_____no_output_____
###Markdown
2.2 The model

Below you see a data structure for the VAE. The model itself consists of two main parts: the encoder (images $\bx$ to latent variables $\bz$) and the decoder (latent variables $\bz$ to images $\bx$). The encoder uses 3 fully-connected layers, whereas the decoder uses 2 fully-connected layers. Right now the data structure is quite empty; step by step we will update its functionality. For test purposes we will initialize a VAE for you. After the data structure is completed you will do the hyperparameter search.
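A small illustration (ours): the `*_dims` tuples passed to the constructor are simply unpacked into `nn.Linear(in_features, out_features)`:

```python
# minimal sketch: tuple unpacking into nn.Linear
from torch import nn

fc1_dims = (784, 4)
fc1 = nn.Linear(*fc1_dims)     # same as nn.Linear(784, 4)
print(fc1.weight.shape)        # torch.Size([4, 784]) -> maps 784 inputs to 4 outputs
```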
###Code
from torch import nn
from torch.nn import functional as F
class VAE(nn.Module):
def __init__(self, fc1_dims, fc21_dims, fc22_dims, fc3_dims, fc4_dims):
super(VAE, self).__init__()
self.fc1 = nn.Linear(*fc1_dims)
self.fc21 = nn.Linear(*fc21_dims)
self.fc22 = nn.Linear(*fc22_dims)
self.fc3 = nn.Linear(*fc3_dims)
self.fc4 = nn.Linear(*fc4_dims)
def encode(self, x):
# To be implemented
raise Exception('Method not implemented')
def reparameterize(self, mu, logvar):
# To be implemented
raise Exception('Method not implemented')
def decode(self, z):
# To be implemented
raise Exception('Method not implemented')
def forward(self, x):
# To be implemented
raise Exception('Method not implemented')
VAE_test = VAE(fc1_dims=(784, 4), fc21_dims=(4, 2), fc22_dims=(4, 2), fc3_dims=(2, 4), fc4_dims=(4, 784))
###Output
_____no_output_____
###Markdown
2.3 Encoding (10 points)

Write a function `encode` that gets a vector `x` with 784 elements (a flattened MNIST image) and returns `mu` and `logvar`. Your function should use three fully-connected layers (`self.fc1()`, `self.fc21()`, `self.fc22()`). First, use `self.fc1()` to embed `x`. Second, use `self.fc21()` and `self.fc22()` on the embedding of `x` to compute `mu` and `logvar` respectively. PyTorch comes with a variety of activation functions; the most common calls are `F.relu()`, `F.sigmoid()`, `F.tanh()`. Make sure that your function works for batches of arbitrary size.
###Code
def encode(self, x):
e = self.fc1(x)
e = F.relu(e)
mu = self.fc21(e)
# mu = F.tanh(mu)
logvar = self.fc22(e)
# logvar = F.relu(logvar)
return mu, logvar
### Test, test, test
VAE.encode = encode
x_test = torch.ones((5,784))
mu_test, logvar_test = VAE_test.encode(x_test)
assert np.allclose(mu_test.size(), [5, 2])
assert np.allclose(logvar_test.size(), [5, 2])
###Output
_____no_output_____
###Markdown
2.4 Reparameterization (10 points)

One of the major questions that the VAE answers is 'how do we take derivatives with respect to the parameters of a stochastic variable?', i.e. if we are given $\bz$ drawn from a distribution $q(\bz|\bx)$, how do we take derivatives through it? This step is necessary to be able to use gradient-based optimization algorithms like SGD.

For some distributions, it is possible to reparameterize samples in a clever way, such that the stochasticity is independent of the parameters. We want our samples to deterministically depend on the parameters of the distribution. For example, for a normally-distributed variable with mean $\mu$ and standard deviation $\sigma$, we can sample like this:

\begin{align}\bz = \mu + \sigma \odot \epsilon,\end{align}

where $\odot$ is element-wise multiplication and $\epsilon$ is sampled from $N(0, I)$.

Write a function `reparameterize` that takes two J-dimensional vectors `mu` and `logvar`. It should return $\bz = \mu + \sigma \odot \epsilon$.
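As a small sanity check (ours, not part of the assignment), the point of the reparameterization is that gradients can flow back to `mu` and `logvar` through the sample:

```python
# minimal sketch: gradients flow through a reparameterized sample
import torch

mu = torch.zeros(3, requires_grad=True)
logvar = torch.zeros(3, requires_grad=True)

z = mu + torch.exp(0.5 * logvar) * torch.randn(3)   # z = mu + sigma * eps
z.sum().backward()

print(mu.grad)       # populated, so an optimizer can update mu
print(logvar.grad)   # populated as well
```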
###Code
def reparameterize(self, mu, logvar):
z = mu + torch.exp(0.5*logvar)*torch.randn_like(mu)
return z
### Test, test, test
VAE.reparameterize = reparameterize
VAE_test.train()
mu_test = torch.FloatTensor([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])
logvar_test = torch.FloatTensor([[0.01, 0.02], [0.03, 0.04], [0.05, 0.06]])
z_test = VAE_test.reparameterize(mu_test, logvar_test)
assert np.allclose(z_test.size(), [3, 2])
assert z_test[0][0] < 5.0
assert z_test[0][0] > -5.0
###Output
_____no_output_____
###Markdown
2.5 Decoding (10 points)

Write a function `decode` that gets a vector `z` with J elements and returns a vector `x_hat` with 784 elements (a flattened MNIST image). Your function should use two fully-connected layers (`self.fc3()`, `self.fc4()`). PyTorch comes with a variety of activation functions; the most common calls are `F.relu()`, `F.sigmoid()`, `F.tanh()`. Make sure that your function works for batches of arbitrary size.
###Code
def decode(self, z):
y = self.fc3(z)
y = F.relu(y)
y = self.fc4(y)
x_hat = F.sigmoid(y)
return x_hat
# test test test
VAE.decode = decode
z_test = torch.ones((5,2))
x_hat_test = VAE_test.decode(z_test)
assert np.allclose(x_hat_test.size(), [5, 784])
assert (x_hat_test <= 1).all()
assert (x_hat_test >= 0).all()
###Output
_____no_output_____
###Markdown
2.6 Forward pass (10 points)

To complete the data structure you have to define a forward pass through the VAE. A single forward pass consists of encoding an MNIST image $\bx$ into the latent space $\bz$, reparameterizing $\bz$, and decoding $\bz$ back into an image $\bx$.

Write a function `forward` that gets a vector `x` with 784 elements (a flattened MNIST image) and returns a vector `x_hat` with 784 elements (the reconstructed flattened MNIST image), `mu`, and `logvar`.
###Code
def forward(self, x):
x = x.view(-1, 784)
mu, logvar = self.encode(x)
z = self.reparameterize(mu, logvar)
x_hat = self.decode(z)
return x_hat, mu, logvar
# test test test
VAE.forward = forward
x_test = torch.ones((5,784))
x_hat_test, mu_test, logvar_test = VAE_test.forward(x_test)
assert np.allclose(x_hat_test.size(), [5, 784])
assert np.allclose(mu_test.size(), [5, 2])
assert np.allclose(logvar_test.size(), [5, 2])
###Output
_____no_output_____
###Markdown
2.7 Training (15 points)

We will now train the VAE using an optimizer called Adam, https://arxiv.org/abs/1412.6980. The code to train a model in PyTorch is given below.
###Code
from torch.autograd import Variable
def train(epoch, train_loader, model, optimizer):
model.train()
train_loss = 0
for batch_idx, (data, _) in enumerate(train_loader):
data = Variable(data)
optimizer.zero_grad()
recon_batch, mu, logvar = model(data)
loss = loss_function(recon_batch, data.view(-1, 784), mu, logvar)
loss.backward()
train_loss += loss.data
optimizer.step()
if batch_idx % 100 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader),
loss.data / len(data)))
print('====> Epoch: {} Average loss: {:.4f}'.format(
epoch, train_loss / len(train_loader.dataset)))
###Output
_____no_output_____
###Markdown
Let's train. You have to choose the hyperparameters. Make sure your loss goes down within a reasonable number of epochs (around 10).
###Code
# Hyperparameters
# fc1_dims = (?,?)
# fc21_dims =
# fc22_dims =
# fc3_dims =
# fc4_dims =
# lr =
# batch_size =
# epochs =
hidden_layer_size = 400
latent_space_size = 20
fc1_dims = (28*28, hidden_layer_size)
fc21_dims = (hidden_layer_size, latent_space_size)
fc22_dims = (hidden_layer_size, latent_space_size)
fc3_dims = (latent_space_size, hidden_layer_size)
fc4_dims = (hidden_layer_size, 28*28)
lr = 0.001
batch_size = 128
epochs = 10
# This cell contains a hidden test, please don't delete it, thx
###Output
_____no_output_____
###Markdown
Run the box below to train the model using the hyperparameters you entered above.
###Code
from torchvision import datasets, transforms
from torch import nn, optim
# Load data
train_data = datasets.MNIST('../data', train=True, download=True,
transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train_data,
batch_size=batch_size, shuffle=True, **{})
# Init model
VAE_MNIST = VAE(fc1_dims=fc1_dims, fc21_dims=fc21_dims, fc22_dims=fc22_dims, fc3_dims=fc3_dims, fc4_dims=fc4_dims)
# Init optimizer
optimizer = optim.Adam(VAE_MNIST.parameters(), lr=lr)
# Train
for epoch in range(1, epochs + 1):
train(epoch, train_loader, VAE_MNIST, optimizer)
###Output
Train Epoch: 1 [0/60000 (0%)] Loss: 548.218994
Train Epoch: 1 [12800/60000 (21%)] Loss: 179.038406
Train Epoch: 1 [25600/60000 (43%)] Loss: 154.299728
Train Epoch: 1 [38400/60000 (64%)] Loss: 136.146194
Train Epoch: 1 [51200/60000 (85%)] Loss: 137.696609
====> Epoch: 1 Average loss: 163.9566
Train Epoch: 2 [0/60000 (0%)] Loss: 128.446014
Train Epoch: 2 [12800/60000 (21%)] Loss: 118.466515
Train Epoch: 2 [25600/60000 (43%)] Loss: 121.666725
Train Epoch: 2 [38400/60000 (64%)] Loss: 118.164062
Train Epoch: 2 [51200/60000 (85%)] Loss: 118.403358
====> Epoch: 2 Average loss: 121.3418
Train Epoch: 3 [0/60000 (0%)] Loss: 115.683067
Train Epoch: 3 [12800/60000 (21%)] Loss: 115.601486
Train Epoch: 3 [25600/60000 (43%)] Loss: 116.874527
Train Epoch: 3 [38400/60000 (64%)] Loss: 116.663383
Train Epoch: 3 [51200/60000 (85%)] Loss: 113.716934
====> Epoch: 3 Average loss: 114.6640
Train Epoch: 4 [0/60000 (0%)] Loss: 109.423950
Train Epoch: 4 [12800/60000 (21%)] Loss: 114.831970
Train Epoch: 4 [25600/60000 (43%)] Loss: 111.892365
Train Epoch: 4 [38400/60000 (64%)] Loss: 116.666138
Train Epoch: 4 [51200/60000 (85%)] Loss: 112.022842
====> Epoch: 4 Average loss: 111.6554
Train Epoch: 5 [0/60000 (0%)] Loss: 115.271133
Train Epoch: 5 [12800/60000 (21%)] Loss: 105.700172
Train Epoch: 5 [25600/60000 (43%)] Loss: 108.677246
Train Epoch: 5 [38400/60000 (64%)] Loss: 109.566368
Train Epoch: 5 [51200/60000 (85%)] Loss: 107.932693
====> Epoch: 5 Average loss: 109.8876
Train Epoch: 6 [0/60000 (0%)] Loss: 108.103485
Train Epoch: 6 [12800/60000 (21%)] Loss: 107.634300
Train Epoch: 6 [25600/60000 (43%)] Loss: 108.621140
Train Epoch: 6 [38400/60000 (64%)] Loss: 107.716660
Train Epoch: 6 [51200/60000 (85%)] Loss: 109.582489
====> Epoch: 6 Average loss: 108.7154
Train Epoch: 7 [0/60000 (0%)] Loss: 110.990593
Train Epoch: 7 [12800/60000 (21%)] Loss: 108.023628
Train Epoch: 7 [25600/60000 (43%)] Loss: 103.764626
Train Epoch: 7 [38400/60000 (64%)] Loss: 105.314987
Train Epoch: 7 [51200/60000 (85%)] Loss: 108.633560
====> Epoch: 7 Average loss: 107.8690
Train Epoch: 8 [0/60000 (0%)] Loss: 107.899307
Train Epoch: 8 [12800/60000 (21%)] Loss: 110.971703
Train Epoch: 8 [25600/60000 (43%)] Loss: 109.482376
Train Epoch: 8 [38400/60000 (64%)] Loss: 110.997002
Train Epoch: 8 [51200/60000 (85%)] Loss: 102.966225
====> Epoch: 8 Average loss: 107.1769
Train Epoch: 9 [0/60000 (0%)] Loss: 106.253616
Train Epoch: 9 [12800/60000 (21%)] Loss: 107.286575
Train Epoch: 9 [25600/60000 (43%)] Loss: 102.333618
Train Epoch: 9 [38400/60000 (64%)] Loss: 102.734947
Train Epoch: 9 [51200/60000 (85%)] Loss: 104.835625
====> Epoch: 9 Average loss: 106.6546
Train Epoch: 10 [0/60000 (0%)] Loss: 103.383324
Train Epoch: 10 [12800/60000 (21%)] Loss: 108.156723
Train Epoch: 10 [25600/60000 (43%)] Loss: 106.962585
Train Epoch: 10 [38400/60000 (64%)] Loss: 104.326218
Train Epoch: 10 [51200/60000 (85%)] Loss: 103.240379
====> Epoch: 10 Average loss: 106.2311
###Markdown
Run the box below to check if the model you trained above is able to correctly reconstruct images.
###Code
### Let's check if the reconstructions make sense
# Set model to test mode
VAE_MNIST.eval()
# Reconstructed
train_data_plot = datasets.MNIST('../data', train=True, download=True,
transform=transforms.ToTensor())
train_loader_plot = torch.utils.data.DataLoader(train_data_plot,
batch_size=1, shuffle=False, **{})
for batch_idx, (data, _) in enumerate(train_loader_plot):
x_hat, mu, logvar = VAE_MNIST(data)
plt.imshow(x_hat.view(1,28,28).squeeze().data.numpy(), cmap='gray')
plt.title('%i' % train_data.train_labels[batch_idx])
plt.show()
if batch_idx == 3:
break
###Output
_____no_output_____
###Markdown
2.8 Visualize latent space (20 points) Now implement the auto-encoder with a 2-dimensional latent space, and train it again on the MNIST data. Make a visualization of the learned manifold by using a linearly spaced coordinate grid as input for the latent space, as seen in https://arxiv.org/abs/1312.6114 Figure 4.
###Code
hidden_layer_size = 400
latent_space_size = 2
fc1_dims = (28*28, hidden_layer_size)
fc21_dims = (hidden_layer_size, latent_space_size)
fc22_dims = (hidden_layer_size, latent_space_size)
fc3_dims = (latent_space_size, hidden_layer_size)
fc4_dims = (hidden_layer_size, 28*28)
lr = 0.001
batch_size = 64
epochs = 10
# Init model
VAE_MNIST_2 = VAE(fc1_dims=fc1_dims, fc21_dims=fc21_dims, fc22_dims=fc22_dims, fc3_dims=fc3_dims, fc4_dims=fc4_dims)
# Init optimizer
optimizer_2 = optim.Adam(VAE_MNIST_2.parameters(), lr=lr)
# Train
for epoch in range(1, epochs + 1):
train(epoch, train_loader, VAE_MNIST_2, optimizer_2)  # note: this reuses train_loader from the previous section, so the batch_size set above is not actually applied here
def box_muller(u1, u2):
R = np.sqrt(-2*np.log(u1))
theta = 2*np.pi*u2
z1 = R*np.cos(theta)
z2 = R*np.sin(theta)
return z1, z2
N = 15
fig, axes = plt.subplots(N, N, figsize=(N, N))
for i, u1 in enumerate(np.linspace(0.1, 1, N)):
for j, u2 in enumerate(np.linspace(0.1, 1, N)):
z1, z2 = box_muller(u1, u2)
rec = VAE_MNIST_2.decode(torch.FloatTensor([z1, z2]).view(1, 2))
ax = axes[i, j]
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
ax.imshow(rec.view(28,28).squeeze().data.numpy(),cmap='gray') #Greys swaps colors
plt.show()
###Output
Train Epoch: 1 [0/60000 (0%)] Loss: 556.354858
Train Epoch: 1 [12800/60000 (21%)] Loss: 191.782272
Train Epoch: 1 [25600/60000 (43%)] Loss: 187.562439
Train Epoch: 1 [38400/60000 (64%)] Loss: 171.920456
Train Epoch: 1 [51200/60000 (85%)] Loss: 170.257584
====> Epoch: 1 Average loss: 189.7763
Train Epoch: 2 [0/60000 (0%)] Loss: 166.193787
Train Epoch: 2 [12800/60000 (21%)] Loss: 167.885071
Train Epoch: 2 [25600/60000 (43%)] Loss: 161.370865
Train Epoch: 2 [38400/60000 (64%)] Loss: 166.489166
Train Epoch: 2 [51200/60000 (85%)] Loss: 158.093262
====> Epoch: 2 Average loss: 165.8276
Train Epoch: 3 [0/60000 (0%)] Loss: 160.789810
Train Epoch: 3 [12800/60000 (21%)] Loss: 160.252365
Train Epoch: 3 [25600/60000 (43%)] Loss: 159.205002
Train Epoch: 3 [38400/60000 (64%)] Loss: 161.420517
Train Epoch: 3 [51200/60000 (85%)] Loss: 161.646622
====> Epoch: 3 Average loss: 162.0872
Train Epoch: 4 [0/60000 (0%)] Loss: 163.585297
Train Epoch: 4 [12800/60000 (21%)] Loss: 167.602798
Train Epoch: 4 [25600/60000 (43%)] Loss: 160.621521
Train Epoch: 4 [38400/60000 (64%)] Loss: 159.711472
Train Epoch: 4 [51200/60000 (85%)] Loss: 155.550308
====> Epoch: 4 Average loss: 159.8737
Train Epoch: 5 [0/60000 (0%)] Loss: 149.485001
Train Epoch: 5 [12800/60000 (21%)] Loss: 159.995224
Train Epoch: 5 [25600/60000 (43%)] Loss: 158.942596
Train Epoch: 5 [38400/60000 (64%)] Loss: 159.052414
Train Epoch: 5 [51200/60000 (85%)] Loss: 166.474014
====> Epoch: 5 Average loss: 158.3051
Train Epoch: 6 [0/60000 (0%)] Loss: 157.657104
Train Epoch: 6 [12800/60000 (21%)] Loss: 151.389679
Train Epoch: 6 [25600/60000 (43%)] Loss: 159.437698
Train Epoch: 6 [38400/60000 (64%)] Loss: 150.097916
Train Epoch: 6 [51200/60000 (85%)] Loss: 164.584930
====> Epoch: 6 Average loss: 157.1183
Train Epoch: 7 [0/60000 (0%)] Loss: 162.528137
Train Epoch: 7 [12800/60000 (21%)] Loss: 167.315643
Train Epoch: 7 [25600/60000 (43%)] Loss: 156.973969
Train Epoch: 7 [38400/60000 (64%)] Loss: 164.392197
Train Epoch: 7 [51200/60000 (85%)] Loss: 158.145416
====> Epoch: 7 Average loss: 156.1562
Train Epoch: 8 [0/60000 (0%)] Loss: 161.309662
Train Epoch: 8 [12800/60000 (21%)] Loss: 153.715240
Train Epoch: 8 [25600/60000 (43%)] Loss: 151.815872
Train Epoch: 8 [38400/60000 (64%)] Loss: 152.455261
Train Epoch: 8 [51200/60000 (85%)] Loss: 160.631943
====> Epoch: 8 Average loss: 155.3192
Train Epoch: 9 [0/60000 (0%)] Loss: 163.636505
Train Epoch: 9 [12800/60000 (21%)] Loss: 153.858139
Train Epoch: 9 [25600/60000 (43%)] Loss: 157.270248
Train Epoch: 9 [38400/60000 (64%)] Loss: 157.295578
Train Epoch: 9 [51200/60000 (85%)] Loss: 152.878906
====> Epoch: 9 Average loss: 154.5977
Train Epoch: 10 [0/60000 (0%)] Loss: 147.994720
Train Epoch: 10 [12800/60000 (21%)] Loss: 163.100708
Train Epoch: 10 [25600/60000 (43%)] Loss: 162.996979
Train Epoch: 10 [38400/60000 (64%)] Loss: 141.923431
Train Epoch: 10 [51200/60000 (85%)] Loss: 157.178879
====> Epoch: 10 Average loss: 153.9229
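###Markdown
A side note on the coordinate grid: in Figure 4 of the paper, the linearly spaced coordinates on the unit square are mapped through the inverse CDF of the Gaussian prior rather than through a Box-Muller transform. A minimal sketch of that variant is below; it assumes the trained `VAE_MNIST_2` from above and uses `scipy.stats.norm.ppf` for the inverse CDF.
###Code
from scipy.stats import norm

N = 15
fig, axes = plt.subplots(N, N, figsize=(N, N))
grid = np.linspace(0.05, 0.95, N)  # avoid 0 and 1, which map to +/- infinity
for i, u1 in enumerate(grid):
    for j, u2 in enumerate(grid):
        z = torch.FloatTensor([norm.ppf(u1), norm.ppf(u2)]).view(1, 2)  # inverse-CDF mapping of the grid point
        rec = VAE_MNIST_2.decode(z)
        ax = axes[i, j]
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
        ax.imshow(rec.view(28, 28).squeeze().data.numpy(), cmap='gray')
plt.show()
###Output
_____no_output_____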
###Markdown
Lab 3: Some more Python, Bernoulli processes, Poisson distribution Like the previous lab, we want to put all of our imported packages towards the top of the lab in a cell that's easy to run as needed. This way we have access to all the methods we need right from the start.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as img
import numpy as np
import scipy as sp
import scipy.stats as st
import pickle as pkl
import csv as csv
print ("Modules Imported!")
###Output
Modules Imported!
###Markdown
Some More on Python: Dictionaries and Classes: In the first lab we learned about lists, arrays, and tuples. There is yet another sort of grouping of terms and that is a dictionary. It is denoted with curly brackets { } instead of parentheses ( ) for tuples and brackets [ ] for lists. It is like a list or array, but instead of being indexed by the integers 0,1,2,3,4..., a dictionary stores each entry as a key, followed by a colon, followed by a value, so that each value is associated with a given key. Below is a dictionary that has the names of fast food chains as the keys, and the ratings out of 10 as the values.
###Code
Rating = {'Burger King': 4, 'Five Guys':7, 'Chipotle':6, 'Panda Express':5, 'Subway':4} #Creates a dictionary
print (Rating.keys()) #Returns an array of the keys
print (Rating['Burger King']) #Returns the value associated with the key 'Burger King'
###Output
dict_keys(['Burger King', 'Five Guys', 'Chipotle', 'Panda Express', 'Subway'])
4
###Markdown
There should be two questions that come to your mind when first using the dictionary: What happens if we try to retrieve a value from a key that is not in the dictionary? What happens if the same key appears in the dictionary twice? In response to the first question, if there is no key, python will throw an error. Thus, it is always good to check whether the key is in the dictionary before trying to retrieve a value.
###Code
Rating = {'Burger King': 4, 'Five Guys':7, 'Chipotle':6, 'Panda Express':5, 'Subway':4} #Creates a dictionary
for i in ['Burger King', 'Five Guys', 'Chick-Fil-A'] :
print (i,Rating[i]) #Will give an error since 'Chick-Fil-A' is not an actual key
Rating = {'Burger King': 4, 'Five Guys':7, 'Chipotle':6, 'Panda Express':5, 'Subway':4} #Creates a dictionary
for i in ['Burger King', 'Five Guys', 'Chick-Fil-A'] :
if i in Rating: #First checks if i is a key in the dictionary
print (i,Rating[i])
###Output
Burger King 4
Five Guys 7
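###Markdown
Another way to handle a missing key is the dictionary's get() method, which returns None (or a default you supply) instead of raising an error. A small illustration:
###Code
Rating = {'Burger King': 4, 'Five Guys':7, 'Chipotle':6, 'Panda Express':5, 'Subway':4}
print (Rating.get('Five Guys')) #Key exists, returns its value
print (Rating.get('Chick-Fil-A')) #Key missing, returns None instead of raising an error
print (Rating.get('Chick-Fil-A', 0)) #Key missing, returns the supplied default 0
###Output
_____no_output_____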
###Markdown
In response to the second question, when we try it below, we find that it takes on the most recent value given to the key.
###Code
Rating = {'Burger King': 4, 'Five Guys':7, 'Chipotle':6, 'Panda Express':5, 'Subway':4, 'Chipotle': 9} #Creates a dictionary
print (Rating.keys())
print ([Rating[i] for i in Rating.keys()])
print (Rating)
###Output
dict_keys(['Burger King', 'Five Guys', 'Chipotle', 'Panda Express', 'Subway'])
[4, 7, 9, 5, 4]
{'Burger King': 4, 'Five Guys': 7, 'Chipotle': 9, 'Panda Express': 5, 'Subway': 4}
###Markdown
We can declare classes in Python similarly to Java. We use the keyword "class" followed by the name of the class and then a colon. Indentation works the same as before, so anything indented under the class is contained within the class. We can include class variables or use the "def" keyword to create class functions. Below is an example of a class.
###Code
class Student:
def __init__(self, name, ID):
self.n = name
self.i = ID
def getName(self):
return self.n
def getID(self):
return self.i
###Output
_____no_output_____
###Markdown
The above code is just an example and won't return anything, but make sure you run it anyway. Like the modules that we imported, if we create a custom class and run it once, then all the other cells in our Python notebook will have access to it. There are a few things that should have stood out to you in the code we just ran. The first is the "__init__" function. It is a version of a constructor method common to object oriented programming languages such as Java, and is what you would use to declare a new instance of your class. Second is the "self" keyword that appears in all of the methods. In order to have access to methods and variables within the class itself, you need to reference the class by using the keyword "self". It's kind of like the "this" keyword in JAVA, but is more explicitly expressed here. Finally, the "__init__" function indicates that in our class we pass two parameters (other than self) which will become instance variables for the instances of the class that we will create. The code below creates an instance of the Student class.
###Code
s = Student("Kevin", "4123")
print (s.getName())
print (s.getID())
print (s.n)
print (s.i)
###Output
Kevin
4123
Kevin
4123
###Markdown
Notice how the instance variables we created were not in fact private, so our get methods are not needed (other than to illustrate how things work, of course). Reading and Writing Files It is very useful to know how to read and write files in python. So below we will go over some of the basics of I/O. When loading and saving files you can specify the entire filepath, but it is probably much easier to keep the files corresponding to each lab in the same folder and just use relative filepaths. We can write to a text file very easily using the code below. If you were to look in the folder where this ipython notebook file is held, you would see the file below.
###Code
#Writes a simple statement to a text file
filepath = 'lab3_simple.txt'
f = open(filepath, 'w') #Opens file. 'w' signifies we want to write to it.
#'w' erases existing file; use 'a' to append to an existing file
f.write('This is a simple example') #Writes to the text file
f.close()
print ('The file has been written')
###Output
The file has been written
###Markdown
Likewise we can load the text file back using the following:
###Code
filepath = 'lab3_simple.txt'
f = open(filepath) #Opens the file, default behavior is to read (not write)
print (f.read()) #Reads the text file
f.close()
###Output
This is a simple example
###Markdown
This is fairly easy, but since it's a text file, everything we store in it needs to be a string. This becomes a bit of a pain if we want to store things like a dictionary that describes a random variable, which has a mix of strings, floats, and possibly other types. While it's easy to get the string form of each of these and save them in a text file, it's much harder to load them back and then parse through to convert everything into the variables we want. Instead we can use the Python pickle module. Let's use it to save the dictionary we created above.
###Code
grades = {'Bart':75, 'Lisa':98, 'Milhouse':80, 'Nelson':65}
import pickle # import module first
f = open('gradesdict.pkl', 'wb') # Creates the pickle file in the current working directory
pickle.dump(grades, f) # dump data to f
f.close()
filepath = 'lab3_dictionary.pkl'
d = {'one':(1./6,-1),'two':(1./6,5),'three':(1./6,-5),'four':(1./6,1),'five':(1./6,-5),'six':(1./6,1)}
f = open(filepath,'wb') # The 'wb' is for opening the file to be written to in binary mode
pkl.dump(d,f)
f.close()
print ('The file has been written')
###Output
The file has been written
###Markdown
Now you should see a .pkl file in the same folder which represents our dictionary. It's a bit less convenient than a text file, however, because it's not exactly readable by an outside program. However, we can load it back and manipulate our dictionary just as before. (Note: Due to the way files are written using pickle, a pickle file written using a Windows computer may be hard to open with a computer using Linux and vice versa)
###Code
filepath = 'lab3_dictionary.pkl'
f = open(filepath, 'rb') # The 'rb' is for opening the file to be read in binary mode
d = pkl.load(f)
f.close()
print (d['one'])
print (d['five'][1])
###Output
(0.16666666666666666, -1)
-5
###Markdown
It would be nice if we could load in files from csv formats to be able to manipulate them. This can be done through the "csv" module. Along with this lab notebook, there should also be a csv file called SacramentoCrime. This is just a random set of data I found on the internet but is fine for our purposes. It has over 7000 crime logs and each one of those logs has 9 different bits of information. We can load the data in and manipulate it with the following.
###Code
filepath = 'SacramentoCrime.csv'
data = [] #Creates an empty list
f = open(filepath) #Opens the file path in the default 'r' mode
reader = csv.reader(f)
for row in reader:
data.append(row)
f.close() # data is now a list of lists
data = np.array(data) #Converts our list to a numpy array to make it a little easier to work with
print ('Data size:', np.size(data), ', Data shape:', np.shape(data),'\n')
print ('The following is the list of headers:')
print (data[0],'\n')
print ('The following is some random data corresponding to the headers')
print (data[77])
N_row = np.shape(data)[0] # the number of rows in the data matrix
x = [float(a) for a in data[1:N_row, 8]] # Loads column 8 of data (numbering begins at zero) into x
y = [float(a) for a in data[1:N_row, 7]] # Loads column 7 of data (numbering begins at zero) into y
# convert strings to floats for plotting later
plt.scatter(x,y, color = 'red', edgecolor = 'black')
plt.title('Location of Crimes in Sacramento')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.axis([-121.7,-121.2,38.4,38.7])
###Output
Data size: 68265 , Data shape: (7585, 9)
The following is the list of headers:
['cdatetime' 'address' 'district' 'beat' 'grid' 'crimedescr'
'ucr_ncic_code' 'latitude' 'longitude']
The following is some random data corresponding to the headers
['1/1/2006 2:40' '1415 L ST' '3' '3M ' '745'
'211 PC ROBBERY UNSPECIFIED' '1299' '38.57682477' '-121.4882896']
###Markdown
Finally we can also load in image files. You should have a file along with this lab called SacramentoMap.png. Make sure that this is also in the same folder as the ipython notebook. We can load and plot the image with the following code. It should look similar to the outline given by our crime map.
###Code
filepath = 'SacramentoMap.png'
sac = img.imread(filepath)
image = plt.imshow(sac)
###Output
_____no_output_____
###Markdown
These were just the basics of file loading and saving. Depending on formatting and other issues, it may be necessary to dive into these modules a bit deeper to better suit your circumstances. However, this is a very good start to being able to use I/O. The Lambda Keyword: Finally, I use it in one of the topics below so I figured it may be good to go over it first here. "lambda" is a reserved keyword in Python. This may frustrate you when trying to simulate a Poisson process or random variable because in the literature the parameter for a Poisson or exponential distribution is often lambda, $\lambda$, but it's just the way it is. In python, you can pass functions the same as variables. You can set functions equal to variables. The keyword lambda signals the creation of an anonymous function (it's not bound to a name). It allows functions to be written in a single line and to be passed with relative ease. The best way to understand it is just to look at some examples. So here are a few.
###Code
# Simple function as we would normally define it
def f(x):
return x**3
print (f(3))
g = lambda x:x**3 #Same exact function using the lambda keyword
print (g(3))
# Function that returns a value that is itself a function defined by lambda
def f(n):
return lambda x:x**n
g = f(3) #g is the function x^3
h = f(2) #h is the function x^2
print (g(3))
print (h(3))
n = np.arange(20) #Creates a list from 0 to 19
y = list(filter(lambda x:x%2==0,n)) #Filters n. In Python 3.x filter is an iterable object, so converted here to list
print (y)
###Output
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
###Markdown
Hopefully this gives you a basic idea of what the lambda function is and does. We will not use it very extensively in this lab, but it's still good to know and may come in handy. Bernoulli Processes: In the first lab, you were introduced to both the Bernoulli distribution and the binomial distribution. A *random process* is simply a collection of random variables indexed by time. A Bernoulli process is given by $X=(X_1,X_2, \ldots)$ where $X_t \sim Bernoulli(p)$ for each $t$ and the $X$'s are mutually independent. It is a sequence of Bernoulli RVs. We can calculate probabilities involving the process at multiple times fairly easily, e.g. $P\{X_3=1,X_6=0,X_{11}=1,X_{13}=1\}=p(1-p)pp=p^3(1-p)$. When considering a random process, it is helpful to visualize, or produce by computer simulation, a typical sample path. A sample path of a random process is the deterministic function of time that results by performing the probability experiment for the underlying probability space, and selecting a realization, or variate, for each of the random variables involved. Generating a sample path of a random process by computer simulation is particularly simple in case the random variables of the process are mutually independent, such as for Bernoulli processes. For such processes, variates of the individual random variables can be generated separately. Below is a sample path of a Bernoulli process $X=(X_1,X_2, \ldots)$ with p=1/7. Run the code several times to see different sample paths.
###Code
p = 1./7 #Probability
T = 30 #Number of time steps
X = [] #Creates a list for the values of the random variables
for i in range(1,T+1): #range(1,T+1) is the list of numbers 1 through T
X.append(st.bernoulli.rvs(p)) #Fills the list with Bernoulli(p) variates
plt.plot(range(1,T+1),X, 'o')
plt.title('Sample Path of Bernoulli process with p=1/7')
plt.ylim((0,2))
plt.ylabel('$X(\omega)$') #You can use LaTex in the Python code
plt.xlabel('Time')
###Output
_____no_output_____
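###Markdown
As a quick numeric illustration of the joint-probability calculation above, the cell below (a minimal sketch) estimates $P\{X_3=1,X_6=0,X_{11}=1,X_{13}=1\}$ by simulation and compares it with the exact value $p^3(1-p)$.
###Code
p = 1./7
trials = 200000 #Number of simulated sample paths
paths = st.bernoulli.rvs(p, size=(trials, 13)) #Only the first 13 time steps matter here
hits = np.sum((paths[:,2]==1) & (paths[:,5]==0) & (paths[:,10]==1) & (paths[:,12]==1))
print ('Simulated probability:', hits/trials)
print ('Exact p^3*(1-p):      ', p**3*(1-p))
###Output
_____no_output_____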
###Markdown
The same Bernoulli process can be described in four different ways. Using $X=(X_1,X_2, \ldots)$ as above. Using $L=(L_1,L_2, \ldots),$ where $L_i$ is the number of trials after the $i-1^{th}$ count up to and including the time of the $i^{th}$ count. Using $S=(S_1,S_2, \ldots),$ where $S_i$ is the time the $i^{th}$ count occurs. Using $C = (C_1,C_2,\ldots)$ where $C_t$ is the number of counts up to and including time $t$ (A diagram of each of these representations can be found in your ECE 313 textbook section 2.6)For example, if $X = 0,1,0,1,0,0,1,1,1,0,1$, then $L = 2,2,3,1,1,2$$S = 2,4,7,8,9,11$$C = 0,1,1,2,2,2,3,4,5,5,6$.**Problem 1:** Write an expanded version of the code above to display the sample paths of $X,L,S,$ and $C$ all for the samerealization of the experiment. To do so, plot the sample paths of $X$ and $C$ up to time 30 as before, and print thefirst ten values of $L$ and of $S.$ You don't need to plot $L$ and $S.$ You may need to generate more than30 X values to determine the first ten values of $L$ and $S.$ To reiterate, your values of $L,S$ and $C$ should be determined by $X.$(If you just generate a large number of trials assuming it will produce at least 10 values of L and S, you may lose a few points. To prevent this way of generation, consider using a while loop.)
###Code
# Your code here
p = 1./7 #Probability
T = 30 #Number of time steps
X = [] #Creates a list for the values of the random variables
C = []
count = 0
for i in range(1,T+1): #range(1,T+1) is the list of numbers 1 through T
t = st.bernoulli.rvs(p)
X.append(t) #Fills the list with Bernoulli(p) variates
if t ==1:
count += 1
C.append(count)
else:
C.append(count)
print("X=",X)
print("C=",C)
L = []
blank = 0
for i in X:
if i == 0:
blank += 1
elif i == 1:
L.append(blank+1)
blank = 0
print("L=",L)
S = []
for j in range(len(X)):
if X[j] == 1:
S.append(j+1)
print("S=",S)
plt.plot(range(1,T+1),X, 'o')
plt.title('Sample Path of Bernoulli process with p=1/7')
plt.ylim((0,2))
plt.ylabel('$X(\omega)$') #You can use LaTex in the Python code
plt.xlabel('Time')
plt.figure()
plt.plot(range(1,T+1),C, 'o')
plt.title('Sample Path of Bernoulli process with p=1/7')
plt.ylim((0,C[T-1]+1))
plt.ylabel('$C(\omega)$') #You can use LaTex in the Python code
plt.xlabel('Time')
###Output
X= [0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1]
C= [0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 6, 6, 6, 7]
L= [2, 4, 4, 1, 14, 2, 3]
S= [2, 6, 10, 11, 25, 27, 30]
###Markdown
**End of Problem 1** The equivalent descriptions above suggest another method to simulate a Bernoulli random process. Each $L_i$ has a geometric distribution with parameter $p,$ and the $L$'s are independent. The geometric distribution is given by its pmf: $p(i)=(1-p)^{i-1}p$ for $i\geq 1.$ For example, the probability that the first count occurs on the third trial is $P\{L_1=3\}= P\{X_1=0,X_2=0,X_3=1\}=(1-p)(1-p)p=(1-p)^2p$ which we determined before. **Problem 2:** Write new code for simulation of a Bernoulli random process by first generating $L=(L_1, \cdots , L_{30})$ according to a geometric distribution and then generating$X,S,$ and $C$ from $L.$ Print all values in sequences $L$, $X$, $S$ and $C$.
###Code
# Your code here
p = 1./7 #Probability
T = 30 #Number of time steps
L = []
P = []
L = np.random.geometric(p,(30,))
L = L.tolist()
S = []
count = 0
for i in L:
count += i
S.append(count)
#print(count)
X = []
for i in range(S[-1]):
X.append(0)
for i in S:
X[i-1] = 1
C = []
count = 0
for i in range(S[-1]):
if X[i] ==1:
count += 1
C.append(count)
print("L=",L)
print("X=",X)
print("S=",S)
print("C=",C)
###Output
_____no_output_____
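###Markdown
As a quick sanity check of the geometric distribution used above, the cell below (a minimal sketch) verifies that scipy's geometric pmf matches the formula $p(i)=(1-p)^{i-1}p$, e.g. $P\{L_1=3\}=(1-p)^2p$.
###Code
p = 1./7
print ('st.geom.pmf(3, p) =', st.geom.pmf(3, p)) #scipy's geometric pmf at i = 3
print ('(1-p)^2 * p       =', (1-p)**2*p) #the closed-form expression
###Output
_____no_output_____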
###Markdown
**End of Problem 2** Poisson distribution as limit of binomial distribution There is yet another important piece to this puzzle, and that is the Poisson distribution. The Poisson distribution has a single parameter $\lambda$ and a probability mass function given by: $p(k) = \frac{e^{-\lambda}\lambda^k}{k!}$ for $k\geq 0.$ The parameter $\lambda$ represents a mean such as the number of hits of a website in one minute, or the number of misspelled words in a document. Thus $p(k)$ represents the probability that the number of events occurring is $k$ given that the average number of events that occur is $\lambda$. The Poisson distribution is frequently used because it is a good approximation for the binomial distribution when $n$ is large, $p$ is small, and $np \approx \lambda$. It is simpler than the binomial; it only has one parameter and it doesn't involve binomial coefficients. Let's say you create a website and that your website gets an average of 1200 hits per day. This is set up as a Poisson distribution where $\lambda = 1200$, but we can also model this as a binomial. If we were to break down the day into minute increments then the probability that a hit occurs in any given minute is $p = \frac{1200}{24*60} = \frac{5}{6}$ and there are $n = 24*60 = 1440$ minutes in a day. Below is a graph of this binomial approximation of the Poisson.
###Code
lamb =1200 #Average number of hits per day
n = 60*24. #Number of minutes in a day
p = lamb/n #Probability of a hit occurring in a given minute
print ('p =', p)
k = range(2*lamb)
plt.plot(k,st.binom.pmf(k,n,p), 'b', label = 'Binomial')
plt.plot(k,st.poisson.pmf(k,lamb), 'r', label = 'Poisson')
plt.title('PMF of Hits Per Day')
plt.legend()
x = np.linspace(0,2*lamb,10000)
plt.figure()
plt.plot(x,st.binom.cdf(x,n,p), 'b', label = 'Binomial')
plt.plot(x,st.poisson.cdf(x,lamb), 'r', label = 'Poisson')
plt.ylim(0,1.2)
plt.title('CDF of Hits Per Day')
plt.legend()
###Output
p = 0.8333333333333334
###Markdown
These two distributions don't really look that close to each other. Why is that? In order for this approximation to be accurate, we require that $n$ be large, $p$ be small, and $np \approx \lambda$. Here $n$ is fairly large but $p$ is not close to zero at all. The variance of the Poisson(1200) distribution is 1200, while the variance of the Binom(1440,5/6) distribution is only 1440*(5/6)*(1/6)=200. Clearly, we haven't broken the day up into small enough increments. So let's now break it up into seconds.
###Code
lamb = 1200 #Average number of hits per day
n = 60*60*24. #Number of seconds in a day
p = lamb/n #Probability of a hit occurring in a given second
print ('p =', p)
X = st.binom(n,p)
Y = st.poisson(lamb)
k = range(2*lamb)
plt.plot(k,X.pmf(k), 'b', label = 'Binomial')
plt.plot(k,Y.pmf(k), 'r', label = 'Poisson')
plt.title('PMF of Hits Per Day')
plt.legend()
x = np.linspace(0,2*lamb,10000)
plt.figure()
plt.plot(x,X.cdf(x), 'b', label = 'Binomial')
plt.plot(x,Y.cdf(x), 'r', label = 'Poisson')
plt.ylim(0,1.2)
plt.title('CDF of Hits Per Day')
plt.legend()
###Output
p = 0.013888888888888888
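###Markdown
To make the variance argument concrete before comparing the plots, the cell below (a small check) computes the variance of the minute-level and second-level binomial models and compares them with the variance of the Poisson(1200) distribution.
###Code
lamb = 1200
for n in [60*24., 60*60*24.]: #Minutes and seconds in a day
    p = lamb/n
    print ('n =', int(n), ' binomial variance =', st.binom(n, p).var())
print ('Poisson variance =', st.poisson(lamb).var())
###Output
_____no_output_____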
###Markdown
Now our approximation is so close that the two distributions are almost indistinguishable from each other. If we kept increasing n and decreasing p we would find that the approximation continues to improve. So, symbolically, $\lim_{n\to \infty, p\to 0, np \to \lambda} Binom(n,p) = Pois(\lambda).$ If you encounter a binomial variable with large $n$ and small $p,$ it may be easier to calculate probabilities based on the Poisson distribution. **Problem 3:** While working on this lab course, I have a probability of $p=.014$ of finishing a section during any given minute. Let's say that there are 300 sections that need to be completed and I have 8 weeks to create the lab (assume I work 40 hours/week). What's the probability that I complete the lab before the start of the semester? Equivalently what is the probability that I finish at least 300 sections? In order to answer this question, do the following: Create a binomial variable X to represent the number of sections I complete (for this and other parts of the problem, assume I keep working at the same rate if I finish completing 300 sections). Create a Poisson variable Y to represent the same number, using the Poisson approximation. Make sure to print out what $\lambda$ is. Find the probability of my success (i.e. completing at least 300 sections) using the CDFs of each RV. Do they agree? Find the probability that I finish exactly 300 sections using the pmf of each RV. Do they agree?
###Code
# Your code here
n = 60*40.*8 #Number of minutes in 8 weeks work
p = 0.014
lamb =n*p
X = st.binom(n,p)
Y = st.poisson(lamb)
print('lambda =',lamb)
x = np.linspace(0,2*lamb,10000)
plt.plot(x,X.cdf(x), 'b', label = 'Binomial')
plt.plot(x,Y.cdf(x), 'r', label = 'Poisson')
plt.ylim(0,1.2)
plt.title('CDF of number of sections I complete')
plt.legend()
for i in range(10000-1):
if x[i+1] >= 300 and x[i] < 300:
t = x[i]
print("the probability of my success is",1-X.cdf(t),"for binomial distribution")
print("the probability of my success is",1-Y.cdf(t),"for Poisson distribution")
k = range(int(2*lamb))
plt.plot(k,X.pmf(k), 'b', label = 'Binomial')
plt.plot(k,Y.pmf(k), 'r', label = 'Poisson')
plt.title('PMF of number of sections I complete')
plt.legend()
print("the probability that I finish exactly 300 sections",X.pmf(300.),"for binomial distribution")
print("the probability that I finish exactly 300 sections",Y.pmf(300.),"for Poisson distribution")
###Output
the probability that I finish exactly 300 sections 0.003952459184172475 for binomial distribution
the probability that I finish exactly 300 sections 0.004023643874920296 for Poisson distribution
###Markdown
Task 1 Make sure the texts in the corpus do not contain HTML code.
###Code
import regex  # third-party 'regex' package used for the searches below

# 'files' is assumed to be the dict {filename: raw document text} built earlier in the notebook
for filename, file in files.items():
res = regex.findall(r'(.{0,20})<(.{0,25})' , file, flags=regex.IGNORECASE)
if len(res) > 0:
print(filename, res)
files['2001_1353.txt'] = files['2001_1353.txt'].replace('< < tajne > >', 'tajne')
for filename, file in files.items():
res = regex.findall(r'(.{0,20})<(.{0,25})' , file, flags=regex.IGNORECASE)
if len(res) > 0:
print(filename, res)
###Output
_____no_output_____
###Markdown
Task 2 Use the spaCy tokenizer API to tokenize the text from the cleaned law corpus.
###Code
from spacy.lang.pl import Polish  # spaCy's Polish language class

nlp = Polish()
# Create a Tokenizer with the default settings for Polish
# including punctuation rules and exceptions
tokenizer = nlp.tokenizer
tokens = {key:list(tokenizer(val)) for key, val in files.items()}
print(list(tokens.items())[1])
###Output
('1993_602.txt', [ , Dz, ., U, ., z, 1993, r, ., Nr, 129, ,, poz, ., 602, , USTAWA, , z, dnia, 10, grudnia, 1993, r, ., , o, zmianie, niektórych, ustaw, dotyczących, zaopatrzenia, emerytalnego, , Art, ., 1, ., W, ustawie, z, dnia, 29, maja, 1974, r, ., o, zaopatrzeniu, inwalidów, wojennych, i, wojskowych, oraz, ich, rodzin, (, Dz, ., U, ., z, 1983, r, ., Nr, 13, ,, poz, ., 68, ,, z, 1990, r, ., Nr, 34, ,, poz, ., , 198, i, Nr, 36, ,, poz, ., 206, ,, z, 1991, r, ., Nr, 104, ,, poz, ., 450, i, z, 1992, r, ., Nr, 21, ,, poz, ., 84, ), w, , art, ., 11, wprowadza, się, następujące, zmiany, :, , 1, ), ust, ., 2, otrzymuje, brzmienie, :, , ", 2, ., Podstawę, wymiaru, renty, inwalidzkiej, ustala, się, od, kwoty, stanowiącej, , podstawę, waloryzacji, na, podstawie, przepisów, o, zaopatrzeniu, , emerytalnym, pracowników, i, ich, rodzin, ., Podwyższenie, , kwot, podstawy, wymiaru, renty, inwalidzkiej, następuje, od, miesiąca, ,, , w, którym, jest, przeprowadzana, waloryzacja, ., ", ,, , 2, ), ust, ., 5, otrzymuje, brzmienie, :, , ", 5, ., Ustalenie, wysokości, renty, następuje, przez, pomnożenie, podwyższonej, , w, myśl, ust, ., 2, podstawy, wymiaru, przez, stawkę, wymiaru, , świadczenia, określoną, w, art, ., 10, ., Realizacja, podwyżki, renty, następuje, , od, miesiąca, ,, w, którym, jest, przeprowadzana, waloryzacja, ., ", ., , Art, ., 2, ., W, ustawie, z, dnia, 26, stycznia, 1982, r, ., -, Karta, Nauczyciela, (, Dz, ., U, ., Nr, 3, ,, poz, ., , 19, ,, Nr, 25, ,, poz, ., 187, i, Nr, 31, ,, poz, ., 214, ,, z, 1983, r, ., Nr, 5, ,, poz, ., 33, ,, z, 1988, r, ., Nr, 19, ,, , poz, ., 132, ,, z, 1989, r, ., Nr, 4, ,, poz, ., 24, i, Nr, 35, ,, poz, ., 192, ,, z, 1990, r, ., Nr, 34, ,, poz, ., 197, ,, , Nr, 36, ,, poz, ., 206, i, Nr, 72, ,, poz, ., 423, ,, z, 1991, r, ., Nr, 95, ,, poz, ., 425, i, Nr, 104, ,, poz, ., 450, , oraz, z, 1992, r, ., Nr, 53, ,, poz, ., 252, ,, Nr, 54, ,, poz, ., 254, i, Nr, 90, ,, poz, ., 451, ), w, art, ., 90, ust, ., , 1, otrzymuje, brzmienie, :, , ", 1, ., Nauczycielom, ,, którzy, w, czasie, okupacji, prowadzili, tajne, nauczanie, ,, , przysługuje, dodatek, do, emerytury, lub, renty, w, wysokości, , 10, %, przeciętnego, wynagrodzenia, w, kwartale, kalendarzowym, , poprzedzającym, termin, waloryzacji, ,, jeżeli, nie, pobierają, takiego, , dodatku, z, innego, tytułu, ., Zmiana, wysokości, dodatku, następuje, , od, miesiąca, ,, w, którym, jest, przeprowadzana, waloryzacja, ., ", ., , Art, ., 3, ., W, ustawie, z, dnia, 24, stycznia, 1991, r, ., o, kombatantach, oraz, niektórych, osobach, , będących, ofiarami, represji, wojennych, i, okresu, powojennego, (, Dz, ., U, ., Nr, 17, ,, , poz, ., 75, i, , Nr, 104, ,, poz, ., 450, ,, , z, 1992, r, ., Nr, 21, ,, poz, ., 85, i, z, 1993, r, ., Nr, 29, ,, poz, ., , 133, ), w, art, ., 15, ust, ., 1, otrzymuje, brzmienie, :, , ", 1, ., Kombatantom, i, innym, osobom, uprawnionym, ,, pobierającym, emeryturę, , lub, rentę, ,, przysługuje, dodatek, ,, zwany, dalej, ", dodatkiem, , kombatanckim, ", ,, w, wysokości, 10, %, przeciętnego, wynagrodzenia, , w, kwartale, kalendarzowym, poprzedzającym, termin, waloryzacji, ., , Zmiana, wysokości, dodatku, następuje, od, miesiąca, ,, w, którym, jest, , przeprowadzana, waloryzacja, ., ", ., , Art, ., 4, ., W, ustawie, z, dnia, 17, października, 1991, r, ., o, rewaloryzacji, emerytur, i, rent, ,, o, , zasadach, , ustalania, emerytur, i, rent, oraz, o, zmianie, niektórych, ustaw, (, Dz, ., U, ., , Nr, 104, ,, poz, ., 450, ,, z, 1992, r, ., Nr, 21, ,, poz, ., 84, i, z, 1993, r, ., Nr, 127, ,, poz, ., 583, ), 
wprowadza, się, następujące, zmiany, :, , 1, ), w, art, ., 7, w, ust, ., 5, w, pkt, 4, skreśla, się, wyrazy, ", przeciętnego, wynagrodzenia, ", ;, , 2, ), w, art, ., 15, :, , a, ), ust, ., 3a, otrzymuje, brzmienie, :, , ", 3a, ., W, razie, zbiegu, okresów, opłacania, składek, na, Fundusz, Emerytalny, , Rolników, i, Fundusz, Ubezpieczenia, Społecznego, Rolników, ,, , przypadających, od, dnia, 1, lipca, 1977, r, ., ,, z, okresami, :, , 1, ), innego, ubezpieczenia, społecznego, ,, nawet, jeżeli, okresy, , składkowe, i, nieskładkowe, ,, ustalone, w, myśl, art, ., 2, -, 4, ,, nie, , wymagałyby, uzupełnienia, w, celu, nabycia, prawa, do, świadczenia, ,, , 2, ), pobierania, emerytury, lub, renty, z, ubezpieczenia, społecznego, , -, świadczenie, ulega, zwiększeniu, ,, o, którym, mowa, w, ust, ., 3, ,, , za, okres, opłacania, tych, składek, ., ", ,, , b, ), po, ust, ., 3a, dodaje, się, ust, ., 3b, i, 3c, w, brzmieniu, :, , ", 3b, ., Zwiększeniu, ,, o, którym, mowa, w, ust, ., 3, ,, za, okres, opłacania, składek, , na, Fundusz, Emerytalny, Rolników, ,, Fundusz, Ubezpieczenia, , Społecznego, Rolników, i, ubezpieczenie, emerytalno, -, rentowe, rolników, ,, , ulega, również, świadczenie, ,, nawet, jeżeli, okresy, składkowe, , i, nieskładkowe, nie, wymagałyby, uzupełnienia, w, celu, nabycia, , tego, świadczenia, ., , 3c, ., Zwiększenie, ,, o, którym, mowa, w, ust, ., 3a, i, 3b, ,, przyznaje, się, na, , wniosek, zainteresowanego, ., ", ,, , c, ), w, ust, ., 5, wyrazy, ", ust, ., 1, -, 3a, ", zastępuje, się, wyrazami, ", ust, ., 1, -, 3b, ", ;, , 3, ), w, art, ., 16, ust, ., 1, otrzymuje, brzmienie, :, , ", 1, ., Kwoty, najniższych, emerytur, i, rent, -, bez, uwzględnienia, dodatków, ,, , o, których, mowa, w, art, ., 21, -, wynoszą, :, , 1, ), 39, %, przeciętnego, wynagrodzenia, w, kwartale, kalendarzowym, , poprzedzającym, termin, waloryzacji, -, w, przypadku, , emerytury, ,, , renty, , rodzinnej, i, renty, inwalidzkiej, dla, inwalidy, , I, lub, II, grupy, ,, , 2, ), 30, %, przeciętnego, wynagrodzenia, w, kwartale, kalendarzowym, , poprzedzającym, termin, waloryzacji, -, w, przypadku, renty, , inwalidzkiej, dla, inwalidy, III, grupy, ., ", ;, , 4, ), w, art, ., 17, :, , a, ), w, ust, ., 1, skreśla, się, wyrazy, ", zwany, dalej, wskaźnikiem, waloryzacji, ", ,, , b, ), ust, ., 2, otrzymuje, brzmienie, :, , ", 2, ., Ustalenie, wysokości, zwaloryzowanej, emerytury, i, renty, następuje, , przez, , jej, obliczenie, od, kwoty, wynoszącej, od, dnia, 1, stycznia, , 1994, r, ., 91, %, przeciętnego, wynagrodzenia, ,, a, od, terminu, drugiej, , waloryzacji, w, 1994, r, ., -, co, najmniej, 93, %, tego, wynagrodzenia, w, , kwartale, kalendarzowym, poprzedzającym, termin, waloryzacji, , przy, zastosowaniu, wskaźnika, wysokości, świadczenia, ., ", ;, , 5, ), w, art, ., 18, ust, ., 2, otrzymuje, brzmienie, :, , ", 2, ., Prezes, Zakładu, Ubezpieczeń, Społecznych, ogłasza, w, Dzienniku, , Urzędowym, Rzeczypospolitej, Polskiej, ", Monitor, Polski, ", ,, w, terminie, , do, 14, roboczego, dnia, drugiego, miesiąca, każdego, kwartału, ,, , kwotę, najniższej, emerytury, i, renty, ,, jeżeli, został, spełniony, warunek, , do, waloryzacji, ,, o, którym, mowa, w, art, ., 17, ust, ., 1, ., ", ;, , 6, ), w, art, ., 21, w, ust, ., 4, po, wyrazach, ", ust, ., 1, ", dodaje, się, wyrazy, ", oraz, w, art, ., 15, , ust, ., 3, -, 3b, ", ;, , 7, ), w, art, ., 22, w, ust, ., 1, i, 2, oraz, w, art, ., 34, wyrazy, ", przeciętnego, wynagrodzenia, , stanowiącego, podstawę, ostatnio, przeprowadzonej, waloryzacji, ", zastępuje, , się, wyrazami, ", 
przeciętnego, wynagrodzenia, w, kwartale, kalendarzowym, poprzedzającym, , termin, waloryzacji, ", ;, , 8), w, art, ., 24, :, , a, ), w, ust, ., 3, i, 7, wyrazy, ", kwoty, bazowej, ", zastępuje, się, wyrazami, ", przeciętnego, , wynagrodzenia, za, kwartał, kalendarzowy, ostatnio, ogłoszonego, przez, , Prezesa, Głównego, Urzędu, Statystycznego, ,, ", ,, , b, ), ust, ., 4, otrzymuje, brzmienie, :, , ", 4, ., W, razie, osiągania, wynagrodzenia, lub, dochodu, z, tytułów, ,, o, których, , mowa, w, ust, ., 1, i, 2, ,, w, kwocie, przekraczającej, 60, %, kwoty, , przeciętnego, wynagrodzenia, za, kwartał, kalendarzowy, ,, ostatnio, , ogłoszonego, przez, Prezesa, Głównego, Urzędu, Statystycznego, ,, , nie, wyższej, jednak, niż, 120, %, tej, kwoty, ,, emerytura, i, renta, inwalidzka, , dla, inwalidów, I, i, II, grupy, ulega, zmniejszeniu, o, kwotę, , przekroczenia, ,, nie, większą, niż, określona, w, art, ., 10, ust, ., 1, pkt, 1, ,, a, , renta, inwalidzka, dla, inwalidów, III, grupy, -, o, kwotę, przekroczenia, ,, , lecz, nie, więcej, niż, 75, %, kwoty, określonej, w, art, ., 10, ust, ., 1, , pkt, 1, ., ", ., , Art, ., 5, ., W, okresie, od, dnia, 1, stycznia, 1994, r, ., do, czasu, przeprowadzenia, pierwszej, waloryzacji, w, 1994, r, ., najniższe, emerytury, i, renty, ustala, się, od, kwoty, przeciętnego, , wynagrodzenia, w, trzecim, kwartale, 1993, r, ., , Art, ., 6, ., Ustawa, , wchodzi, w, życie, z, dniem, 1, stycznia, 1994, r, .])
###Markdown
Task 3 Compute a frequency list for each of the processed files.
###Code
from collections import Counter

freqs = {}
for filename, tok in tokens.items():
freqs[filename] = Counter([t.text.lower() for t in tok])
for i, (filename, toks) in zip(range(10),freqs.items()):
print(filename, toks.most_common(5))
###Output
1993_599.txt [('|', 1634), ('-', 858), (' ', 502), ('.', 497), (',', 306)]
1993_602.txt [('.', 155), (',', 89), ('w', 63), ('"', 44), ('i', 34)]
1993_645.txt [('.', 23), (',', 7), ('z', 6), ('r', 6), ('nr', 6)]
1993_646.txt [('.', 688), (',', 610), ('w', 506), ('-', 324), ('"', 304)]
1994_150.txt [('.', 20), (',', 6), ('w', 6), ('z', 5), ('i', 4)]
1994_195.txt [('.', 2724), (',', 2197), (' ', 1368), ('w', 1226), ('z', 584)]
1994_201.txt [('.', 36), (',', 19), (' ', 15), ('i', 9), ('z', 8)]
1994_214.txt [('.', 17), ('w', 4), ('"', 4), ('z', 3), ('r', 3)]
1994_215.txt [('.', 39), (',', 23), ('w', 14), ('z', 8), ('ust', 7)]
1994_288.txt [(',', 26), ('.', 23), ('w', 17), (')', 16), (' ', 11)]
###Markdown
Task 4 Aggregate the results to obtain one global frequency list.
###Code
aggregated = Counter()
for _, freq in freqs.items():
aggregated.update(freq)
print(aggregated.most_common(10))
###Output
[('.', 437694), (',', 341126), ('w', 201224), (')', 100194), ('i', 90009), ('art', 83804), ('z', 82443), ('1', 73108), ('o', 64776), ('-', 61714)]
###Markdown
Task 5 Reject all entries that are shorter than 2 characters or contain non-letter characters (make sure to include Polish diacritics).
###Code
for c in 'zażółćgęśląjaźń':
assert(tokenizer(c)[0].is_alpha)
filtered_aggr = Counter()
for filename, tok in tokens.items():
filtered_aggr.update([t.text.lower()for t in tok if len(t.text) >= 3 and t.is_alpha])
print(filtered_aggr.most_common(10))
###Output
[('art', 83804), ('ust', 53636), ('się', 45886), ('lub', 45800), ('poz', 45224), ('oraz', 33558), ('mowa', 28783), ('nie', 22990), ('przez', 20953), ('pkt', 19124)]
###Markdown
Task 6 Make a plot in logarithmic scale (for X and Y): the X-axis should contain the rank of a term, meaning the first rank belongs to the term with the highest number of occurrences (terms with the same number of occurrences should be ordered by their name); the Y-axis should contain the number of occurrences of the term with a given rank.
###Code
from matplotlib import pyplot as plt
sorted_freqs = sorted(filtered_aggr.items(), key=lambda x: x[1], reverse=True)
sorted_freqs
plt.figure(figsize=(8, 8))
plt.plot([f[1] for f in sorted_freqs])
plt.xscale('log')
plt.yscale('log')
plt.xlabel('rank')
plt.ylabel('frequency')
plt.show()
print([f[1] for f in sorted_freqs])
###Output
_____no_output_____
###Markdown
Task 7 Install Morfeusz (the Python binding) and use it to find all words that do not appear in its dictionary.
###Code
import morfeusz2
morf = morfeusz2.Morfeusz()
print(morfeusz2.__version__)
morf.analyse("ustawa")
morf.analyse("klfjakdhhjsf")
def is_in_morfeusz(word):
    # Morfeusz marks words it does not recognize with the 'ign' tag
    return morf.analyse(word)[0][2][2] != 'ign'
words_not_in_morfeusz = [word for word in sorted_freqs if not is_in_morfeusz(word[0])]
words_not_in_morfeusz
###Output
_____no_output_____
###Markdown
Task 8 Find the 30 words with the highest ranks that do not belong to the dictionary.
###Code
words_not_in_morfeusz[:30]
###Output
_____no_output_____
###Markdown
Task 9 Find 30 random words (i.e. shuffle the words) with 5 occurrences that do not belong to the dictionary.
###Code
from random import shuffle
words_5_occurences=list(filter(lambda x: x[1]==5, words_not_in_morfeusz))
shuffle(words_5_occurences)
words_5_occurences[:30]
###Output
_____no_output_____
###Markdown
Task 10 Use Levenshtein distance and the frequency list to determine the most probable correction of the words from the lists defined in points 8 and 9. (Note: You don't have to apply the distance directly. Any method that is more efficient than scanning the dictionary will be appreciated.)
###Code
WORDS = Counter({key:val for key, val in filtered_aggr.items() if is_in_morfeusz(key)})
WORDS
# Code from https://norvig.com/spell-correct.html
def P(word, N=sum(WORDS.values())):
"Probability of `word`."
return WORDS[word] / N
def correction(word):
"Most probable spelling correction for word."
return max(candidates(word), key=P)
def candidates(word):
"Generate possible spelling corrections for word."
return (known([word]) or known(edits1(word)) or known(edits2(word)) or [word])
def known(words):
"The subset of `words` that appear in the dictionary of WORDS."
return set(w for w in words if w in WORDS)
def edits1(word):
"All edits that are one edit away from `word`."
letters = 'aąbcćdeęfghijklłmnoópqrstuvwxyzżź'
splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
deletes = [L + R[1:] for L, R in splits if R]
transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R)>1]
replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
inserts = [L + c + R for L, R in splits for c in letters]
return set(deletes + transposes + replaces + inserts)
def edits2(word):
"All edits that are two edits away from `word`."
return (e2 for e1 in edits1(word) for e2 in edits1(e1))
print(edits1('ala'))
###Output
{'afla', 'aala', 'vla', 'alpa', 'ama', 'tala', 'ara', 'tla', 'aila', 'alad', 'alal', 'alae', 'oala', 'aua', 'aźa', 'alc', 'ćla', 'aca', 'alm', 'vala', 'alr', 'awla', 'aęa', 'alac', 'dla', 'xla', 'alża', 'alay', 'aża', 'óala', 'alaż', 'ald', 'als', 'kla', 'alaę', 'alna', 'aia', 'ahla', 'alea', 'ęla', 'ata', 'alł', 'aja', 'ana', 'fla', 'alw', 'alt', 'hala', 'alag', 'axa', 'alż', 'ąla', 'aka', 'alz', 'ąala', 'alan', 'aula', 'alwa', 'sala', 'wla', 'alma', 'alća', 'alfa', 'cala', 'mala', 'alóa', 'iala', 'pala', 'alę', 'bla', 'alx', 'rala', 'anla', 'alta', 'nala', 'aóa', 'ażla', 'alqa', 'aya', 'aba', 'alaq', 'alaą', 'ula', 'alua', 'adla', 'alia', 'dala', 'abla', 'aza', 'aęla', 'alao', 'alj', 'ajla', 'aa', 'alp', 'alź', 'alat', 'alau', 'óla', 'alaó', 'alaw', 'alaz', 'ayla', 'akla', 'qla', 'alba', 'alza', 'avla', 'aqla', 'aola', 'alja', 'ćala', 'la', 'alaź', 'wala', 'yala', 'alga', 'aly', 'alq', 'alas', 'lala', 'żla', 'alav', 'apla', 'ela', 'laa', 'żala', 'jla', 'aąla', 'aela', 'aloa', 'bala', 'alxa', 'al', 'alb', 'ava', 'alca', 'mla', 'alźa', 'ali', 'aćla', 'atla', 'aóla', 'alha', 'aaa', 'ola', 'rla', 'apa', 'aha', 'ale', 'awa', 'ala', 'aea', 'alsa', 'acla', 'alaa', 'gla', 'alła', 'cla', 'łla', 'alaj', 'alał', 'alra', 'alap', 'alla', 'aźla', 'zala', 'alab', 'aló', 'ila', 'ada', 'kala', 'alai', 'agla', 'ęala', 'azla', 'lla', 'asa', 'yla', 'alk', 'alv', 'axla', 'alva', 'alęa', 'alya', 'aga', 'asla', 'alax', 'sla', 'uala', 'gala', 'aal', 'ała', 'aoa', 'xala', 'all', 'jala', 'ałla', 'łala', 'alać', 'alo', 'amla', 'pla', 'alak', 'alda', 'alć', 'arla', 'aląa', 'źala', 'alh', 'eala', 'aća', 'aln', 'qala', 'alą', 'aąa', 'alg', 'alah', 'zla', 'alu', 'aqa', 'alam', 'hla', 'alaf', 'fala', 'alar', 'źla', 'alf', 'afa', 'alka', 'nla'}
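###Markdown
For reference, the corrector above never computes an edit distance explicitly - it enumerates candidate edits instead. A direct Levenshtein distance would look like the minimal dynamic-programming sketch below (not used by the code above).
###Code
def levenshtein(a, b):
    # dp[i][j] = edit distance between a[:i] and b[:j]
    dp = [[0]*(len(b)+1) for _ in range(len(a)+1)]
    for i in range(len(a)+1):
        dp[i][0] = i
    for j in range(len(b)+1):
        dp[0][j] = j
    for i in range(1, len(a)+1):
        for j in range(1, len(b)+1):
            cost = 0 if a[i-1] == b[j-1] else 1
            dp[i][j] = min(dp[i-1][j] + 1,      # deletion
                           dp[i][j-1] + 1,      # insertion
                           dp[i-1][j-1] + cost) # substitution
    return dp[len(a)][len(b)]

print(levenshtein('ustawa', 'ustwa')) # 1: a single deletion
###Output
_____no_output_____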
###Markdown
List from Task 8
###Code
import pprint

pprint.pprint([(word, correction(word)) for word, _ in words_not_in_morfeusz[:30]])
###Output
[('poz', 'pod'),
('późn', 'plan'),
('str', 'sar'),
('gmo', 'imo'),
('sww', 'swe'),
('operacyjno', 'operacyjne'),
('skw', 'sów'),
('rolno', 'rolne'),
('ike', 'ile'),
('społeczno', 'społeczne'),
('techniczno', 'techniczne'),
('remediacji', 'mediacji'),
('ure', 'urz'),
('rozdz', 'rozkaz'),
('uke', 'ust'),
('itp', 'atp'),
('sanitarno', 'sanitarny'),
('charytatywno', 'charytatywną'),
('pkwiu', 'kwitu'),
('udt', 'ust'),
('bswsg', 'bswsg'),
('bswp', 'bhp'),
('biobójczych', 'biobójczych'),
('organizacyjno', 'organizacyjne'),
('phs', 'prs'),
('komandytowo', 'komandytowa'),
('wodociągowo', 'wodociągowe'),
('architektoniczno', 'architektoniczne'),
('hcfc', 'cfc'),
('emerytalno', 'emerytalne')]
###Markdown
List from Task 9
###Code
import time

start_time = time.time()
corrections=[(word, correction(word))for word, _ in words_5_occurences[:30]]
pprint.pprint(corrections)
print(time.time() - start_time)
###Output
[('betezda', 'betezda'),
('wlkp', 'pkp'),
('geodezyjno', 'geodezyjne'),
('zawart', 'zawarte'),
('organicz', 'organach'),
('próbobiorców', 'próbobiorców'),
('rialnego', 'realnego'),
('inci', 'inni'),
('jed', 'jej'),
('ośc', 'ości'),
('chelatującym', 'chelatującym'),
('denitracyjne', 'denitracyjne'),
('vista', 'lista'),
('ami', 'ani'),
('rci', 'rui'),
('shigella', 'shigella'),
('wapnio', 'wapnia'),
('winopochodne', 'winopochodne'),
('heptanol', 'metanol'),
('kpwig', 'krwi'),
('cznika', 'ocznika'),
('tzn', 'ten'),
('difenylopropylo', 'difenylopropylo'),
('edukacyjno', 'edukacyjne'),
('regazyfikacyjnego', 'regazyfikacyjnego'),
('schetyna', 'schetyna'),
('swine', 'swoje'),
('najmnie', 'najmniej'),
('teryto', 'tery'),
('nym', 'tym')]
2.463282346725464
###Markdown
Task 11 Load the SGJP dictionary (the "Słownik SGJP dane tekstowe" text dump) into Elasticsearch (one document for each form) and use fuzzy matching to obtain the possible corrections of the 30 words with 5 occurrences that do not belong to the dictionary.
###Code
# Imports needed for this cell (Elasticsearch client and DSL helpers)
import csv
import time
import elasticsearch_dsl
from elasticsearch import Elasticsearch, helpers
from elasticsearch_dsl import Document, Text, Search, Q
sgjp_words = set()
with open("sgjp-20211107.tab") as tsv:
i = 0
for line in csv.reader(tsv, dialect="excel-tab"):
if i < 28:
i+=1
else:
sgjp_words.add(line[0].lower())
print(len(sgjp_words))
client = Elasticsearch([{'host':'elastic'}])
elasticsearch_dsl.connections.add_connection('python_client', client)
client.info()
analyzer = elasticsearch_dsl.analyzer(
'keyword_analyzer',
type='custom',
tokenizer='keyword'
)
class WordDocument(Document):
word = Text(analyzer=analyzer)
class Index:
name = 'sgjp'
if WordDocument._index.exists(using=client):
WordDocument._index.delete(using=client)
WordDocument.init(using=client)
start_time = time.time()
loaded_records, _ = helpers.bulk(client, [WordDocument(word=w).to_dict(True) for w in sgjp_words])
print(time.time() - start_time)
print(loaded_records)
start_time = time.time()
elastic_corrections = []
for word, _ in words_5_occurences[:30]:
s = Search(using=client, index='sgjp') \
.query(Q({'fuzzy': {
'word': {
'value': word,
'fuzziness': 5
}
}}))
results = s.execute()
if len(results) == 0:
elastic_corrections.append((word, word))
else:
elastic_corrections.append((word, [r.word for r in results]))
elastic_time = time.time()-start_time
print('\ntime', elastic_time)
pprint.pprint(elastic_corrections)
###Output
[('betezda', ['berenda', 'berezka', 'bereza', 'etezja', 'bereda']),
('wlkp',
['wlep',
'wlk',
'olku',
'klup',
'elkę',
'waki',
'klip',
'elką',
'flop',
'ulep']),
('geodezyjno',
['geodezyjno',
'geodezyjne',
'geodezyjną',
'geodezyjna',
'geodezyjny',
'geodezyjni',
'geodezyjność',
'geodezyjnym',
'geodezyjnie',
'geodezyjnego']),
('zawart',
['zawarz',
'zapart',
'zawarta',
'zawarż',
'zawarł',
'zawarć',
'zawartą',
'gawart',
'zawarty',
'zawarto']),
('organicz',
['ogranicz',
'organicy',
'organiczno',
'organiczna',
'organiczną',
'organiczni',
'ograniczą',
'organika',
'organiki',
'organizm']),
('próbobiorców', ['prądobiorców', 'pracobiorców']),
('rialnego',
['realnego',
'upalnego',
'sielnego',
'ciasnego',
'witalnego',
'finalnego',
'mialonego',
'piasnego',
'rżawnego',
'atrialnego']),
('inci',
['inki',
'irci',
'ince',
'vinci',
'inni',
'iści',
'enci',
'insi',
'ingi',
'imci']),
('jed',
['jęd', 'jeł', 'jod', 'wed', 'jedz', 'jud', 'jend', 'jeb', 'ted', 'red']),
('ośc',
['ośm', 'oścu', 'ość', 'ośce', 'ośca', 'owc', 'ości', 'oc', 'oś', 'aśce']),
('chelatującym', ['nielatującym', 'chatującym']),
('denitracyjne', ['penetracyjne', 'defibracyjne', 'denotacyjne']),
('vista',
['lista',
'wista',
'visa',
'firsta',
'istca',
'fisza',
'iksta',
'finta',
'dosta',
'gusta']),
('ami',
['api', 'wami', 'agi', 'amię', 'ćmi', 'nami', 'asi', 'mami', 'rami', 'kami']),
('rci',
['rąci', 'irci', 'rai', 'rui', 'raci', 'ryci', 'rei', 'rca', 'pci', 'roi']),
('shigella', 'shigella'),
('wapnio',
['wapniu',
'wapnie',
'waśnio',
'wapniom',
'wapnił',
'wapnić',
'warnio',
'wapnico',
'wapnia',
'wapnij']),
('winopochodne', 'winopochodne'),
('heptanol',
['heptanom',
'heptanów',
'heptanie',
'heptanowi',
'heptanem',
'heptanu',
'heptany',
'deptano',
'metanol',
'etanol']),
('kpwig',
['dźwig',
'kowin',
'kawik',
'kalig',
'kenig',
'kawin',
'krwie',
'kiwit',
'kpowi',
'kpowie']),
('cznika',
['cynika',
'czoika',
'ocznika',
'znika',
'cznia',
'bonika',
'cennika',
'czanka',
'cudnika',
'annika']),
('tzn',
['tzn', 'tpn', 'tyn', 'ozn', 'tzw', 'ton', 'tvn', 'ten', 'tan', 'tin']),
('difenylopropylo', 'difenylopropylo'),
('edukacyjno',
['edukacyjno',
'edukacyjną',
'edukacyjne',
'edukacyjni',
'edukacyjny',
'edukacyjna',
'edukacyjnie',
'dedukcyjno',
'koedukacyjno',
'reedukacyjno']),
('regazyfikacyjnego',
['denazyfikacyjnego', 'niegazyfikacyjnego', 'gazyfikacyjnego']),
('schetyna', ['chatyna', 'chityna', 'cetyna']),
('swine',
['swing',
'sine',
'kwice',
'lwice',
'iwinę',
'iwino',
'gwineę',
'gwint',
'sawice',
'iwina']),
('najmnie',
['najmniej',
'najemnie',
'najmanie',
'najmie',
'najemnic',
'hajtnie',
'majdnie',
'gajenie',
'najmowe',
'bajanie']),
('teryto',
['teryno',
'tercyno',
'deryło',
'seryno',
'perytom',
'skryto',
'hereto',
'terkot',
'teratom',
'tefryt']),
('nym',
['cym', 'nys', 'mym', 'nim', 'nom', 'nyż', 'rym', 'nyg', 'onym', 'zym'])]
###Markdown
Task 12 Compare the results of your algorithm with the output of ES.
###Code
contains=0
for ((word, cwords), (_, e_cwords)) in zip(corrections, elastic_corrections):
print(f'{word}: \n Levenshtein : {cwords} \n Elasticsearch : {e_cwords}')
if cwords in e_cwords:
print("Contains \n")
contains+=1
else:
print('not contains\n')
print(contains, len(corrections))
###Output
15 30
###Markdown
Machine Vision Laboratory 3 - Image skeletonization *Author: Paweł Mendroch* - [Github](https://github.com/FrozenTear7/computer-vision-lab/tree/master/lab3) Using the code from the [tutorial](https://scikit-image.org/docs/dev/auto_examples/edges/plot_skeleton.html) page, I generate skeletons with two methods for each image from the `input` folder. The launch code is contained in `run.sh`; I iterate over an array of paths to the files to be processed, and the results are saved in the `output` folder. Below I present an example run for a single image. First the required libraries are loaded and the image is read with the skimage library as bool, converting rgba to rgb and then to grayscale.
###Code
import sys
from skimage.morphology import skeletonize, thin
from skimage.transform import probabilistic_hough_line
from skimage.util import invert
from skimage import data, img_as_bool, io, color
import matplotlib.pyplot as plt
imgPath = "img_0015_mask_Unet_toe.png"
image = img_as_bool(
color.rgb2gray(color.rgba2rgb(io.imread("./input/" + imgPath)))
)
###Output
_____no_output_____
###Markdown
Below I generate the image skeleton in three ways - the classic method and the Lee method, as well as a thinned version, which should give better results after removing unnecessary branches. I detect the big-toe angle on the generated images using the probabilistic Hough line detection method. The parameters given below give good results for the chosen image, but I was not able to find a single ideal setting that works for all images in `input`. For simplicity of code review and consistency, I ultimately use the parameters below everywhere, which gives worse results for some images, as can be seen in the conclusions at the end of the report.
###Code
skeleton = skeletonize(image)
skeleton_lee = skeletonize(image, method="lee")
thinned = thin(image)
threshold = 15
line_length = 130
line_gap = 50
angle = probabilistic_hough_line(
skeleton, threshold=threshold, line_length=line_length, line_gap=line_gap
)
angle_lee = probabilistic_hough_line(
skeleton_lee, threshold=threshold, line_length=line_length, line_gap=line_gap
)
angle_thinned = probabilistic_hough_line(
thinned, threshold=threshold, line_length=line_length, line_gap=line_gap
)
###Output
_____no_output_____
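###Markdown
The lines returned by `probabilistic_hough_line` are pairs of endpoints, so the angle itself still has to be computed from them. A minimal sketch (assuming at least one segment was detected) converts the first detected segment of the classic skeleton to an angle in degrees:
###Code
import numpy as np

if len(angle) > 0:
    (x0, y0), (x1, y1) = angle[0]                      # endpoints of the first detected segment
    theta = np.degrees(np.arctan2(y1 - y0, x1 - x0))   # angle relative to the image x-axis
    print("segment:", angle[0], "angle [deg]:", theta)
###Output
_____no_output_____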
###Markdown
After generating the required images, I construct the result plots, in the form of 4 panels: - the original image - the classic skeleton with the detected angle - the Lee-version skeleton with the detected angle - the thinned version with the detected angle
###Code
fig, axes = plt.subplots(2, 2, figsize=(8, 8), sharex=True, sharey=True)
ax = axes.ravel()
ax[0].imshow(image, cmap=plt.cm.gray)
ax[0].set_title("original")
ax[0].axis("off")
ax[1].imshow(skeleton, cmap=plt.cm.gray)
for line in angle:
p0, p1 = line
ax[1].plot((p0[0], p1[0]), (p0[1], p1[1]))
ax[1].set_title("skeletonize")
ax[1].axis("off")
ax[2].imshow(skeleton_lee, cmap=plt.cm.gray)
for line in angle_lee:
p0, p1 = line
ax[2].plot((p0[0], p1[0]), (p0[1], p1[1]))
ax[2].set_title("skeletonize (Lee 94)")
ax[2].axis("off")
ax[3].imshow(thinned, cmap=plt.cm.gray)
for line in angle_thinned:
p0, p1 = line
ax[3].plot((p0[0], p1[0]), (p0[1], p1[1]))
ax[3].set_title("thinned")
ax[3].axis("off")
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
ml lab3
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.io
###Output
_____no_output_____
###Markdown
1. read data
###Code
data = scipy.io.loadmat('data/ex3data1.mat')
x = np.array(data['X'])
y = np.squeeze(data['y'])
X = np.insert(x, 0, 1, axis=1)
###Output
_____no_output_____
###Markdown
2. plot data
###Code
plt.figure()
plt.scatter(x.flatten(), y)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
###Output
_____no_output_____
###Markdown
3. cost function + L2 regularization
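The cost implemented below is the regularized mean squared error $$J(\theta)=\frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)})-y^{(i)}\right)^2+\frac{\lambda}{2m}\sum_{j}\theta_j^2,$$ which matches the code in the next cell (note that the penalty there sums over all components of $\theta$, including the intercept).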
###Code
def compute_cost_reg(X, y, theta, l=1):
m = y.size
h = X.dot(theta)
error = h - y
cost = np.sum(error ** 2) / (2 * m) + (l / (2 * m)) * np.sum(np.square(theta))
return cost, error
def get_init_theta(_X=X):
return np.zeros(_X.shape[1])
cost, _ = compute_cost_reg(X, y, get_init_theta())
print(f'Initial Cost:\t{cost}')
###Output
Initial Cost: 140.95412088055392
###Markdown
4. gradient descent + L2 regularization
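The update rule implemented below, with learning rate $\alpha$ and the regularization term applied to the full parameter vector, is:
$$\theta := \theta - \alpha\,\frac{X^{T}(X\theta - y) + \lambda\,\theta}{m}$$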
###Code
def gradient_descent_reg(X, y, theta, l=1, alpha=0.0022, num_iters=1000):
m = y.size
j_history = []
XT = X.T
for i in range(0, num_iters):
cost, error = compute_cost_reg(X, y, theta, l)
gradient = (XT.dot(error) + l * theta) / m
theta -= alpha * gradient
j_history.append(cost)
return theta, j_history
theta, costs = gradient_descent_reg(X, y, get_init_theta())
print(f'Cost:\t{costs[-1]}\ntheta:\t{theta}')
###Output
Cost: 29.695375543493448
theta: [10.86601315 0.35442522]
###Markdown
5. linear regression with λ=0 > With λ=0 no regularization is applied
###Code
theta, _ = gradient_descent_reg(X, y, get_init_theta(), l=0)
h = X.dot(theta)
plt.figure()
plt.scatter(x.flatten(), y, label='Dataset')
plt.plot(x.flatten(), h, label='H(x)', c='red')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
6. learning curves
###Code
def train(x_train, x_val, y_train, y_val):
X = np.insert(x_train, 0, 1, axis=1)
theta, train_costs = gradient_descent_reg(X, y_train, get_init_theta(), l=0, alpha=0.0005)
X_val = np.insert(x_val, 0, 1, axis=1)
val_cost = compute_cost_reg(X_val, y_val, theta, l=0)
return train_costs[-1], val_cost[0]
def plot_learning_curves(x_train, x_val, y_train, y_val):
m, n = x_train.shape
train_costs = []
val_costs = []
for size in range(4 , m):
idx = range(0, size)
t, v = train(x_train[idx,:], x_val[idx,:], y_train[idx], y_val[idx])
train_costs.append(t)
val_costs.append(v)
plt.figure(figsize=(8, 5))
plt.plot(train_costs, 'b', label='Train Data')
plt.plot(val_costs, 'r', label='Validation Data')
plt.xlabel('Number of training examples')
plt.ylabel('Error')
plt.legend()
plt.show()
x_val = data['Xval']
y_val = np.squeeze(data['yval'])
plot_learning_curves(x, x_val, y, y_val)
###Output
_____no_output_____
###Markdown
> The plots show that as the training set size increases, the learned model becomes more accurate 7. map features
###Code
def map_features(x, p):
result = x
for i in range(2, p + 1):
r = x ** i
result = np.append(result, r, axis=1)
return result
###Output
_____no_output_____
###Markdown
8. normalize features
###Code
def normilize_features(X):
norm = (X - X.mean(axis=0)) / X.std(axis=0)
mu = X.mean(axis=0)
sigma = X.std(axis=0)
return norm, mu, sigma
###Output
_____no_output_____
###Markdown
9. train with λ=0 and p=8
###Code
X = map_features(x, 8)
X_norm, mu, sigma = normilize_features(X)
X_norm = np.insert(X_norm, 0, 1, axis=1)
theta, costs = gradient_descent_reg(X_norm, y, get_init_theta(X_norm), l=0, alpha=0.3)
print(f'Cost:\t{costs[-1]}\ntheta:\t{theta}')
###Output
Cost: 0.25271483023922614
theta: [11.21758933 12.28233889 11.38602789 3.36072468 -2.91092977 -2.33533198
-2.43006402 -2.58595693 -1.41584622]
###Markdown
10. plot train
###Code
h = X_norm.dot(theta)
new_x, new_h = zip(*sorted(zip(x.flatten(), h)))
fig, axs = plt.subplots(1, 2, sharey = True, figsize=(16, 6))
axs[0].set_title('Train Data & H(x)')
axs[0].scatter(x.flatten(), y, label='Dataset')
axs[0].plot(new_x, new_h, 'r-', label='H(x)')
axs[0].set_xlabel('X')
axs[0].set_ylabel('Y')
axs[0].legend()
axs[1].set_title('Learning Curve')
axs[1].plot(costs)
axs[1].set_xlabel('Iterations')
axs[1].set_ylabel('Cost')
plt.show()
###Output
_____no_output_____
###Markdown
> By increasing the number of input features we obtained a more "flexible" polynomial hypothesis that fits the training set well 11. plot train with λ=1 & λ=100
###Code
theta1, costs1 = gradient_descent_reg(X_norm, y, get_init_theta(X_norm), l=1, alpha=0.1, num_iters=400)
theta100, costs100 = gradient_descent_reg(X_norm, y, get_init_theta(X_norm), l=100, alpha=0.1, num_iters=400)
h1 = X_norm.dot(theta1)
new_x1, new_h1 = zip(*sorted(zip(x.flatten(), h1)))
h100 = X_norm.dot(theta100)
new_x100, new_h100 = zip(*sorted(zip(x.flatten(), h100)))
fig, axs = plt.subplots(1, 2, sharey = True, figsize=(16, 6))
axs[0].set_title('Train Data & H(x)')
axs[0].scatter(x.flatten(), y, label='Dataset')
axs[0].plot(new_x1, new_h1, 'r-', label='λ=1')
axs[0].plot(new_x100, new_h100, 'g-', label='λ=100')
axs[0].set_xlabel('X')
axs[0].set_ylabel('Y')
axs[0].legend()
axs[1].set_title('Learning Curve')
axs[1].plot(costs1, 'r-', label='λ=1')
axs[1].plot(costs100, 'g-', label='λ=100')
axs[1].set_xlabel('Iterations')
axs[1].set_ylabel('Costs')
axs[1].legend()
###Output
_____no_output_____
###Markdown
> λ=1 helps avoid overfitting, while large values of λ lead to underfitting 12. optimal λ search
###Code
def get_X(_x):
_X = map_features(_x, 8)
_X, _, _ = normilize_features(_X)
_X = np.insert(_X, 0, 1, axis=1)
return _X
def optimize_lambda(x_train, x_val, y_train, y_val):
X_train = get_X(x_train)
X_val = get_X(x_val)
l_vals = np.linspace(0, 4, 50)
l_costs = np.empty(shape=(0))
for l in l_vals:
theta, costs = gradient_descent_reg(X_train, y_train, get_init_theta(X_train), l=l, alpha=0.2, num_iters=1000)
c = compute_cost_reg(X_val, y_val, theta, l=0)
l_costs = np.append(l_costs, c[0])
plt.figure()
plt.plot(l_vals, l_costs)
plt.xlabel('Lambda')
plt.ylabel('Cost')
plt.show()
idx = l_costs.argmin()
return l_vals[idx]
optimal_l = optimize_lambda(x, x_val, y, y_val)
print(f'Optimal lambda for validation set:\t{optimal_l}')
###Output
_____no_output_____
###Markdown
13. test error
###Code
x_test = np.array(data['Xtest'])
y_test = np.squeeze(data['ytest'])
X_test = get_X(x_test)
X_train = get_X(x)
theta, _ = gradient_descent_reg(X_train, y, get_init_theta(X_train), l=optimal_l, alpha=0.3, num_iters=1000)
test_cost = compute_cost_reg(X_test, y_test, theta, l=optimal_l)
print(f'Cost on test set: {test_cost[0]}')
###Output
Cost on test set: 11.585832169340758
###Markdown
[imdb dataset from Kaggle](https://www.kaggle.com/deepmatrix/imdb-5000-movie-dataset) (free account required). Stored as `movie_metadata.csv`
###Code
movie_data = pd.read_csv("movie_metadata.csv")
movie_data.head()
movie_data_new = movie_data["genres"].str.split("|", expand=True)
movie_data_new.head()
movie_data_new = movie_data_new.stack()
movie_data_new.head(10)
movie_data_new = movie_data_new.reset_index(level=0)
movie_data_new.head()
movie_data_new = movie_data_new.set_index('level_0')
movie_data_new.head()
movie_data_new = movie_data_new.rename(columns = {0: "genres"})
movie_data_new.head()
movie_data_new = movie_data_new.join(movie_data.drop('genres', axis=1), how="left")
movie_data = movie_data_new
movie_data.head()
movie_data.index = movie_data.index + 1
movie_data.index.name = "ID"
movie_data.head()
###Output
_____no_output_____
###Markdown
Modified from https://stackoverflow.com/questions/39078282/normalizing-data-by-duplication/39078508 Top 20 directors based on average imdb score
###Code
top_directors = (movie_data
.groupby("director_name", as_index=False)['imdb_score'].mean()
.sort_values(by='imdb_score', ascending=False))
top_directors.head(20)
###Output
_____no_output_____
###Markdown
Return director with highest imdb score
###Code
# Idxmax returns an index number to feed into loc
top_directors.loc[top_directors["imdb_score"].idxmax, "director_name"]
###Output
_____no_output_____
###Markdown
Get max average imdb score across all genres.
###Code
genres_invest = (movie_data
.groupby("genres", as_index=False)["imdb_score"]
.mean()
.sort_values(by="imdb_score", ascending=False))
genres_invest
genres_invest.to_excel("lab3.xlsx", index=False)
import seaborn as sns
from matplotlib import pyplot as plt
sns.barplot(y="genres",
x="imdb_score",
color="blue",
orient="h",
data=genres_invest,
)
# seaborn does not expose pyplot as sns.plt; use matplotlib's pyplot directly
plt.title("Total")
plt.savefig("genre_score.png")
#Make an excel sheet
writer = pd.ExcelWriter('lab3.xlsx', engine='xlsxwriter')
# Save best genres to invest dataframe into it.
# Don't add index
# Give a meaningful sheet name
genre_investor_sheet = "genre investing info"
genres_invest.to_excel(writer, index=False, sheet_name=genre_investor_sheet)
# Access the sheet
workbook = writer.book
worksheet = writer.sheets[genre_investor_sheet]
# Insert the genre bar chart into sheet with investor info
worksheet.insert_image("D1", "genre_score.png")
writer.save()
###Output
_____no_output_____
###Markdown
[](https://colab.research.google.com/github/eirasf/GCED-AA3/blob/main/lab3/lab3.ipynb) *Ensemble* Models

One of the latest trends in artificial intelligence models can be summed up as "the knowledge of the group or the crowd". What this rather popular phrase means is the use of many so-called "weak" models inside a meta-classifier. The goal is to generate a "strong" model from the knowledge extracted by the "weak" models. For example, although it will be detailed later, a *Random Forest* grows multiple, much simpler *Decision Trees*; their combination in the *Random Forest* exceeds the performance of any of the individual models. Models built in this way, as meta-classifiers or meta-regressors, receive the generic name of *Ensemble* models. It is worth noting that these models are not limited to decision trees; they can be composed of any type of machine learning model seen previously. They can even be mixed models in which not all members are obtained in the same way, but are created by combining several techniques such as K-NN, SVM, etc. In this unit we will explore several ways of generating the models and of combining them afterwards. We will also look at two of the most common techniques among these *ensemble* models: _Random Forest_ and _XGBoost_.

Preparing the data

The first two tutorials used problems already prepared within the `scikit-learn` library; here the approach changes to bring it closer to normal practice with data that has not been prepared beforehand. Instead, we will use the data of a problem from the UCI repository which, although small, gives us a first approximation. Specifically, it is a classic *machine learning* problem informally known as **Rock or Mine?**. It is a small database consisting of 111 patterns corresponding to rocks and 97 to underwater mines (simulated as metal cylinders). Each pattern consists of 60 numerical measurements corresponding to a band of the frequencies emitted by the sonar. These values already lie between 0.0 and 1.0 and represent the energy of different wavelength ranges over a certain period of time. First, the data will be downloaded if it is not already available.
###Code
import os
import pandas as pd
def load_data(folder, file, url):
'''
Utility function that checks whether the data are present in the directory,
downloads them if they are not, and in either case loads them.
'''
#Safety check
if os.path.exists(folder) and not os.path.isdir(folder):
raise(Exception("The name of the root folder is already in use"))
if not os.path.exists(folder):
os.mkdir(folder)
file_path = os.path.join(folder, file)
if not os.path.isfile(file_path):
print(f'Downloading'.ljust(75,'.'), end='', flush=True)
import urllib.request
urllib.request.urlretrieve(url,file_path)
print(f"Done!")
return pd.read_csv(file_path, delimiter=',', header=None)
data_folder = '_data_'
file_name = 'sonar.all_data'
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/undocumented/connectionist-bench/sonar/sonar.all-data'
data = load_data(data_folder,file_name, url)
data.head(5)
###Output
_____no_output_____
###Markdown
In addition to loading the data, it is advisable to make an initial exploration in order to detect possible problems such as those discussed in theory, i.e. outliers or non-normalized data, missing data of any of the 3 subtypes (MCAR, MAR, MNAR), or possible biases. If, as in this case, we have a `pandas` _DataFrame_, this exploration can start with the following lines of code:
###Code
#Check the number of measurements and variables, as well as the types used for the different data
data.info(memory_usage='deep')
#Check the ranges of the variables
data.describe(include='all')
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 208 entries, 0 to 207
Data columns (total 61 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 0 208 non-null float64
1 1 208 non-null float64
2 2 208 non-null float64
3 3 208 non-null float64
4 4 208 non-null float64
5 5 208 non-null float64
6 6 208 non-null float64
7 7 208 non-null float64
8 8 208 non-null float64
9 9 208 non-null float64
10 10 208 non-null float64
11 11 208 non-null float64
12 12 208 non-null float64
13 13 208 non-null float64
14 14 208 non-null float64
15 15 208 non-null float64
16 16 208 non-null float64
17 17 208 non-null float64
18 18 208 non-null float64
19 19 208 non-null float64
20 20 208 non-null float64
21 21 208 non-null float64
22 22 208 non-null float64
23 23 208 non-null float64
24 24 208 non-null float64
25 25 208 non-null float64
26 26 208 non-null float64
27 27 208 non-null float64
28 28 208 non-null float64
29 29 208 non-null float64
30 30 208 non-null float64
31 31 208 non-null float64
32 32 208 non-null float64
33 33 208 non-null float64
34 34 208 non-null float64
35 35 208 non-null float64
36 36 208 non-null float64
37 37 208 non-null float64
38 38 208 non-null float64
39 39 208 non-null float64
40 40 208 non-null float64
41 41 208 non-null float64
42 42 208 non-null float64
43 43 208 non-null float64
44 44 208 non-null float64
45 45 208 non-null float64
46 46 208 non-null float64
47 47 208 non-null float64
48 48 208 non-null float64
49 49 208 non-null float64
50 50 208 non-null float64
51 51 208 non-null float64
52 52 208 non-null float64
53 53 208 non-null float64
54 54 208 non-null float64
55 55 208 non-null float64
56 56 208 non-null float64
57 57 208 non-null float64
58 58 208 non-null float64
59 59 208 non-null float64
60 60 208 non-null object
dtypes: float64(60), object(1)
memory usage: 109.4 KB
###Markdown
As can be seen in the previous output, the aforementioned features are arranged in 60 numerical columns corresponding to each of the aggregated wavelength bands. The last column, in contrast, is a category or discrete column in which a value of R is recorded when the pattern is a rock and M when it is a mine. There are different ways to handle this column. Some options would be changing its type from `object` to `category`, or doing a *one-hot encoding*, creating one column per possible value. In this case, a binary encoding into a single value has been chosen: a *True* or *False* value will be stored in a column named **Mina**. Specifically, all rows whose column 60 equals M will be given the value *True*, and *False* otherwise.
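Purely as an aside, a minimal sketch of the one-hot alternative mentioned above (the column prefix `class` is just illustrative):

```python
import pandas as pd

# One-hot encode column 60 instead of the single binary flag created in the next cell
one_hot = pd.get_dummies(data[60], prefix='class')            # columns class_M and class_R
data_onehot = pd.concat([data.drop(columns=[60]), one_hot], axis=1)
print(data_onehot.columns[-2:])
```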
###Code
data['Mina'] = (data[60]=='M')
data.head(5)
###Output
_____no_output_____
###Markdown
The next step is to prepare the input and output data sets, as shown below:
###Code
import numpy as np
#Take the first 60 measurements and convert them to a Numpy array
#they have no name, so we access them by position
inputs = (data.iloc[:,0:60]).to_numpy()
#Convert the output to a numeric format and a numpy array
outputs = (data['Mina']).to_numpy().astype('int')
print(f"Patterns: {inputs.shape} -> {outputs.shape}")
###Output
Patterns: (208, 60) -> (208,)
###Markdown
Whenever possible, it is advisable to make a visual inspection of the data to determine whether there is any pattern or element visible at a glance. Since a 60-dimensional space cannot be visualized, it will be reduced to two dimensions with PCA.
###Code
%matplotlib inline
from sklearn.decomposition import PCA
from matplotlib import pyplot as plt
pca = PCA(n_components=2)
pca_inputs = pca.fit_transform(inputs)
plt.figure()
colors = ["darkorange", "lightgray"]
lw = 2
for color, i, target_name in zip(colors, [0, 1], ['Rock', 'Mine']):
plt.scatter(
pca_inputs[outputs == i, 0], pca_inputs[outputs == i, 1], color=color, alpha=0.8, lw=lw, label=target_name
)
plt.legend(loc="best", shadow=False, scatterpoints=1)
###Output
_____no_output_____
###Markdown
The next step, since different alternatives will be compared, is to split the data set into training and test. Because this is a simple problem with only 2 possible classes, the easiest option is to extract the matrices and split the sets with the `train_test_split` function. Here a simple _hold-out_ is performed purely for demonstration purposes; the most appropriate procedure would have been a _cross-validation_, since the data set is very small. For the same reason, instead of the usual 70:30 split, the test set is reduced to only 10% of the total. Additionally, to guarantee a certain representativeness, the split is stratified, i.e. the class proportions are kept the same in both the training and the test sets.
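Purely as a sketch of the cross-validation alternative mentioned above (using the `inputs`/`outputs` arrays already built; the SVC baseline is only an example):

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# 5-fold stratified cross-validation instead of a single hold-out split
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(SVC(), inputs, outputs, cv=cv)
print(scores.mean(), scores.std())
```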
###Code
from sklearn.model_selection import train_test_split
#Create the training and test sets
train_inputs, test_inputs, train_outputs, test_outputs = train_test_split(inputs, outputs, test_size=0.1, stratify=outputs)
print(f"Train Patterns{train_inputs.shape} -> {train_outputs.shape}")
print(f"Test Patterns{test_inputs.shape} -> {test_outputs.shape}")
###Output
Train Patterns(187, 60) -> (187,)
Test Patterns(21, 60) -> (21,)
###Markdown
It is at this point, once the split has been made, that the data treatments can be applied. If data were missing or, as in this case, the data were not normalized, this is where the different treatments are applied. As seen above, this problem has no missing data, so there is nothing to fill in or complete. If there were, the most common functions for handling missing values are:
* [sklearn.impute.SimpleImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html). Fills in missing data either with a fixed value or with the mean of the present values.
* [sklearn.impute.IterativeImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.IterativeImputer.html). Estimates a feature as a function of the remaining ones. Note that the models created for the estimation follow a *round-robin* strategy.
* [sklearn.impute.KNNImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.KNNImputer.html). Here the created object fills in the values using a KNN built on the training data. The closest patterns are selected, and the missing data are filled with the mean of the $k$ nearest neighbours.

What does happen in this case is that the data are not normalized. Normalization can be performed with the following methods:
* [sklearn.preprocessing.StandardScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html). Normalization uses the mean and standard deviation of each feature. It is the preferred option when the feature follows a Gaussian distribution. It applies the formula: $$ \frac{x_i - \bar x}{\sigma(x)}$$
* [sklearn.preprocessing.MinMaxScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html). This object normalizes within the range of each feature, using the maximum and minimum of each feature with the formula below. This type of normalization is indicated when a Gaussian distribution of the data cannot be guaranteed. $$ \frac{x_i - min(x)}{max(x)-min(x)}$$
* [sklearn.preprocessing.RobustScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html). Similar to the previous case, but using the first and third quartiles. It is indicated when there may be outliers that would distort the normalization. $$ \frac{x_i - Q_1(x)}{Q_3(x)-Q_1(x)}$$
* [sklearn.preprocessing.MaxAbsScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MaxAbsScaler.html). This object divides each feature by its maximum absolute value, so that this value becomes 1, without shifting the centroid of the feature distribution. $$ \frac{x_i}{max(|x|)}$$
* [sklearn.preprocessing.Normalizer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Normalizer.html). In this case, the normalization is applied to each pattern rather than to each feature, as follows: $$\frac{x_{ij}}{\sqrt{\sum_{j}x_{ij} ^2}}$$

In this case, normalization between the minimum and maximum values will be used, as shown in the next code cell:
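Before that, purely as a side illustration of one of the alternatives listed above, a z-score scaling with `StandardScaler` (not the option used in this unit) fitted on the training split only would look like this:

```python
from sklearn.preprocessing import StandardScaler

std_scaler = StandardScaler()
train_std = std_scaler.fit_transform(train_inputs)   # statistics are computed on the training data
test_std = std_scaler.transform(test_inputs)         # and reused on the test data
print(train_std.mean(axis=0)[:3], train_std.std(axis=0)[:3])
```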
###Code
from sklearn.preprocessing import MinMaxScaler
# Create the object for normalizing the data
scaler = MinMaxScaler()
# Fit it and transform the training data
train_inputs = scaler.fit_transform(train_inputs)
# With the fitted object, apply the transformation to the test inputs
test_inputs = scaler.transform(test_inputs)
#Check the conversion by printing the ranges
#TODO
###Output
_____no_output_____
###Markdown
Establishing the baseline

As mentioned above, ensembles are a set of "weaker" classifiers whose combination later allows us to go beyond their individual limits. That is why, before starting with ensembles, we need some reference models that will later be joined into a meta-classifier. The following example trains some simple models: an SVM with an RBF kernel, a Logistic Regression, a Naïve Bayes and a Decision Tree.
###Code
from sklearn import svm
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
#Define the classifiers
clfs = { 'SVM': svm.SVC(probability=True),
'LR': LogisticRegression(),
'DT': DecisionTreeClassifier(max_depth=4),
'NB':GaussianNB()}
base_models = [name for name in clfs.keys()]
# Train each model and compute its test score (accuracy)
for key in clfs.keys():
clfs[key].fit(train_inputs, train_outputs)
acc = clfs[key].score(test_inputs, test_outputs)
print(f"{key}: {(acc*100):.4f}%")
###Output
SVM: 85.7143%
LR: 85.7143%
DT: 76.1905%
NB: 61.9048%
###Markdown
Exercise

Extend the set of available models, for example by introducing SVM variants or other approaches such as K-NN, K-means (careful with the `score` function in this case) or a simple MLP. Add at least 4 more, avoiding RandomForest, AdaBoost or any other model that already implies the composition of models per se, which will be seen later.

As can be seen, some methods give a good fit right from the start, while others may need further tuning of their hyperparameters. All of these are what we consider simple or *'weak models'*; what ensemble techniques seek is to combine several of these models in order to improve the overall performance.

Combining Models

When combining models, there are different strategies depending on the model's task, i.e. whether we are classifying or performing a regression. Here we focus on classification; for regression it would be similar, but the continuous nature of the values has to be taken into account when combining the outputs. Regarding classification, there are mainly two ways of combining the outputs of several classifiers: Majority Voting and Weighted Majority Voting.

Majority Voting

Also known as _Hard Voting_, as its name suggests it selects the most voted option among those predicted by the different models. The implementation available in `scikit-learn` counts the class label predicted by each model and keeps the majority class. The option selected by majority among the "experts" that make up the *ensemble* is the one returned. In this way, the problem can be solved taking into account different results or points of view about it. An example of how to build such a model is shown in the following code.
###Code
from sklearn.ensemble import VotingClassifier
#Define the meta-classifier with the previously defined classifiers.
clfs['Ensemble (Hard Voting)'] = VotingClassifier (estimators = [(name,clfs[name]) for name in base_models],
n_jobs=-1)
clfs['Ensemble (Hard Voting)'].fit(train_inputs, train_outputs)
for key in clfs.keys():
acc = clfs[key].score(test_inputs, test_outputs)
print(f"{key}: {(acc*100):.4f}%")
###Output
SVM: 85.7143%
LR: 85.7143%
DT: 76.1905%
NB: 61.9048%
Ensemble (Hard Voting): 80.9524%
###Markdown
As can be seen, it does not improve on the best of its constituent models. This is due, first, to the fact that this is not a particularly complex problem. In addition, another issue is that we trust all models equally when deciding the response class. To solve this, the models can be given different importance, as we will see in the next section.

Weighted Majority Voting

As indicated in the previous step, one of the problems of the classic *ensemble* model is that all outputs carry the same weight and, for each of the "weak" models, only its most voted option is considered. One proposal to solve this is to introduce a weight when pondering the decisions, since one model may be better or more reliable than another. To reflect this, the output can be multiplied by a confidence factor inside the rule used to make the decisions. This weighting procedure is sometimes also called *Soft Voting*, in contrast to the unweighted *Hard Voting*. Imagine that we assign the same weight to each classifier, i.e. {1,1,1}. In an example like the following, with an SVM, a Logistic Regression and a Bayes-based model, we would have the following outputs.

|Classifier |Mine |Rock |
| :------------- | :----------: | -----------: |
|SVM | 0.9 | 0.1 |
|LR | 0.3 | 0.7 |
|NB | 0.2 | 0.8 |
|Soft Voting |0.47 |0.53 |

In this way the selected class would be Rock, since all models carry the same weight in the decision when taking the average. Conversely, if we know that one of the models is better, its answer can be weighted more heavily. Imagine in the previous example that SVM was known to be much better than the other two for this particular problem. In that case, its weight can be increased as shown below so that this model is taken more into account. With the same example, but making the SVM answer weigh more, the results would be:

|Classifier |Mine |Rock |
| :------------- | :----------: | -----------: |
|SVM |2 * 0.9 |2 * 0.1 |
|LR |1 * 0.3 |1 * 0.7 |
|NB |1 * 0.2 |1 * 0.8 |
|Soft Voting |0.575 |0.425 |

As the results show, if we have a higher-quality model, its outputs carry more weight in the corresponding decision. To implement this behaviour, it is enough to add two additional parameters to the `VotingClassifier` used previously so that it weights the output.
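A minimal numeric check of the weighted average in the second table (the probability values are the illustrative ones from the tables above):

```python
import numpy as np

probs = np.array([[0.9, 0.1],   # SVM
                  [0.3, 0.7],   # LR
                  [0.2, 0.8]])  # NB
weights = np.array([2, 1, 1])
print(np.average(probs, axis=0, weights=weights))   # -> [0.575 0.425]
```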
###Code
clfs['Ensemble (Soft Voting)'] = VotingClassifier (estimators = [(name,clfs[name]) for name in base_models],
n_jobs=-1, voting='soft',weights=[1,2,2,1])
clfs['Ensemble (Soft Voting)'].fit(train_inputs, train_outputs)
for key in clfs.keys():
acc = clfs[key].score(test_inputs,test_outputs)
print(f"{key}: {(acc*100):.4f}%")
###Output
SVM: 85.7143%
LR: 85.7143%
DT: 76.1905%
NB: 61.9048%
Ensemble (Hard Voting): 80.9524%
Ensemble (Soft Voting): 80.9524%
###Markdown
As can be seen, the results are better when several models that give good results are combined. In fact, this procedure is the basis of other techniques such as *Random Forest*, which we will see a little later in this tutorial. The models to be used are the other key to building an _ensemble_; in the next section we will look at the most common strategies for creating them. The adjustment of these weights can be done in many different ways: for example, manually as in the previous example, or with some gradient descent technique to tune them as if it were a neural network or an SVM. Yet another possibility is to use the score on a validation set (in this case no data set has been reserved for that purpose) as the weight of each model.

Exercise

Build a different ensemble with other models on the same data set and adjust the weights in order to obtain a good result.

Creating the models

One of the key elements not yet addressed is the creation of the models that will make up the meta-classifier. So far, the approach followed is not very appropriate, since the input data set of all the models is the same. This results in an evident lack of diversity, because whatever model we create, it will have the same information or "point of view" as the others. However, this is not the usual practice; instead, the set of input patterns is usually split into smaller subsets with which one or several techniques are trained in order to, on the one hand, reduce the computational cost and, on the other, increase the diversity of the models. Remember at this point that the "weak" models do not have to be perfect in all classes, nor do they even have to cover all the possibilities; only models that are fast to train and offer a more or less consistent output are needed. Regarding the way in which the data are distributed to create the models, most approaches follow two main strategies, known as *Bagging* and *Boosting*. These two approaches are briefly described below.

Bagging or bootstrap aggregation

The technique known as _Bagging_, or selection with replacement, was proposed by Breiman in 1996. It is based on developing multiple models which can be trained in parallel. The key element is that each model is trained on a subset of the training set, drawn at random with replacement. This last point is particularly important: once an example has been selected, it is placed back among the candidates so that it can be selected again, either in the subset being built or in the subsets of the other models; in other words, non-disjoint sets of examples are created.<img style="display: block; margin-left: auto; margin-right: auto; width: 50%;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/c/c8/Ensemble_Bagging.svg/440px-Ensemble_Bagging.svg.png" alt="Bagging example">The result is that "experts" specialized in particular data are created, depending on the partition.
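As a minimal illustration of one bootstrap draw with replacement (reusing the `train_inputs`/`train_outputs` arrays from above; the 50% size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
size = len(train_inputs) // 2
idx = rng.choice(len(train_inputs), size=size, replace=True)   # sampling with replacement
bag_inputs, bag_outputs = train_inputs[idx], train_outputs[idx]
print(len(np.unique(idx)), "distinct patterns out of", size)
```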
While the common, or more frequent, data are correctly covered by all the models, it is also true that less frequent data tend not to be present in all the partitions and may not be covered in all cases. In this way, we obtain models that are more specialized in certain data or that have a different point of view, experts in a particular region of the search space. Although it will be discussed in more detail a little later, a well-known technique that uses this approach to build its "weak" models is RandomForest, which grows the decision trees composing the meta-classifier in this way. Below is an implementation example of a meta-classifier that uses this technique, but with SVM as the base classifier.
###Code
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
clfs['Bagging (SVC)'] = BaggingClassifier(base_estimator=SVC(),n_estimators=10, max_samples=0.50, n_jobs=-1)
clfs['Bagging (SVC)'].fit(train_inputs, train_outputs)
for key in clfs.keys():
acc = clfs[key].score(test_inputs,test_outputs)
print(f"{key}: {(acc*100):.4f}%")
###Output
SVM: 85.7143%
LR: 85.7143%
DT: 76.1905%
NB: 61.9048%
Ensemble (Hard Voting): 80.9524%
Ensemble (Soft Voting): 80.9524%
Bagging (SVC): 85.7143%
###Markdown
Any classifier can be used as the base of a *Bagging* ensemble with the [BaggingClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingClassifier.html) class. In this case, in the code, 10 models of the `SVC` class were generated, each trained on only 50% of the training patterns. As an alternative to drawing complete examples, a vertical partition of the training _dataset_ could be made, drawing features instead. To implement this alternative, the *max_features* parameter of `BaggingClassifier` must be set. This approach is used when the number of features is very large, to create simpler models that do not use all the information, which is often redundant. It should be noted that this feature extraction for the models is done without replacement, i.e. the features drawn for one classifier are not put back into the list of possibilities until the set for the next classifier is created.

Boosting

The other large family of techniques for building ensemble meta-models is known as *Boosting*. Here the approach is slightly different, since the aim is to create a chain of classifiers. The key element of this type of classifier is that each new classifier is more specialized in the patterns that the previous models got wrong. Therefore, as in the previous case, a subset of patterns from the original set is selected; however, this process is done sequentially and without replacement. This point is crucial because, as mentioned, the idea is to remove those patterns that are already correctly classified and to obtain ever more specific models that concentrate on the less frequent examples or on those that were classified incorrectly in a previous step. As in *Bagging*, the underlying idea is that not all models have to see all the patterns but, unlike _Bagging_, this process is linear because of the dependency in the construction of the models. <img style="display: block; margin-left: auto; margin-right: auto; width: 50%;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/b/b5/Ensemble_Boosting.svg/1920px-Ensemble_Boosting.svg.png" alt="Boosting example">Afterwards, the models are combined using Weighted Majority Voting, with the weights established through an iterative scheme. There are many examples of this type of technique, such as [AdaBoost](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html) or [Gradient Tree Boosting](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html). In both cases the weights are adjusted with a technique based on Gradient Descent. In the case of AdaBoost, the algorithm starts by giving a weight to all the instances of the training set. With that weighted set, a classifier is trained on the original data. Depending on the errors made, the weights of the original set are adjusted and a new copy of the classifier is trained, now on the adjusted data, which focuses more on the instances that were classified incorrectly.
In the case of `scikit-learn`, the implemented algorithm is the one known as [AdaBoost-SAMME](https://hastie.su.domains/Papers/SII-2-3-A8-Zhu.pdf), proposed by Zhu et al. in 2009. A particularity of this implementation is that the *loss* function used is exponential; it is used to weight the errors made, as well as to set the weight of each classifier in the meta-classifier. In general terms, the output will be the option most voted by the classifiers according to each one's weight. Gradient Tree Boosting, for its part, is a different use of _Boosting_. It builds a tree in which the nodes establish criteria that, for example in the case of classification, refer to the `logistic-likelihood` of a given pattern. In this way, each node of the tree performs a classification which is progressively adjusted according to the residual errors committed, adjusting the weights of the different classifiers of the tree. This splitting is done for each of the available features, following a recursive procedure that trains several classifiers in this way. The decision is then based on the answers of the classifiers the pattern has passed through. The main difference with AdaBoost is that here the output consists of the class probabilities, which are summed to give the most probable answer, instead of a vote over the instances. Below is an example of the use of these two meta-classifiers that rely on _Boosting_.
###Code
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
clfs['Ada'] = AdaBoostClassifier(n_estimators=30)
clfs['Ada'].fit(train_inputs, train_outputs)
clfs['GTB'] = GradientBoostingClassifier(n_estimators=30, learning_rate=1.0, max_depth=2, random_state=0)
clfs['GTB'].fit(train_inputs, train_outputs)
for key in clfs.keys():
acc = clfs[key].score(test_inputs,test_outputs)
print(f"{key}: {(acc*100):.4f}%")
###Output
SVM: 85.7143%
LR: 85.7143%
DT: 76.1905%
NB: 61.9048%
Ensemble (Hard Voting): 80.9524%
Ensemble (Soft Voting): 80.9524%
Bagging (SVC): 85.7143%
Ada: 80.9524%
GTB: 80.9524%
###Markdown
Exercise

Repeat the previous exercise but changing the base classifier of `AdaBoostClassifier`; in the case of `GradientBoostingClassifier`, although the base classifier cannot be changed, consult the documentation and change the necessary parameters so that at least 4 patterns fall in each terminal node (leaf) of the trees.

Techniques that build on the _Ensemble_ approach

Some of the best-known and most widely used algorithms today are based on this type of approach. Among them, perhaps the most famous and most used are those based on generating simple decision trees (_Decision Tree, DT_). The reason for using trees is that they are easy to interpret, as well as fast to compute and train. Below we will look at the two best-known approaches in this sense, ***Random Forest*** and ***XGBoost***.

Random Forest

This algorithm, proposed by Breiman and Cutler in 2001 following an earlier publication by Ho from 1995 (_Random Subspaces_), is the paradigm of an ensemble technique. The algorithm combines into an ensemble a set of simple classifiers in the form of *Decision Trees*. These classifiers are trained following a *bagging* approach, and therefore each one can be trained in parallel. The outputs are combined, for classification problems, by taking the most voted option among the "experts" or, for regression problems, by taking the arithmetic mean of the answers. It is an algorithm that requires tuning very few hyperparameters to obtain very good results on almost any type of problem. In general, the most important value is the number of estimators and therefore the number of partitions made of the training set. Several authors suggest that this number of estimators should be $\sqrt{\textrm{number of features}}$ for classification problems and $\frac{\textrm{number of features}}{3}$ for regression problems. Even so, they also point out that the technique saturates somewhere between 500 and 1000 trees, and increasing it further does not improve results; this last figure has only been tested empirically on certain data sets and must therefore be taken with care, since it has no mathematical justification. In addition to the usual *bagging* process, *Random Forests* also include a second splitting mechanism: once the patterns that will form part of the decision tree's training set have been selected, only a random subset of features is available at each node of the tree. This increases the diversity of the trees in the forest and focuses on the overall performance with little variance in the results. This mechanism makes it possible to quantitatively evaluate the individual performance of each tree in the forest and of its variables; therefore, the importance of each variable can be measured. This measure, which calibrates the participation of each variable in the tree nodes when making decisions, is called impurity and measures the difference between the branches of the tree when the examples are split.
Sometimes this same measure is used as a criterion for variable selection, taking across all the trees of the forest the measure of participation and importance and applying a filtering like those seen in the previous unit. There are different approaches to computing this impurity measure. For example, `scikit-learn` uses a measure called **Gini**: the probability of incorrectly classifying a randomly chosen element of the data set if it were labelled at random according to the class distribution of the data set. It is computed as:
$$G = \sum_{i=1}^C p(i) * (1 - p(i))$$
where $C$ is the number of classes and $p(i)$ is the probability of randomly selecting an element of class $i$. A good example of how to compute the impurity of the branches can be found at the following [link](https://victorzhou.com/blog/gini-impurity/).
Next, on the example used throughout this unit, a *Random Forest* model will be run with the `scikit-learn` implementation. The most important parameters of this implementation are:
- ***n_estimators***, the number of trees to be generated, i.e. the number of *bagging* partitions.
- ***criterion***, the impurity measure of the nodes. By default Gini is used, although it can be changed to the information gain (entropy).
- ***max_depth***, limits the maximum depth of the trees and thus the number of variables to use.
- ***min_samples_split***, for each decision tree, how many patterns are required to make an internal split in the *Decision Trees*.
- ***bootstrap***, whether to use the *bagging* or *bootstrap* approach to build the trees; if this property is False, the whole training set is used to generate each tree. If it is True, the following properties are taken into account:
  + ***max_samples***, number of examples to draw from the original set to build the estimator's training set; the default value equals the number of patterns, but remember that the same example can be drawn several times since the selection is with replacement, which provides variability.
  + ***oob_score***, an *out-of-bag* measure to estimate generalization. The samples that were not part of the training of an estimator can be used to compute a validation measure which, averaged over all estimators, tells how general the built forest is.
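As a sketch of the `oob_score` option mentioned in the list above (the number of estimators here is arbitrary, not the $\sqrt{\textrm{number of features}}$ rule used in the next cell):

```python
from sklearn.ensemble import RandomForestClassifier

rf_oob = RandomForestClassifier(n_estimators=200, bootstrap=True, oob_score=True, n_jobs=-1)
rf_oob.fit(train_inputs, train_outputs)
print(rf_oob.oob_score_)   # out-of-bag estimate of the generalization accuracy
```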
###Code
from sklearn.ensemble import RandomForestClassifier
clfs['RF'] = RandomForestClassifier(n_estimators=8, max_depth=None,
min_samples_split=2, n_jobs=-1)
clfs['RF'].fit(train_inputs, train_outputs)
for key in clfs.keys():
acc = clfs[key].score(test_inputs,test_outputs)
print(f"{key}: {(acc*100):.4f}%")
###Output
SVM: 85.7143%
LR: 85.7143%
DT: 76.1905%
NB: 61.9048%
Ensemble (Hard Voting): 80.9524%
Ensemble (Soft Voting): 80.9524%
Bagging (SVC): 85.7143%
Ada: 80.9524%
GTB: 80.9524%
RF: 76.1905%
###Markdown
In this approach the number of estimators has been set following the rule indicated above, $\sqrt{\textrm{number of features}}$. In this case, since there are few estimators and few patterns, the results can vary quite a lot depending on the partitions obtained. Next, once the model has been trained, the impurity level obtained for each of the frequencies can be checked; it has been computed with the Gini criterion as an average over the trees that make up the forest.
###Code
#for name, score in enumerate(clfs['RF'].feature_importances_):
# print(f"Feature {name+1} score: {score:.4f}")
from matplotlib import pyplot as plt
plt.rcParams["figure.figsize"]=10,10
plt.barh(y=range(1,61), width = clfs['RF'].feature_importances_)
plt.xlabel("Gini Gain")
plt.ylabel("Feature")
plt.title("Feature importance")
###Output
_____no_output_____
###Markdown
Note that, as the graph shows, this value indicates that most of the information is concentrated in a few of the frequencies used. That is why, as mentioned before, a filtering of the information like the one seen in the previous unit could be performed based on this value.

XGBoost (eXtreme Gradient Boosting)

Finally, this last section highlights Gradient Boosting again, specifically an implementation that has become very famous in recent years for its versatility and speed. This implementation, known as ***XGBoost (eXtreme Gradient Boosting)***, has stood out especially in competitions such as those on the Kaggle platform because of how quickly it obtains results and how robust those results are. ***XGBoost*** is an ensemble similar to Random Forest, but it uses a different base classifier known as CART (classification and regression trees) instead of *Decision Trees*. This change stems from the algorithm's need to obtain the probability of the decisions, as was the case with *Gradient Tree Boosting*. The other fundamental change of this algorithm, since it is based on *Gradient Tree Boosting*, is the replacement of the *bagging* strategy by *boosting* for creating the training sets of the classifiers. This technique then follows an additive training scheme whose weights are adjusted by **Gradient Descent** over a *loss* function to be defined. Adding the *loss* function and the regularization term, up to the second derivative of the functions can be computed in order to update the weights of the classification made by the different trees. Computing this gradient therefore allows the values of the classifiers generated after a given one to be adjusted, so that the weights focus attention on the incorrectly classified patterns. The mathematical details of the implementation can be found at this [link](https://xgboost.readthedocs.io/en/stable/tutorials/model.html). Unlike the rest of the approaches we have seen, `xgboost` is not currently implemented in `scikit-learn`; for this reason, the reference version must be installed if it is not already present on the machine.
###Code
try:
import xgboost as xgb
except ModuleNotFoundError:
!pip install xgboost
import xgboost as xgb
###Output
_____no_output_____
###Markdown
After the installation, the library can be used as shown in the following example. First, the input data must be adapted to the [LIBSVM](https://xgboost.readthedocs.io/en/stable/tutorials/input_format.html) format. There are several ways to load data from numpy, scipy or pandas; for more details on this point, and to apply it to different problems, see the following [link](https://xgboost.readthedocs.io/en/stable/python/python_intro.html). In this particular case, the example is stored in a `numpy` array, so the transformation of the data only requires:
###Code
# prepare the matrices for use with the LIBSVM format
dtrain = xgb.DMatrix(train_inputs, label=train_outputs)
dtest = xgb.DMatrix(test_inputs, label=test_outputs)
###Output
_____no_output_____
###Markdown
Once this adaptation of the data is done, we can proceed with training an `xgboost` model. To do so, it is only necessary to call the train function with the corresponding parameters. Among these parameters, the following stand out:
- **eta**, term that determines the shrinkage of the weights after each new *boosting* stage. It takes values between 0 and 1.
- **max_depth**, maximum depth of the trees; the default value is 6, and increasing it allows more complex models.
- **gamma**, parameter that controls the minimum loss reduction needed to make a new partition in a leaf node of the tree. The larger it is, the more conservative the algorithm will be.
- **alpha** and **lambda**, the parameters that control the L1 and L2 regularization respectively.
- **objective**, sets the loss function to be used, which can be one of the predefined ones listed at this [link](https://xgboost.readthedocs.io/en/stable/parameter.html#parameters-for-tree-booster).
In addition, only the maximum number of iterations of the boosting process needs to be set, as shown in the following example with 40 rounds.
###Code
# Specify the model parameters
param = {'max_depth':2, 'eta':1, 'objective':'binary:logistic' }
num_round = 40
# train the corresponding model
xgb_model = xgb.train(param, dtrain, num_round)
###Output
[12:40:27] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
###Markdown
***NOTE***
If a validation set is used, it must be passed in the *evals* parameter of the training function. In addition, and only when this *evals* parameter is defined, the number of rounds for early stopping can be set with the *early_stopping_rounds* parameter of the training function. The code would be similar to the snippet below (note that *evals* expects a list of `(DMatrix, name)` pairs):
``` python
 dval = xgb.DMatrix(val_inputs, label=val_outputs)
 xgb_model = xgb.train(param, dtrain, num_round, evals=[(dval, 'validation')], early_stopping_rounds=10)
```
The value given as output corresponds to the sum of the outputs of the trees and lies between 0 and 1, expressing membership of a given class. Since this is a binary problem, a threshold of 0.5 is simply applied to the output to determine the answer.
###Code
from sklearn.metrics import accuracy_score
print(f"{xgb_model.predict(dtest)}")
# Evaluate the output
acc = accuracy_score(xgb_model.predict(dtest)>0.5,test_outputs)
print(f"XGB: {(acc*100):.4f}%")
###Output
[8.4760702e-01 5.1304615e-01 9.5305294e-01 3.7761524e-04 9.9571759e-01
1.2167676e-01 5.5907309e-01 9.9939620e-01 5.7567203e-01 1.8412283e-02
8.8869894e-01 9.0951115e-01 9.9960881e-01 1.4012090e-02 9.9484211e-01
5.5841553e-01 9.9324566e-01 9.9944490e-01 9.9865578e-02 2.3036277e-02
9.9296373e-01]
XGB: 80.9524%
###Markdown
Finally, as with *Random Forest*, it is possible to identify the importance of each variable in the classification and plot it. The following code shows this score sorted in descending order.
###Code
# plot the feature importances
xgb.plot_importance(xgb_model)
###Output
_____no_output_____
###Markdown
**Load .npy files into splits by using np.load function**
###Code
import numpy as np
X_train, X_test, y_train, y_test = np.load('X_train.npy'), np.load('X_test.npy'), np.load('y_train.npy'), np.load('y_test.npy')
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
**TensorFlow**TensorFlow is a Python-based, free, open-source machine learning platform, developed by Google.
###Code
import tensorflow as tf
print(tf.__version__)
tf.config.list_physical_devices('GPU')
###Output
2.6.0
###Markdown
**Keras**Keras is a deep-learning API for Python, built on top of TensorFlow, that provides a convenient way to define and train any kind of deep-learning model.
###Code
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization
from tensorflow.keras.callbacks import EarlyStopping
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
**Sequential:** There are three APIs for building models in Keras. One of them is Sequential. The Sequential class indicates that our network will be feedforward and that layers will be added to it sequentially, one on top of the other.**Layer:** The core building block of neural networks. You can think of a layer as a filter for data: some data goes in, and it comes out in a more useful form.**Dense layer:** A densely connected (also called fully connected) neural layer.**Regularization:** The goal of regularization is to keep the model from fitting the training data too closely, so that it performs better on validation data. **DropOut:** Dropout is one of the most effective and most commonly used regularization techniques for neural networks. Applied to a layer, it consists of randomly dropping out (setting to zero) a number of output features of the layer during training.**BatchNormalization:** It's a type of layer. Batch normalization applies a transformation that maintains the mean output close to 0 and the output standard deviation close to 1.
###Code
def create_model(num_neurons=5, num_layers=1, activation= 'relu', drop_ratio=0.2):
model = Sequential()
model.add(Dense(num_neurons, activation = activation, input_shape = (X_train.shape[1],)))
model.add(Dropout(drop_ratio))
for i in range(num_layers-1):
model.add(BatchNormalization())
model.add(Dense(num_neurons, activation = activation))
model.add(Dense(1, activation = 'sigmoid'))
model.summary()
return model
###Output
_____no_output_____
###Markdown
To make the model ready for training, we need to pick three more things as part of the compilation step:**Optimizer**—The mechanism through which the model will update itself based on the training data it sees, so as to improve its performance. ***Adam*** is a stochastic gradient descent method.**Loss function**—How the model will be able to measure its performance on the training data, and thus how it will be able to steer itself in the right direction. ***binary_crossentropy*** is used for binary (0 or 1) classification.**Metrics** to monitor during training and testing—Here, we'll only care about accuracy: the fraction of the input data that were correctly classified.**Model Training:** We're now ready to train the model, which in Keras is done via a call to the model's fit() method. We fit the model to its training data. Two quantities are displayed during training: the loss of the model over the training data and the accuracy of the model over the training data.**Evaluate Model:** On average, how good is our model at classifying never-before-seen data? We can check by computing the average accuracy over the entire test set with the model.evaluate() function.
###Code
for i in range(1,10):
for j in range(5,31,5):
model= create_model(num_neurons=j, num_layers=i)
model.compile(optimizer= 'adam',loss= 'binary_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, y_train, batch_size=80, epochs=300, verbose = 0, validation_split = 0.1)
loss, acc = model.evaluate(X_test, y_test)
print('Test accuracy:', acc)
print('Test loss', loss)
plt.plot(history.history['loss'], label= 'training')
plt.plot(history.history['val_loss'], label= 'validation')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
###Code
for k in range(5):
drop_ratio = k*0.1
print('Dropout: ',drop_ratio)
model= create_model(num_neurons=20, num_layers=8, drop_ratio = drop_ratio)
model.compile(optimizer= 'adam',loss= 'binary_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, y_train, batch_size=80, epochs=300, verbose = 0, validation_split = 0.1)
loss, acc = model.evaluate(X_test, y_test)
print('Test accuracy:', acc)
print('Test loss', loss)
plt.plot(history.history['loss'], label= 'training')
plt.plot(history.history['val_loss'], label= 'validation')
plt.legend()
plt.show()
model= create_model(num_neurons=20, num_layers=8)
model.compile(optimizer= 'adam',loss= 'binary_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, y_train, batch_size=80, epochs=300, verbose = 0, validation_split = 0.1)
loss, acc = model.evaluate(X_test, y_test)
print('Test accuracy:', acc)
print('Test loss', loss)
plt.plot(history.history['loss'], label= 'training')
plt.plot(history.history['val_loss'], label= 'validation')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
When you're training a model, there are many things you can't predict from the start. In particular, you can't tell how many epochs will be needed to reach an optimal validation loss. Our examples so far have adopted the strategy of training for enough epochs (e.g. 300) that you begin overfitting, using the first run to figure out the proper number of epochs to train for, and then launching a new training run from scratch using that optimal number. Of course, this approach is wasteful. A much better way to handle it is to stop training when you measure that the validation loss is no longer improving. This can be achieved with the EarlyStopping callback. **Early stopping:** The EarlyStopping callback interrupts training once a target metric being monitored has stopped improving for a fixed number of epochs. **Patience:** the number of epochs with no improvement after which training will be stopped.
###Code
es = EarlyStopping(monitor = 'val_loss', patience = 100)
model= create_model(num_neurons=20, num_layers=8)
model.compile(optimizer= 'adam',loss= 'binary_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, y_train, batch_size=80, epochs=500, verbose = 2, validation_split = 0.1, callbacks=[es])
loss, acc = model.evaluate(X_test, y_test)
print('Test accuracy:', acc)
print('Test loss', loss)
plt.plot(history.history['loss'], label= 'training')
plt.plot(history.history['val_loss'], label= 'validation')
plt.legend()
plt.show()
history = model.fit(X_train, y_train, batch_size=80, epochs=300, verbose = 0, validation_split = 0.1)
loss, acc = model.evaluate(X_test, y_test)
print('Test accuracy:', acc)
print('Test loss', loss)
from matplotlib import pyplot as plt
plt.plot(history.history['loss'], label= 'training')
plt.plot(history.history['val_loss'], label= 'validation')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
**Prediction:** Now that we have a trained model, we can use it to predict class probabilities for new data points that weren't part of the training data, such as those from the test set.
###Code
# take a slice of test samples (rows 5-99) and their labels
rows = X_test[5:100, :]
labels = y_test[5:100]
# round the sigmoid outputs to hard 0/1 predictions
predictions = model.predict(rows).round().reshape(labels.shape)
# print('Predicted: ', predictions)
# print('Labels: ', labels)
from sklearn.metrics import confusion_matrix
print(confusion_matrix(labels, predictions))
###Output
_____no_output_____
|
bert_classification_to_git.ipynb
|
###Markdown
Text classification with BERT. Task: when price lists are uploaded to the service, products have to be assigned to the right categories automatically. Text classification fits this best, so we build a classifier on top of a pretrained BERT model. Since our texts are mostly in Russian, we take the **RuBert** model from DeepPavlov. For the whole pipeline we use the [Transformers](https://huggingface.co/transformers/) library from huggingface, which makes it easy to set up one or another kind of Transformer. To fine-tune the model, a dataset was downloaded beforehand from Leroy Merlin containing product names and the category each belongs to; all digits were removed from the text. A Transformer is a neural network model that uses internal memory to discover and remember relationships between words. Research has shown that, unlike RNN- or CNN-type models, a Transformer can also remember fairly long-range relationships between words. BERT is a kind of Transformer with two principal differences. In a classic Transformer, language modeling uses a mask for sequential word "guessing": a word is predicted from the context to the left of the masked word, whereas in BERT words are masked at random. The second feature is that BERT takes two sentences as input, so that it can understand related sentences. This is not important for our task, but it must be kept in mind when preparing text before feeding it to BERT. In addition, BERT uses its own token markup; we will use the existing pretrained tokenizer. The tokenizer splits words into subwords, so the trained model handles words with typos and mistakes well and can also process possible new words. (Earlier distributional-semantics models such as word2vec could not do this.) Using pretrained language models has shown very good results for a large number of text-processing tasks. Language models are trained on a very large amount of text on various topics, memorizing the peculiarities of the language and the relationships between words. The pretrained model can then be used on your own small dataset to solve a specific task, and large computational resources are no longer required to get a high-quality result, which considerably speeds up and cheapens the development of the final product. The BERT authors recommend fine-tuning the pretrained model only briefly: 2-4 epochs are enough for a good result. Importing the libraries
###Code
!git clone https://github.com/blanchefort/utils.git
!pip install transformers
import numpy as np
import pandas as pd
from random import randint
import torch
from torch.utils.data import (TensorDataset,
                              DataLoader,
                              RandomSampler,
                              SequentialSampler)  # SequentialSampler is used by the validation DataLoader below
from keras.preprocessing.sequence import pad_sequences
from transformers import AutoConfig, AutoModelForSequenceClassification
from transformers import AutoTokenizer
from transformers import AdamW
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt
from utils.ml.train_model import init_random_seed
from utils.ml.visualize import config_my_plotting
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
init_random_seed(42)
config_my_plotting()
# Check which GPU we have
print(torch.cuda.get_device_name(0))
###Output
Tesla P100-PCIE-16GB
###Markdown
Dataset preparation. We use PyTorch as the main framework for training the model, so we will use its tools for building the dataset and feeding batches to the model. PyTorch lets you use different kinds of datasets. Since our sample is small, we load it into memory right away and convert it into the tensors that will be fed to the model.
###Code
df = pd.read_csv('/content/drive/My Drive/colab_data/leroymerlin/to_classifier.csv')
# Size of our dataset
df.shape
df.sample(5)
# product categories will be our labels
df.category_1.unique()
# Assign each category an index so it can be fed into the model
category_index = {i[1]:i[0] for i in enumerate(df.category_1.unique())}
# reverse mapping (label index to text); we will need this dictionary
# after training to see more clearly which category the model
# assigned to a product
category_index_reverce = {i[0]:i[1] for i in enumerate(df.category_1.unique())}
category_index
# Convert all dataset labels to numbers
sentences = df.name.values
labels = [category_index[i] for i in df.category_1.values]
# Each sentence (product name) now corresponds to the category index rather than the category name:
sentences[22], labels[22]
# Check that all data is consistent
assert len(sentences) == len(labels) == df.shape[0]
###Output
_____no_output_____
###Markdown
Tokenization. Now the prepared dataset needs to be tokenized. For this we use the pretrained tokenizer that ships with the DeepPavlov model. We bring each sample into the form BERT requires, then split the data into a training set and a hold-out set for a later quality check. BERT's special tokens: * `[CLS]` - start of a sequence * `[SEP]` - separator between two sentences. We will wrap our samples in these tokens.
###Code
sentences = ['[CLS] ' + sentence + ' [SEP]' for sentence in sentences]
train_sentences, test_sentences, train_category, test_category = train_test_split(sentences, labels, test_size=0.005)
len(train_sentences), len(test_sentences)
###Output
_____no_output_____
###Markdown
Now let's load our tokenizer. It ships with the pretrained [RuBert model from DeepPavlov](https://huggingface.co/DeepPavlov/rubert-base-cased). We will load it with the universal PyTorch method `torch.hub.load`.
###Code
tokenizer = torch.hub.load('huggingface/pytorch-transformers', 'tokenizer', 'DeepPavlov/rubert-base-cased')
# Tokenize our training set
tokenized_texts = [tokenizer.tokenize(sent) for sent in train_sentences]
# Look at the result
# The '##' markers indicate that a token is the continuation of a word
print(tokenized_texts[42])
# token indices
input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
###Output
_____no_output_____
###Markdown
Now we need to choose the length of the input sequence fed to the network. It must be fixed, but our samples vary in length. So let's look at the distribution of the number of tokens per sample and try to pick an optimal size. Our task uses short sequences, so we can take the maximum length; in other tasks this can be critical: we want to take as much context into account as possible, while also making sure the machine has enough memory to process long sequences. Once the size is chosen, we bring all samples to a uniform shape: longer sequences are truncated, and shorter ones are extended with padding (special zero tokens).
###Code
# Collect the lengths of all tokenized sequences
lenths = [len(sent) for sent in tokenized_texts]
# Look at how they are distributed
plt.hist(lenths)
# Pad/truncate the dataset to a common length. We take a length of 24
input_ids = pad_sequences(
    input_ids,
    # maximum sentence length
    maxlen=24,
    dtype='long',
    truncating='post',
    padding='post'
)
# Here is what we end up with
# As you can see, this example has fewer than 24 tokens, so padding was added at the end
input_ids[42]
# Create an attention mask for each sample of our training set:
# ones mark tokens that should be taken into account during training and gradient computation,
# zeros mark tokens that should be skipped.
attention_masks = [[float(i>0) for i in seq] for seq in input_ids]
print(attention_masks[42])
# each mask corresponds to its own sequence
assert len(input_ids[42]) == len(attention_masks[42])
###Output
_____no_output_____
###Markdown
Now let's split our training data into the actual training set and a validation set, used to check quality during fine-tuning; we need to split both the sequences and their masks.
###Code
train_inputs, validation_inputs, train_labels, validation_labels = train_test_split(
input_ids, train_category,
random_state=42,
test_size=0.1
)
train_masks, validation_masks, _, _ = train_test_split(
attention_masks,
input_ids,
random_state=42,
test_size=0.1
)
assert len(train_inputs) == len(train_labels) == len(train_masks)
assert len(validation_inputs) == len(validation_labels) == len(validation_masks)
###Output
_____no_output_____
###Markdown
Initializing the DataLoaders. We convert all our data to the Tensor type that PyTorch works with and initialize the ready-made DataLoader from this library.
###Code
train_inputs = torch.tensor(train_inputs)
train_labels = torch.tensor(train_labels)
train_masks = torch.tensor(train_masks)
validation_inputs = torch.tensor(validation_inputs)
validation_labels = torch.tensor(validation_labels)
validation_masks = torch.tensor(validation_masks)
###Output
_____no_output_____
###Markdown
You can experiment with different batch sizes. Keep in mind, though, that with large batches the GPU may run out of memory, while with batches that are too small training becomes unstable. The BERT authors recommend a batch size of 32; we take 64, since our sequences are not very long.
###Code
# a special wrapper for tensor datasets; PyTorch has others,
# and you can also write your own, but for our task the tools
# already in the library are quite enough and keep the code short.
train_data = TensorDataset(train_inputs, train_masks, train_labels)
train_dataloader = DataLoader(
    train_data,
    # batches are drawn at random with RandomSampler
sampler=RandomSampler(train_data),
batch_size=64
)
validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels)
validation_dataloader = DataLoader(
validation_data,
sampler=SequentialSampler(validation_data),
batch_size=64
)
###Output
_____no_output_____
###Markdown
Fine-tuning. We load the weights of the pretrained model and start fine-tuning on our data. Since we want a multi-class classifier rather than a language model, we specify this in the settings. The Transformers library already implements classes for various tasks; we need `AutoModelForSequenceClassification`. Using this class, we take the pretrained BERT and add one fully connected layer on top of it, which will solve our classification task. By default the `AutoModelForSequenceClassification` (or `BertForSequenceClassification`) wrapper assumes binary classification; we need multi-class classification, so we specify this in the configuration.
###Code
config = AutoConfig.from_pretrained('DeepPavlov/rubert-base-cased',
num_labels=len(category_index),
id2label=category_index_reverce,
label2id=category_index)
# Load the model, passing it our config
model = AutoModelForSequenceClassification.from_pretrained('DeepPavlov/rubert-base-cased', config=config)
# Send it to the GPU and, along the way, look at our BERT's architecture
model.cuda()
###Output
_____no_output_____
###Markdown
We can see that a layer was added on top of BERT's output:```(classifier): Linear(in_features=768, out_features=16, bias=True)```The output gives 16 probabilities of the text belonging to each label.
###Code
# Model hyperparameters. They can be changed
param_optimizer = list(model.named_parameters())
# You can inspect or change them, but we don't need to; we only initialize the
# optimization function. As the optimizer we use the optimized
# Adam (adaptive moment estimation)
# for name, _ in param_optimizer:
#     print(name)
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=2e-5)
###Output
_____no_output_____
###Markdown
As mentioned above, 2-4 epochs are enough for a good result. Even a single pass gives a good result; for our dataset and task this is exactly the case.
###Code
%%time
train_loss_set = []
train_loss = 0
# Put the model into training mode
model.train()
for step, batch in enumerate(train_dataloader):
    # Move the data to the GPU
    batch = tuple(t.to(device) for t in batch)
    b_input_ids, b_input_mask, b_labels = batch
    # Zero out the gradients
    optimizer.zero_grad()
    # Run the data through the network layers
    loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)
    train_loss_set.append(loss[0].item())
    # Backward pass
    loss[0].backward()
    # Optimizer step
    optimizer.step()
    # Update the running loss
    train_loss += loss[0].item()
    #print(f'Loss: {loss[0].item()}')
print('*'*20)
print(f'Training loss: {train_loss / len(train_dataloader)}')
# see how the model trained
plt.plot(train_loss_set)
plt.title("Training loss")
plt.xlabel("Batches")
plt.ylabel("Loss")
plt.show()
###Output
_____no_output_____
###Markdown
As we can see, the model trained quite quickly; after that there are only fluctuations with no improvement in quality. Validation. Let's check the model on the hold-out set.
###Code
%time
# Put the model into evaluation mode
# (note: %time above only times that single line; use %%time at the top of the cell to time the whole loop)
model.eval()
valid_preds, valid_labels = [], []
for batch in validation_dataloader:
    # move the batch to the GPU
    batch = tuple(t.to(device) for t in batch)
    b_input_ids, b_input_mask, b_labels = batch
    # No gradients are needed here
    with torch.no_grad():
        logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)
    # Move the logits and labels to the CPU
    logits = logits[0].detach().cpu().numpy()
    label_ids = b_labels.to('cpu').numpy()
    batch_preds = np.argmax(logits, axis=1)
    batch_labels = label_ids #np.concatenate(label_ids)
    valid_preds.extend(batch_preds)
    valid_labels.extend(batch_labels)
###Output
CPU times: user 2 µs, sys: 1e+03 ns, total: 3 µs
Wall time: 6.91 µs
###Markdown
Let's look at the model's quality for each category. As you can see, the model does comparatively well at identifying every category, even though we used an imbalanced dataset.
###Code
print(classification_report(valid_labels, valid_preds, target_names=category_index_reverce.values()))
###Output
precision recall f1-score support
Стройматериалы 0.99 0.96 0.97 1340
Столярные изделия 1.00 1.00 1.00 263
Окна и двери 1.00 0.89 0.94 65
Электротовары 0.96 0.98 0.97 1997
Инструменты 0.95 0.98 0.96 3050
Напольные покрытия 0.97 0.79 0.87 243
Плитка 0.92 0.77 0.83 239
Сантехника 0.97 0.99 0.98 3065
Водоснабжение 0.99 0.98 0.98 2382
Сад 0.96 0.99 0.98 4915
Скобяные изделия 0.98 0.96 0.97 2356
Краски 0.96 0.95 0.95 1526
Декор 0.93 0.93 0.93 2994
Освещение 0.94 0.82 0.87 841
Хранение 0.96 0.86 0.91 655
Кухни 0.95 0.89 0.92 396
accuracy 0.96 26327
macro avg 0.96 0.92 0.94 26327
weighted avg 0.96 0.96 0.96 26327
###Markdown
Saving and loading the fine-tuned model. To save the model with the tools provided by the library, you need to specify a folder path. Let's save the model weights and the tokenizer.
###Code
model.save_pretrained('/content/drive/My Drive/colab_data/leroymerlin/model/BERT_model2/')
tokenizer.save_pretrained('/content/drive/My Drive/colab_data/leroymerlin/model/BERT_model2/')
###Output
_____no_output_____
###Markdown
Now let's try to load the model from its saved state. We load the configuration, the tokenizer and the model weights.
###Code
# config
config = AutoConfig.from_pretrained('/content/drive/My Drive/colab_data/leroymerlin/model/BERT_model')
# tokenizer
tokenizer = AutoTokenizer.from_pretrained('/content/drive/My Drive/colab_data/leroymerlin/model/BERT_model', pad_to_max_length=True)
# model
model = AutoModelForSequenceClassification.from_pretrained('/content/drive/My Drive/colab_data/leroymerlin/model/BERT_model', config=config)
###Output
_____no_output_____
###Markdown
With the loaded model, product categories can now be predicted on the CPU.
###Code
model
%%time
model.to('cpu')
model.eval()
# Pick a few random product names
skus = [randint(1, len(df)) for p in range(0, 10)]
for sku in skus:
    ground_truth = df.iloc[sku]['category_1']
    sku_title = df.iloc[sku]['name']
    tokens = tokenizer.encode(sku_title, add_special_tokens=True)
    tokens_tensor = torch.tensor([tokens])
    with torch.no_grad():
        logits = model(tokens_tensor)
    # Logits for each category
    logits = logits[0].detach().numpy()
    # Pick the most likely product category
    predicted_class = np.argmax(logits, axis=1)
    print(f'Product name: {sku_title}')
    print(f'Predicted category: {category_index_reverce[predicted_class[0]]}')
    print(f'True category: {ground_truth}')
    print()
###Output
Product name: Шпильки для пневмостеплера . х мм шт.
Predicted category: Инструменты
True category: Инструменты
Product name: Уголок внутренний HAUBERK терракотовый
Predicted category: Стройматериалы
True category: Стройматериалы
Product name: Нож для триммера лезвия x . мм, толщина . мм
Predicted category: Сад
True category: Сад
Product name: Фигура садовая «Пёс с верёвкой» см
Predicted category: Сад
True category: Сад
Product name: Отражатель для полотенцесушителя неглубокий, разъемный, ", хромированная латунь
Predicted category: Водоснабжение
True category: Водоснабжение
Product name: Ёршик подвесной для унитаза « », стекло, цвет хром
Predicted category: Сантехника
True category: Сантехника
Product name: Перчатки с плотным обливом, г
Predicted category: Стройматериалы
True category: Инструменты
Product name: Клещи-плиткорез Dexter мм
Predicted category: Инструменты
True category: Инструменты
Product name: Наклейка D «Животные» POA
Predicted category: Декор
True category: Декор
Product name: Рамка Legrand Valena, пост, цвет белый
Predicted category: Электротовары
True category: Электротовары
CPU times: user 846 ms, sys: 10.2 ms, total: 857 ms
Wall time: 949 ms
|
notebooks/Pandas_Matplotlib_Seaborn.ipynb
|
###Markdown
Learn with us: www.zerotodeeplearning.comCopyright © 2021: Zero to Deep Learning ® Catalit LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Documentation links:- [Google Colab](https://colab.research.google.com/notebooks/intro.ipynb)- [Numpy](https://docs.scipy.org/doc/)- [Pandas](https://pandas.pydata.org/docs/getting_started/index.html)- [Pandas Cheatsheet](https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf)- [Matplotlib](https://matplotlib.org/)- [Matplotlib Cheat Sheet](https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Matplotlib_Cheat_Sheet.pdf)- [Seaborn](https://seaborn.pydata.org/)- [Scikit-learn](https://scikit-learn.org/stable/user_guide.html)- [Scikit-learn Cheat Sheet](https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Scikit_Learn_Cheat_Sheet_Python.pdf)- [Scikit-learn Flow Chart](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html) Pandas Matplotlib Seaborn
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Google ColabColaboratory is an online environment that allows you to write and execute Python in the browser.- Zero configuration required- Free access to GPUs and TPUs- Easy sharingIt's based on [Jupyter Notebook](https://jupyter.org/)If you've never used it before it's a good idea to read the [tutorial here](https://colab.research.google.com/notebooks/intro.ipynb). Keyboard shortcutsHere are some of the most common commands. Try them out:- `⌘/Ctrl+M H` => open the keyboard shortcut help- `⌘/Ctrl+M A` => Create a cell above- `⌘/Ctrl+M B` => Create a cell below- `⌘/Ctrl+M D` => Delete current cell- `⌘/Ctrl+M M` => Convert cell to Markdown- `⌘/Ctrl+M Y` => Convert cell to Code- `Shift+Enter` => Run cell and select next cell- `Ctrl+Space`, `Option+Esc` or `Tab` => Autocomplete Saving your work- Colab notebooks are automatically saved in your Google Drive- You can export them to Github too- You can download them to your local computer PandasPandas is an open source library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. Let's explore some of its functionality together. Reading and exploring data
###Code
url = "https://raw.githubusercontent.com/zerotodeeplearning/ztdl-masterclasses/master/data/"
df = pd.read_csv(url + "titanic-train.csv")
df.head()
df.info()
df.describe()
###Output
_____no_output_____
###Markdown
PlottingMatplotlib and Seaborn are great libraries for plotting and exploring data visually.You can take inspiration from their plot galleries:- [Matplotlib Gallery](https://matplotlib.org/3.2.1/gallery/index.html)- [Seaborn Gallery](https://seaborn.pydata.org/examples/index.html)
###Code
df[['Age', 'Fare']].plot.scatter(x='Age', y='Fare');
survived_counts = df['Survived'].value_counts()
survived_counts.plot.bar(title='Dead / Survived');
survived_counts.plot.pie(
figsize=(5, 5),
explode=[0, 0.15],
labels=['Dead', 'Survived'],
autopct='%1.1f%%',
shadow=True,
startangle=90,
fontsize=16);
df['Age'].plot.hist(
bins=16,
range=(0, 80),
title='Passenger age distribution')
plt.xlabel("Age");
sns.pairplot(df[['Age', 'Pclass', 'Fare', 'SibSp', 'Survived']],
hue='Survived');
sns.jointplot(x='Age', y='Fare', data=df)
###Output
_____no_output_____
###Markdown
IndexingRetrieving elements by row, by column or both. Try to understand each of the following statements
###Code
df['Ticket']
df[['Fare', 'Ticket']]
df.iloc[3]
df.iloc[0:4, 4:6]
df.loc[0:4, 'Ticket']
df.loc[0:4, ['Fare', 'Ticket']]
###Output
_____no_output_____
###Markdown
SelectionsRetrieving part of the dataframe based on a condition. Try to understand each of the following statements.
###Code
df[df.Age > 70]
df[(df['Age'] == 11) & (df['SibSp'] == 5)]
###Output
_____no_output_____
###Markdown
Distinct elements
###Code
df['Embarked'].unique()
###Output
_____no_output_____
###Markdown
Group-by & Sorting
###Code
# Find average age of passengers that survived vs. died
df.groupby('Survived')['Age'].mean()
df.sort_values('Age', ascending = False).head()
###Output
_____no_output_____
###Markdown
Join (merge)
###Code
df1 = df[['PassengerId', 'Survived']]
df2 = df[['PassengerId', 'Age']]
pd.merge(df1, df2, on='PassengerId').head()
###Output
_____no_output_____
###Markdown
Pivot Tables
###Code
df.pivot_table(index='Pclass', columns='Survived', values='PassengerId', aggfunc='count')
df['Pclass'].value_counts()
###Output
_____no_output_____
###Markdown
Time series data
###Code
dfts = pd.read_csv(url + 'time_series_covid19_confirmed_global.csv')
df1 = dfts.drop(['Lat', 'Long'], axis=1).groupby('Country/Region').sum().transpose()
df1.head()
df1.index.dtype
df1.index = pd.to_datetime(df1.index)
df1.index.dtype
df1[['Italy','US']].plot(logy=True)
plt.title("COVID-19 confirmed Cases");
###Output
_____no_output_____
|
notebooks/02.Events/02.00-events.ipynb
|
###Markdown
Widgets Events
###Code
import ipywidgets as widgets
from IPython.display import display
###Output
_____no_output_____
###Markdown
Traitlets events Every widget class is a `HasTraits` class, which means it benefits from the Traitlets API for the validation and observation of properties (see https://traitlets.readthedocs.io/en/stable/using_traitlets.htmlusing-traitlets). Trait class example: property **validation** and **observation**. Traitlets are validated by **type** and **value**:
###Code
from traitlets import HasTraits, Unicode, Int, TraitError, validate
class Identity(HasTraits):
username = Unicode()
age = Int()
@validate('age')
def _validate_age(self, proposal):
age = proposal['value']
if age < 0:
raise TraitError('age can not be negative')
if age > 115:
raise TraitError('this is too old to be true')
return age
jane = Identity(username='Jane Doe', age=25)
jane.age
jane.age = 32
###Output
_____no_output_____
###Markdown
Every `HasTraits` class has an `observe` method which allows observing properties changes. You can assign a Python callback function that will be called when a property changes.The callback handler passed to observe will be called with one change argument. The change object holds at least a `type` key and a `name` key, corresponding respectively to the type of notification and the name of the attribute that triggered the notification.Other keys may be passed depending on the value of `type`. In the case where type is `change`, we also have the following keys:- `owner` : the HasTraits instance- `old` : the old value of the modified trait attribute- `new` : the new value of the modified trait attribute- `name` : the name of the modified trait attribute.
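For example, with the `Identity` class above, a change of `age` from, say, 25 to 32 would be delivered to the handler roughly as `{'type': 'change', 'name': 'age', 'old': 25, 'new': 32, 'owner': <the Identity instance>}` (the exact `owner` representation depends on the class).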
###Code
HasTraits.observe?
# We use an output widget here for capturing the print calls and showing them at the right place in the Notebook
output = widgets.Output()
@output.capture()
def print_change(change):
print(change)
# Observe jane.age changes, and print them
jane.observe(print_change, 'age')
output
jane.age = 32
###Output
_____no_output_____
###Markdown
Registering callbacks to trait changes in the kernelSince `Widget` classes inherit from `HasTraits`, you can register handlers to the change events whenever the model gets updates from the front-end.
###Code
widgets.Widget.observe?
caption = widgets.Label(value='Start moving the slider!')
slider = widgets.IntSlider(min=-5, max=5, value=1, description='Slider')
def handle_slider_change(change):
sign = 'negative' if change.new < 0 else 'positive'
caption.value = f'The slider value is {sign}'
slider.observe(handle_slider_change, names='value')
display(caption, slider)
###Output
_____no_output_____
###Markdown
Callback signatures Mentioned in the doc string, the callback registered must have the signature `handler(change)` where `change` is a dictionary holding the information about the change. Using this method, an example of how to output an `IntSlider`'s value as it is changed can be seen below.
###Code
int_range = widgets.IntSlider()
output2 = widgets.Output()
display(int_range, output2)
def on_value_change(change):
output2.clear_output(wait=True)
old = change['old']
new = change['new']
with output2:
print(f'The value was {old} and is now {new}')
int_range.observe(on_value_change, names='value')
###Output
_____no_output_____
###Markdown
Why `observe` instead of `link`? Using `link` is great if no transformation of the values is needed. `observe` is useful if some kind of calculation needs to be done with the values or if the related values have different types. The example below converts between Celsius and Fahrenheit. As written, changing the temperature in Celsius will update the temperature in Fahrenheit, but not the other way around. You will add that as an exercise.
###Code
def C_to_F(temp):
return 1.8 * temp + 32
def F_to_C(temp):
return (temp -32) / 1.8
degree_C = widgets.FloatText(description='Temp $^\circ$C', value=0)
degree_F = widgets.FloatText(description='Temp $^\circ$F', value=C_to_F(degree_C.value))
def on_C_change(change):
degree_F.value = C_to_F(change['new'])
degree_C.observe(on_C_change, names='value')
display(degree_C, degree_F)
###Output
_____no_output_____
###Markdown
ExerciseAdd a callback that is called when `degree_F` is changed. An outline of the callback function is below. Fill it in, and make `degree_F` `observe` call `on_F_change` if the `value` changes.
###Code
def on_F_change(change):
degree_C.value = # Fill this in!
# Add line here to have degree_F observe changes in value and call on_F_change
###Output
_____no_output_____
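###Markdown
For reference, one possible solution to the exercise above (a sketch that reuses the `F_to_C` helper defined earlier; the original notebook intentionally leaves this blank for you to fill in):
###Code
def on_F_change(change):
    # convert the new Fahrenheit value back to Celsius
    degree_C.value = F_to_C(change['new'])

# have degree_F observe changes in value and call on_F_change
degree_F.observe(on_F_change, names='value')
###Output
_____no_output_____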
###Markdown
Advanced Widget Linking In an earlier notebook you used `link` to link the value of one widget to another. There are a couple of other linking methods that offer more flexibility:+ `dlink` is a *directional* link; updates happen in one direction but not the other.+ `jslink` and `jsdlink` do the linking in the front end (i.e. in JavaScript without any communication to Python). Linking traitlets attributes in the kernel (ie. in Python)The first method is to use the `link` and `dlink`. This only works if we are interacting with a live kernel.
###Code
caption = widgets.Label(value='The values of slider1 and slider2 are synchronized')
slider1, slider2 = widgets.IntSlider(description='Slider 1'),\
                   widgets.IntSlider(description='Slider 2')
display(caption, slider1, slider2)
l = widgets.link((slider1, 'value'), (slider2, 'value'))
caption = widgets.HTML(value='Changes in source values are reflected in target1, but changes in target1 do not affect source')
source, target1 = widgets.IntSlider(description='Source'),\
widgets.IntSlider(description='Target 1')
display(caption, source, target1)
dl = widgets.dlink((source, 'value'), (target1, 'value'))
###Output
_____no_output_____
###Markdown
Links can be broken by calling `unlink`.
###Code
l.unlink()
dl.unlink()
###Output
_____no_output_____
###Markdown
Function `widgets.jslink` returns a `Link` widget. The link can be broken by calling the `unlink` method. Linking widget attributes from the client side. You can also directly link widget attributes in the browser using the link widgets, in either a unidirectional or a bidirectional fashion. JavaScript links persist when embedding widgets in HTML web pages without a kernel.
###Code
caption = widgets.Label(value='The values of range1 and range2 are synchronized')
range1, range2 = widgets.IntSlider(description='Range 1'),\
widgets.IntSlider(description='Range 2')
display(caption, range1, range2)
l = widgets.jslink((range1, 'value'), (range2, 'value'))
caption = widgets.Label(value='Changes in source_range values are reflected in target_range1')
source_range, target_range1 = widgets.IntSlider(description='Source range'),\
widgets.IntSlider(description='Target range 1')
display(caption, source_range, target_range1)
dl = widgets.jsdlink((source_range, 'value'), (target_range1, 'value'))
###Output
_____no_output_____
###Markdown
The links can be broken by calling the `unlink` method.
###Code
l.unlink()
dl.unlink()
###Output
_____no_output_____
###Markdown
The difference between linking in the kernel and linking in the client. Linking in the kernel means linking via Python. If two sliders are linked in the kernel, then when one slider is changed the browser sends a message to the kernel (Python in this case) updating the changed slider; the link widget in the kernel then propagates the change to the other slider object in the kernel, and that slider's kernel object sends a message to the browser to update the other slider's views in the browser. If the kernel is not running (as in a static web page), the controls will not be linked. Linking using jslink (i.e., on the browser side) means constructing the link in JavaScript. When one slider is changed, JavaScript running in the browser changes the value of the other slider in the browser, without needing to communicate with the kernel at all. If the sliders are attached to kernel objects, each slider updates its kernel-side object independently. To see the difference between the two, go to the [static version of this page in the ipywidgets documentation](http://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Events.html) and try out the sliders near the bottom. The ones linked in the kernel with `link` and `dlink` are no longer linked, but the ones linked in the browser with `jslink` and `jsdlink` are still linked. Continuous vs delayed updates. Some widgets offer a choice, via their `continuous_update` attribute, between continually updating values and only updating values when a user submits the value (for example, by pressing Enter or navigating away from the control). In the next example, the "Delayed" controls only transmit their value after the user finishes dragging the slider or submitting the textbox, while the "Continuous" controls continually transmit their values as they are changed. Try typing a two-digit number into each of the text boxes, or dragging each of the sliders, to see the difference.
###Code
a = widgets.IntSlider(description="Delayed", continuous_update=False)
b = widgets.IntText(description="Delayed", continuous_update=False)
c = widgets.IntSlider(description="Continuous", continuous_update=True)
d = widgets.IntText(description="Continuous", continuous_update=True)
widgets.link((a, 'value'), (b, 'value'))
widgets.link((a, 'value'), (c, 'value'))
widgets.link((a, 'value'), (d, 'value'))
widgets.VBox([a,b,c,d])
###Output
_____no_output_____
###Markdown
Sliders, `Text`, and `Textarea` controls default to `continuous_update=True`. `IntText` and other text boxes for entering integer or float numbers default to `continuous_update=False` (since often you'll want to type an entire number before submitting the value by pressing enter or navigating out of the box). Special events Some widgets like the `Button` have special events on which you can hook Python callbacks. The `Button` is not used to represent a data type. Instead the button widget is used to handle mouse clicks. The `on_click` method of the `Button` can be used to register function to be called when the button is clicked. The doc string of the `on_click` can be seen below.
###Code
widgets.Button.on_click?
###Output
_____no_output_____
###Markdown
Example. Since button clicks are stateless, they are transmitted from the front end to the back end using custom messages. Using the `on_click` method, the example below creates a button that prints a message when it is clicked. To capture `print`s (or any other kind of output, including errors) and ensure they are displayed, be sure to send them to an `Output` widget (or put the information you want to display into an `HTML` widget).
###Code
button = widgets.Button(description="Click Me!")
output = widgets.Output()
display(button, output)
@output.capture()
def on_button_clicked(b):
print("Button clicked.")
button.on_click(on_button_clicked)
###Output
_____no_output_____
|
demystifying_ai.ipynb
|
###Markdown
**MACHINE LEARNING MODEL**
###Code
# imports needed by this cell (added here; they were missing from this notebook excerpt)
import seaborn as sns
import graphviz
from IPython.display import Image, display
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.metrics import accuracy_score

df = sns.load_dataset('iris')
df.head()
features = list(df.columns)
target = features.pop()
X = df[features].copy()
y = df[target].copy()
X_train, X_test, y_train, y_test = train_test_split(X , y, test_size = 0.2, random_state = 4)
clf = DecisionTreeClassifier(
max_depth=3,
criterion='entropy',
random_state=4)
clf.fit(X_train, y_train)
accuracy_score(clf.predict(X_test), y_test)
def plot_tree_classifier(clf, feature_names=None):
dot_data = export_graphviz(
clf,
out_file=None,
feature_names=feature_names,
filled=True,
rounded=True,
special_characters=True,
rotate=True)
return Image(graphviz.Source(dot_data).pipe(format='png'))
display(plot_tree_classifier(clf, feature_names=features))
###Output
_____no_output_____
###Markdown
**Deep Learning Model**
###Code
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import pickle
import datetime
from datetime import date
import pandas_datareader as pdr
from tqdm import tqdm
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
yesterday = datetime.datetime.strftime(datetime.datetime.now()-datetime.timedelta(1),'%Y-%m-%d')
yesterday = pd.to_datetime(yesterday)
six = datetime.datetime.strftime(datetime.datetime.now()-datetime.timedelta(6*365),'%Y-%m-%d')
six = pd.to_datetime(six)
aapl = pdr.DataReader('AAPL', 'yahoo', six, yesterday)
aapl
split_num = round(len(aapl)*0.8)
x_train_set, y_train_set = aapl.iloc[:split_num]['Open'].values, aapl.iloc[:split_num]['Close'].values
x_train_set = x_train_set.reshape(-1,1)
x_train_set
y_train_set = y_train_set.reshape(-1,1)
y_train_set
def func(x_train_set,y_train_set):
# Feature Scaling
sc = MinMaxScaler(feature_range = (0, 1))
x_train_scaled = sc.fit_transform(x_train_set)
y_train_scaled = sc.fit_transform(y_train_set)
# Creating a data structure with 60 time-steps and 1 output
X_train = []
y_train = []
## for every 60 scaled open prices, append one output
## lookback 60 days of open prices to predict one close price
for i in range(60, split_num):
X_train.append(x_train_scaled[i-60:i, 0])
y_train.append(y_train_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
return X_train, y_train
X_train, y_train = func(x_train_set, y_train_set)
print('SHAPE', X_train.shape)
print()
print('FINALIZED FEATURE DATA')
print(X_train)
print('SHAPE', y_train.shape)
print()
print('FINALIZED TARGET DATA')
print(y_train)
def build_and_train_model(X_train, y_train, optimizer, loss, epochs):
"""
This function builds and trains a keras LSTM model
Parameters:
X_train --> training feature data
y_train --> training target data
optimizer --> string
loss --> string
epochs --> int
Returns:
trained keras model
"""
model = Sequential()
#Adding the first LSTM layer and some Dropout regularisation
model.add(LSTM(units = 128, return_sequences = True, input_shape = (X_train.shape[1], 1)))
model.add(Dropout(0.2))
# Adding a second LSTM layer and some Dropout regularisation
model.add(LSTM(units = 64, return_sequences = True))
model.add(Dropout(0.2))
# Adding a third LSTM layer and some Dropout regularisation
model.add(LSTM(units = 32, return_sequences = True))
model.add(Dropout(0.2))
# Adding a fourth LSTM layer and some Dropout regularisation
model.add(LSTM(units = 16))
model.add(Dropout(0.2))
# Adding the output layer
model.add(Dense(units = 1))
# Compiling the RNN
model.compile(optimizer = optimizer, loss = loss)
# Fitting the RNN to the Training set
model.fit(X_train, y_train, epochs = epochs, batch_size = 32)
return model
model = build_and_train_model(X_train, y_train, optimizer='adam', loss='mean_squared_error', epochs=25)
model.summary()
# take the last 60 Open prices as the model's 60-day lookback window
most_recent_60 = aapl.iloc[len(aapl)-60:]['Open'].values
most_recent_60 = np.array(most_recent_60)
most_recent_60 = most_recent_60.reshape(-1,1)
# Feature Scaling
sc = MinMaxScaler(feature_range = (0, 1))
most_recent_60_scaled = sc.fit_transform(most_recent_60)
most_recent_60_scaled = np.reshape(most_recent_60_scaled, (most_recent_60_scaled.shape[1], most_recent_60_scaled.shape[0], 1))
preds = model.predict(most_recent_60_scaled)
preds_ = sc.inverse_transform(preds)
print('PREDICTED AAPL CLOSE PRICE FOR 06/17/2021', preds_)
# note: most_recent_60 holds Open prices, so the "actual" value printed below is the latest open, not the close
print('ACTUAL AAPL CLOSE PRICE FOR 06/17/2021', most_recent_60[-1])
###Output
PREDICTED AAPL CLOSE PRICE FOR 06/17/2021 [[126.6813]]
ACTUAL AAPL CLOSE PRICE FOR 06/17/2021 [129.80000305]
|
OrthogonalArray.ipynb
|
###Markdown
Reading the orthogonal arrays * At https://hondou.homedns.org/pukiwiki/index.php?Python%20HDF5 the orthogonal arrays were collected into an HDF5 file in numpy.ndarray form. * Here we read them back and use them. * About orthogonal arrays: $$L_n(q^m) = OA (n, m, q, t)$$ * n : size = number of test runs * m : number of factors = number of items in the test data * q : number of levels = number of distinct values per test item * t : strength = size of the item combinations that are covered
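For example, the array read by the next cell is $L_4(2^3) = OA(4, 3, 2, 2)$: 4 runs, 3 two-level factors, strength 2, meaning every pair of columns contains each of the $2^2 = 4$ level combinations exactly once. (Note that in the HDF5 key string "2^3" the base is the number of levels and the exponent the number of factors.)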
###Code
import h5py  # needed for reading the HDF5 file (import added; it was missing from this excerpt)

hdf5 = h5py.File("mypkg/oamatrix.hdf5", 'r')
m = 2
q = 3
oa = hdf5["{}^{}".format(m, q)].value
print(oa)
hdf5.close()
###Output
[[0 0 0]
[0 1 1]
[1 0 1]
[1 1 0]]
###Markdown
Computing each parameter's contribution with the orthogonal array and least-squares estimation
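Concretely, the next cell fits the linear model $$y = c_1 x_1 + c_2 x_2 + c_3 x_3 + c_0$$ where $x_1, x_2, x_3 \in \{0, 1\}$ are the columns of the orthogonal array, $c_0$ is the constant offset contributed by the appended column of ones, and `linalg.lstsq` returns the contribution estimates $C = (c_1, c_2, c_3, c_0)$ together with the residual, rank and singular values.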
###Code
# imports added here (missing from this excerpt); linalg is assumed to be scipy.linalg,
# although numpy.linalg.lstsq would behave the same way for this system
import numpy as np
from scipy import linalg

# Append {1,1,1,1}^T as the last column of the orthogonal array -> the constant (baseline) part of the score
# Transpose once, append {1,1,1,1} as the last row, then transpose back
m = oa.T
fixed = np.array([np.ones(oa.shape[0])])
print(m)
print (fixed)
A = (np.concatenate( (m, fixed) )).T
B = np.array([10.0,70.0,80.0,90.0])
print ("A=\n{}".format(A))
print ("B={}".format(B))
# least squares
C, resid, rank, sigma = linalg.lstsq(A, B)
print ("C={}".format(C))
print ("resid={}".format(resid))
print ("rank={}".format(rank))
print ("sigma={}".format(sigma))
###Output
[[0 0 1 1]
[0 1 0 1]
[0 1 1 0]]
[[1. 1. 1. 1.]]
A=
[[0. 0. 0. 1.]
[0. 1. 1. 1.]
[1. 0. 1. 1.]
[1. 1. 0. 1.]]
B=[10. 70. 80. 90.]
C=[45. 35. 25. 10.]
resid=[]
rank=4
sigma=[2.73205081 1. 1. 0.73205081]
###Markdown
Drawing the factor-effect plots * The factor-effect plots are drawn with seaborn's PairGrid, without computing anything by hand * In seaborn the input data is given as a pandas DataFrame * A DataFrame can be built either by reading a data source such as a CSV file directly, or by converting a numpy ndarray into a DataFrame
###Code
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
# cost data
D=np.array([100,200,300,300]);
# Build a pandas DataFrame from a numpy ndarray
# (it can also be built from a CSV file or from the result of an SQL query against an RDB)
data = np.concatenate( (oa.T, np.array([B]), np.array([D]) ) )
soup_frame = pd.DataFrame(data.T, columns=["豚骨", "魚介","鶏ガラ","得点","コスト"])
print (soup_frame)
# relationship between ingredients and score
g1 = sns.PairGrid(soup_frame, y_vars="得点",
x_vars=["豚骨", "魚介", "鶏ガラ"],
size=3, aspect=1.0)
g1.map(sns.pointplot, color=sns.xkcd_rgb["plum"])
g1.set(ylim=(0, 100))
sns.despine(fig=g1.fig, left=True)
# relationship between ingredients and cost
g2 = sns.PairGrid(soup_frame, y_vars="コスト",
x_vars=["豚骨", "魚介", "鶏ガラ"],
size=3, aspect=1.0)
g2.map(sns.pointplot, color=sns.xkcd_rgb["orange"])
g2.set(ylim=(0, 500))
sns.despine(fig=g2.fig, left=True)
# render the plots
plt.show()
###Output
豚骨 魚介 鶏ガラ 得点 コスト
0 0.0 0.0 0.0 10.0 100.0
1 0.0 1.0 1.0 70.0 200.0
2 1.0 0.0 1.0 80.0 300.0
3 1.0 1.0 0.0 90.0 300.0
###Markdown
Orthogonal arrays available for design of experiments
###Code
def createOA (hdf5, n, m, q) :
name = "{}^{}".format(m,q)
if name in list(hdf5.keys()) :
oa = hdf5[name].value
print("L{}({}^{})".format(n,m,q))
print(oa)
else :
print("L{}({}^{}) does not exist on my hdf5.".format(n,m,q))
hdf5 = h5py.File("mypkg/oamatrix.hdf5", 'r')
createOA(hdf5, 4,2,3)
createOA(hdf5, 12,2,11)
createOA(hdf5, 20,2,19)
createOA(hdf5, 24,2,23)
createOA(hdf5, 44,2,43)
createOA(hdf5, 60,2,59)
createOA(hdf5, 9,3,4)
createOA(hdf5, 27,3,13)
createOA(hdf5, 16,4,5)
createOA(hdf5, 64,4,21)
createOA(hdf5, 25,5,6)
createOA(hdf5, 50,5,12)
hdf5.close()
hdf5 = h5py.File("mypkg/oamatrix.hdf5", 'r')
dct = {}
for key in hdf5.keys() :
matrix = hdf5[key]
n = matrix.shape[0]
l = "L{} ({})".format(n, key)
dct[l] = n
hdf5.close()
for entry in sorted(dct.items(), key=lambda x:x[1]):
print(entry[0])
###Output
L4 (2^3)
L8 (2^4 4^1)
L9 (3^4)
L12 (2^11)
L12 (2^2 6^1)
L12 (2^4 3^1)
L16 (2^8 8^1)
L16 (4^5)
L18 (3^6 6^1)
L20 (2^19)
L20 (2^2 10^1)
L20 (2^8 5^1)
L24 (2^11 4^1 6^1)
L24 (2^12 12^1)
L24 (2^13 3^1 4^1)
L24 (2^20 4^1)
L25 (5^6)
L27 (3^9 9^1)
L28 (2^12 7^1)
L28 (2^2 14^1)
L28 (2^27)
L32 (2^16 16^1)
L32 (4^8 8^1)
L36 (2^1 3^3 6^3)
L36 (2^10 3^1 6^2)
L36 (2^10 3^8 6^1)
L36 (2^13 3^2 6^1)
L36 (2^13 6^2)
L36 (2^16 9^1)
L36 (2^18 3^1 6^1)
L36 (2^2 18^1)
L36 (2^2 3^5 6^2)
L36 (2^20 3^2)
L36 (2^27 3^1)
L36 (2^3 3^2 6^3)
L36 (2^3 3^9 6^1)
L36 (2^35)
L36 (2^4 3^1 6^3)
L36 (2^8 6^3)
L36 (2^9 3^4 6^2)
L36 (3^12 12^1)
L36 (3^7 6^3)
L40 (2^19 4^1 10^1)
L40 (2^20 20^1)
L40 (2^25 4^1 5^1)
L40 (2^36 4^1)
L44 (2^16 11^1)
L44 (2^2 22^1)
L44 (2^43)
L45 (3^9 15^1)
L48 (2^24 24^1)
L48 (2^31 6^1 8^1)
L48 (2^33 3^1 8^1)
L48 (2^40 8^1)
L48 (4^12 12^1)
L49 (7^8)
L50 (5^10 10^1)
L52 (2^17 13^1)
L52 (2^2 26^1)
L52 (2^51)
L54 (3^18 18^1)
L54 (3^20 6^1 9^1)
L56 (2^27 4^1 14^1)
L56 (2^28 28^1)
L56 (2^37 4^1 7^1)
L56 (2^52 4^1)
L60 (2^15 6^1 10^1)
L60 (2^18 15^1)
L60 (2^2 30^1)
L60 (2^21 10^1)
L60 (2^23 5^1)
L60 (2^24 6^1)
L60 (2^30 3^1)
L60 (2^59)
L63 (3^12 21^1)
L64 (2^32 32^1)
L64 (2^5 4^10 8^4)
L64 (2^5 4^17 8^1)
L64 (4^14 8^3)
L64 (4^16 16^1)
L64 (4^7 8^6)
L64 (8^9)
L68 (2^19 17^1)
L68 (2^2 34^1)
L68 (2^67)
L72 (2^10 3^13 4^1 6^3)
L72 (2^10 3^16 6^2 12^1)
L72 (2^10 3^20 4^1 6^2)
L72 (2^11 3^17 4^1 6^2)
L72 (2^11 3^20 6^1 12^1)
L72 (2^12 3^21 4^1 6^1)
L72 (2^14 3^3 4^1 6^6)
L72 (2^15 3^7 4^1 6^5)
L72 (2^17 3^12 4^1 6^3)
L72 (2^18 3^16 4^1 6^2)
L72 (2^19 3^20 4^1 6^1)
L72 (2^27 3^11 6^1 12^1)
L72 (2^27 3^6 6^4)
L72 (2^28 3^2 6^4)
L72 (2^30 3^1 6^4)
L72 (2^31 6^4)
L72 (2^34 3^3 4^1 6^3)
L72 (2^34 3^8 4^1 6^2)
L72 (2^35 3^12 4^1 6^1)
L72 (2^35 3^5 4^1 6^2)
L72 (2^35 4^1 18^1)
L72 (2^36 36^1)
L72 (2^36 3^2 4^1 6^3)
L72 (2^36 3^9 4^1 6^1)
L72 (2^37 3^1 4^1 6^3)
L72 (2^37 3^13 4^1)
L72 (2^41 4^1 6^3)
L72 (2^42 3^4 4^1 6^2)
L72 (2^43 3^1 4^1 6^2)
L72 (2^43 3^8 4^1 6^1)
L72 (2^44 3^12 4^1)
L72 (2^46 3^2 4^1 6^1)
L72 (2^46 4^1 6^2)
L72 (2^49 4^1 9^1)
L72 (2^5 3^3 4^1 6^7)
L72 (2^51 3^1 4^1 6^1)
L72 (2^53 3^2 4^1)
L72 (2^6 3^3 6^6 12^1)
L72 (2^6 3^7 4^1 6^6)
L72 (2^60 3^1 4^1)
L72 (2^68 4^1)
L72 (2^7 3^4 4^1 6^6)
L72 (2^7 3^7 6^5 12^1)
L72 (2^8 3^12 4^1 6^4)
L72 (2^8 3^8 4^1 6^5)
L72 (2^9 3^12 6^3 12^1)
L72 (2^9 3^16 4^1 6^3)
L72 (3^24 24^1)
L75 (5^8 15^1)
L76 (2^2 38^1)
L76 (2^20 19^1)
L76 (2^75)
L80 (2^40 40^1)
L80 (2^51 4^3 20^1)
L80 (2^55 8^1 10^1)
L80 (2^61 5^1 8^1)
L80 (2^72 8^1)
L80 (4^10 20^1)
L81 (3^27 27^1)
L81 (9^10)
L84 (2^14 6^1 14^1)
L84 (2^2 42^1)
L84 (2^20 3^1 14^1)
L84 (2^21 21^1)
L84 (2^22 6^1 7^1)
L84 (2^27 6^1)
L84 (2^28 7^1)
L84 (2^33 3^1)
L84 (2^83)
L88 (2^43 4^1 22^1)
L88 (2^44 44^1)
L88 (2^57 4^1 11^1)
L88 (2^84 4^1)
L90 (3^26 6^1 15^1)
L90 (3^30 30^1)
L92 (2^2 46^1)
L92 (2^22 23^1)
L92 (2^91)
L96 (2^12 4^20 24^1)
L96 (2^17 4^23 6^1)
L96 (2^18 4^22 12^1)
L96 (2^19 3^1 4^23)
L96 (2^26 4^23)
L96 (2^39 3^1 4^14 8^1)
L96 (2^43 4^12 6^1 8^1)
L96 (2^43 4^15 8^1)
L96 (2^44 4^11 8^1 12^1)
L96 (2^48 48^1)
L96 (2^71 6^1 16^1)
L96 (2^73 3^1 16^1)
L96 (2^80 16^1)
L98 (7^14 14^1)
L99 (3^13 33^1)
L100 (2^16 5^3 10^3)
L100 (2^18 5^9 10^1)
L100 (2^2 50^1)
L100 (2^23 25^1)
L100 (2^29 5^5)
L100 (2^34 5^3 10^1)
L100 (2^4 10^4)
L100 (2^40 5^4)
L100 (2^5 5^4 10^3)
L100 (2^51 5^3)
L100 (2^7 5^10 10^1)
L100 (2^99)
L100 (5^20 20^1)
L100 (5^8 10^3)
L104 (2^100 4^1)
L104 (2^51 4^1 26^1)
L104 (2^52 52^1)
L104 (2^66 4^1 13^1)
L108 (2^1 3^33 6^2 18^1)
L108 (2^1 3^35 6^3 9^1)
L108 (2^10 3^31 6^1 18^1)
L108 (2^10 3^33 6^2 9^1)
L108 (2^10 3^40 6^1 9^1)
L108 (2^107)
L108 (2^12 3^29 6^3)
L108 (2^13 3^30 6^1 18^1)
L108 (2^13 6^3)
L108 (2^15 6^1 18^1)
L108 (2^17 3^29 6^2)
L108 (2^18 3^31 18^1)
L108 (2^18 3^33 6^1 9^1)
L108 (2^2 3^35 6^1 18^1)
L108 (2^2 3^37 6^2 9^1)
L108 (2^2 3^42 18^1)
L108 (2^2 54^1)
L108 (2^20 3^34 9^1)
L108 (2^21 18^1)
L108 (2^21 3^1 6^2)
L108 (2^24 27^1)
L108 (2^27 3^33 9^1)
L108 (2^3 3^16 6^8)
L108 (2^3 3^32 6^2 18^1)
L108 (2^3 3^34 6^3 9^1)
L108 (2^3 3^39 18^1)
L108 (2^3 3^41 6^1 9^1)
L108 (2^34 3^29 6^1)
L108 (2^4 3^31 6^2 18^1)
L108 (2^4 3^33 6^3 9^1)
L108 (2^40 6^1)
L108 (2^8 3^30 6^2 18^1)
L108 (2^9 3^34 6^1 18^1)
L108 (2^9 3^36 6^2 9^1)
L108 (3^36 36^1)
L108 (3^37 6^2 18^1)
L108 (3^39 6^3 9^1)
L108 (3^4 6^11)
L108 (3^44 9^1 12^1)
L112 (2^104 8^1)
L112 (2^56 56^1)
L112 (2^75 4^3 28^1)
L112 (2^79 8^1 14^1)
L112 (2^89 7^1 8^1)
L112 (4^12 28^1)
L116 (2^115)
L116 (2^2 58^1)
L116 (2^24 29^1)
L117 (3^13 39^1)
L120 (2^116 4^1)
L120 (2^28 10^1 12^1)
L120 (2^30 6^1 20^1)
L120 (2^59 4^1 30^1)
L120 (2^60 60^1)
L120 (2^68 4^1 6^1 10^1)
L120 (2^70 3^1 4^1 10^1)
L120 (2^70 4^1 5^1 6^1)
L120 (2^75 4^1 10^1)
L120 (2^75 4^1 15^1)
L120 (2^75 4^1 6^1)
L120 (2^79 4^1 5^1)
L120 (2^87 3^1 4^1)
L121 (11^12)
L124 (2^123)
L124 (2^2 62^1)
L124 (2^24 31^1)
L125 (5^25 25^1)
L126 (3^20 6^1 21^1)
L126 (3^21 42^1)
L126 (3^23 6^1 7^1)
L126 (3^24 14^1)
L128 (2^3 4^11 8^13)
L128 (2^3 4^18 8^10)
L128 (2^3 4^25 8^7)
L128 (2^4 4^15 8^9 16^1)
L128 (2^4 4^22 8^6 16^1)
L128 (2^4 4^29 8^3 16^1)
L128 (2^4 4^36 16^1)
L128 (2^4 4^8 8^12 16^1)
L128 (2^5 4^10 8^11 16^1)
L128 (2^5 4^17 8^8 16^1)
L128 (2^5 4^24 8^5 16^1)
L128 (2^5 4^31 8^2 16^1)
L128 (2^5 4^8 8^14)
L128 (2^6 4^12 8^10 16^1)
L128 (2^6 4^19 8^7 16^1)
L128 (2^6 4^26 8^4 16^1)
L128 (2^6 4^33 8^1 16^1)
L128 (2^6 4^5 8^13 16^1)
L128 (2^64 64^1)
L128 (4^32 32^1)
L128 (8^16 16^1)
L132 (2^131)
L132 (2^15 6^1 22^1)
L132 (2^2 66^1)
L132 (2^20 3^1 22^1)
L132 (2^21 22^1)
L132 (2^22 6^1 11^1)
L132 (2^24 33^1)
L132 (2^28 11^1)
L132 (2^42 6^1)
L135 (3^27 45^1)
L135 (3^32 9^1 15^1)
L136 (2^132 4^1)
L136 (2^67 4^1 34^1)
L136 (2^68 68^1)
L136 (2^84 4^1 17^1)
L140 (2^139)
L140 (2^17 10^1 14^1)
L140 (2^2 70^1)
L140 (2^22 7^1 10^1)
L140 (2^24 35^1)
L140 (2^25 5^1 14^1)
L140 (2^27 5^1 7^1)
L140 (2^34 14^1)
L140 (2^36 10^1)
L140 (2^38 7^1)
L144 (12^7)
L144 (2^103 8^1 18^1)
L144 (2^111 6^1 24^1)
L144 (2^113 3^1 24^1)
L144 (2^117 8^1 9^1)
L144 (2^136 8^1)
L144 (2^16 3^3 6^6 24^1)
L144 (2^23 3^41 6^1 24^1)
L144 (2^44 3^11 12^2)
L144 (2^72 72^1)
L144 (2^74 3^4 6^6 8^1)
L144 (2^75 3^3 4^1 6^6 12^1)
L144 (2^76 3^12 6^4 8^1)
L144 (2^76 3^7 4^1 6^5 12^1)
L144 (3^48 48^1)
L144 (4^11 12^2)
L144 (4^36 36^1)
|
Supervised Learning/predict_life_expectancy_from_BMI_Linear_Regression/predict_life_expectancy_from_BMI.ipynb
|
###Markdown
In this notebook, I'll be working with data on the average life expectancy at birth and the average BMI for males across the world. The data file "data.csv" includes three columns: - Country: The country the person was born in. - Life expectancy: The average life expectancy at birth for a person in that country. - BMI: The mean BMI of males in that country.
###Code
# Import the libraries
import pandas as pd
from sklearn.linear_model import LinearRegression
# Loading the data
bmi_data = pd.read_csv('data.csv')
bmi_data.head()
# Building a linear regression model
model = LinearRegression()
model.fit(bmi_data[['BMI']], bmi_data[['Life expectancy']])
# Predicting life expectancy for a BMI of 22 using our model
life_exp = model.predict([[22]])
print(life_exp)
###Output
[[62.634386]]
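###Markdown
A small follow-up sketch, assuming the `model` fitted in the cell above: the prediction for a BMI of 22 is just the fitted line evaluated at that point, so inspecting the slope and intercept shows where the number comes from.
###Code
# Inspect the fitted line; slope * 22 + intercept should reproduce the prediction above
slope = model.coef_[0][0]        # change in life expectancy per unit of BMI
intercept = model.intercept_[0]  # life expectancy predicted at BMI = 0 (an extrapolation)
manual_prediction = slope * 22 + intercept
###Output
_____no_output_____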
|
Reports/P5 Deep learning/digit_recognition.ipynb
|
###Markdown
Machine Learning Engineer Nanodegree — Deep Learning. Project: Build a Digit Recognition Program. In this file we provide a template so that you can implement the required functionality step by step according to the project requirements and complete the whole project. If you think you need to import additional code, make sure you import it correctly and include it in your submission. Headings starting with **'Exercise'** indicate that you are about to implement part of the project. Note that some exercises are optional and are marked **'Optional'**. Some example code is already provided in this file, but you will still need to implement more functionality for the project to run successfully. Unless explicitly required, you do not need to modify any of the given code. Headings starting with 'Exercise' indicate that the following code section contains functionality you must implement. Each part comes with detailed guidance, and the parts to implement are marked with 'TODO' in the comments. Please read all the hints carefully! Besides implementing code, you must also answer some questions related to the project and your implementation. Each question you need to answer is titled **'Question X'**. Read each question carefully and write a complete answer in the **'Answer'** text box that follows the question. We will grade your submission based on your answers to the questions and the functionality implemented by your code. >**Note:** Code and Markdown cells can be run with the **Shift + Enter** shortcut. In addition, Markdown cells can be edited by double-clicking. Synthesizing data by concatenating MNIST digits. You can synthesize training data for this model by concatenating digits from [MNIST](http://yann.lecun.com/exdb/mnist/). To import the dataset quickly, we can use [Keras Datasets](https://keras.io/datasets/mnist-database-of-handwritten-digits) [Chinese docs](http://keras-cn.readthedocs.io/en/latest/other/datasets/mnist). Loading MNIST
###Code
from keras.datasets import mnist
(X_raw, y_raw), (X_raw_test, y_raw_test) = mnist.load_data()
n_train, n_test = X_raw.shape[0], X_raw_test.shape[0]
###Output
Using TensorFlow backend.
###Markdown
Visualizing MNIST. We can use matplotlib to visualize our raw dataset.
###Code
import matplotlib.pyplot as plt
import random
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
for i in range(15):
plt.subplot(3, 5, i+1)
index = random.randint(0, n_train-1)
plt.title(str(y_raw[index]))
plt.imshow(X_raw[index], cmap='gray')
plt.axis('off')
###Output
_____no_output_____
###Markdown
Exercise: Synthesize the data. You need to pick a random number of images at random and concatenate them into new images. Set aside 20% of the data as a validation set to make sure the model does not overfit.
###Code
import numpy as np
from sklearn.model_selection import train_test_split
n_class, n_len, width, height = 11, 5, 28, 28
def generate_dataset(X, y):
X_len = X.shape[0]
X_gen = np.zeros((X_len, height, width*n_len, 1), dtype=np.uint8)
y_gen = [np.zeros((X_len, n_class), dtype=np.uint8) for i in range(n_len)]
    # TODO: randomly pick 1-5 digits and concatenate them into a new image
    for i in range(0,X_len): #we need to generate X_len images, so repeat the random sampling X_len times
        num = random.randint(1, 5) #randomly choose how many digits this image will contain
        for t in range(0,num):
            s = random.randint(0,X_len-1)
            X_t = X[s] #randomly pick an image from X as the t-th digit of the new image
            #insert it into X_gen: compute the horizontal slice this digit occupies
            t_start = t * width
            t_end = t * width + width
            X_gen[i][:,t_start:t_end,0] = X_t
            #also record the corresponding digit label in y_gen
            y_gen[t][i][y[s]] = 1
        for v in range(num,5):
            y_gen[v][i][-1] = 1 #positions left blank in the image get label 10
return X_gen, y_gen
X_raw_train, X_raw_valid, y_raw_train, y_raw_valid = train_test_split(X_raw, y_raw, test_size=0.2, random_state=42)
X_train, y_train = generate_dataset(X_raw_train, y_raw_train)
X_valid, y_valid = generate_dataset(X_raw_valid, y_raw_valid)
X_test, y_test = generate_dataset(X_raw_test, y_raw_test)
# display some of the generated images
for i in range(15):
plt.subplot(5, 3, i+1)
index = random.randint(0, n_test-1)
title = ''
for j in range(n_len):
title += str(np.argmax(y_test[j][index])) + ','
plt.title(title)
plt.imshow(X_test[index][:,:,0], cmap='gray')
plt.axis('off')
###Output
_____no_output_____
###Markdown
Question 1: _How did you synthesize the dataset? Why split it into training, validation, and test sets?_ **Answer:** Each image in the original MNIST dataset is 28*28. To synthesize the new dataset, 1-5 digits are randomly drawn from MNIST each time and concatenated horizontally into a new image of size 28*140. If fewer than 5 digits are drawn, the remainder is padded with blanks. The result is a new dataset of shape (n,28,140,1). The **training set** is mainly used to fit the model: the classifier's parameters are trained on it. The **validation set** is used to compare the multiple models that were learned: each model makes predictions on the validation data, and based on accuracy the parameters of the best-performing model are selected, i.e. it is used to tune the model parameters. The **test set** can be regarded as data that was never seen before; once the model parameters are fixed, it is used to test and evaluate the model's performance. Exercise: Design and test a model architecture. Design and implement a deep learning model that can recognize digit sequences. To produce synthetic digit sequences for testing, you can set things up as follows: for example, you can limit a sequence to at most five digits and use five classifiers on top of your deep network. You will also need to prepare an extra "blank" character to handle shorter digit sequences. There are many aspects to consider: - Your model can be based on a deep neural network or a convolutional neural network. - You can experiment with whether to share weights between the classifiers. - You can also replace the classification layers in your deep network with a recurrent network and output the digits in the sequence one at a time. When building the model with Keras, you can use the [functional model API](http://keras-cn.readthedocs.io/en/latest/models/model/) to build a multi-output model.
###Code
from keras.models import Model
from keras.layers import *
from keras.losses import categorical_crossentropy
from keras.optimizers import Adadelta
# TODO: build your model
input_data = Input(shape=(28,140,1))
a = Conv2D(32,(3,3), padding='same',activation='relu')(input_data)
a = MaxPool2D(pool_size=(2,2))(a)
a = Conv2D(64,(3,3),padding='same',activation='relu')(a)
a = MaxPool2D(pool_size=(2,2))(a)
a = Flatten()(a)
a = Dropout(0.5)(a)
out_data = [Dense(11, activation='softmax')(a) for i in range(n_len)] #one fully connected softmax head per digit position
model = Model(inputs=input_data,outputs=out_data)
model.compile(loss=categorical_crossentropy,optimizer=Adadelta(),metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Question 2: _What techniques did you use to solve this problem? Please describe them in detail._ **Answer:** To solve this problem I used a convolutional neural network (CNN). The network is built with the Keras library: each input image passes through two convolutional layers, with the ReLU function activating the convolved data, and 2×2 max pooling is applied between the convolutions to reduce the number of features. The convolved result is flattened with Flatten(), Dropout is applied to reduce overfitting, and the final layers are output layers of 11 neurons each with Softmax activation. **Details of the techniques involved:** 1. First, convolution. A convolutional layer usually contains several feature maps, each made up of neurons arranged in a rectangle; neurons in the same feature map share weights, and these shared weights are the convolution kernel (filter). During training, the kernels learn reasonable weights. Generally, the filters of the first convolutional layer detect low-level features such as edges, corners, and curves; as more convolutional layers are stacked, the detected features become more complex. The convolutional layer is essentially a feature-extraction layer. 2. In both convolutional layers, the ReLU function activates the convolved data. Its formula is f(x)=max(x,0): if the input is less than 0, the rectified linear unit outputs 0; if the input is greater than 0, the output equals the input. For large models, ReLU is much cheaper to compute than sigmoid, trains much faster, and does not suffer from sigmoid's vanishing-gradient problem. 3. The third and fifth layers use max pooling. A 2x2 filter keeps the maximum value in each window and passes it on to the next layer, compressing the image, simplifying the model, and letting the network focus on the most important elements. We choose max pooling rather than min pooling because we care more about whether a feature is present than about the exact pixel where it occurs; max pooling reduces the feature distribution of a (2, 2) region to a single number that indicates whether the region contains the feature. 4. The seventh layer is a dropout layer. During training it randomly deactivates some hidden units, reducing redundant learning; in practice dropout makes the network more robust and reduces the risk of overfitting (the inactive units are temporarily not part of the network structure, but their weights are kept and may be active again for the next sample). 5. The softmax function squashes each unit's output to between 0 and 1 and makes the outputs sum to 1, so the output is a probability distribution over classes, giving the probability that each class is the true one. For multi-class classification we need one output unit per class with softmax activation, whereas sigmoid is only suitable for binary classification with a single output unit. 6. The model uses Adadelta as the optimizer. Adadelta is a variant of gradient descent that avoids manual tuning of the learning rate by letting it adapt during training. It is essentially an extension of Adagrad that still adapts the learning rate but simplifies the computation, making it faster. In practice, Adadelta performs well on MNIST. References: 1. https://classroom.udacity.com/nanodegrees/nd009/parts/9f359353-1efd-4eec-a336-ed2539f6bb29/modules/fe7d3745-38da-4893-9b8c-ec539c39d383/lessons/6ec4ffd6-4f5c-4b88-bdf6-f169119834f0/concepts/8ee4c905-fa9c-40a4-9fd7-427a155b81b4 2. http://blog.csdn.net/luo123n/article/details/48239963 Visualize your network model. Reference: [visualization](http://keras-cn.readthedocs.io/en/latest/other/visualization/). You can save the model diagram as a PNG to display it, or use the SVG format directly. SVG is a vector format; its advantage is that it can be scaled up indefinitely.
###Code
from keras.utils.vis_utils import plot_model, model_to_dot
from IPython.display import Image, SVG
# TODO: visualize your model
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
###Output
_____no_output_____
###Markdown
Question 3: _What does your final model architecture look like? (What type of model, how many layers, what sizes, how are they connected, etc.)_ **Answer:** My final model is an 8-layer convolutional neural network built with the Keras library. 1. The first layer is the input layer, taking (28,140,1) images. 2. The second layer is a convolutional layer with 32 kernels of size 3*3. 3. The third layer is a 2*2 max-pooling layer. 4. The fourth layer is a convolutional layer with 64 kernels of size 3*3. 5. The fifth layer is a 2*2 max-pooling layer. 6. The sixth layer is a Flatten layer, which flattens the multi-dimensional input into one dimension as the transition from the convolutional layers to the fully connected layers. 7. The seventh layer is a dropout layer for regularization, reducing overfitting. 8. The eighth layer consists of the five output heads. Exercise: Train your network. When training your model, you need to set up the training and validation sets.
###Code
# TODO: train your model
from keras_tqdm import TQDMNotebookCallback #works around the jupyter notebook connection-timeout issue caused by the progress bar
x_train = X_train.astype('float32')
x_valid = X_valid.astype('float32')
#scale pixel values to [0, 1]
x_train /= 255
x_valid /= 255
batch_size = 256
epochs = 7
model.fit(x_train,y_train,batch_size=batch_size,epochs=epochs,verbose=0,validation_data=(x_valid, y_valid),callbacks=[TQDMNotebookCallback(leave_inner=True,leave_outer=True)])
###Output
47872/|/[loss: 0.119, dense_1_loss: 0.039, dense_2_loss: 0.032, dense_3_loss: 0.026, dense_4_loss: 0.016, dense_5_loss: 0.006, dense_1_acc: 0.987, dense_2_acc: 0.990, dense_3_acc: 0.992, dense_4_acc: 0.995, dense_5_acc: 0.998] 100%|| 47872/48000 [03:57<00:00, 208.18it/s]
###Markdown
In my local environment, enabling verbose output causes the Jupyter notebook connection to time out and crash during training because of the progress bar, so the keras_tqdm module is used instead. For some reason the training results are not preserved when exporting to HTML, so a screenshot is attached: Exercise: Compute your model's accuracy. We have just obtained the per-digit accuracy of the model; now let us compute the overall accuracy, counting a prediction as correct only when the entire digit sequence is predicted correctly. For example, predicting 1,2,3,10,10 as 1,2,10,10,10 counts as wrong, not as 80% correct.
###Code
# def evaluate(model,x_test,y_test):
# # TODO: compute accuracy where the whole sequence counts as wrong if any digit is wrong
# total_num = x_test.shape[0] #total number of images
# each_occupy = len(y_test) #number of digit positions per image (including positions padded with the blank label 10)
# predict = model.predict(x_test)
# right_num = 0
# for j in range(0,total_num):
# if_same = True
# for i in range(0,each_occupy):
# if np.array_equal(y_test[i][j],predict[i][j]):
# if_same = True
# else:
# if_same = False
# break
# if if_same == True:
# right_num += 1
# accuracy = float(right_num)/float(total_num)
# return accuracy
# evaluate(model,X_test,y_test)
def evaluate(model,x_test, y_test):
predict = model.predict(x_test)
predict = np.array(predict) #list转换为np array
predict = np.argmax(predict, axis = 2)
y_test = np.argmax(y_test,axis = 2)
predict = predict.T
y_test = y_test.T
right_number = 0
for i in range(len(predict)):
if (predict[i] == y_test[i]).all():
right_number += 1
accuracy = float(right_number)/len(x_test)
return accuracy
evaluate(model,X_test,y_test)
###Output
_____no_output_____
###Markdown
Question 4: _What is your model's accuracy? Do you think your model is sufficient to solve the problem?_ **Answer:** The model's accuracy is 0.9471, which is sufficient to solve the problem. Visualizing predictions. We plot the model's predictions against the true values to see how it performs in practice.
###Code
def get_result(result):
# decode the one-hot encoded result into a digit string
resultstr = ''
for i in range(n_len):
resultstr += str(np.argmax(result[i])) + ','
return resultstr
index = random.randint(0, n_test-1)
y_pred = model.predict(X_test[index].reshape(1, height, width*n_len, 1))
plt.title('real: %s\npred:%s'%(get_result([y_test[x][index] for x in range(n_len)]), get_result(y_pred)))
plt.imshow(X_test[index,:,:,0], cmap='gray')
plt.axis('off')
###Output
_____no_output_____
###Markdown
Saving the model. Once the model achieves satisfactory results, we need to save it so it can be reused later. Loading it back is just as simple: `model = load_model('model.h5')`
###Code
model.save('model.h5')
###Output
_____no_output_____
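###Markdown
A minimal sketch of reading the model back, assuming `model.h5` was written by the cell above and that `X_test`, `y_test`, and the `evaluate` helper defined earlier are still available in this session.
###Code
from keras.models import load_model

# Reload the saved model and confirm it reaches the same sequence accuracy as before
restored_model = load_model('model.h5')
restored_accuracy = evaluate(restored_model, X_test, y_test)
###Output
_____no_output_____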
|
Tencent/GNN/Graph_Transformer_Networks/Data_Preprocessing.ipynb
|
###Markdown
Edges (PAP, PSP). Indices [0,1,9,10,13]: KDD, SIGMOD, SIGCOMM, MobiCOMM, VLDB
###Code
mat_file['PvsA']
authors = mat_file['PvsA'][paper_idx].nonzero()[1]
author_dic = {}
re_authors = []
for author in authors:
if author not in author_dic:
author_dic[author] = len(author_dic) + len(paper_idx)
re_authors.append(author_dic[author])
re_authors = np.array(re_authors)
len(author_dic)
subjects = mat_file['PvsL'][paper_idx].nonzero()[1]
subject_dic = {}
re_subjects = []
for subject in subjects:
if subject not in subject_dic:
subject_dic[subject] = len(subject_dic) + len(paper_idx) + len(author_dic)
re_subjects.append(subject_dic[subject])
re_subjects = np.array(re_subjects)
len(subject_dic)
node_num = len(paper_idx) + len(author_dic) + len(subject_dic)
node_num
papers = mat_file['PvsA'][paper_idx].nonzero()[0]
data = np.ones_like(papers)
A_pa = csr_matrix((data, (papers, re_authors)), shape=(node_num,node_num))
A_pa
papers = mat_file['PvsL'][paper_idx].nonzero()[0]
data = np.ones_like(papers)
A_ps = csr_matrix((data, (papers, re_subjects)), shape=(node_num,node_num))
A_ps
A_ap = A_pa.transpose()
A_sp = A_ps.transpose()
edges = [A_pa,A_ap,A_ps,A_sp]
###Output
_____no_output_____
###Markdown
Node Features
###Code
terms = mat_file['TvsP'].transpose()[paper_idx].nonzero()[1]
term_dic = {}
re_terms = []
for term in terms:
if term not in term_dic:
term_dic[term] = len(term_dic) + len(paper_idx) + len(author_dic) + len(subject_dic)
re_terms.append(term_dic[term])
re_terms = np.array(re_terms)
mat_file['TvsP'].transpose()
# tmp
tmp_num_node = node_num + len(term_dic)
papers = mat_file['PvsA'][paper_idx].nonzero()[0]
data = np.ones_like(papers)
A_pa_tmp = csr_matrix((data, (papers, re_authors)), shape=(tmp_num_node,tmp_num_node))
papers = mat_file['PvsL'][paper_idx].nonzero()[0]
data = np.ones_like(papers)
A_ps_tmp = csr_matrix((data, (papers, re_subjects)), shape=(tmp_num_node,tmp_num_node))
papers = mat_file['PvsT'][paper_idx].nonzero()[0]
data = np.ones_like(papers)
A_pt_tmp = csr_matrix((data, (papers, re_terms)), shape=(tmp_num_node,tmp_num_node))
# use the built-in int here; np.int is deprecated in recent NumPy releases
paper_feat = np.array(A_pt_tmp[:len(paper_idx),-len(term_dic):].toarray()>0, dtype=int)
author_feat = np.array(A_pa_tmp.transpose().dot(A_pt_tmp)[len(paper_idx):len(paper_idx)+len(author_dic),-len(term_dic):].toarray()>0, dtype=int)
subject_feat = np.array(A_ps_tmp.transpose().dot(A_pt_tmp)[len(paper_idx)+len(author_dic):len(paper_idx)+len(author_dic)+len(subject_dic),-len(term_dic):].toarray()>0, dtype=int)
node_feature = np.concatenate((paper_feat,author_feat,subject_feat))
node_feature.shape
###Output
_____no_output_____
###Markdown
Label
###Code
paper_target.shape
# Train, Valid
train_valid_DB = list(np.random.choice(np.where(paper_target==0)[0],300, replace=False))
train_valid_WC = list(np.random.choice(np.where(paper_target==1)[0],300, replace=False))
train_valid_DM = list(np.random.choice(np.where(paper_target==2)[0],300, replace=False))
train_idx = np.array(train_valid_DB[:200] + train_valid_WC[:200] + train_valid_DM[:200])
train_target = paper_target[train_idx]
train_label = np.vstack((train_idx,train_target)).transpose()
valid_idx = np.array(train_valid_DB[200:] + train_valid_WC[200:] + train_valid_DM[200:])
valid_target = paper_target[valid_idx]
valid_label = np.vstack((valid_idx,valid_target)).transpose()
test_idx = np.array(list((set(np.arange(paper_target.shape[0])) - set(train_idx)) - set(valid_idx)))
test_target = paper_target[test_idx]
test_label = np.vstack((test_idx,test_target)).transpose()
labels = [train_label,valid_label,test_label]
labels
###Output
_____no_output_____
|
source/jupyter/03_IoTDR_Reg_PCA.ipynb
|
###Markdown
AWS IoT DR - Register PCA with IoT Core. Register the private CA in the primary and secondary regions with AWS IoT Core. Libraries
###Code
from OpenSSL import crypto, SSL
from os.path import exists, join
from os import makedirs
from shutil import copy
from time import time, gmtime, localtime, strftime
import boto3
import json
import time
import botocore
###Output
_____no_output_____
###Markdown
Shared variables. Import shared variables into this notebook.
###Code
%store -r config
print("config: {}".format(json.dumps(config, indent=4, default=str)))
###Output
_____no_output_____
###Markdown
Some handy functions. Generate a key and create a certificate signing request (CSR).
###Code
def create_csr(pkey, subject, digest="sha256"):
print("subject: {}".format(subject))
req = crypto.X509Req()
subj = req.get_subject()
for i in ['C', 'ST', 'L', 'O', 'OU', 'CN']:
if i in subject:
setattr(subj, i, subject[i])
req.set_pubkey(pkey)
req.sign(pkey, digest)
return req
def create_priv_key_and_csr(cert_dir, csr_file, key_file, subject):
if not exists(cert_dir):
print("creating directory: {}".format(cert_dir))
makedirs(cert_dir)
priv_key = crypto.PKey()
priv_key.generate_key(crypto.TYPE_RSA, 2048)
#print(crypto.dump_privatekey(crypto.FILETYPE_PEM, priv_key).decode('utf-8'))
key_file = join(cert_dir, key_file)
f = open(key_file,"w")
f.write(crypto.dump_privatekey(crypto.FILETYPE_PEM, priv_key).decode('utf-8'))
f.close()
csr = create_csr(priv_key, subject)
csr_file = join(cert_dir, csr_file)
f= open(csr_file,"w")
f.write(crypto.dump_certificate_request(crypto.FILETYPE_PEM, csr).decode('utf-8'))
f.close()
return crypto.dump_certificate_request(crypto.FILETYPE_PEM, csr)
###Output
_____no_output_____
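###Markdown
A hypothetical usage sketch for the helper above; the directory, file names, and subject fields below are illustrative placeholders rather than values used elsewhere in this project.
###Code
# Hypothetical example: create a key pair and CSR for an illustrative device certificate
example_csr_pem = create_priv_key_and_csr(
    cert_dir='./example_certs',   # placeholder output directory
    csr_file='example_csr.pem',
    key_file='example_key.pem',
    subject={'C': 'US', 'O': 'ExampleOrg', 'CN': 'example-device'}  # illustrative subject
)
###Output
_____no_output_____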
###Markdown
Boto3 client. Create a boto3 client for the acm-pca service endpoint.
###Code
c_acm_pca = boto3.client('acm-pca', region_name = config['aws_region_pca'])
###Output
_____no_output_____
###Markdown
PCA certificate and ARN. Get the root certificate and the ARN from your private CA. They are required to register your private CA with AWS IoT Core.
###Code
response = c_acm_pca.list_certificate_authorities(MaxResults=50)
for ca in response['CertificateAuthorities']:
#print(ca['CertificateAuthorityConfiguration']['Subject']['CommonName'])
if ca['CertificateAuthorityConfiguration']['Subject']['CommonName'] == config['Sub_CN']:
pca_arn = ca['Arn']
break
print("pca_arn: {}\n".format(pca_arn))
response = c_acm_pca.get_certificate_authority_certificate(
CertificateAuthorityArn = pca_arn
)
print("response: {}\n".format(json.dumps(response, indent=4, default=str)))
pca_certificate = response['Certificate']
print("pca_certificate:\n{}".format(pca_certificate))
###Output
_____no_output_____
###Markdown
Register private CA. To register the private CA with AWS IoT Core you need to get a registration code. Then you create a certificate with the common name (CN) set to the registration code. This certificate will be used for the CA registration process. The private CA will be registered with AWS IoT Core in the primary and secondary region.
###Code
for aws_region in [config['aws_region_primary'], config['aws_region_secondary']]:
print("AWS REGION: {}".format(aws_region))
c_iot = boto3.client('iot', region_name = aws_region)
time.sleep(2)
response = c_iot.get_registration_code()
print("response: {}\n".format(json.dumps(response, indent=4, default=str)))
registration_code = response['registrationCode']
print("registration_code: {}\n".format(registration_code))
verification_csr = create_priv_key_and_csr(config['PCA_directory'],
'registration_csr_{}.pem'.format(aws_region),
'registration_key_{}.pem'.format(aws_region),
{"CN": registration_code})
print("verification_csr:\n{}\n".format(verification_csr))
idempotency_token = 'registration_cert'
response = c_acm_pca.issue_certificate(
CertificateAuthorityArn = pca_arn,
Csr = verification_csr,
SigningAlgorithm = 'SHA256WITHRSA',
Validity= {
'Value': 365,
'Type': 'DAYS'
},
IdempotencyToken = idempotency_token
)
print("response: {}\n".format(json.dumps(response, indent=4, default=str)))
certificate_arn = response['CertificateArn']
print("certificate_arn: {}\n".format(certificate_arn))
waiter = c_acm_pca.get_waiter('certificate_issued')
try:
waiter.wait(
CertificateAuthorityArn=pca_arn,
CertificateArn=certificate_arn
)
except botocore.exceptions.WaiterError as e:
raise Exception("waiter: {}".format(e))
response = c_acm_pca.get_certificate(
CertificateAuthorityArn = pca_arn,
CertificateArn = certificate_arn
)
print("response: {}".format(response))
registration_certificate = response['Certificate']
print("pca_certificate:\n{}\n".format(pca_certificate))
print("registration_certificate:\n{}\n".format(registration_certificate))
file_registration_crt = join(config['PCA_directory'], 'registration_cert_{}.pem'.format(aws_region))
f = open(file_registration_crt,"w")
f.write(registration_certificate)
f.close()
response = c_iot.register_ca_certificate(
caCertificate = pca_certificate,
verificationCertificate = registration_certificate,
setAsActive = True,
allowAutoRegistration = True
)
print("response: {}\n".format(json.dumps(response, indent=4, default=str)))
certificate_id = response['certificateId']
print("certificate_id: {}\n".format(certificate_id))
response = c_iot.describe_ca_certificate(
certificateId = certificate_id
)
print("response: {}\n".format(json.dumps(response, indent=4, default=str)))
###Output
_____no_output_____
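###Markdown
An optional check, assuming the registration loop above completed: list the CA certificates registered in each region to confirm that the private CA shows up in both.
###Code
# Optional verification: list the CA certificates registered in the primary and secondary regions
for aws_region in [config['aws_region_primary'], config['aws_region_secondary']]:
    c_iot = boto3.client('iot', region_name = aws_region)
    response = c_iot.list_ca_certificates()
    print("region: {} CA certificates: {}".format(
        aws_region, json.dumps(response.get('certificates', []), indent=4, default=str)))
###Output
_____no_output_____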
|
sandbox/2021_03_test_component_populator/test_component_populator.ipynb
|
###Markdown
Notebook Intentions. The purpose of this notebook is to test the utility of the ComponentPopulator and to generate the output variable-parameters files.
###Code
import os
from os.path import expanduser
import sys
sys.path.append(os.path.join(expanduser("~"), "meps", "meps_dev"))
from meps_db.components.populators import ComponentPopulator
from meps_db.components.reference import DATA_FILES_YEARS
data_types = [
#"population_characteristics",
"medical_conditions",
"prescribed_medicines",
"dental_visits",
"other_medical_expenses",
"hospital_inpatient_stays",
"emergency_room_visits",
"outpatient_visits",
"office_based_medical_provider_visits",
"home_health",
]
# test no issues are generated during population and generate variable parameters files
for data_type in data_types:
for year in DATA_FILES_YEARS:
print(f"Processing {data_type} - {year}")
data = None
data = ComponentPopulator(year=year, data_type=data_type).run()
###Output
Processing medical_conditions - 2018
Processing medical_conditions - 2017
Processing medical_conditions - 2016
Processing medical_conditions - 2015
Processing medical_conditions - 2014
Processing medical_conditions - 2013
Processing medical_conditions - 2012
Processing medical_conditions - 2011
Processing medical_conditions - 2010
Processing medical_conditions - 2009
Processing medical_conditions - 2008
Processing medical_conditions - 2007
Processing medical_conditions - 2006
Processing medical_conditions - 2005
Processing prescribed_medicines - 2018
Processing prescribed_medicines - 2017
Processing prescribed_medicines - 2016
Processing prescribed_medicines - 2015
Processing prescribed_medicines - 2014
Processing prescribed_medicines - 2013
Processing prescribed_medicines - 2012
Processing prescribed_medicines - 2011
Processing prescribed_medicines - 2010
Processing prescribed_medicines - 2009
Processing prescribed_medicines - 2008
Processing prescribed_medicines - 2007
Processing prescribed_medicines - 2006
Processing prescribed_medicines - 2005
Processing dental_visits - 2018
Processing dental_visits - 2017
Processing dental_visits - 2016
Processing dental_visits - 2015
Processing dental_visits - 2014
Processing dental_visits - 2013
Processing dental_visits - 2012
Processing dental_visits - 2011
Processing dental_visits - 2010
Processing dental_visits - 2009
Processing dental_visits - 2008
Processing dental_visits - 2007
Processing dental_visits - 2006
Processing dental_visits - 2005
Processing other_medical_expenses - 2018
Processing other_medical_expenses - 2017
Processing other_medical_expenses - 2016
Processing other_medical_expenses - 2015
Processing other_medical_expenses - 2014
Processing other_medical_expenses - 2013
Processing other_medical_expenses - 2012
Processing other_medical_expenses - 2011
Processing other_medical_expenses - 2010
Processing other_medical_expenses - 2009
Processing other_medical_expenses - 2008
Processing other_medical_expenses - 2007
Processing other_medical_expenses - 2006
Processing other_medical_expenses - 2005
Processing hospital_inpatient_stays - 2018
Processing hospital_inpatient_stays - 2017
Processing hospital_inpatient_stays - 2016
Processing hospital_inpatient_stays - 2015
Processing hospital_inpatient_stays - 2014
Processing hospital_inpatient_stays - 2013
Processing hospital_inpatient_stays - 2012
Processing hospital_inpatient_stays - 2011
Processing hospital_inpatient_stays - 2010
Processing hospital_inpatient_stays - 2009
Processing hospital_inpatient_stays - 2008
Processing hospital_inpatient_stays - 2007
Processing hospital_inpatient_stays - 2006
Processing hospital_inpatient_stays - 2005
Processing emergency_room_visits - 2018
Processing emergency_room_visits - 2017
Processing emergency_room_visits - 2016
Processing emergency_room_visits - 2015
Processing emergency_room_visits - 2014
Processing emergency_room_visits - 2013
Processing emergency_room_visits - 2012
Processing emergency_room_visits - 2011
Processing emergency_room_visits - 2010
Processing emergency_room_visits - 2009
Processing emergency_room_visits - 2008
Processing emergency_room_visits - 2007
Processing emergency_room_visits - 2006
Processing emergency_room_visits - 2005
Processing outpatient_visits - 2018
Processing outpatient_visits - 2017
Processing outpatient_visits - 2016
Processing outpatient_visits - 2015
Processing outpatient_visits - 2014
Processing outpatient_visits - 2013
Processing outpatient_visits - 2012
Processing outpatient_visits - 2011
Processing outpatient_visits - 2010
Processing outpatient_visits - 2009
Processing outpatient_visits - 2008
Processing outpatient_visits - 2007
Processing outpatient_visits - 2006
Processing outpatient_visits - 2005
Processing office_based_medical_provider_visits - 2018
|
scripts/d21-en/pytorch/chapter_preliminaries/autograd.ipynb
|
###Markdown
Automatic Differentiation :label:`sec_autograd` As we have explained in :numref:`sec_calculus`, differentiation is a crucial step in nearly all deep learning optimization algorithms. While the calculations for taking these derivatives are straightforward, requiring only some basic calculus, for complex models, working out the updates by hand can be a pain (and often error-prone). Deep learning frameworks expedite this work by automatically calculating derivatives, i.e., *automatic differentiation*. In practice, based on our designed model the system builds a *computational graph*, tracking which data combined through which operations to produce the output. Automatic differentiation enables the system to subsequently backpropagate gradients. Here, *backpropagate* simply means to trace through the computational graph, filling in the partial derivatives with respect to each parameter. A Simple Example. As a toy example, say that we are interested in (**differentiating the function $y = 2\mathbf{x}^{\top}\mathbf{x}$ with respect to the column vector $\mathbf{x}$.**) To start, let us create the variable `x` and assign it an initial value.
###Code
import torch
x = torch.arange(4.0)
x
###Output
_____no_output_____
###Markdown
[**Before we even calculate the gradient of $y$ with respect to $\mathbf{x}$, we will need a place to store it.**] It is important that we do not allocate new memory every time we take a derivative with respect to a parameter because we will often update the same parameters thousands or millions of times and could quickly run out of memory. Note that a gradient of a scalar-valued function with respect to a vector $\mathbf{x}$ is itself vector-valued and has the same shape as $\mathbf{x}$.
###Code
x.requires_grad_(True) # Same as `x = torch.arange(4.0, requires_grad=True)`
x.grad # The default value is None
###Output
_____no_output_____
###Markdown
(**Now let us calculate $y$.**)
###Code
y = 2 * torch.dot(x, x)
y
###Output
_____no_output_____
###Markdown
Since `x` is a vector of length 4, an inner product of `x` and `x` is performed, yielding the scalar output that we assign to `y`. Next, [**we can automatically calculate the gradient of `y` with respect to each component of `x`**] by calling the function for backpropagation and printing the gradient.
###Code
y.backward()
x.grad
###Output
_____no_output_____
###Markdown
(**The gradient of the function $y = 2\mathbf{x}^{\top}\mathbf{x}$ with respect to $\mathbf{x}$ should be $4\mathbf{x}$.**) Let us quickly verify that our desired gradient was calculated correctly.
###Code
x.grad == 4 * x
###Output
_____no_output_____
###Markdown
[**Now let us calculate another function of `x`.**]
###Code
# PyTorch accumulates the gradient in default, we need to clear the previous
# values
x.grad.zero_()
y = x.sum()
y.backward()
x.grad
###Output
_____no_output_____
###Markdown
Backward for Non-Scalar Variables. Technically, when `y` is not a scalar, the most natural interpretation of the differentiation of a vector `y` with respect to a vector `x` is a matrix. For higher-order and higher-dimensional `y` and `x`, the differentiation result could be a high-order tensor. However, while these more exotic objects do show up in advanced machine learning (including [**in deep learning**]), more often (**when we are calling backward on a vector,**) we are trying to calculate the derivatives of the loss functions for each constituent of a *batch* of training examples. Here, (**our intent is**) not to calculate the differentiation matrix but rather (**the sum of the partial derivatives computed individually for each example**) in the batch.
###Code
# Invoking `backward` on a non-scalar requires passing in a `gradient` argument
# which specifies the gradient of the differentiated function w.r.t `self`.
# In our case, we simply want to sum the partial derivatives, so passing
# in a gradient of ones is appropriate
x.grad.zero_()
y = x * x
# y.backward(torch.ones(len(x))) equivalent to the below
y.sum().backward()
x.grad
###Output
_____no_output_____
###Markdown
Detaching Computation. Sometimes, we wish to [**move some calculations outside of the recorded computational graph.**] For example, say that `y` was calculated as a function of `x`, and that subsequently `z` was calculated as a function of both `y` and `x`. Now, imagine that we wanted to calculate the gradient of `z` with respect to `x`, but wanted for some reason to treat `y` as a constant, and only take into account the role that `x` played after `y` was calculated. Here, we can detach `y` to return a new variable `u` that has the same value as `y` but discards any information about how `y` was computed in the computational graph. In other words, the gradient will not flow backwards through `u` to `x`. Thus, the following backpropagation function computes the partial derivative of `z = u * x` with respect to `x` while treating `u` as a constant, instead of the partial derivative of `z = x * x * x` with respect to `x`.
###Code
x.grad.zero_()
y = x * x
u = y.detach()
z = u * x
z.sum().backward()
x.grad == u
###Output
_____no_output_____
###Markdown
Since the computation of `y` was recorded, we can subsequently invoke backpropagation on `y` to get the derivative of `y = x * x` with respect to `x`, which is `2 * x`.
###Code
x.grad.zero_()
y.sum().backward()
x.grad == 2 * x
###Output
_____no_output_____
###Markdown
Computing the Gradient of Python Control Flow. One benefit of using automatic differentiation is that [**even if**] building the computational graph of (**a function required passing through a maze of Python control flow**) (e.g., conditionals, loops, and arbitrary function calls), (**we can still calculate the gradient of the resulting variable.**) In the following snippet, note that the number of iterations of the `while` loop and the evaluation of the `if` statement both depend on the value of the input `a`.
###Code
def f(a):
b = a * 2
while b.norm() < 1000:
b = b * 2
if b.sum() > 0:
c = b
else:
c = 100 * b
return c
###Output
_____no_output_____
###Markdown
Let us compute the gradient.
###Code
a = torch.randn(size=(), requires_grad=True)
d = f(a)
d.backward()
###Output
_____no_output_____
###Markdown
We can now analyze the `f` function defined above. Note that it is piecewise linear in its input `a`. In other words, for any `a` there exists some constant scalar `k` such that `f(a) = k * a`, where the value of `k` depends on the input `a`. Consequently `d / a` allows us to verify that the gradient is correct.
###Code
a.grad == d / a
###Output
_____no_output_____
|
SandBox.ipynb
|
###Markdown
###Code
print("Hello")
###Output
Hello
###Markdown
###Code
import tensorflow as tf
import pandas as pd
from collections import defaultdict
sentence="the cat in the hat can make you mad"
words=sentence.split(' ')
words
d=defaultdict(int)
d
for word in words:
d[word]+=1
d
###Output
_____no_output_____
###Markdown
Load raw data
###Code
import json
from copy import deepcopy           # used below when copying the clue list
from random import choice, shuffle  # used below when picking the crime and shuffling cards

# Note: the game classes (Weapon, Room, Suspicious, Crime, Agent) and the helper
# classify_clues are assumed to be defined or imported in an earlier cell not shown here.
with open('data/cards.json', 'r') as file:
    cards = json.load(file)
cards
###Output
_____no_output_____
###Markdown
Build Cards
###Code
clues = []
id_counter = 0
for key, values in cards.items():
if key == 'weapons':
for value in values:
clues.append(Weapon(name=value, clue_id=id_counter))
id_counter += 1
if key == 'rooms':
for value in values:
clues.append(Room(name=value, clue_id=id_counter))
id_counter += 1
if key == 'suspicious':
for value in values:
clues.append(Suspicious(name=value, clue_id=id_counter))
id_counter += 1
raw_clues = deepcopy(clues)
raw_clues
###Output
_____no_output_____
###Markdown
Pick Guilty
###Code
weapons, rooms, suspicious = classify_clues(clues=clues)
crime = Crime(weapon=choice(weapons),
room=choice(rooms),
suspicious=choice(suspicious))
crime.to_dict(verbose=1)
clues.remove(crime.room)
clues.remove(crime.suspicious)
clues.remove(crime.weapon)
###Output
_____no_output_____
###Markdown
Split Cards into Groups
###Code
shuffle(clues)
clues
players = 6
number_of_cards = int(len(clues)/players)
grouped_cards = [
clues[i * number_of_cards:(i + 1) * number_of_cards]
for i in range((len(clues) + number_of_cards - 1) // number_of_cards )
]
grouped_cards
###Output
_____no_output_____
###Markdown
Build Agents
###Code
player_names = ['Dick tracy', 'Shelock', 'Watson', 'Inspector Gatget', 'Batman', 'Tintin']
agents = [
Agent(name=name, clues=deepcopy(raw_clues), own_clues=cards)
for name, cards in zip(player_names, grouped_cards)
]
###Output
_____no_output_____
###Markdown
Run Turn
###Code
guess = agents[0].guess()
guess.to_dict(verbose=1)
response = {}
for index in range(1,6):
response.update(agents[index].response(question=guess))
response
if not len(response):
print(crime.to_dict(verbose=1))
print(guess.to_dict(verbose=1))
assert crime.to_dict(verbose=1) == guess.to_dict(verbose=1)
agents[0].process_response(response=response)
agents[0].report()
###Output
_____no_output_____
|
scripts/Compute viscosity.ipynb
|
###Markdown
Viscosity estimates for Zebrafish. Comparison of Pries et al. and Lee et al.
###Code
import numpy as np
import matplotlib.pyplot as plt
from numpy import exp
from scipy.optimize import brentq
def hd_to_ht(hd, d):
ht_per_hd = hd + (1 - hd)*(1 + 1.7*exp(-0.415*d) - 0.6*exp(-0.011*d))
return hd*ht_per_hd
def _ht_to_hd(ht, d, a=0, b=1):
return brentq(lambda x: hd_to_ht(x, d) - ht, a, b)
def ht_to_hd(ht, d, a=0, b=1):
ht = np.asarray(ht)
if ht.ndim == 0:
return _ht_to_hd(ht.item(), d, a, b)
@np.vectorize
def comp(ht):
return _ht_to_hd(ht, d, a, b)
return comp(ht)
def Pries_mu045(d):
return 220*exp(-1.3*d) + 3.2 - 2.44*exp(-0.06*(d**0.645))
def Pries_C(d):
denom = 1 + (10**(-11))*(d**(12))
factor = 1 / denom
return (0.8 + exp(-0.075*d)) * (-1 + factor) + factor
def Pries_mu_vitro(hd, d):
C = Pries_C(d)
mu045 = Pries_mu045(d)
nominator = (1 - hd)**C - 1
denominator = (1 - 0.45)**C - 1
factor = nominator / denominator
return 1 + (mu045 - 1)*factor
def lee_mu(ht):
ht = ht*100
return 0.0007*ht*ht + 0.0495*ht + 1.5077
lee_mu(.35)
###Output
_____no_output_____
###Markdown
Comparing the curves at d=60µm and d=240µm
###Code
d = 60
ht = np.linspace(0, 0.7, 500)
viscosities_lee = lee_mu(ht)
viscosities_pries = Pries_mu_vitro(ht_to_hd(ht, d), d)
fig, ax = plt.subplots(figsize=(8, 4.5), dpi=100)
ax.plot(ht, viscosities_lee, color="navy", label="Lee")
ax.plot(ht, viscosities_pries, color="tomato", label=f"Pries at d={d}µm")
ax.axvline(.3585, linestyle='--', color='forestgreen', zorder=-1, label="Lee interpolation region")
ax.axhline(4.2, color='k', alpha=0.2, label="µ=4.2 cP", zorder=-2)
ax.set_xlim(0, ht.max())
ax.set_xlabel("Tube hematocrit")
ax.set_ylabel("Viscosity [cP]")
ax.legend()
d = 240
ht = np.linspace(0, 0.7, 500)
viscosities_lee = lee_mu(ht)
viscosities_pries = Pries_mu_vitro(ht_to_hd(ht, d), d)
fig, ax = plt.subplots(figsize=(8, 4.5), dpi=100)
ax.plot(ht, viscosities_lee, color="navy", label="Lee")
ax.plot(ht, viscosities_pries, color="tomato", label=f"Pries at d={d}µm")
ax.axvline(.3585, linestyle='--', color='forestgreen', zorder=-1, label="Lee interpolation region")
ax.axhline(4.2, color='k', alpha=0.2, label="µ=4.2 cP", zorder=-2)
ax.set_xlim(0, ht.max())
ax.set_xlabel("Tube hematocrit")
ax.set_ylabel("Viscosity [cP]")
ax.legend()
###Output
_____no_output_____
###Markdown
Observations. Pries et al. assume that with a hematocrit of 0, we have a viscosity of 1 cP, while for zebrafish, it is actually 1.5 cP. This change alone is not sufficient to make up for the difference though, since the slope of the Lee estimate is greater than that of the Pries estimate. Looking at diameter dependence
###Code
hd_values = [0.4, 0.5, 0.6, 0.7]
linestyles = ['-', '--', '-.', ':']
d = np.linspace(0.1, 240, 500)
fig, axes = plt.subplots(2, 1, figsize=(8, 4.5), dpi=100, sharex=True,)
for hd, linestyle in zip(hd_values, linestyles):
axes[0,].plot(d, Pries_mu_vitro(hd, d), color="Tomato", linestyle=linestyle)
axes[1,].plot(d, hd_to_ht(hd, d), color="black", linestyle=linestyle, label=f"hd={hd}")
axes[0,].set_ylabel("Viscosity (Pries) [cP]")
end_viscosity = Pries_mu_vitro(hd, d.max())
axes[0,].annotate(
f"hd={hd}, µ={end_viscosity:.1f}",
(d.max(), end_viscosity*1.02),
(d.max()*0.7, end_viscosity*1.7 - 1.5),
verticalalignment='center',
horizontalalignment='right',
arrowprops={'arrowstyle': '->', 'linestyle': linestyle},
bbox={'facecolor': '#FFFFFFAA', 'edgecolor': '#00000000'}
)
axes[1,].set_ylabel("Tube hematocrit")
axes[1,].set_xlabel("Diameter [µm]")
axes[0].set_title(f"Discharge hematocrit: {hd}")
axes[0].set_ylim((0, 10))
axes[0].set_xlim(d.min(), d.max())
axes[1].legend(ncol=4, loc='upper center',)
###Output
_____no_output_____
|